Columns: id (string, 10 characters), title (string, 7–231 characters), abstract (string, 3–2.43k characters), authors (string, 5–21.5k characters), published_date (string, 20 characters), link (string, 33–34 characters), markdown (string, 133–1.92M characters)
2303.17902
Analytic modelling of Quantum Capacitance and Carrier Concentration for $β_{12}$-Borophene FET based Gas Sensor
In this work, we investigate the physical and electronic properties of a $\beta_{12}$-borophene FET-based gas sensor using a theoretical quantum capacitance model based on the tight-binding approach. We study the impact of adsorbed NH$_3$, NO, NO$_2$ and CO gas molecules on its density of states, carrier concentration, quantum capacitance and I-V characteristics. We found a remarkable variation in the energy band structure and the density of states (DOS) of $\beta_{12}$-borophene in the presence of the adsorbed gas molecules. The appearance of non-identical Van-Hove singularities in the DOS in the presence of adsorbed gas molecules strongly indicates the high sensitivity of $\beta_{12}$-borophene. We found a significant increase in the carrier concentration for NH$_3$ gas, while it decreases for all other gases. Moreover, a drastic change in quantum capacitance and in the current-voltage relation is also observed for the adsorbed gases. The different properties of the given gas molecules are compared with those of pristine borophene and found to exhibit distinct wrinkles in each case, thereby indicating the strong selectivity of our proposed gas sensor. Though $\beta_{12}$-borophene is found to be highly sensitive to all studied gases, NO is found to be the most sensitive compared to the others.
Nimisha Dutta, Reeta Devi, Arindam Boruah, Saumen Acharjee
2023-03-31T09:00:54Z
http://arxiv.org/abs/2303.17902v1
Analytic modelling of Quantum Capacitance and Carrier Concentration for \(\beta_{12}\)-Borophene FET based Gas Sensor ###### Abstract In this work, we investigate the physical and electronic properties of a \(\beta_{12}\)-borophene FET-based gas sensor using a theoretical quantum capacitance model based on the tight-binding approach. We study the impact of adsorbed NH\({}_{3}\), NO, NO\({}_{2}\) and CO gas molecules on its density of states, carrier concentration, quantum capacitance and I-V characteristics. We found a remarkable variation in the energy band structure and the density of states (DOS) of \(\beta_{12}\)-borophene in the presence of the adsorbed gas molecules. The appearance of non-identical Van-Hove singularities in the DOS in the presence of adsorbed gas molecules strongly indicates the high sensitivity of \(\beta_{12}\)-borophene. We found a significant increase in the carrier concentration for NH\({}_{3}\) gas, while it decreases for all other gases. Moreover, a drastic change in quantum capacitance and in the current-voltage relation is also observed for the adsorbed gases. The different properties of the given gas molecules are compared with those of pristine borophene and found to exhibit distinct wrinkles in each case, thereby indicating the strong selectivity of our proposed gas sensor. Though \(\beta_{12}\)-borophene is found to be highly sensitive to all studied gases, NO is found to be the most sensitive compared to the others. pacs: 72.80.Vp, 84.37.+q, 73.63.-b ## I Introduction During the last two decades, two-dimensional (2D) materials have garnered enormous interest due to their unique physical properties, making them promising candidates for next-generation electronics and energy conversion devices [1; 2; 3; 4; 5; 6; 7; 8; 9]. Graphene, the first 2D material, with its distinctive electronic [10; 11; 12; 13], thermal [13; 14], mechanical [15] and optical properties [15], has been widely explored for device applications. However, the absence of a band gap, the low on/off ratio and the extremely high carrier mobility of graphene restrict its use as a semiconducting system [16; 17; 18]. So, there is a need for new alternative materials with similar but more advanced electronic properties than graphene. Thus a class of new 2D materials, like phosphorene [19], silicene [20], stanene [21], antimonene [22], bismuthene [23], germanene [24], molybdenum disulfide [25] etc., has been synthesized as alternative candidates for graphene in recent times. Depending on their stacking and composition, these 2D materials can possess distinctive physicochemical properties, such as electrical conductivity, thermal conductivity, and band structure around the Fermi level, making them suitable for various applications [19; 20; 21; 22; 23; 24; 25]. Alongside the other promising candidates, borophene, a single-element 2D sheet of boron, has also been synthesized on an Ag(111) substrate in recent times [26; 27; 28]. Unlike graphene, borophene has a triangular and hexagonal array of atoms exhibiting in-plane elasticity and can be more flexible in some configurations. Apart from that, borophene has high mobility, electrical and thermal conductivity, and also possesses high mechanical strength, with a Young's modulus much higher than that of its rivals [29; 30; 31]. Thus borophene has excellent potential in advanced electrical information, sensing and other optical, mechanical and thermal applications [32]. 
Moreover, previous works indicate that borophene-based systems are sensitive to hazardous gases, making them suitable for gas sensing applications [33; 34; 35]. Significant attention has also been given to the optimization of the borophene/MoS\({}_{2}\) heterostructure using density functional theory (DFT) implemented in the VASP package [36]. Although there exist several boron clusters with fascinating properties, it was found that the allotrope \(\beta_{12}\)-borophene is the most thermodynamically stable compared to the other members of the borophene family [28; 37]. It is observed that \(\beta_{12}\)-borophene can display semiconducting and semi-metallic properties in the presence of an applied electric field and charged impurities [38]. Furthermore, this boron cluster can also display anisotropic Kubo conductivity [39] in the presence of an applied electric field, making it distinct from the other members. Though several attempts have been made to understand the adsorption properties of different boron-based clusters both theoretically and experimentally [33; 34; 35; 40], the study of gas sensing in \(\beta_{12}\)-borophene is limited. So, in this work, we have focused on the \(\beta_{12}\) phase of borophene to investigate the adsorption of gas molecules. Recently, quantum capacitance has gained much attention in studying 2D electron systems to reveal interesting many-body effects [41; 42; 43]. It also carries information regarding the ground state of the system, revealing the effect of electron-electron interaction and quantum correlation. The quantum capacitance of a system carries essential information regarding the density of states. Recently, the quantum capacitance model has been used to measure the Van-Hove singularities and Luttinger parameters in a one-dimensional system [44; 45]. Moreover, quantum capacitance measurements have also revealed the linear density of states of the topological insulator Bi\({}_{2}\)Se\({}_{3}\) [46] and of monolayer graphene [47; 48; 49]. Thus, it is necessary to have knowledge of the quantum capacitance to understand the system properly. Though quantum capacitance has been widely studied theoretically and experimentally in graphene, such studies of borophene have remained limited until now. An attempt has been made to explore the quantum capacitance effect in graphene chemical sensors and in \(\delta_{6}\)-borophene [50; 51], where the adsorption of molecules on the surface induces changes in the capacitance values. However, the quantum capacitance of \(\beta_{12}\)-borophene has not been explored so far. Therefore, in this work, we investigated the quantum capacitance of \(\beta_{12}\)-borophene in the presence of adsorbed gas molecules. The organisation of the paper is as follows: in Section II, we present a theoretical framework based on a tight-binding (TB) Hamiltonian and a quantum capacitance model of the \(\beta_{12}\)-borophene FET-based gas sensor. In Section III, we study the density of states, carrier concentration, quantum capacitance and I-V characteristics to understand the gas sensing properties of the \(\beta_{12}\)-borophene-based FET. A summary of our work is presented in Section IV. ## II Theoretical framework The schematic illustration of the \(\beta_{12}\)-borophene field-effect-transistor-based gas sensor is shown in Fig. 1(a). The honeycomb lattice arrangement and the unit cell of \(\beta_{12}\)-borophene are shown in Fig. 1(b). The unit cell comprises five boron atoms labelled a, b, c, d and e. 
The \(\beta_{12}\)-borophene lattice can be generated using the lattice basis vectors \((\vec{a},\vec{b})=(\sqrt{3}a_{0}\hat{e}_{x},3a_{0}\hat{e}_{y})\), where \(a_{0}=1.74\AA\) is the boron-boron atom distance [42]. It is to be noted that borophene does not allow the formation of in-plane \(\sigma\) bonds due to the lack of one electron in the boron atom compared to the carbon atom. So, the atoms at site c can be treated as perfect donors, filling the in-plane hexagonal \(\sigma\)-bonds. Borophene has energy levels involving the \(s\), \(p_{x}\), \(p_{y}\) and \(p_{z}\) orbitals, which are \(sp^{2}\)-hybridized. However, only the \(p_{z}\) orbitals contribute to the carrier dynamics. This is because the wave function at the Fermi energy has a vanishing amplitude at site c, resulting in phase cancellation at the six-fold coordinated B atoms. The band structure of \(\beta_{12}\)-borophene in the presence of adsorbed gas molecules can be obtained using the tight-binding approach within the nearest-neighbour approximation. To model the molecular adsorption effect, we consider a \(\beta_{12}\)-borophene sheet consisting of N sites. The gas adsorption in the \(\beta_{12}\)-borophene lattice and the \(n^{\text{th}}\) unit cell with its nearest neighbours are shown in Figs. 1(c) and 1(d). The matrix equation for the \(n^{\text{th}}\) unit cell according to the tight-binding model can be written as [48; 49] \[\sum_{m}\left[\mathcal{H}_{nm}\right]\left\{\Psi_{m}\right\}=E\{\Psi_{n}\} \tag{1}\] where \(\{\Psi_{n}\}\) is a (\(b\times 1\)) column matrix corresponding to the wavefunction in the unit cell \(n\). It is to be noted that the Hamiltonian matrix for pristine \(\beta_{12}\)-borophene, with five atoms in its unit cell, will be a (\(5\times 5\)) matrix. To characterize the effect of gas adsorption, we consider that each gas molecule is adsorbed only at the a site of each unit cell of \(\beta_{12}\)-borophene, as shown in Fig. 1(d). Thus, the Hamiltonian of \(\beta_{12}\)-borophene in the presence of a gas molecule will be a matrix of (\(6\times 6\)) dimension. We consider a plane waveform of the wave function, i.e., \(\Psi_{n}=\Psi_{0}e^{i\vec{k}\cdot\vec{d}_{n}}\), with \(\vec{k}\) and \(\vec{d}_{n}\) being the plane wave vector and the position vector of the \(n^{\text{th}}\) unit cell, respectively. The band structure of \(\beta_{12}\)-borophene can be calculated by solving the eigenvalue problem of the Bloch Hamiltonian [48; 49] \[\left[h(k)\right]=\sum_{m}\left[\mathcal{H}_{nm}\right]e^{i\vec{k}\cdot(\vec{d}_{n}-\vec{d}_{m})} \tag{2}\] Figure 1: (a) Schematic illustration of the \(\beta_{12}\)-borophene field effect transistor (FET) based gas sensor. (b) Top view of the geometric structure of \(\beta_{12}\)-borophene. The unit cell (shaded green region) is one-half of the honeycomb lattice and consists of five boron atoms. The lattice parameters of \(\beta_{12}\)-borophene are \(a=|\vec{a}|=\sqrt{3}a_{0}\) and \(b=|\vec{b}|=3a_{0}\), where \(a_{0}\) is the boron-boron atom distance. (c) Top view of the adsorption of the gas molecule on the \(\beta_{12}\)-borophene geometry. (d) 3D arrangement of the adsorbed gas molecule on the a site of the \(\beta_{12}\)-borophene unit cell. 
The contribution of the four nearest-neighbour unit cells to the Bloch Hamiltonian of Eq. (2) can be collected into a single (\(6\times 6\)) matrix, \[\sum_{m\neq n}\left[\mathcal{H}_{nm}\right]e^{i\vec{k}\cdot(\vec{d}_{n}-\vec{d}_{m})}=\left(\begin{array}{cccccc}0&\delta_{k}t_{\text{ab}}&0&0&t_{\text{ae}}&0\\ \delta_{k}^{*}t_{\text{ab}}&0&\delta_{k}^{*}t_{\text{bc}}&0&0&0\\ 0&\delta_{k}t_{\text{bc}}&0&\delta_{k}t_{\text{cd}}&0&0\\ 0&0&\delta_{k}^{*}t_{\text{cd}}&0&\delta_{k}^{*}t_{\text{de}}&0\\ t_{\text{ae}}&0&0&\delta_{k}t_{\text{de}}&0&0\\ 0&0&0&0&0&0\\ \end{array}\right) \tag{3}\] where the sum runs over the \(m=1,2,3,4\) nearest unit cells of the \(n^{\text{th}}\) unit cell. Here, \(t_{\text{ij}}\) is the hopping energy parameter between the \(i^{\text{th}}\) and \(j^{\text{th}}\) boron atoms. The hopping parameters for the homogeneous model are \(t_{\text{ij}}=-2\) eV [34], while in the inversion non-symmetric (INS) model the hopping parameters are: \(t_{\text{ab}}=t_{\text{de}}=-2.04\) eV, \(t_{\text{ac}}=t_{\text{ce}}=-1.79\) eV, \(t_{\text{bc}}=t_{\text{cd}}=-1.84\) eV, \(t_{\text{bd}}=-1.91\) eV, \(t_{\text{ad}}=0\) eV and \(t_{\text{ae}}=-2.12\) eV [34]. Here, we define \(\delta_{k}\equiv\exp\left[-\frac{ia_{0}k_{x}}{2}\right]\). It is to be noted that one may use the lattice parameters (\(\vec{a},\vec{b}\)) for the calculation of the distances \(\vec{d}_{n}\). However, in this work, the distances between the lattice points (\(\vec{d}_{m}-\vec{d}_{n}\)) are obtained by converting the lattice parameters in terms of the bond length \(a_{0}\) for simplicity of the calculations. In a similar way, the on-site block describing the \(n^{\text{th}}\) unit cell together with the adsorbed gas molecule can be written as \[\left[\mathcal{H}_{nn}\right]=\left(\begin{array}{cccccc}\varepsilon_{\text{a}}&t_{\text{ab}}&t_{\text{ac}}&0&0&t^{\prime}\\ t_{\text{ab}}&\varepsilon_{\text{b}}&t_{\text{bc}}&t_{\text{bd}}&0&0\\ t_{\text{ac}}&t_{\text{bc}}&\varepsilon_{\text{c}}&t_{\text{cd}}&t_{\text{ce}}&0\\ 0&t_{\text{bd}}&t_{\text{cd}}&\varepsilon_{\text{d}}&t_{\text{de}}&0\\ 0&0&t_{\text{ce}}&t_{\text{de}}&\varepsilon_{\text{e}}&0\\ t^{\prime}&0&0&0&0&\varepsilon^{\prime}\\ \end{array}\right) \tag{4}\] where \(\varepsilon_{i}\) is the on-site energy of the \(i^{\text{th}}\) atom. The on-site energies in the homogeneous model are \(\varepsilon_{i}=0\) eV, while in the INS model the on-site energies are: \(\varepsilon_{\text{a}}=\varepsilon_{\text{d}}=0.196\) eV, \(\varepsilon_{\text{b}}=\varepsilon_{\text{e}}=-0.058\) eV and \(\varepsilon_{\text{c}}=-0.845\) eV [34]. The parameters \(t^{\prime}\) and \(\varepsilon^{\prime}\) characterize the hopping energy between the boron atom and the adsorbed molecule and the on-site energy of the molecule, respectively. 
Thus, the total tight-binding Hamiltonian of the \(\beta_{12}\)-borophene system in the presence of an adsorbed gas molecule, in view of Eq. (3) and Eq. (4), can be expressed as \[[h(k)]=\left(\begin{array}{cccccc}\varepsilon_{\text{a}}&f_{k}^{*}t_{\text{ab}}&t_{\text{ac}}&0&t_{\text{ae}}&t^{\prime}\\ f_{k}t_{\text{ab}}&\varepsilon_{\text{b}}&f_{k}t_{\text{bc}}&t_{\text{bd}}&0&0\\ t_{\text{ac}}&f_{k}^{*}t_{\text{bc}}&\varepsilon_{\text{c}}&f_{k}^{*}t_{\text{cd}}&t_{\text{ce}}&0\\ 0&t_{\text{bd}}&f_{k}t_{\text{cd}}&\varepsilon_{\text{d}}&f_{k}t_{\text{de}}&0\\ t_{\text{ae}}&0&t_{\text{ce}}&f_{k}^{*}t_{\text{de}}&\varepsilon_{\text{e}}&0\\ t^{\prime}&0&0&0&0&\varepsilon^{\prime}\\ \end{array}\right) \tag{5}\] where \(f_{k}\equiv 1+\exp\left[\frac{ia_{0}k_{x}}{2}\right]\). The band structure and electronic dispersion of pristine \(\beta_{12}\)-borophene can be obtained by diagonalizing the Hamiltonian. For non-trivial solutions, we can write \[\det\left[h(k)-E_{k}\hat{I}\right]=0,\quad\lambda\in\{1,\dots,5\} \tag{6}\] where \(E_{k}\) (with band index \(\lambda\)) are the energy eigenvalues of \([h(k)]\). For pristine \(\beta_{12}\)-borophene the determinant can be expanded as \[E_{k}^{5}+\mathcal{Q}_{4}E_{k}^{4}+\mathcal{Q}_{3}E_{k}^{3}+\mathcal{Q}_{2}E_{k}^{2}+\mathcal{Q}_{1}E_{k}+\mathcal{R}=0 \tag{7}\] where \(\mathcal{Q}_{1}\), \(\mathcal{Q}_{2}\), \(\mathcal{Q}_{3}\), \(\mathcal{Q}_{4}\) and \(\mathcal{R}\) are \(k\)-dependent coefficients. An extensive form of the same is displayed in Appendix A. The energy bands and the Brillouin zone of \(\beta_{12}\)-borophene can be obtained by solving the dispersion relation Eq. (7). Fig. 2 presents the band structure of \(\beta_{12}\)-borophene in both the homogeneous and the INS model. It is evident from Fig. 2 that \(\beta_{12}\)-borophene has three conduction and two valence bands, i.e., five bands in total. It is to be noted that the band edges touch at the high-symmetry \(K^{\prime}\) and \(K\) points with coordinates \((-2\pi/3a_{0},0)\) and \((2\pi/3a_{0},0)\), indicating the presence of Dirac fermions at \(E_{k}=0\). Also, triplet fermions appear at the X and M points at \(E_{k}\neq 0\). Moreover, a direct energy gap of \(\sim 0.25\) eV exists at the \(K^{\prime}\) and \(K\) points for the INS model of \(\beta_{12}\)-borophene. Figure 2: Band structure of \(\beta_{12}\)-borophene using (a) the homogeneous and (b) the INS model. The plot (c) represents the band structure of \(\beta_{12}\)-borophene in the presence of NH\({}_{3}\) using the INS model. To understand the molecular gas adsorption effect on the borophene energy bands, we have plotted the band structure of \(\beta_{12}\)-borophene in the presence of NH\({}_{3}\) gas in Fig. 2(c). It is evident that the band gap at the \(K^{\prime}\), \(K\) and \(X\) points increases in the presence of the adsorbed NH\({}_{3}\) molecule. Moreover, it is to be noted that the widening of the band gap depends on the hopping energy \(t^{\prime}\) of the boron atom with the adsorbed gas molecule. The hopping energies can be calculated by using the relation [49] \[t_{xy}=t\left(\frac{a_{0}}{d_{xy}}\right) \tag{8}\] where \(t_{xy}\) and \(d_{xy}\) are the hopping energy and the distance between the \(\beta_{12}\)-borophene surface and the gas molecule, respectively. 
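For readers who want to reproduce the dispersion numerically, the following short Python sketch assembles \(h(k)\) from Eq. (5) and diagonalizes it over a one-dimensional cut through the Brillouin zone. It is our own illustration rather than the authors' code; the sweep direction, the k-range and the choice of the NH\({}_{3}\) adsorbate parameters (taken from Table 1 below) are assumptions made only for the example.

```python
# Minimal numerical sketch (our illustration, not the authors' code): build the
# 6x6 Bloch Hamiltonian of Eq. (5) with the INS parameters quoted above and the
# NH3 adsorbate values of Table 1 below, then diagonalize it along k.
# Setting t_prime = 0 and eps_prime = 0 recovers the pristine 5x5 spectrum
# (the sixth, decoupled level then sits at E = 0).
import numpy as np

a0 = 1.74  # boron-boron distance (Angstrom)

# INS hopping (eV) and on-site (eV) parameters from the text
t_ab = t_de = -2.04
t_ac = t_ce = -1.79
t_bc = t_cd = -1.84
t_bd = -1.91
t_ae = -2.12
eps_a, eps_b, eps_c, eps_d, eps_e = 0.196, -0.058, -0.845, 0.196, -0.058

# adsorbate parameters (here NH3); both values are taken from Table 1 below
t_prime, eps_prime = -2.113, 1.11

def h_k(k):
    """6x6 Bloch Hamiltonian of Eq. (5), with f_k = 1 + exp(i a0 k / 2)."""
    f = 1.0 + np.exp(1j * a0 * k / 2.0)
    fc = np.conj(f)
    return np.array([
        [eps_a,    fc * t_ab, t_ac,      0.0,       t_ae,     t_prime],
        [f * t_ab, eps_b,     f * t_bc,  t_bd,      0.0,      0.0],
        [t_ac,     fc * t_bc, eps_c,     fc * t_cd, t_ce,     0.0],
        [0.0,      t_bd,      f * t_cd,  eps_d,     f * t_de, 0.0],
        [t_ae,     0.0,       t_ce,      fc * t_de, eps_e,    0.0],
        [t_prime,  0.0,       0.0,       0.0,       0.0,      eps_prime],
    ], dtype=complex)

ks = np.linspace(-2 * np.pi / (3 * a0), 2 * np.pi / (3 * a0), 400)
bands = np.array([np.linalg.eigvalsh(h_k(k)) for k in ks])  # shape (400, 6)
print("band range (eV):", bands.min(), "to", bands.max())
```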
The density of states (DOS) has an energy dependence which signifies the number of allowed states per unit area in a given energy range between \(E\) and \(E+dE\), and can be obtained by using the relation [48] \[D(E)=\frac{\Delta n}{A\Delta E} \tag{9}\] where \(A\) is the area of the \(\beta_{12}\)-borophene surface and \(n\) is the carrier concentration of the electrons. Although we obtain an analytic expression for the DOS using Eq. (7) and Eq. (9), in the presence of adsorbed gas molecules it is too lengthy to reproduce here. The carrier concentration in the \(\beta_{12}\)-borophene based gas sensor can be obtained by using the standard relation [48] \[n(E)=\int_{0}^{\infty}D(E)f(E)dE \tag{10}\] where \(f(E)=\frac{1}{1+\exp\left(\frac{E-E_{\text{F}}}{k_{\text{B}}T}\right)}\) is the Fermi-Dirac distribution function, \(E_{\text{F}}\) is the Fermi energy and \(k_{\text{B}}\) is the Boltzmann constant. ## III Results and discussions ### Density of states The density of states (DOS) of \(\beta_{12}\)-borophene in the presence of adsorbed gas molecules is studied using Eq. (9) in Fig. 3. The hopping parameters for the different gases are calculated using Eq. (8) and are given in Tab. 1.

Table 1: Hopping parameters for gas molecules adsorbed on the \(\beta_{12}\)-borophene surface.

| Adsorbed molecule | Adsorbate distance from borophene surface \(d_{xy}\) (Å) | Hopping energy \(t_{xy}\) (eV) | On-site energy \(\varepsilon^{\prime}\) (eV) |
| --- | --- | --- | --- |
| NH\({}_{3}\) | 1.63 | -2.113 | 1.11 |
| NO | 1.38 | -2.496 | 0.95 |
| NO\({}_{2}\) | 1.57 | -2.194 | 1.75 |
| CO | 1.48 | -2.167 | 1.19 |

We observe a significant change in the DOS spectra due to the adsorption of the different gases. It is to be noted that the peak positions in the DOS spectra correspond to the flatter band energies in the \(E-k\) diagram. Moreover, each extremum point can also be visualized as a Van-Hove singularity (VHS) in the DOS. At the same time, the smooth zones in the band structure depict the presence of localized electrons. As observed earlier from Fig. 2(c), the presence of an adsorbed NH\({}_{3}\) molecule drastically changes the band structure, resulting in a significant change in the DOS as well. The change in the peak positions in the DOS indicates the highly sensitive gas sensing performance of \(\beta_{12}\)-borophene. A similar characteristic is also observed in the DOS of \(\beta_{12}\)-borophene in the presence of the other adsorbed gases, as depicted in Fig. 3. We consider the INS model for all our analysis, and the effect of the adsorbed gas molecule on the DOS can be understood via the hopping energy parameter \(t_{xy}\) from Eq. (8). It is to be noted that a larger number of bands in the \(E-k\) diagram indicates the presence of more degenerate bands in the DOS spectra. A similar characteristic in the DOS is also found for \(\beta_{12}\)-borophene in the presence of the NO molecule. However, in this case, the VHS at \(E\sim 6.5\) eV is shifted towards the higher energy region, as seen from Fig. 3(b). The DOS is nevertheless found to be nearly identical to that of pristine \(\beta_{12}\)-borophene in the \(E<0\) region. The adsorption of the NO\({}_{2}\) molecule significantly changes the band structure, resulting in a significant change in the DOS. In this case, the VHS at \(E\sim 3.5\) eV disappears completely, as seen in Fig. 3(c). The DOS for \(\beta_{12}\)-borophene in the presence of the CO molecule is similar to that for \(\beta_{12}\)-borophene with an adsorbed NO molecule. However, the peak positions are shifted towards higher energy for the adsorption of the CO molecule, as seen from Fig. 3(d). Figure 3: DOS for \(\beta_{12}\)-borophene using the INS model in absence and in presence of (a) NH\({}_{3}\), (b) NO, (c) NO\({}_{2}\) and (d) CO gases. 
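Continuing the band-structure sketch above (again our own illustration, not the paper's code), the DOS of Eq. (9) can be approximated by histogramming the eigenvalues over the sampled k points, and the carrier concentration then follows from Eq. (10). The normalization area, temperature and Fermi-level position are assumptions chosen only for the example; the snippet reuses `bands`, `ks` and `a0` from the previous block.

```python
# DOS from a histogram of the tight-binding eigenvalues (Eq. 9) and
# carrier concentration from Eq. (10). Continues the previous snippet.
kB_T = 0.0259                      # eV, room temperature (assumed)
E_F = 0.0                          # Fermi level placed at the Dirac point (assumed)

counts, edges = np.histogram(bands.ravel(), bins=400)
dE = edges[1] - edges[0]
E_mid = 0.5 * (edges[:-1] + edges[1:])
A = len(ks) * (np.sqrt(3) * a0) * (3 * a0)   # illustrative area: sampled cells x unit-cell area
dos = counts / (A * dE)                      # D(E), states per area per energy

x = np.clip((E_mid - E_F) / kB_T, -60.0, 60.0)   # clip to avoid overflow in exp
fermi = 1.0 / (1.0 + np.exp(x))
n = np.sum(dos * fermi) * dE                 # Eq. (10), simple Riemann sum
print(f"carrier concentration n = {n:.3e} (per Angstrom^2 in this toy normalization)")
```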
### Carrier Concentration and Quantum Capacitance To obtain an expression for the quantum capacitance, we consider that the device is in quasi-equilibrium and that the carrier distribution is shifted by the local electrostatic potential. Then the charge density \(Q\) of the electrons can be written as [48] \[Q=q\int_{0}^{\infty}D(E)f\left(E+E_{\text{g}}+qV_{\text{a}}\right)dE \tag{11}\] where \(q\) is the magnitude of the electronic charge, \(V_{\text{a}}\) is the local electrostatic potential and \(E_{\text{g}}\) is the band gap. The quantum capacitance \(C_{\text{Q}}\) of the system is defined as [43; 44; 45] \[C_{\text{Q}}=\frac{\partial Q}{\partial V_{\text{a}}}=\frac{q^{2}}{4k_{\text{B}}T}\int_{0}^{\infty}D(E)\,\text{sech}^{2}\left(\frac{E+E_{\text{g}}+qV_{\text{a}}}{2k_{\text{B}}T}\right)dE \tag{12}\] In Fig. 4, we plot the quantum capacitance (\(C_{\text{Q}}\)) of the \(\beta_{12}\)-borophene FET against the gate-source voltage (\(V_{\text{gs}}\)) for the various adsorbed gas molecules, considering their different hopping energy parameters. It is observed that at zero gate voltage \(C_{\text{Q}}\) is minimum, while it increases gradually with an increase or decrease in \(V_{\text{gs}}\). The charge transfer from the adsorbed molecules modifies the DOS of \(\beta_{12}\)-borophene, resulting in a change in the charge carrier concentration. This change in charge carriers in turn can change the \(C_{\text{Q}}\) between the gate electrode and the \(\beta_{12}\)-borophene surface. It is seen from Fig. 4(b) that \(C_{\text{Q}}\) slightly increases when the adsorbed molecule is NH\({}_{3}\), while it decreases for NO, NO\({}_{2}\) and CO. This change in \(C_{\text{Q}}\) will result in variations in the I-V characteristics of \(\beta_{12}\)-borophene. ### I-V characteristics The I-V characteristics of the \(\beta_{12}\)-borophene FET-based sensor can be understood from the quantum capacitance using the expression [43; 44; 45] \[I=\mu C_{\text{Q}}V_{\text{ds}}E \tag{13}\] where \(\mu\) is the mobility of the electrons and \(V_{\text{ds}}\) is the drain-source voltage. 
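A short numerical sketch of Eqs. (12) and (13), again our own illustration: it reuses `E_mid` and `dos` from the DOS snippet above, and the charge, mobility, field and bias values are placeholders rather than device parameters extracted from the paper.

```python
# Quantum capacitance (Eq. 12) and drain current (Eq. 13) from a tabulated D(E).
# The sech^2 window is the derivative of the Fermi function with respect to
# the local potential. Continues the previous snippets.
def quantum_capacitance(E_mid, dos, V_a, E_g=0.0, q=1.0, kB_T=0.0259):
    dE = E_mid[1] - E_mid[0]                      # uniform grid from the histogram
    window = 1.0 / np.cosh((E_mid + E_g + q * V_a) / (2.0 * kB_T)) ** 2
    return q**2 / (4.0 * kB_T) * np.sum(dos * window) * dE

def drain_current(C_Q, V_ds, mu=100.0, E_field=1.0):
    return mu * C_Q * V_ds * E_field              # Eq. (13)

for V_gs in (-0.4, 0.0, 0.4):
    C_Q = quantum_capacitance(E_mid, dos, V_a=V_gs)
    print(f"V_gs = {V_gs:+.1f} V:  C_Q = {C_Q:.3e},  I = {drain_current(C_Q, V_ds=0.2):.3e}")
```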
The FET-based gas sensors respond to gas adsorption by adjusting the gate bias, which controls the carrier concentration in the channel, thus tuning the amount of charge carriers exchanged with the adsorbed gas molecules. In order to confirm the influence of the carrier concentration on the gas response, the response curves of the FET-based sensors with \(V_{\text{ds}}\) are shown in Fig. 5. It is clearly seen that the NO response increases with the increase of \(V_{\text{ds}}\). The I-V characteristics are calculated for pristine \(\beta_{12}\)-borophene and in the presence of adsorbed gas molecules using Eq. (13), and the results are shown in Fig. 5. They reflect the effect of the molecular adsorption of NH\({}_{3}\), NO, NO\({}_{2}\) and CO on the quantum capacitance of borophene. The sensitivities of the proposed FET-based sensor for NH\({}_{3}\), NO, NO\({}_{2}\) and CO increase with the increase in \(V_{\text{ds}}\). Among the four gas molecules, the sensitivity to NO is found to be the highest for a given \(V_{\text{ds}}\), showing better sensing performance. A change in current is observed after the adsorption of the gas molecules. The adsorption of the gas molecules on the \(\beta_{12}\)-borophene surface could also change the energy bandgap of \(\beta_{12}\)-borophene, which in turn would change the conductivity of the \(\beta_{12}\)-borophene sensor. On the other hand, the modulation of the concentration of the charge carriers on the borophene surface occurs due to the charge transfer between borophene and the gas molecules. Figure 4: (a) Variation of the carrier concentration with \(V_{\text{ds}}\) of \(\beta_{12}\)-borophene in absence and in presence of different adsorbed gases. (b) Quantum capacitance of \(\beta_{12}\)-borophene as a function of \(V_{\text{gs}}\) in absence and in presence of adsorbed gas molecules. Figure 5: I-V characteristics of the \(\beta_{12}\)-borophene FET in absence and in presence of different adsorbed gas molecules. ## IV Conclusions In this work, we have studied the effects of molecular adsorption on the physical and electrical properties of \(\beta_{12}\)-borophene. We propose a theoretical model based on the tight-binding technique to study the impact of various adsorbed gases on the carrier concentration, quantum capacitance and I-V characteristics of the FET-based \(\beta_{12}\)-borophene gas sensor. The study reveals the variation of the borophene energy band gap near the Fermi energy influenced by the adsorption of gas molecules. Furthermore, we studied the density of states in the presence of different gases through the carrier concentration and developed the quantum capacitance model of the FET-based \(\beta_{12}\)-borophene gas sensor. The existence and shifting of the Van-Hove singularities at different energies in the presence of gas molecules indicate the sensing ability of \(\beta_{12}\)-borophene. The study was mainly focused on the adsorption of NH\({}_{3}\), NO\({}_{2}\), NO and CO gas molecules, and their effects on the quantum capacitance and the I-V characteristics of the borophene sensor were investigated. The present study specifies a significant variation in the band gap and quantum capacitance after gas adsorption, contributing to a change in the conductance, carrier concentration and current in the borophene gas sensor. This work paves the way for future studies of borophene as a gas sensor due to its attractive gas sensing properties. ## Appendix A Extended form of the \(\mathcal{Q}\)'s and \(\mathcal{R}\) In this section we present an extended form of the \(\mathcal{Q}\)'s and \(\mathcal{R}\) appearing in Eq. (7). 
For pristine \(\beta_{12}\)-borophene, i.e., for \(t^{\prime}=0\) and \(\varepsilon^{\prime}=0\), the coefficients \(\mathcal{Q}_{i}\) and \(\mathcal{R}\) of the characteristic polynomial in Eq. (7) are, up to sign, the sums of the principal minors of the \(5\times 5\) Bloch Hamiltonian \(h(k)\). The two lowest-order coefficients read \[\mathcal{Q}_{4}=-\left(\varepsilon_{\text{a}}+\varepsilon_{\text{b}}+\varepsilon_{\text{c}}+\varepsilon_{\text{d}}+\varepsilon_{\text{e}}\right) \tag{A1}\] \[\mathcal{Q}_{3}=-4\cos^{2}\left(\frac{ka}{4}\right)\left(t_{\text{ab}}^{2}+t_{\text{bc}}^{2}+t_{\text{cd}}^{2}+t_{\text{de}}^{2}\right)-t_{\text{ac}}^{2}-t_{\text{ae}}^{2}-t_{\text{bd}}^{2}-t_{\text{ce}}^{2}+\sum_{i<j}\varepsilon_{i}\varepsilon_{j} \tag{A2}\] with \(i,j\in\{\text{a},\text{b},\text{c},\text{d},\text{e}\}\). The next coefficient is \[\mathcal{Q}_{2}=4\cos^{2}\left(\frac{ka}{4}\right)\Big[t_{\text{ab}}^{2}(\varepsilon_{\text{c}}+\varepsilon_{\text{d}}+\varepsilon_{\text{e}})+t_{\text{bc}}^{2}(\varepsilon_{\text{a}}+\varepsilon_{\text{d}}+\varepsilon_{\text{e}})+t_{\text{cd}}^{2}(\varepsilon_{\text{a}}+\varepsilon_{\text{b}}+\varepsilon_{\text{e}})+t_{\text{de}}^{2}(\varepsilon_{\text{a}}+\varepsilon_{\text{b}}+\varepsilon_{\text{c}})-2\left(t_{\text{ab}}t_{\text{ac}}t_{\text{bc}}+t_{\text{bc}}t_{\text{bd}}t_{\text{cd}}+t_{\text{cd}}t_{\text{ce}}t_{\text{de}}\right)\Big]+t_{\text{ac}}^{2}(\varepsilon_{\text{b}}+\varepsilon_{\text{d}}+\varepsilon_{\text{e}})+t_{\text{ae}}^{2}(\varepsilon_{\text{b}}+\varepsilon_{\text{c}}+\varepsilon_{\text{d}})+t_{\text{bd}}^{2}(\varepsilon_{\text{a}}+\varepsilon_{\text{c}}+\varepsilon_{\text{e}})+t_{\text{ce}}^{2}(\varepsilon_{\text{a}}+\varepsilon_{\text{b}}+\varepsilon_{\text{d}})-2t_{\text{ac}}t_{\text{ae}}t_{\text{ce}}-\sum_{i<j<k}\varepsilon_{i}\varepsilon_{j}\varepsilon_{k} \tag{A3}\] The remaining coefficients follow in the same way: \(\mathcal{Q}_{1}\) is the sum of the \(4\times 4\) principal minors of \(h(k)\) and \(\mathcal{R}=-\det[h(k)]\); their fully expanded forms are lengthy and are not reproduced here.
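As a quick numerical cross-check of this structure (our own sketch, not part of the paper), one can verify for any Hermitian \(5\times 5\) matrix that the \(E^{4}\) and \(E^{3}\) coefficients of its characteristic polynomial equal \(-\mathrm{tr}\,h\) and the sum of the \(2\times 2\) principal minors, which is exactly what \(\mathcal{Q}_{4}\) and \(\mathcal{Q}_{3}\) above expand to:

```python
# Self-contained sanity check of the characteristic-polynomial structure behind
# Eqs. (A1)-(A3): a generic Hermitian 5x5 matrix stands in for h(k).
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
h = (A + A.conj().T) / 2                     # Hermitian stand-in for h(k)

coeffs = np.poly(h)                          # [1, Q4, Q3, Q2, Q1, R]
Q4 = -np.trace(h)                            # Eq. (A1) pattern
Q3 = sum(np.linalg.det(h[np.ix_(p, p)])      # sum of 2x2 principal minors, Eq. (A2) pattern
         for p in combinations(range(5), 2))
print(np.allclose(coeffs[1], Q4), np.allclose(coeffs[2], Q3))
```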
2309.13281
Automatic Reverse Engineering: Creating computer-aided design (CAD) models from multi-view images
Generation of computer-aided design (CAD) models from multi-view images may be useful in many practical applications. To date, this problem is usually solved with an intermediate point-cloud reconstruction and involves manual work to create the final CAD models. In this contribution, we present a novel network for an automated reverse engineering task. Our network architecture combines three distinct stages: A convolutional neural network as the encoder stage, a multi-view pooling stage and a transformer-based CAD sequence generator. The model is trained and evaluated on a large number of simulated input images and extensive optimization of model architectures and hyper-parameters is performed. A proof-of-concept is demonstrated by successfully reconstructing a number of valid CAD models from simulated test image data. Various accuracy metrics are calculated and compared to a state-of-the-art point-based network. Finally, a real world test is conducted supplying the network with actual photographs of two three-dimensional test objects. It is shown that some of the capabilities of our network can be transferred to this domain, even though the training exclusively incorporates purely synthetic training data. However to date, the feasible model complexity is still limited to basic shapes.
Henrik Jobczyk, Hanno Homann
2023-09-23T06:42:09Z
http://arxiv.org/abs/2309.13281v1
# Automatic Reverse Engineering: Creating computer-aided design (CAD) models from multi-view images ###### Abstract Generation of computer-aided design (CAD) models from multi-view images may be useful in many practical applications. To date, this problem is usually solved with an intermediate point-cloud reconstruction and involves manual work to create the final CAD models. In this contribution, we present a novel network for an automated reverse engineering task. Our network architecture combines three distinct stages: A convolutional neural network as the encoder stage, a multi-view pooling stage and a transformer-based CAD sequence generator. The model is trained and evaluated on a large number of simulated input images and extensive optimization of model architectures and hyperparameters is performed. A proof-of-concept is demonstrated by successfully reconstructing a number of valid CAD models from simulated test image data. Various accuracy metrics are calculated and compared to a state-of-the-art point-based network. Finally, a real world test is conducted supplying the network with actual photographs of two three-dimensional test objects. It is shown that some of the capabilities of our network can be transferred to this domain, even though the training exclusively incorporates purely synthetic training data. However to date, the feasible model complexity is still limited to basic shapes. Keywords: computer-aided design (CAD), multi-view reconstruction, encoder-decoder network. ## 1 Introduction Ever since the invention of 3D-printing in the middle of the 20th century, it has stimulated the imagination of laypersons and engineers alike. Nowadays this technology is an integral part of the product development cycle in many industries and its application often goes beyond the production of mere prototypes. Even though online 3D printing services increase availability at affordable prices, their use in everyday life is not straightforward. This work focuses on the central problem of 3D-printing: The generation of digital 3D objects is a skill requiring specialized technical expertise and training, posing a significant barrier for consumer adoption. To give a practical example, a simple mechanical part within a bigger and more expensive appliance such as a washing machine or dryer fails and renders the device unusable. The point of failure is identified but the manufacturer cannot offer a spare part. If the user could simply take a few photos using a smartphone camera and have a computer-aided design (CAD) model created automatically by software, the problem could be solved in a short time at minimal financial and environmental cost. This work proposes an end-to-end solution for this reverse engineering problem, which is to our knowledge the first of its kind. Our network architecture is illustrated in Figure 1 and will be described in detail further below after revisiting the state-of-the-art. For proof-of-concept, our model was trained on a large number of renderings from simulated CAD objects. Our results indicate that the image-based approach may outperform a current point-based method. Finally, two real world objects were photographed and reconstructed. Our main contributions are: (1) We present the first end-to-end model to generate CAD sequences from multi-view images, (2) comparison of two different multi-view fusion strategies, and (3) initial results on real-world photos. 
Figure 1: ARE-Net architecture: Input images taken from multiple view angles are fed into an encoder-decoder network to generate a CAD sequence file. Multi-view fusion is facilitated (a) using a fully-connected network (FCN) or (b) using a gated recurrent unit (GRU) to allow varying numbers of input images. The decoder part of the DeepCAD auto-encoder is employed as the generative decoder. ## 2 Related work ### Traditional photogrammetry approaches to reconstructing CAD models Photogrammetry is frequently deployed as an image-based technique to measure three-dimensional shapes using inexpensive cameras. The most common monocular approaches are based on the Structure from Motion (SfM) method first described in [35]. Here, the software is provided with several images from different perspectives and then computes a point-cloud of the object of interest. Automatically extracting a CAD model from a point-cloud is however not straightforward. For example, the professional AutoCAD software can import but not post-process point clouds as of today [3]. Thus far, CAD model creation mostly remains a manual task. Kim et al. [15] proposed 3D registration of a given CAD model using the iterative closest point (ICP) method. Budroni et al. [5] have demonstrated the fitting of planar surfaces to point clouds for reconstructing 3D models of interior rooms. More recently, Lui [19] proposed automatic reverse-engineering of CAD models from point clouds by iteratively fitting primitive models based on the RANSAC algorithm. In conclusion, there are only a few existing approaches, and they are domain-specific. Instead, a neural-network based approach might generalize better in the long term. ### Learning-based object reconstruction Detection of 3D objects from multiple view perspectives has been addressed by Rukhovich et al. [33]. Similar to [39], they used a fully convolutional network. Notably, the number of monocular images in their multi-view input can vary from inference to inference, offering high versatility. This is achieved by extracting features with a conventional Convolutional Neural Network (CNN), followed by pooling and back-projecting into a 3D volumetric space. In this space, bounding boxes are predicted by three-dimensional convolutions. For 3D surface reconstruction, deep learning models have been suggested for different kinds of object representations, including point clouds [1, 12, 8, 48, 49, 6, 21], triangle meshes [38, 10, 23, 26], voxel grids [18, 42, 47], cubic blocks [46], parametric surfaces [34, 40, 13, 16, 17, 45], and signed distance fields (SDFs) [28, 14]. The majority of the studies above (e.g. [1, 26, 28, 14]) use auto-encoders, with a feature bottleneck between an encoder and a decoder stage. This network architecture also simplifies training by separating the two stages. To date, discrete CAD models have not been investigated for 3D surface representation. ### Multi-view convolutional networks A 3D multi-view CNN (MVCNN) for object classification was introduced by Su et al. [36]. They provide a certain number of images taken of the object as input to a common CNN and pool the extracted features using an element-wise maximum operation. The pooled information is then processed by a second CNN and a final prediction is made. Notably, they conclude that inputting 12 evenly spaced perspectives offers the best trade-off between prediction accuracy and memory as well as time resources. Their concept has been studied for classifying 3D shapes from point clouds [22, 31]. 
In general, working with MVCNNs seems to be a viable approach for extracting information from 3D scenes. Leal et al. [37] compared different 3D shape classifiers, identifying MVCNNs as superior to other methods due to better generalizability and outperforming several point-based and voxel-based approaches. Consequently, this approach will be followed in this work. ### Recurrent convolutional networks While MVCNNs showed good results for classification tasks, the simple pooling methods (e.g. element-wise max-pooling [36]) might allow a single view to overrule all other views. Geometric information not visible in some images might be lost for a 3D reconstruction task. Hence, we alternatively consider Recurrent CNNs as a more information-preserving extractor. Zreik et al. [50] used a Recurrent Neural Network (RNN) for spatial aggregation of extracted information from 3D angiography images after pre-processing by a 3D-CNN. Liu et al. [20] combined a traditional 2D CNN backbone and an RNN to synthesize multi-view features for a prediction of plant classes and conditions. After extensive experiments, they conclude that a combination of MobileNet as a backbone and a Gated Recurrent Unit (GRU) delivers the best trade-off of classification accuracy and computational overhead. Hence, GRUs will be evaluated in this study for multi-view pooling. ### Generation of CAD representations Even though most methods described above generate three-dimensional data, none of them directly attempts CAD file generation by means of generating a construction sequence comparable to manual CAD design. Thus their resulting models cannot easily be modified by an average user. However, recent research has started to address the direct generation of parametric 2D CAD models: Willis et al. [41] first proposed generative models for CAD sketches, producing curve primitives and explicitly considering topology. SketchGen [27] generates CAD sketches in a graph representation, with nodes representing shape primitives and edges embodying the constraints. Similarly, Ganin et al. [9] utilized off-the-shelf data serialization protocols to embed construction sequences parsed from the online CAD editor Onshape [24]. DeepCAD by Wu et al. [44] was the first approach going beyond the 2D domain of CAD sketch generation. They formulated CAD modeling as a generation of command sequences, specifically tailored as an input to a transformer-based auto-encoder. The publicly available Onshape API was used to build a large dataset of 3D object models for training. Each object is represented by a CAD sequence, consisting of three common types of commands: (1) Creation of a closed curve profile ("sketch") on a 2D plane, (2) 3D extrusions of such sketches and (3) boolean operations between the resulting 3D objects. Each of the CAD commands supports a number of parameters, which may be a mixture of continuous and discrete values. To conform with their neural network, Wu et al. sort each command's parameters into a generalized parameter vector and all continuous parameters are quantized to 8 bits. The maximum number of commands in a given CAD construction sequence was limited to 60, corresponding to the longest sequence length in the dataset. These CAD sequences are processed by an auto-encoder, trained to compress a given CAD model into a latent vector (dimension of 256) and then to reconstruct the original model from that embedding. This means that a random but valid CAD object can be constructed from a given 256-dimensional latent vector. 
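To make the command-sequence representation more tangible, here is a small, self-contained Python sketch of the idea. It is our own illustration and not the DeepCAD code; the command names, the parameter-vector length and the quantization range are assumptions chosen for the example.

```python
# Illustrative CAD command sequence with 8-bit parameter quantization
# (hypothetical command set and vector length, not the DeepCAD definitions).
from dataclasses import dataclass
from typing import List

COMMANDS = ["SOL", "LINE", "ARC", "CIRCLE", "EXTRUDE", "EOS"]   # assumed subset
N_PARAMS = 16                                                   # assumed fixed length

def quantize(x: float, lo: float = -1.0, hi: float = 1.0, bits: int = 8) -> int:
    """Map a continuous parameter in [lo, hi] onto an integer in [0, 2^bits - 1]."""
    x = min(max((x - lo) / (hi - lo), 0.0), 1.0)
    return int(round(x * (2**bits - 1)))

@dataclass
class CADCommand:
    cmd: str              # one of COMMANDS
    params: List[int]     # quantized parameters, zero-padded to N_PARAMS

def line_to(x: float, y: float) -> CADCommand:
    return CADCommand("LINE", [quantize(x), quantize(y)] + [0] * (N_PARAMS - 2))

# A toy "sketch + extrude" sequence of the kind the generator is trained to emit.
sequence = [CADCommand("SOL", [0] * N_PARAMS),
            line_to(0.5, -0.25),
            CADCommand("EXTRUDE", [quantize(0.3)] + [0] * (N_PARAMS - 1)),
            CADCommand("EOS", [0] * N_PARAMS)]
print([(c.cmd, c.params[:2]) for c in sequence])
```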
In this work, we chose the decoder part of DeepCAD as the generative stage of our new model, as introduced next. ## 3 Methods ### Network architecture We introduce a novel network architecture for end-to-end generation of CAD models from multiple input images. The network is composed of three stages: (1) a CNN encoder backbone to extract information from each input image individually, (2) a pooling network that aggregates this information into a common latent vector, and (3) a generative decoder network constructing the output CAD sequences. This network structure is illustrated in Figure 1. Considering its successful track record in object detection and classification as well as its small size, we chose the residual network architecture (ResNet) [11] as our encoder backbone. As the visual complexity of our input images is relatively low, we assumed that a smaller, more shallow variant of the network should suffice. Thus only its smallest two variants were evaluated, namely ResNet-18 and ResNet-34. The input image size is adjustable by means of ResNet's adaptive average pooling layer. In this work, we used 128x128 monochrome as well as 224x224 RGB input images. The output of the last fully connected layer, a vector of fixed length 512, is fed into the pooling network. All input views are processed by the backbone network individually but share the same parameters. The task of the multi-view pooling stage is to combine the information from multiple views. We evaluated two different network architectures during the experiments: (a) a simple feed-forward fully connected network (FCN) as a baseline model and (b) a gated recurrent unit (GRU). Following [7] and [20], we assume that a recurrent pooling approach should perform favorably, even though its training is inherently more challenging [29] because of the possible vanishing and exploding gradient problems. The FCN pooling network concatenates the outputs of all backbone CNNs and propagates them through a number of layers (1 to 6 layers were evaluated) of linearly decreasing size with a final layer size of 256. This forms the latent vector compatible with the subsequent DeepCAD decoder network. Unlike the FCN pooling, which processes all input views simultaneously, the alternative GRU pooling receives the input views from the backbone CNN sequentially, one after the other. This makes it more suitable for varying numbers of images. For evaluation of the GRU pooling stage, we tested different numbers of layers (1 to 8) of identical dimension, different temporal pooling strategies (mean, max, last) and different layer dimensions (64, 128, 256, 512, 1024, 2048). A single fully connected layer is used to achieve the latent vector size of 256. Both pooling network variants use rectified linear units (ReLU) as their non-linear activation function in all layers except the last. The final layer generates the latent vector. Here the hyperbolic tangent function (\(tanh\)) is utilized as it provides output in the range \([-1,1]\) as required for the DeepCAD decoder network. The final stage of the ARE-Net is formed by the decoder from the DeepCAD library [43], which generates CAD construction sequences from the 256-dimensional latent vector. ### Two-stage training Training was performed in two stages: First, the full DeepCAD auto-encoder was pre-trained as described in [44]. After this training, the final latent vector of each CAD object from the training set was saved. Second, simulated image views were rendered from the ground truth CAD sequences and used to train our backbone and multi-view pooling networks. As the loss function, we used the mean-squared error between the predicted latent vectors of the simulated images and the ground-truth latent vectors from the first training stage. We employed the ADAM optimizer, using 10 epochs during hyper-parameter optimization and 140 epochs for the final model. 
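The following PyTorch sketch summarizes how such an encoder stage can be wired together. It is our own minimal illustration, not the authors' implementation: the hidden size, the max-over-views pooling, the input resolution and the random stand-in for the pre-trained DeepCAD latent vectors are assumptions; only the overall structure (shared ResNet backbone, GRU pooling, tanh-bounded 256-dimensional latent, MSE loss against the pre-trained latent) follows the description above.

```python
# Minimal multi-view encoder sketch: shared ResNet-18 backbone, GRU pooling over
# views, tanh head producing a 256-d latent, trained with MSE against the
# pre-trained DeepCAD latent vectors (here replaced by random stand-ins).
import torch
import torch.nn as nn
from torchvision.models import resnet18

class MultiViewEncoder(nn.Module):
    def __init__(self, latent_dim=256, hidden=1024):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()           # keep the 512-d pooled feature
        self.backbone = backbone
        self.gru = nn.GRU(512, hidden, num_layers=1, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, latent_dim), nn.Tanh())

    def forward(self, views):                 # views: (B, N_views, 3, H, W)
        b, n = views.shape[:2]
        feats = self.backbone(views.flatten(0, 1)).view(b, n, 512)
        out, _ = self.gru(feats)              # (B, N_views, hidden)
        pooled = out.max(dim=1).values        # "max" temporal pooling over views
        return self.head(pooled)              # (B, 256), bounded to [-1, 1]

model = MultiViewEncoder()
views = torch.randn(2, 10, 3, 224, 224)       # two objects, ten views each
target_latent = torch.rand(2, 256) * 2 - 1    # stand-in for DeepCAD latents
loss = nn.functional.mse_loss(model(views), target_latent)
loss.backward()
```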
## 4 Experimental setup ### Training data Training images were generated from the DeepCAD dataset consisting of 178,238 CAD models. From each CAD sequence, a 3D mesh object and two different projection datasets were generated: (1) A simple dataset of 128x128 grayscale images from 10 fixed and evenly spaced view angles, as shown in Figure 2. (2) A complex dataset of 256x256 RGB images with random but uniform object color from 24 randomly spaced viewing angles. In the second dataset, the photogrammetry ground-plane from [4] was used as a base on which each model rests. It is composed of non-repeating patterns and is used as a turntable for real objects during the final real world test. The intention is to provide the model with additional information on orientation and scale of the objects, otherwise lost due to the random viewing angles. For training on the simple dataset, all 10 images were used. When training on the complex dataset, a random selection of 5 to 20 images was chosen. To allow an unbiased comparison of our network to former work by the DeepCAD researchers, the same training-, validation- and testing-split (90%-5%-5%) used in [44] was applied. Figure 2: Example of training images from one CAD model: (top row) central view from four sides, (middle row) elevated view, (bottom row) top and bottom views. ### Hyper-parameter Optimization Our model contains several hyper-parameters requiring optimization. General parameters are the learning rate, drop-out ratio, weight decay and the number of ResNet backbone layers. The parameters of the two pooling networks are the number and dimensions of layers. For the GRU network, the temporal pooling strategy (mean, max, last) also needed investigation. In order to identify suitable hyper-parameters, i.e. network attributes and training parameters which remain constant during any given training run, an incremental experimentation procedure was followed. For hyper-parameter optimization, the Optuna library [25] was used. It allows for an efficient search through the high-dimensional search space and automatically records results and useful statistics. ### Accuracy metrics To compare the accuracy of the predicted CAD models, three different metrics were employed: The command accuracy \(ACC_{cmd}\) measures the agreement of the predicted CAD command type \(\hat{t}_{i}\) with the ground truth command type \(t_{i}\) for a CAD construction sequence of \(N_{c}\) steps: \[ACC_{cmd}=\frac{1}{N_{c}}\sum_{i=1}^{N_{c}}\left(t_{i}==\hat{t}_{i}\right) \tag{1}\] 
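As a small illustration of Eq. (1) (our own sketch, not the evaluation code of the paper), the command accuracy is simply the fraction of construction steps whose predicted command type matches the ground truth:

```python
# Command accuracy of Eq. (1) for two command-type sequences of equal length.
def acc_cmd(pred_cmds, gt_cmds):
    assert len(pred_cmds) == len(gt_cmds)
    return sum(p == t for p, t in zip(pred_cmds, gt_cmds)) / len(gt_cmds)

# Hypothetical four-step sequence: one of four command types is predicted wrongly.
print(acc_cmd(["SOL", "LINE", "EXTRUDE", "EOS"],
              ["SOL", "ARC",  "EXTRUDE", "EOS"]))   # -> 0.75
```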
While \(ACC_{cmd}\) measures the fraction of correct commands, the correctness of the continuous parameters of each command shall also be evaluated. The parameter accuracy \(ACC_{param}\) quantifies the agreement of a predicted 8-bit CAD parameter \(\hat{p}_{i,j}\) with its ground-truth counterpart \(p_{i,j}\). Only correctly predicted commands \(N_{c2}\leq N_{c}\) were evaluated and a threshold of \(\eta=3\) was used, as suggested in [44]: \[ACC_{param}=\frac{1}{N_{c2}}\sum_{i=1}^{N_{c2}}\sum_{j=1}^{|\hat{p}_{i}|}\left(|p_{i,j}-\hat{p}_{i,j}|\leq\eta\right) \tag{2}\] For geometric evaluation of the 3D model, the so-called Chamfer Distance \(CD\) was used [30, 2]. It computes the shortest distance of one point \(x\) on the surface \(S_{1}\) of the predicted object to the closest point \(y\) on the surface \(S_{2}\) of the ground-truth object. This is carried out in both directions. In this work, 2000 surface points were evaluated per model. \[CD=\frac{1}{S_{1}}\sum_{x\in S_{1}}\min_{y\in S_{2}}||x-y||_{2}^{2}+\frac{1}{S_{2}}\sum_{x\in S_{2}}\min_{y\in S_{1}}||y-x||_{2}^{2} \tag{3}\] ### Benchmark comparison As no other method generating CAD models is known to us, comparison is performed using the following two methods: (1) The original DeepCAD auto-encoder is fed with ground-truth CAD sequences to encode a latent vector, which is then decoded again. The predicted CAD sequence is then evaluated by the accuracy metrics described above. By using lossless input CAD sequences, this approach represents the ideally achievable results in our comparison and will be referred to as the "baseline". (2) For a more realistic comparison, the PointNet++ encoder [32] was evaluated as a state-of-the-art method. Point-clouds were sampled from the ground-truth 3D objects. The PointNet++ encoder was used to map the point-clouds into a latent vector, which was then processed by the DeepCAD decoder as proposed by [44]. ### Reconstruction from photographic images For an initial assessment of the performance of our method on real world images, two test objects were chosen: a cardboard box representing a very simple case and a camera mount as a more complex example. Both are intentionally of uniform color to match the simulated objects seen during training. The objects were placed on a paper version of the photogrammetry ground plane. Then 20 pictures from varying perspectives were taken by a smartphone camera while changing the inclination angle relative to the ground plane and rotating a turntable underneath the object. The image background behind the objects was then cropped away manually. All pictures were sized down to 224x224 pixels and passed into the Automatic Reverse Engineering Network (ARE-Net) with GRU pooling as trained on the simulated complex dataset. ## 5 Results The best performing hyper-parameters are summarized in Table 1. On the simple dataset the GRU with a shallow ResNet18 backbone had sufficient distinguishing power, whereas ResNet34 performed better for the simpler FCN network as well as for the GRU on the complex dataset. Three FC layers were optimal for FCN pooling, but more than one layer didn't increase performance of the GRU pooling stages. As for the GRU-specific parameters, slightly larger networks proved favorable for the complex dataset.

Table 1: Best hyper-parameters found by our optimization.

| | FCN (simple data) | GRU (simple data) | GRU (complex data) |
| --- | --- | --- | --- |
| Learning rate | \(1.3\cdot 10^{-4}\) | \(4.8\cdot 10^{-4}\) | \(1.5\cdot 10^{-4}\) |
| Drop out | 4.8% | 16.1% | 17.2% |
| Weight decay | \(5.45\cdot 10^{-5}\) | \(3.18\cdot 10^{-6}\) | \(4.38\cdot 10^{-6}\) |
| Backbone | ResNet34 | ResNet18 | ResNet34 |
| FC layers | 3 | 1 | 1 |
| GRU pooling | – | max | last |
| GRU layers | – | 1 | 2 |
| GRU dimension | – | 1024 | 2048 |

Table 2 compares the accuracy metrics of our models using the optimized hyper-parameters. It stands out that the GRU pooling network trained on the simple dataset achieved the best overall performance. It reaches an \(ACC_{cmd}\) of 92.8%, an \(ACC_{param}\) of 78.8% and a median CD of 1.75\(\cdot 10^{3}\). However, the fraction of 18.4% of CAD models that could not be constructed is notably worse than for the point cloud encoder. The percentage of invalid CAD topologies is reported as "CAD model invalid". 
An invalid sequence may occur, for example, if a curve sketch command is not followed by a 3D extrusion. This tends to occur more often for longer command sequences. The ARE-Net models trained on the simple datasets surpass the one trained on the complex data. The random variation of perspectives and number of input images during training represent a harder problem which did not provide an advantage in this comparison. The accuracy on the test set of the ARE-Net with GRU pooling is plotted in Figure 3 as a function of the number of input images. Above 13 images the \begin{table} \begin{tabular}{l|l l l} Pooling network & FCN & GRU & GRU \\ Dataset & simple & simple & complex \\ \hline Learning rate & \(1.3\cdot 10^{-4}\) & \(4.8\cdot 10^{-4}\) & \(1.5\cdot 10^{-4}\) \\ Drop out & 4.8\% & 16.1\% & 17.2\% \\ Weight decay & \(5.45\cdot 10^{-5}\) & \(3.18\cdot 10^{-6}\) & \(4.38\cdot 10^{-6}\) \\ Backbone & ResNet34 & ResNet18 & ResNet34 \\ \hline FC layers & 3 & 1 & 1 \\ GRU pooling & & \(max\) & \(last\) \\ GRU layers & & 1 & 2 \\ GRU dimension & & 1024 & 2048 \\ \end{tabular} \end{table} Table 1: Best hyper-parameters found by our optimization. accuracy barely increases, which is in line with [36] describing 12 images as a useful lower bound, beyond which the accuracy of their network levels. Figure 4 compares the reconstructed geometries. The following observations can be made: A variety of reconstructions is quite successful. Often times the network seems to "comprehend" the basic form of the shape present, but lacks the ability to exactly reproduce it quantitatively. For example, regarding the yellow object in the bottom right corner of Figure 4, it is clear that the model has recognized the basic shape of the plate and manages to reproduce it quite well. It also extracts the correct number of holes but still fails to reproduce their size and exact position. Conversely, a fraction of about 18% of more complex ground-truth models could not be successfully reconstructed, some examples are show in Figure 5. Visual comparison shows that these models are generally more complex than their valid counterparts, e.g. containing holes of different diameters or extrusion into different spatial directions. \begin{table} \begin{tabular}{l|c c c c} Method & \(ACC_{cmd}\uparrow ACC_{param}\uparrow median\)\(CD\downarrow CAD\)\(model\)\(invalid\downarrow\) \\ \hline ARE-Net FC (simple data) & 92.14\% & 74.2\% & 4.21\(\cdot 10^{3}\) & 18.1\% \\ ARE-Net GRU (simple data) & **92.83\%** & **78.8\%** & **1.75\(\cdot 10^{3}\)** & 18.4\% \\ ARE-Net GRU (complex data) & 92.78\% & 74.6\% & 4.07\(\cdot 10^{3}\) & 18.8\% \\ DeepCAD PointNet++ & 84.95\% & 74.2\% & 10.3\(\cdot 10^{3}\) & **12.1\%** \\ \hline Baseline: & & & & \\ DeepCAD auto-encoder & 99.50\% & 98.0\% & 0.75\(\cdot 10^{3}\) & 2.7\% \\ \end{tabular} \end{table} Table 2: Quantitative results of CAD reconstruction of the presented ARE-Net, DeepCAD with point cloud network and the DeepCAD auto-encoder. Figure 3: Accuracy results for different numbers of input images passed into the ARE-Net using the complete object test set. Two representative photos of our real world objects and their reconstructions are shown in Figure 6. The reconstructed CAD sequence of the cardbox is a perfect cube with equal side lengths, up to the 8-bit precision. As for the more complicated camera mount, a valid CAD model could be created from the photos. However, only the basic L-shape is represented by the model. 
The relative dimensions are inaccurate and details like the screw holes are completely missing. Moreover, the reconstruction exhibits a prominent elongated bar at the bottom which is not at all present in the original model. This second real-world reconstruction was hence only partially successful. ## 6 Discussion and conclusions We developed a novel method for end-to-end generation of CAD sequences directly from photographic images using an encoder-decoder network architecture. Models were trained in a two-stage approach on 2D renderings of simulated CAD Figure 4: Random selection from the test set of representative good (green) and poor (yellow) reconstruction results. The model predictions are shown on the left, next to their corresponding ground-truth models. Figure 5: Random selection from the test set of ground-truth models that could not be successfully reconstructed. objects and positively evaluated. A first proof-of-concept of the method on real photos was realized. Two different multi-view pooling stages were compared: a feed-forward fully-connected network (FCN) and a gated recurrent unit (GRU). A number of hyper-parameters were extensively optimized. Our results show that the additional complexity introduced by the GRU pays off by producing a significant improvement in all three accuracy metrics. Moreover, the GRU takes in the individual images one after the other such that the number of input images can be handled more flexibly. Our experiments confirm the earlier finding [36] that around 12 different views of an object can be considered a practical lower bound, with little improvement above that number. Comparing our CAD models reconstructed from rendered images of the test set to reconstructions from 3D point-clouds by the state-of-the art PointNet++ encoder, our encoders successfully created valid CAD sequences in more than 80% of the cases which is lower than the success rate of the point-cloud encoder. Regarding the accuracy measures, our encoders outperformed the point-cloud encoder by a large margin. Most importantly, our work establishes the basic feasibility of image-based reverse engineering of 3D CAD models by neural networks. In future applications Figure 6: Real object reconstruction attempts: The top row show selected photos of the two objects placed on the photogrammetry ground-plane (left: cardboard box, right: camera mount angle). The bottom row shows the respective CAD reconstructions. this might reduce the amount of time-consuming work of highly trained engineers or enable untrained laymen to work with CAD technologies for 3D printing previously inaccessible without specialized training. Current limitations of the approach include that the length of CAD sequences is still limited to 60 commands, hence only supporting relatively simple objects. Also our representation is limited to planar and cylindrical surfaces, while many real-world objects may include more flexible triangle meshes or spline representations. Furthermore, the exact position and size of object details - especially small holes - must be improved for practical applications. The loss function used to train the DeepCAD decoder network penalizes deviations of the CAD parameters but does not contain a distance metric [44]. We believe that an end-to-end training of the complete model may improve these results, allowing for more specialized loss functions to get a more direct handle on the quantitative sequence parameters. 
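To make the multi-view pooling stage concrete, the following is a minimal PyTorch sketch of an image encoder with GRU pooling of the kind described above: a shared ResNet backbone embeds each view, the GRU consumes the view embeddings one after the other, and its final hidden state is mapped to the latent vector expected by the separately trained DeepCAD decoder. The module layout, layer sizes and the latent dimension are illustrative assumptions rather than the exact ARE-Net implementation.

```python
# Sketch of a multi-view GRU pooling encoder; sizes are illustrative.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class MultiViewGRUEncoder(nn.Module):
    def __init__(self, latent_dim=256, gru_dim=1024, gru_layers=1):
        super().__init__()
        backbone = resnet18(weights=None)
        feat_dim = backbone.fc.in_features      # 512 for ResNet18
        backbone.fc = nn.Identity()             # keep the pooled image features
        self.backbone = backbone
        self.gru = nn.GRU(feat_dim, gru_dim, gru_layers, batch_first=True)
        self.head = nn.Linear(gru_dim, latent_dim)

    def forward(self, views):                   # views: (batch, n_views, 3, H, W)
        b, v = views.shape[:2]
        feats = self.backbone(views.flatten(0, 1)).view(b, v, -1)
        _, h_n = self.gru(feats)                # h_n: (gru_layers, batch, gru_dim)
        return self.head(h_n[-1])               # "last" temporal pooling

# Stage-two training objective (sketch): regress the ground-truth latent vectors
# of the pretrained DeepCAD auto-encoder with a mean-squared-error loss, e.g.
#   loss = torch.nn.functional.mse_loss(encoder(view_batch), target_latents)
```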
Future work should also focus on improving the image rendering of the training data. This may include physics-based rendering techniques such as ray-tracing to better simulate real-world cases and the incorporation of reflections, image blur and noise to better mimic an actual picture taken by the end-user. Data augmentation by different backgrounds and model textures should also be considered. Just like the camera view angles, the distance and translation of the object should also be varied. A fine-tuning of the model parameters by training with a (limited) set of real-world photos of 3D-printed objects from given CAD models could also be pursued. Finally, different backbone and/or pooling architectures, such as attention-based techniques, could be explored going forward. Generally the direction proposed in this work seems promising. It will be interesting to see what this or similar approaches will lead to down the line. One may predict that experts and consumers might soon be using parametric, CAD-generating 3D-scanning applications, just as naturally as optical character recognition (OCR) is used today, saving countless hours of repetitive work and providing unprecedented possibilities of interaction and creation in this three-dimensional world. ## Acknowledgements We would like to thank Rundi Wu and his co-workers for openly sharing their ground-breaking DeepCAD work and providing extensive support materials such as their dataset, the generative CAD decoder, the point-cloud encoder and evaluation metrics.
2306.01000
New Insights into the Lamb Shift: The Spectral density of the Shift
In an atom, the interaction of a bound electron with the vacuum fluctuations of the electromagnetic field leads to complex shifts in the energy levels of the electron, with the real part of the shift corresponding to a shift in the energy level and the imaginary part to the width of the energy level. The most celebrated radiative shift is the Lamb shift between the $2S_{1/2}$ and the $2P_{1/2}$ levels of the hydrogen atom.~The measurement of this shift in 1947 by Willis Lamb Jr. proved that the prediction by Dirac theory that the energy levels were degenerate was incorrect. Hans~Bethe's calculation of the shift demonstrated the renormalization process required to deal with the divergences plaguing the existing theories and led to the understanding that it was essential for theory to include interactions with the zero-point quantum vacuum field. This was the birth of modern quantum electrodynamics (QED). Other calculations of the Lamb shift followed by Welton and Power in an effort to clarify the physical mechanisms leading to the shift. We have done a calculation of the shift using a group theoretical approach which gives the shift as an integral over frequency of a function, which we call the spectral density of the shift. The spectral density reveals how different frequencies contribute to the total energy shift. We find, for example, that half the radiative shift for the ground state 1S level in H comes from photon energies below 9700 eV, and that the expressions by Power and Welton do not have the correct low frequency behavior, although they do give approximately the correct value for the total shift.
G. Jordan Maclay
2023-05-31T01:45:41Z
http://arxiv.org/abs/2306.01000v1
# New Insights into the Lamb Shift: The Spectral Density of the Shift ###### Abstract In an atom, the interaction of a bound electron with the vacuum fluctuations of the electromagnetic field leads to complex shifts in the energy levels of the electron, with the real part of the shift corresponding to a shift in the energy level and the imaginary part to the width of the energy level. The most celebrated radiative shift is the Lamb shift between the \(2S_{1/2}\) and the \(2P_{1/2}\) levels of the hydrogen atom. The measurement of this shift in 1947 by Willis Lamb Jr. proved that the prediction by Dirac theory that the energy levels were degenerate was incorrect. Hans Bethe's calculation of the shift demonstrated the renormalization process required to deal with the divergences plaguing the existing theories and led to the understanding that it was essential for theory to include interactions with the zero-point quantum vacuum field. This was the birth of modern quantum electrodynamics (QED). Other calculations of the Lamb shift followed by Welton and Power in an effort to clarify the physical mechanisms leading to the shift. We have done a calculation of the shift using a group theoretical approach which gives the shift as an integral over frequency of a function, which we call the spectral density of the shift. The spectral density reveals how different frequencies contribute to the total energy shift. We find, for example, that half the radiative shift for the ground state 1S level in H comes from photon energies below 9700 eV, and that the expressions by Power and Welton do not have the correct low frequency behavior, although they do give approximately the correct value for the total shift. ## 1 Introduction In astronomy, in quantum theory, in quantum electrodynamics (QED), there have been periods of great progress in which solutions to challenging problems have been obtained, and the fields have moved forward. However, in some cases getting the right answers can still leave fundamental questions unanswered. The Big Bang explained the origin of the cosmic background radiation, but left the problem of why the universe appears to be made of matter and not equal amounts of matter and antimatter[1]. In quantum theory, we can compute the behavior of atoms yet we cannot describe a measurement in a self-consistent way, or make sense of the collapse of a photon wavefunction from a near infinite volume to a point[2]. In quantum electrodynamics we can compute the Lamb shift of the H atom to 15 decimal places[3], yet we are left with the paradox of using perturbation theory to remove infinite terms, or to understand a quantum vacuum with infinite energy. In this paper, we examine different approaches to the computation of the non-relativistic Lamb shift. For all these approaches, the Lamb shift can be expressed in different ways as an integral over frequency of a spectral density. We analyze the differences in the spectral densities for the different approaches as a function of frequency and compare the spectral densities to those obtained by using a group theoretical analysis. The integral of the spectral density over all frequencies gives the corresponding value of the Lamb shift. Feynman called the three-page-long 1947 non-relativistic Lamb shift calculation by Hans Bethe the most important calculation in quantum electrodynamics because it tamed the infinities plaguing earlier attempts.
When the sum over all states is evaluated numerically, it gives a finite prediction that agreed with experiment[4][5]. Dirac said it "fundamentally changed the nature of theoretical physics." Yet when this calculation is explored more deeply, questions arise about it and about other calculations of the Lamb shift, for example those by Welton [6] and Power[7], that employ different methods that have different low frequency behavior from Bethe's result yet give approximately the same value for the level shift [8]. These three approaches to the Lamb shift and the corresponding vacuum energy densities have also been considered in [9]. There is an intimate relationship between radiative shifts and vacuum fluctuations. The shift can be interpreted as arising from virtual transitions induced by the quantum fluctuations of the electromagnetic field. Since the vacuum field contains all frequencies, virtual transitions to all states, bound and scattering, are possible. These short lived virtual transition result in a slight shift in the average energy of the atom, a shift which we call the Lamb shift [10]. We note that the Lamb shift can also be described as an interaction of the electron with its own radiation field, yielding the same results as the vacuum field[8]. Bethe's calculation was based on second order perturbation theory applied to the minimal coupling of the atom with the vacuum field \((e/mc)A\cdot p\) and a dipole approximation. This interaction leads to the emission and absorption of virtual photons corresponding to virtual transitions. The shift is expressed as a sum over the intermediate states reached by virtual transitions. The predicted shift is divergent, but Bethe subtracted the term that corresponded to the linearly divergent vacuum energy shift for a free bare electron, essentially doing a mass renormalization to remove this higher order divergence in the spectral density for the shift. For S states, the resulting spectral density has a 1/frequency behavior for high frequencies giving a logarithmic divergence in the shift. Welton's model for computing the Lamb shift was based on the perturbation of the motion of a bound electron in the H atom due to the quantum vacuum fluctuations altering the location of the electron, which resulted in a slight shift of the bound state energy [6][8][10]. This simplified intuitive model predicts a spectral density proportional to 1/frequency for all frequencies and a shift only for S states. The approach of Feynman[11], interpreted by Power [7], considers a large box containing H atoms and is based on the shift in the energy in the quantum vacuum field due to the change in the index of refraction arising from the presence of H atoms. This approach predicts that the shift in the energy in the vacuum field around the H atoms exactly equals the radiative shift predicted by Bethe for all energy [8][9]. It gives a spectral density with the same high frequency dependence as Bethe, but a different low frequency dependence. A similar calculation to Power's models the Lamb shift as a Stark shift [8]. The Lamb shift has been previously computed using O(4) symmetry [12] and by a different approach from ours using SO(4,2) symmetry [13]. We present the results of a calculation of the Lamb shift that is based on a SO(4,2) group theoretical analysis of the H atom that allows us to determine the dependence of the shift on frequency with no sum over states[14]. 
The degeneracy group of the non-relativistic H atom is O(4), with generators angular momentum operator \(L\) and Runge-Lenz vector \(A\). A representation of O(4) of dimension \(n^{2}\) exists for each value of the principal quantum number \(n\), where the angular momentum \(L\) has values from 0 to \(n-1\), and there are \(2L+1\) possible values of \(L_{z}=m\). If we extend this group by adding a 4 vector of generators we get the non-invariance group SO(4,1) which has representations that include all states of different \(n\) and \(L\) and operators that connect states with different principal quantum numbers. Adding a 5 vector of additional generators gives the group SO(4,2) and allows us to express Schrodinger's equation in terms of the new generators, and to make effective group theoretical calculations [14]. We use basis states that allow us to include both bound and scattering states seamlessly [15] and no sum over states appears in the final expression for the spectral density. One advantage of this approach is that for each energy level we can easily compute a spectral density for the shift whose integral over frequency from 0 to \(mc^{2}/\hbar\) is the radiative shift that includes transitions to all possible states. Thus we can see how different frequencies of the vacuum field contribute to the radiative shift. We compare the different approaches of Bethe, Welton and Power to the group theoretical spectral density of the non-relativistic Lamb shift for the 1S ground state, the 2S and 2P levels. With this new picture of the Lamb shift, we have found differences between the various approaches. Knowing the spectral density of the shift provides new insights into understanding the Lamb shift. ## 2 Background of Radiative Shift Calculations The first calculation of the Lamb shift of a hydrogen atom was done by Bethe in 1947, who assumed the shift was do the interaction of the atom with the vacuum field. He calculated the shift using second order perturbation theory, assuming that there was minimal coupling in the Hamiltonian: \[H_{int}=-\frac{e}{mc}\mathbf{A}\cdot\mathbf{p}+\frac{e^{2}}{2mc^{2}}\mathbf{A }^{2} \tag{1}\] where \(m\), \(e\), and \(\mathbf{p}\) are the mass, charge and momentum of the electron, \(c\) is the speed of light in vacuum, and \(\mathbf{A}\) is the vector potential in the dipole approximation for the vacuum field in a large quantization volume \(V\) \[\mathbf{A}=\sum_{\mathbf{k},\lambda}(\frac{2\pi\hbar c^{2}}{\omega_{k}V})^{1/ 2}(a_{\mathbf{k},\lambda}+a_{\mathbf{k},\lambda}^{\dagger})\mathbf{e}_{ \mathbf{k},\lambda} \tag{2}\] where the sum is over the virtual photon wave number \(\mathbf{k}\), where \(kc=\hbar\omega_{k}\), the energy of the virtual photon, and the polarization \(\lambda\); \(a_{\mathbf{k},\lambda}\) and \(a_{\mathbf{k},\lambda}^{\dagger}\) are the annihilation and creation operators, and \(\mathbf{e}_{\mathbf{k},\lambda}\) is a unit vector in the direction of polarization of the electric field. The shift from the \(\mathbf{A}^{2}\) term is independent of the state of the atom and is therefore neglected. The total shift \(\Delta E_{nTot}\) for energy level n of the atom in state \(|n\rangle\) is given by second order perturbation theory as [8] \[\Delta E_{nTot}=-\frac{2}{3\pi}\frac{\alpha}{m^{2}c^{2}}\sum_{m}|\mathbf{p}_{ mm}|^{2}\int\frac{EdE}{E_{m}-E_{n}+E} \tag{3}\] where the integral is over the quantum vacuum field energy \(E=\hbar\omega\) and the momentum matrix elements are \(|\mathbf{p}_{mm}|=|\langle m|\mathbf{p}|n\rangle|\). 
The sum is over all intermediate states \(|m\rangle\), scattering and bound, where \(m\neq n\). The fine structure constant is \(\alpha=e^{2}/\hbar c\). The integrand in Eq. 3 has a linear divergence. Bethe observed that this divergence in Eq. 3 corresponded to the integral that occurs when the binding energy vanishes \((E_{m}-E_{n})\to 0\) and the electrons are free: \[\Delta E_{free}=-\frac{2}{3\pi}\frac{\alpha}{m^{2}c^{2}}\sum_{m}|\mathbf{p}_{ mm}|^{2}\int dE. \tag{4}\] He subtracted this divergent term \(\Delta E_{free}\) from the total shift \(\Delta E_{nTot}\) \[\Delta E_{nL}=\Delta E_{nTot}-\Delta E_{free} \tag{5}\] to obtain a finite observable shift \(\Delta E_{nL}\) for the state \(|nL\rangle\) \[\Delta E_{nL}=\frac{2\alpha}{3\pi(mc)^{2}}\sum_{m}^{s}|\mathbf{p}_{mm}|^{2} \int_{0}^{\hbar\omega_{c}}dE\frac{(E_{m}-E_{n})}{E_{m}-E_{n}+E-i\epsilon}, \tag{6}\] where \(\omega_{C}\) is a cutoff frequency for the integration that Bethe took as \(\hbar\omega_{c}=mc^{2}\). Using an idea from Kramers, Bethe did this renormalization, taking the difference between the terms with a potential present and without a potential present, essentially performing the free electron mass renormalization. He reasoned that relativistic retardation could be neglected and the radiative shift could be reasonable approximated using a non-relativistic approach and he cut the integration off at an energy corresponding to the mass of the electron. He obtained a finite result that required a numerical calculation over all states, bound and scattering, that gave good agreement with measurements [4][5][16]. The spectral density in the Bethe formalism, which we will analyse, is the quantity in Eq. 6 being integrated over \(E\). It includes the sum over states \(m\). The term for \(m\) represents the contribution to the Lamb shift for the virtual transition from state \(n\) to state \(m\). Note since the ground state is the lowest state, all intermediate states have higher energies so the ground state shift has to be positive. For the purposes of comparison to the other calculations of the Lamb shift it is helpful to show the next steps Bethe took to evaluate the shift \(\Delta E_{n}\) for S states, which have the largest shifts. Note that the spectral density we will analyse in Eq. 6 is not affected by the subsequent approximations Bethe made to evaluate the integral. First the E integration is done: \[\Delta E_{n}^{Bethe}=\frac{2\alpha}{3\pi}(\frac{1}{mc})^{2}\sum_{m}|\mathbf{p} _{nm}|^{2}(E_{m}-E_{n})ln\frac{(mc^{2}+E_{m}-E_{n})}{|E_{m}-E_{n}|}. \tag{7}\] To simplify the evaluation Bethe assumed \(|E_{m}-E_{n}|<<mc^{2}\) in the logarithm and that the logarithm would vary slowly with \(m\) so it could be replaced by an average value \[\widehat{\Delta E}_{n}^{Bethe}=\frac{2\alpha}{3\pi}(\frac{1}{mc})^{2}ln\frac {mc^{2}}{|E_{m}-E_{n}|_{Ave}}\sum_{m}|\mathbf{p}_{nm}|^{2}(E_{m}-E_{n}) \tag{8}\] where the hat over the \(\Delta E\) indicates this is an approximation to Eq. 7. Now that the E integration is done, the spectral density is no longer manifest. The summation can be evaluated using the dipole sum rule \[2\sum_{m}^{s}|\mathbf{p}_{nm}|^{2}\left(E_{m}-E_{n}\right)=\hbar^{2}\left<n \left|\nabla^{2}V\right|n\right>. 
\tag{9}\] The value of the Laplacian with a Coulomb potential V=\(-Ze^{2}/r\) is \(\nabla^{2}V(r)=4\pi Ze^{2}\delta(\mathbf{r})\) so we have \[\left<n\left|\nabla^{2}V\right|n\right>=4\pi Ze^{2}|\psi_{n}(0)|^{2}, \tag{10}\] where \(\psi(r)\) is the wave function for a Coulomb potential and \(|\psi_{n}(0)|^{2}\) is zero except for \(S\) states \[|\psi_{n}(0)|^{2}=\frac{1}{\pi}\big{(}\frac{Zamc}{n\hbar}\big{)}^{3}. \tag{11}\] For S states, this gives an energy shift equal to [8]: \[\widehat{\Delta E}_{n}^{Bethe}=\frac{4mc^{2}}{3\pi}\alpha(Z\alpha)^{4}\frac{1 }{n^{3}}ln\frac{mc^{2}}{|E_{m}-E_{n}|_{Ave}}. \tag{12}\] where the so called Bethe log for an S states with principal quantum number n is \[ln\frac{mc^{2}}{|E_{m}-E_{n}|_{Ave}}=\frac{\sum_{m}|\mathbf{p}_{nm}|^{2}(E_{m }-E_{n})ln\frac{mc^{2}}{|E_{m}-E_{n}|}}{\sum_{m}|\mathbf{p}_{nm}|^{2}\left(E_{ m}-E_{n}\right)} \tag{13}\] where the sum is over all states, bound and scattering. Bethe also has extended the formalism for shifts for states that are not S states [16]. Regarding the approximations Bethe made to obtain Eq. 8 from Eq. 7 and the use of the Bethe log Eq. 13, he commented: "The important values of \(|E_{m}-E_{n}|\) will be of order of the ground state binding energy for a hydrogenic atom. This energy is very small compared to \(mc^{2}\) so the log [in our Eq. 7] is very large and not sensitive to the exact value of \((E_{m}-E_{n})\). In the numerator we neglect \((E_{m}-E_{n})\) altogether and replace it by an average energy [16]." Our work shows that Bethe was correct that the relative contribution from energies of the order of the ground state is very important, but we find the contribution from higher energy scattering states is very significant, and therefore that the approximation \(|E_{m}-E_{n}|<<mc^{2}\) is not valid for higher energy scattering states for which \(E_{m}\) increases to the value \(mc^{2}\). We are not aware of any quantitative estimates of the error in the approximation. The difference, 0.3%, between our value for the total 1S shift and that of Bethe may be due to this approximation, although we have not verified this. On the other hand Bethe's approximation may have made his non-relativistic approach viable. To provide a more intuitive physical picture of the shift, Welton considered the effect of a zero-point vacuum field on the motion of an electron bound in a coulomb potential \(V(\mathbf{r})\) at a location \(\mathbf{r}\). The perturbation \(\xi\)=\((\xi_{x},\xi_{y},\xi_{z})\) in the position of the bound electron due to the random zero-point vacuum field \(\mathbf{E}_{0}\) causes a variation in the potential energy \[V(\mathbf{r}+\xi)=V(\mathbf{r})+\xi\cdot\nabla V(\mathbf{r})+\frac{1}{2}\left( \xi\cdot\nabla\right)^{2}V(\mathbf{r})... \tag{14}\] Because of the harmonic time dependence of the vacuum field, \(\left\langle\xi\right\rangle\) vanishes and the radiative shift is given approximately by the vacuum expectation value of the last term: \[\Delta E_{n}^{Welton}=\frac{\left\langle\xi^{2}\right\rangle}{6}\left\langle \nabla^{2}V(\bar{\mathbf{r}})\right\rangle_{n} \tag{15}\] where we assume the potential has spherical symmetry, thus \(\left\langle\xi_{x}^{2}\right\rangle=\left\langle\xi_{y}^{2}\right\rangle= \left\langle\xi_{z}^{2}\right\rangle=\left\langle\xi^{2}/3\right\rangle\). Eq. 
15 gives \(\Delta E_{n}^{Welton}\) as the product of two factors, the first depending on the nature of the fluctuations in the position of the bound electron due to the vacuum field and the second depending on the structure of the system. \(\xi\) is determined by \(m\ddot{\xi}=e\mathbf{E}_{0}\). With a Fourier decomposition of \(\mathbf{E}_{0}\) and \(\xi\), and integrating over the frequency distribution of the vacuum field, we obtain the vacuum expectation value[8][10] \[\left\langle(\vec{\xi})^{2}\right\rangle=\frac{2\alpha}{\pi}(\frac{\hbar}{mc})^{2}\int_{0}^{mc^{2}}\frac{dE}{E}. \tag{16}\] Using the results in Eqs. 10 and 11 we can evaluate the Laplacian in Eq. 15 and obtain a shift for S states equal to [8]: \[\Delta E_{n}^{Welton}=\frac{4mc^{2}}{3\pi}\alpha(Z\alpha)^{4}\frac{1}{n^{3}}\int_{0}^{mc^{2}}\frac{dE}{E}. \tag{17}\] Eq. 17 shows that the spectral density for the Welton approach is proportional to 1/E. For the upper limit of integration, we take \(mc^{2}\) as Bethe did. The lower limit of 0 gives a divergent shift. Sometimes a lower limit of the ground state energy is taken. On the other hand, if we compare Eq. 17 to Eq. 12, we see that if we take for the lower limit the Bethe log Eq. 13, we get exactly the same total S state shift as in the approximate Bethe formalism Eq. 12. With these limits, the RMS amplitude of oscillation of the electron bound in the Coulomb potential \(\sqrt{\left\langle(\vec{\xi})^{2}\right\rangle}\) is about 72 fermis, about 1/740 of the mean radius of the 1S electron orbit. Feynman proposed another approach for computing the Lamb shift based on a fundamental observation about the interaction of matter and the vacuum field[11]. He considered a large box containing a low density of atoms in the quantum vacuum. The atoms cause a change in the index of refraction, which leads to changes in the frequencies of the vacuum field. The wavelengths remain the same. He maintained that the change in the energy of the zero-point vacuum field in the box due to the frequency changes resulting from a weak perturbing background of atoms acting as a refracting medium would correspond to the self energy of the atoms, which is precisely the Lamb shift. Power, based on the suggestion by Feynman, considered the change in vacuum energy when N hydrogen atoms are placed in a volume V, using the Kramers-Heisenberg expression for the index of refraction \(n(\omega_{k})\)[7][8]. The H atoms cause a change in the index of refraction and therefore a change in the frequencies of the vacuum fluctuations present. The corresponding change in vacuum energy \(\Delta E\) is \[\Delta E=\sum_{k}\left(\frac{1}{n(\omega_{k})}\frac{1}{2}\hbar\omega_{k}-\frac{1}{2}\hbar\omega_{k}\right) \tag{18}\] where the sum is over all frequencies \(\omega_{k}\) present. For a dilute gas of atoms in a level n, the index of refraction is [8] \[n(\omega_{k})=1+\frac{4\pi N}{3\hbar}\sum_{m}\frac{\omega_{mn}|\mathbf{d}_{mn}|^{2}}{\omega_{mn}^{2}-\omega_{k}^{2}} \tag{19}\] where \(\omega_{mn}=(E_{m}-E_{n})/\hbar\) and \(\mathbf{d}_{mn}=e\mathbf{x}_{mn}\), the transition dipole moment. After substituting \(n(\omega_{k})\) into Eq. 18, we get a divergent result for the energy shift. Following Bethe's approach, Power subtracted from \(\Delta E\) the energy shift for the N free electrons, which equals the shift when \(\omega_{mn}\to 0\), with no binding energy.
After making this subtraction and converting the sum over \(\omega_{k}\) to an integral over \(\omega\), and letting \(NV\to 1\) the observable shift in energy is obtained[8]: \[\Delta E_{n}^{Power}=-\frac{2}{3\pi c^{3}}\sum_{m}\omega_{mn}^{3}|\mathbf{d}_ {mn}|^{2}\int_{0}^{mc^{2}/\hbar}\frac{d\omega\omega}{\omega_{mn}^{2}-\omega^{ 2}}. \tag{20}\] Noting that \[\langle m|\frac{\mathbf{p}}{m}|n\rangle=\frac{i}{\hbar}\langle m|[H,\mathbf{x }]|n\rangle=\frac{i}{\hbar}(E_{m}-E_{n})\langle m|\mathbf{x}|n\rangle \tag{21}\] we can show \[|\mathbf{p}_{mn}|^{2}=m^{2}\omega_{mn}^{2}|\mathbf{x}_{mn}|^{2}=\frac{m^{2} \omega_{mn}^{2}}{e^{2}}|\mathbf{d}_{mn}|^{2}. \tag{22}\] This allows us to write Power's result Eq. 20 as \[\Delta E_{n}^{Power}=-\frac{2e^{2}}{3\pi m^{2}c^{3}}\sum_{m}\omega_{mn}| \mathbf{p}_{mn}|^{2}\int_{0}^{mc^{2}/\hbar}\frac{d\omega\omega}{\omega_{mn}^{ 2}-\omega^{2}}. \tag{23}\] Writing this equation in terms of \(E=\hbar\omega\) instead of \(\omega\) yields \[\Delta E_{n}^{Power}=-\frac{2\alpha}{3\pi}(\frac{1}{mc})^{2}\sum_{m}|\mathbf{ p}_{mn}|^{2}(E_{m}-E_{n})\int_{0}^{mc^{2}}\frac{EdE}{(E_{m}-E_{n})^{2}-E^{2}} \tag{24}\] We will use this equation to analyze the spectral density for Power's method, showing the spectral density is different from Bethe's at low frequencies but the same at high frequencies. When Eq. 24 is integrated with respect to E, taking the principal value, we obtain \[\Delta E_{n}^{Power}=\frac{2\alpha}{3\pi}(\frac{1}{mc})^{2}\sum_{m}|\mathbf{ p}_{mn}|^{2}(E_{m}-E_{n})ln[\frac{mc^{2}+(E_{m}-E_{n})}{E_{m}-E_{n}}\times \frac{mc^{2}-(E_{m}-E_{n})}{E_{m}-E_{n}}]^{1/2}. \tag{25}\] Except for the argument in the \(ln\) function, which corresponds to the upper limit of integration, this is the same as Bethe's expression Eq. 7 for the shift. If we assume \(mc^{2}>>E_{m}-E_{n}\), as Bethe did, then both expressions for the total shift are identical. It is clear, however, that this approximation is not valid at high energies for the second factor in the \(ln\) function in Eq. 25, which may even become less than one making the \(ln\) term negative. Feynman's approach highlights the changes in the vacuum field energy due to the interactions with the H atoms. One assumption in the computation by Power is that the index of refraction in the box containing the atoms is spatially uniform. We will return to this assumptions and suggest a model that predicts, for a single atom, the changes in the vacuum field energy as a function of position for each spectral component of the radiative shift. ## 3 Spectral Density of the Lamb Shift Our goal is to develop an expression for the energy shift of a level, in terms of the generators of the group SO(4,2), that is an integral over frequency. Then the integrand will be the spectral density of the shift, and group theoretical techniques can be used to evaluate it [14]. We derive a generating function for the shifts for all levels. We first focus on the ground state 1S level as an illustration of the results. At ordinary temperatures and pressures, most atoms are in the ground state. The radiative shift for the 1S level is [14] \[\Delta E_{1}=\frac{4mc^{2}\alpha(Z\alpha)^{4}}{3\pi}\int_{0}^{\phi_{c}}d\phi e^ {\phi}\sinh\phi\int_{0}^{\infty}dse^{se^{-\phi}}\frac{d}{ds}\frac{1}{\left( \coth\frac{s}{2}+\cosh\phi\right)^{2}} \tag{26}\] where the dimensionless normalized frequency variable \(\phi\) is defined as \[\phi=\frac{1}{2}ln[1+\frac{\hbar\omega}{|E_{1}|}] \tag{27}\] where \(E_{1}\) is the ground state energy -13.6 eV. 
The cutoff \(\phi_{c}\) corresponds to \(E=\hbar\omega_{c}=mc^{2}=511\) keV, the rest energy of the electron. The group theoretical expression for the Lamb shift Eq. 26 is directly derived from the Klein-Gordon equations of motion using a non-relativistic dipole approximation, assuming infinite proton mass, and minimal coupling with the vacuum field. Basis states that are eigenstates of \(1/Z\alpha\) are used since they have no scattering states and have the same quantum numbers as the usual bound energy eigenstates [14]. The level shift is obtained as the difference between the mass renormalization for a spinless meson bound in the desired state and the mass renormalization for a free meson. Second order perturbation theory is not used. Near the end of the derivation an equation which is equivalent to Bethe's result Eq. 6 for the radiative shift can be derived by inserting a complete set of Schrodinger energy eigenstates. Thus we expect the fundamental results from Bethe's spectral density (with no approximations) and the group theoretical spectral density to be in agreement [10][14]. For convenience an explanation of the basis states used to derive Eq. 26 is given in Appendix A, and the derivation of Eq. 26 is given in Appendix B since the derivation in [14] is spread in steps throughout the paper as the group theory methods are developed. We can write Eq. 26 as an integral over \(E=\hbar\omega\), which is the energy of the vacuum field in eV, and evaluate the definite integral over \(s\) analytically for different values of \(E\). We measure the ground state Lamb shift \(\Delta E_{1}\) in eV so the spectral density of the shift \(d\Delta E_{1}/dE\) is measured in eV/eV which is dimensionless: \[\Delta E_{1}=\int_{0}^{mc^{2}}\frac{d\Delta E_{1}}{dE}dE \tag{28}\] where the ground state spectral density from Eq. 26 is \[\frac{d\Delta E_{1}}{dE}=\frac{4\alpha^{3}}{3\pi}e^{-\phi}\sinh\phi\int_{0}^{\infty}ds\,e^{se^{-\phi}}\frac{1}{\sinh^{2}(\frac{s}{2})}\frac{1}{\left(\coth\frac{s}{2}+\cosh\phi\right)^{3}}. \tag{29}\] Fig. 1 shows a logarithmic plot (ordinate is a log, abscissa is linear) of the spectral density \(\frac{d\Delta E_{1}}{dE}\) of the ground state Lamb shift with Z=1 over the entire range of energy \(E\) computed from Eq. 29 using Mathematica. The spectral density is largest at the lowest energies, and decreases monotonically by about 4 orders of magnitude as the energy increases to 511 keV. The ground state shift is the integral of the spectral density from energy 0 to 511 keV. Fig. 2 is a loglog plot (both ordinate and abscissa are log) of the same information. The use of the loglog plot expands the energy range for each decade, revealing that for energy above about 1000 eV the slope is approximately -1, indicating that the spectral density is nearly proportional to \(1/E\). For energy below about 10 eV, the spectral density in Fig. 2 is almost flat, corresponding to a linear decrease as energy increases, with a maximum spectral density at the lowest energy computed, as shown in Fig. 3. Fig. 2 shows that there are essentially two different behaviors of the spectral density. For values of the energy E of the vacuum field that are about 10 eV and below, in the range of the changes in energy for bound state transitions, the spectral density corresponds to the near horizontal portion of the spectral density in Fig. 2, and when E is much larger than the bound state energies, the spectral density goes as 1/E. Fig.
3 shows linear plots (linear in ordinate and abscissa) of the spectral density of the shift for the ground state computed from Eq. 29 for several lower energy regions. Fig 3a shows a linear decrease in the spectral density as the energy increases over the small energy interval plotted. Fig 3b show a linear decrease of about 15% as the energy increases from 0 eV to 3 eV. Fig. 3c shows that the spectral density decreases by a factor of about 4 as the energy increases from 0 eV to 100 eV. In the low frequency limit, the spectral density decreases linearly from the asymptotic constant value as the energy increases. From explicit evaluations, we will show in Section 4 that for shifts in S states with principal quantum number n, the asymptotic spectral density for large \(E\) is proportional to \(\alpha(Z\alpha)^{4}(1/n^{3})\), and show in Section 5 that as the energy E goes to zero, the spectral density increases linearly, reaching a maximum value that is proportional to \(\alpha(Z\alpha)^{2}(1/n^{2})\). An approximate fit to the ground state data in Fig. 1 is \[\frac{d\Delta E_{1}^{Fit}}{dE}=A\frac{(1+e^{-BE})}{(E+C)}. \tag{30}\] Figure 1: Plot of the log of the spectral density of the ground state Lamb shift from the group theoretical expression Eq. 29 on the vertical axis versus the energy in eV from 0 to 510 keV on the horizontal axis. Figure 2: This loglog plot shows the log of the spectral density of the ground state shift from the group theoretical expression Eq. 29 on the vertical axis versus the log of the energy in eV. From about 0 eV to 10 eV, there is a slow linear decrease in the spectral density. For energies above about 100 eV, the behavior is dominated by a 1/energy dependence. Figure 3: Linear plot of the ground state spectral density as a function of eV calculated from group theory, plotted as a function of energy for low and mid energies. From about 0 eV to 10 eV, the spectral density decreases linearly from its maximum value at the origin which corresponds to 0 eV for all graphs. where \(A=4.4008\times 10^{-6}\), \(B=0.08445\), \(C=106.79\). The fit is quite good at the asymptotes and within 10% over the entire energy range. We can use the spectral density shown in Fig. 1 or 2 in order to determine the contribution to the total ground state shift from different energy regions. If we integrate the spectral density from 0 eV to energy \(E\), we obtain the value of the partial shift \(\Delta_{1}(E)\) that these energies (0 eV to \(E\) eV) contribute to the total shift \(\Delta E_{1}\) for the ground state. In Fig. 4 we have plotted \(\Delta_{1}(E)/\Delta E_{1}\), which is the fraction of the total shift \(\Delta E_{1}\) due to the contributions from energies below \(E\), as a function of \(E\). Fig. 4a shows that almost 80% of the shift comes from energies below about 100,000 eV. Fig. 4b shows that about half the total shift is from energies below 9050 eV. Fig. 4c shows that energies below 100 eV contribute about 10% of the total shift. Energies below 13.6 eV contribute about 2.5% while energies below 1 eV contribute about 1/4% of the total. As Fig. 4c shows, the fraction of the total shift increases linearly for E\(<\)10 eV, corresponding to the nearly horizontal portion of the shift density for E\(<\)10 eV, as shown in Fig. 2. The contribution to the total 1S shift for the visible spectral interval 400-700 nm (1.770 eV to 3.10 eV) is about \(1.00342\times 10^{-7}\) eV or about 3/10 % of the total shift. 
The relative contribution to the total shift per eV is much greater for lower energies. For example, half the 1S shift corresponds to energies 0 to 9000 eV, but only about 0.2% corresponds to 500,000 to 509,000 eV. The largest contribution to the shift per eV is at the lowest energies, which have the steepest slope of the spectral density curve in Fig. 1, about 1000 times greater than the slope for the largest values of the energy. But the total range for the large energies, from 9050 to 510,000 is so large that the absolute contribution to the total shift for large energies is significant. For the ground state Fig. 5 shows how the dominant terms for different \(m\) in the Bethe sum over states in Eq. 6 contribute to the full spectral density obtained from group theory Eq. 29. Each such term in the Bethe sum could be interpreted as corresponding to the shift resulting from virtual transitions from state \(n\) to state \(m\) occurring due to the vacuum field. Each term shown has a behavior similar to that of the full spectral density, but the magnitudes decrease as the transition probabilities decrease. Fig. 6 shows the spectral densities for 1S (black) and 2S (orange) shifts. The shapes are similar but the spectral density for the 1S shift is about eight times as large at high frequencies and about four times as large at low frequencies, factors that we will derive explicitly by considering the asymptotic forms of the spectra density for S states with different principal quantum numbers. Both have a \(1/E\) high frequency behavior. The s integration in the group theoretical calculation for the 2S state diverges for energies below 10.2 eV due to a non-relativistic approximation, but the spectral density of the shift can be obtained from a low energy approximation, Eq. 47, to the group theory result, which we derive in Section 5. We can define the spectral density \(\frac{d\Delta E_{n}}{dE}\) for a state \(n\) in a convenient form suggested by Eq. 29, \[\frac{d\Delta E_{n}}{dE}=\frac{4\alpha^{3}}{3\pi}\int_{0}^{\infty}dsW_{n}(s, \phi_{n})\qquad\text{where}\quad\phi_{n}=\ln\left[1+\frac{E}{\left|E_{n} \right|}\right] \tag{31}\] where the energy for state \(n\) is \(E_{n}=-mc^{2}(Z\alpha)^{2}/2n^{2}\). From our group theoretical results, we have for the 2S-2P Lamb shift [14] \[W_{2S-2P}(s,\phi_{2})=\frac{4e^{(2se^{-\phi_{2}}+\phi_{2})}\sinh^{3}(\phi_{2} )\text{csch}^{2}\left(\frac{s}{2}\right)}{\left(\cosh(\phi_{2})+\text{coth} \left(\frac{s}{2}\right)\right)^{5}} \tag{32}\] and for the 2P shift [14]: \[W_{2P}(s,\phi_{2})=-\frac{e^{(2se^{-\phi_{2}}+\phi_{2})}\sinh(\phi_{2})\text {csch}^{4}\left(\frac{s}{2}\right)\left(\cosh(\phi_{2})\sinh(s)+\cosh(s)-3 \right)}{2\left(\cosh(\phi_{2})+\text{coth}\left(\frac{s}{2}\right)\right)^{5}} \tag{33}\] The spectral density of the 2P shift has a very different behavior from the spectral density of the 2S shift (Fig. 7). It is negative and and it falls off as \(1/E^{2}\). The shift is negative because the dominant Figure 4: The ordinate is the fraction of the ground state shift \(\Delta E_{1}\) due to vacuum field energies between 0 and E, plotted as a function of E on the abscissa. This plot is obtained by integration of the spectral density from Eq. 29, shown in Fig. 1. The plot is linear in the ordinate and abscissa. The origin corresponds to (0,0) for all plots. contribution to the shift is from virtual transitions from the 2P state to the lower 1S state, with an energy difference of about 10.2 eV. 
For frequencies below about 20 eV, the absolute value of the spectral density of the 2P shift increases rapidly in magnitude as the energy is reduced and is much bigger than the spectral density for the 2S shift. The 2S shift cannot have a negative contribution from the lower 1S state since the transition 2S-\(>\)1S is forbidden by the conservation of angular momentum. The classic Lamb shift arises from the difference between the two spectral densities, so the negative 2P spectral density actually increases the 2S-2P Lamb shift as the energy decreases (Fig. 8). The total 2P shift is about 0.3% percent of the 2S shift. Bethe also computed a negative contribution for the shift from the 2P state[16]. Comparing the Ground State Group Theoretical Lamb Shift Calculations to Those of Bethe, Welton, and Feynman Integrating the group theoretical spectral density Eq. 29 from near zero energy (\(5.4x10^{-7}\) eV) to 511 keV, about the rest mass energy of the electron, gives the 1S shift of \(3.4027x10^{-}5\) eV, in agreement with the numerical result of Bethe and Salpeter summing over states and using the Bethe log approximation, \(3.392x10^{-}5\) eV, to about 0.3% [5]. Bethe and Salpeter reported that the ground state Bethe log Eq. 13, which is a logarithmically weighted average value of the excitation of the energy levels contributing to the radiative 1S shift, was 19.77 Ry or 269 eV [16]. Because of the weighting, it is not clear how one should interpret this Figure 5: This loglog plot shows the 1S spectral density from group theory Eq. 29 in black, and the contributions to this shift in the Bethe formalism for the transition \(1S\to 2P\) (blue), \(1S\to 4P\) (red), \(1S\to 8P\) (green). The dashed blue line shows the high frequency \(1/E\) asymptote. The black line is the complete spectral density which is the summation of the contributions from all transitions. Figure 6: This loglog plot shows the log of the group theoretical spectral density for the 1S (black) and 2S (orange) shifts on the vertical axis versus the log of the frequency in eV. The dashed orange curve below 1 eV is a 2S low energy approximation Eq. 47 from group theory or the Bethe formula. The blue is the largest single contribution in the Bethe formalism to the 2S shift for the transition \(2S\to 3P\). value, other than it indicates that high energy photons and scattering states contribute significantly to the shift. As we have noted, our group theoretical method does not provide an equivalent weighted average value for direct comparison. Although the methods of Bethe, Welton, and Power as defined all give approximately the same value for the 1S shift, which equals the integral of the spectral density in our approach, they differ significantly in their frequency dependence, which we will now examine. ## 4 The Spectral Density of The Lamb shift at High Frequency The form for \(d\Delta E_{n}/dE\), which is the Lamb shift spectral density for level \(n\), can be obtained at high energies from 1) the classic calculation by Bethe using second order perturbation theory; 2) the calculation by Welton of the Lamb shift; 3) the calculation of Power of the Lamb shift based on Feynman's approach; and 4) our group theoretical calculation. The spectral density for level \(n\) can be written from Bethe's expression Eq. 6 \[\frac{\Delta E_{n}^{Bethe}}{\Delta E}=\frac{2\alpha}{3\pi}(\frac{1}{mc})^{2} \sum_{m}|\mathbf{p}_{mn}|^{2}(E_{n}-E_{m})\frac{1}{E_{n}-E_{m}-E}. 
\tag{34}\] If we are evaluating the spectral density for the ground state \(n=1\), \(Z=1\), then \(E_{1}=-13.613\)eV, and for the bound states \(E_{m}=-13.613eV/m^{2}\). For scattering states \(E_{m}\) is positive. Hence the denominator Figure 8: This loglog plot shows the log of the spectral density for the 2S shift (orange) and the 2S-2P Lamb shift (blue) versus the log of the energy. The solid black line is the \(1/E\) asymptote. Figure 7: This loglog plot shows the log of the absolute value of the spectral density on the vertical axis versus the log of the frequency in eV for the 2S shift (orange), which goes as \(1/E\) for large \(E\), and for the 2P shift (green), which goes as \(1/E^{2}\) for large \(E\). At 511 keV, the 2P spectral density is about 5 orders of magnitude smaller than the 2S spectral density. Below 20 eV, the absolute value of the 2P spectral density is greater than the 2S spectral density. Note that the 2P spectral density is actually negative and the 2S spectral density is positive. is negative for all terms in the sum over \(m\) and never vanishes, and the spectral density is positive, and the ground state shift is positive as it must be. For large values of \(E\), we can make the approximation \[\frac{\Delta E_{n}^{Bethe}}{\Delta E}|_{E->\infty}=\frac{2\alpha}{3\pi}(\frac{1 }{mc})^{2}\sum_{m}|\mathbf{p}_{mn}|^{2}(E_{m}-E_{n})\frac{1}{E}. \tag{35}\] The summation can be evaluated using the dipole sum rule Eq. 9, and Eqs. 10 and 11 for the Coulomb S state wavefunction, obtaining the final result for the high frequency spectral density for S states with principal quantum number \(n\) \[\frac{d\Delta E_{n}^{Bethe}}{dE}|_{E->\infty}=\frac{4mc^{2}}{3\pi}\alpha(Z \alpha)^{4}\frac{1}{n^{3}}\frac{1}{E}. \tag{36}\] The result highlights the \(1/E\) divergence at high frequencies, and shows the presence of a coefficient proportional to \(1/n^{3}\). To put a scale on the coefficient, we note that the high frequency spectral density can be written as \((8/3\pi)(\alpha(Z\alpha)^{2}/n)(E_{n}/E)\). The spectral density for all frequencies from Welton's model, Eq. 17, is identical to this high frequency limit of Bethe's calculation. Thus at low frequencies, the spectral density for Welton's calculation diverges as \(1/E\). Because of the expectation value of the Laplacian, Welton's approach predicts a shift only for S states. Its appeal is that it gives a clear physical picture of the primary role of vacuum fluctuations in the Lamb shift and shows the presence of the \(1/E\) characteristic behavior. To obtain a level shift, it requires providing a low energy limit for the integration. As we have noted, if the lower limit is the Bethe's log average excitation energy, 269 eV for n=1, and the upper limit \(mc^{2}\) then Welton's total 1S shift agrees with Bethe's. A choice of this type works since 1) it does not include any contributions from energies below 269 eV and 2) it gives a compensating contribution for energies from 269 eV to about 1000 eV that is larger than the actual spectral density, as shown in Fig. 4, and 3) above about 1000 eV, Welton's model gives the same \(1/E\) spectral density as Bethe. The spectral density for Power's model can be obtained from Eq. 24 \[\frac{\Delta E_{n}^{Power}}{dE}=-\frac{2\alpha}{3\pi}(\frac{1}{mc})^{2}\sum_{m} |\mathbf{p}_{mn}|^{2}(E_{m}-E_{n})\frac{E}{(E_{m}-E_{n})^{2}-E^{2}} \tag{37}\] Letting E become large, we see the result is identical to the high frequency limit Eq. 
35 for the Bethe formalism and the Welton model so we have \[\frac{\Delta E_{n}^{Power}}{dE}|_{E->\infty}=\frac{4mc^{2}}{3\pi}\frac{\alpha( Z\alpha)^{4}}{n^{3}}\frac{1}{E}. \tag{38}\] Thus we find for S states a \(1/E\) dependence of the high frequency spectral density, corresponding to the logarithmic divergence at high frequency. We can write this high energy theoretical result in a form allowing easy comparison to the calculated group theoretical spectral density eV/eV: \[\frac{d\Delta E_{n}^{Bethe}}{dE}|_{E->\infty}=\frac{4mc^{2}}{3\pi}\frac{\alpha (Z\alpha)^{4}}{n^{3}}\frac{1}{E}. \tag{39}\] The spectral density goes as \(1/n^{3}\) for S states. For the ground state \(n=1\), \(Z=1\) we have \[\frac{d\Delta E_{1}^{Bethe}}{dE}|_{E->\infty}=4.488\times 10^{-6}\frac{1}{E} \tag{40}\] A fit to the last two data points near 510 KeV in the group theoretical calculations gives: \[\frac{d\Delta E_{1}^{Ccalc}}{dE}|_{E->\infty}=4.4008\times 10^{-6}\frac{1}{E}. \tag{41}\] The coefficients differ by about 2%. Fig. 9 is a plot of the ground state group theoretical calculated spectral density (red) from Eq. 29 and the theoretical high energy \(1/E\) function from Bethe, Power and Welton, Eq. 40 (black), and the difference times a factor of 10. The asymptotic theoretical result agrees with the full group theoretical calculation from Eq. 29 to within about 2% at 511 keV, and to about 6% at 50 KeV. It is notable that the high frequency form is a reasonable approximation down to 50 keV. Indeed, the Welton approach is based on this observation; it has the same \(1/E\) energy dependence at all energies. ## 5 Spectral Density of the Lamb Shift at Low Frequency We can obtain a low frequency limit of the spectral density of the Lamb shift from the Bethe spectral density Eq. 34. For small values of \(E\), the spectral density can be expanded to first order in E, giving \[\frac{\Delta E_{n}^{Bethe}}{dE}|_{E->0}=\frac{2\alpha}{3\pi}(\frac{1}{mc})^{2 }\sum_{m}|\mathbf{p}_{mn}|^{2}(1-\frac{E}{E_{m}-E_{n}}). \tag{42}\] Since the sum is over a complete set of states \(m\) including scattering states we can evaluate the first term in parenthesis using the sum rule \[\sum_{m}|\mathbf{p}_{mn}|^{2}=-2mE_{n}=(mc)^{2}\frac{(Z\alpha)^{2}}{n^{2}}. \tag{43}\] For the second term we use Eq. 22 and the Thomas-Reiche-Kuhn sum rule [17] \[\sum_{m}\omega_{mn}|\mathbf{d}_{mn}|^{2}=\frac{3e^{2}\hbar}{2m} \tag{44}\] to evaluate the resulting summation. The final result for \(E\to 0\) is \[\frac{\Delta E_{n}^{Bethe}}{dE}|_{E->0}=\frac{2\alpha}{3\pi}\frac{(Z\alpha)^{ 2}}{n^{2}}-\frac{\alpha}{\pi mc^{2}}E. \tag{45}\] The corresponding spectral density for \(n=1,Z=1\) is \[\frac{d\Delta E_{1}^{Bethe}}{dE}|_{E->0}=\frac{4\alpha\times 13.6}{3\pi mc^{2}}(1 -\frac{3E}{4\times 13.6})=8.253\times 10^{-8}(1-0.0551E). \tag{46}\] Figure 9: Top red curve is the 1S group theoretical calculated spectral density Eq. 29, slightly lower black curve is the \(1/E\) asymptotic model Eq. 39, and the bottom green curve is the difference times 10, plotted for the interval 50-510keV. Both axes are linear. As E decreases to zero, the spectral density increases linearly to a constant value \(\frac{4\alpha}{3\pi}\frac{|E_{n}|}{mc^{2}}=2\alpha^{3}Z^{2}/3\pi n^{2}=8.253\times 1 0^{-8}/n^{2}\). The intercept goes as \(1/n^{2}\), but the slope \(\alpha/\pi mc^{2}\), which has a remarkable simple form, is independent of \(n\). If we take the low frequency limit of the group theoretical result analytically, we obtain exactly the same result as in Eq. 
45 from the Bethe formulation \[\frac{d\Delta E_{n}^{GTheory}}{dE}|_{E->0}=\frac{d\Delta E_{n}^{Bethe}}{dE}|_{E->0}=\frac{2\alpha}{3\pi}\frac{(Z\alpha)^{2}}{n^{2}}-\frac{\alpha}{\pi mc^{2}}E. \tag{47}\] Fig. 3 shows the results of group theoretical calculations of the spectral density of the ground state Lamb shift for different energy regions, showing the near linear increase in the spectral density as the frequency decreases from 80 eV to \(10^{-5}\) eV. For low values of E, the slopes and intercept agree within about two tenths of a percent with the theoretical values from Eq. 47. To explore Power's approach at low frequency, we can let \(E\) become very small in the spectral density Eq. 37, giving \[\frac{d\Delta E_{n}^{Power}}{dE}|_{E->0}=-\frac{2\alpha}{3\pi(mc)^{2}}\sum_{m}|\mathbf{p}_{mn}|^{2}\frac{E}{E_{m}-E_{n}} \tag{48}\] which is identical to the second term in the low E approximation to the Bethe result Eq. 42 so we have: \[\frac{d\Delta E_{n}^{Power}}{dE}=-\frac{1}{\pi}\frac{\alpha}{mc^{2}}E. \tag{49}\] This result Eq. 49 is identical to the frequency dependent term in Eq. 47, which is the low frequency spectral density from the Bethe approach and from the group theoretical expression. However, in the low frequency limit based on Power's expression for the spectral density, the constant term that is present in the other approaches does not appear. This is a consequence of the form used for the index of refraction, which assumes that real photons are present that can excite the atom with resonant transitions. More sophisticated implementations of Feynman's proposal may avoid this issue. ## 6 Comparison of the Spectral Energy Density of the Vacuum Field and the Spectral Density of the Radiative Shift The theory of Feynman proposes that the vacuum energy density in a large box containing H atoms, which we assume are all in the 1S ground state, increases uniformly with the addition of the atoms. He maintains that the total vacuum energy in the box increases by the Lamb shift times the number of atoms present. If we had one atom in a very large box, we would not expect the change in energy density to be uniform but more concentrated near the atom. To develop a model of the spatial dependence of the change in energy density for one atom, we can use the close relationship between the vacuum field and the radiative shift. The spectral densities of the ground state shift and of the quantum vacuum with no H atoms present are both known. In the box the vacuum field density must increase so that the integral gives the 1S Lamb shift. The spectral energy density of the vacuum field with no H atom present is equal to [8] \[\rho_{0}(\omega)=\frac{\hbar\omega^{3}}{2\pi^{2}c^{3}} \tag{50}\] where c is the speed of light in cm/sec and \(\omega\) is in \(sec^{-1}\). If we measure frequency in eV so \(\hbar\omega=E\) then the vacuum spectral energy density in \(1/cc\) is \[\rho_{0}(E)=\frac{E^{3}}{2\pi^{2}\hbar^{3}c^{3}}. \tag{51}\] and \(\int_{E_{1}}^{E_{2}}\rho_{0}(E)dE\) would be the energy density eV/cc in the energy interval \(E_{1}\) to \(E_{2}\). The question we are addressing is: what volume of vacuum energy of density \(\rho_{0}(E)\) is required to supply the amount of energy needed for the radiative shift?
We can express the total radiative shift \(\Delta E_{1}\) as the integral of the vacuum energy density \(\rho_{0}(E)\) over an effective volume \(V_{1}(E)\) \[\Delta E_{1}=\int_{0}^{mc^{2}}dE\rho_{0}(E)V_{1}(E) \tag{52}\] where we use the same upper limit for \(E\) as in all of our calculations. Recall our definition of the spectral shift Eq. 28: \[\Delta E_{1}=\int_{0}^{mc^{2}}dE\frac{\Delta E_{1}}{dE}. \tag{53}\] By comparison of Eq. 52 and Eq. 53 we determine that to insure energy balance at each energy E, the effective spectral volume \(V_{1}(E)\) is \[V_{1}(E)=\frac{d\Delta E_{1}}{dE}\frac{1}{\rho_{0}(E)}. \tag{54}\] The spectral volume \(V_{1}(E)\) has the dimensions of \(cc\) and contains the amount of vacuum energy at energy value \(E\) that corresponds to the ground state spectral density at the same energy \(E\). In Fig. 10, for the 1S ground state radiative shift, we plot the log of the spectral volume \(V_{1}(E)\) on the y-axis in units of cubic Angstroms versus the log of the energy \(E\) in eV on the x-axis. For energies above about 100 eV, the spectral volume is less than 1 cubic Angstrom, approximately the volume of the ground state wavefunction. For an energy of 1 eV, the spectral volume is \(11850A^{3}\), corresponding to a sphere of radius about 14 A. This calculation predicts that there is a sphere of positive vacuum energy of radius 14 A around the atom corresponding to the 1 eV shift spectral density. Fig. 11 shows the radius of the spherical spectral volume \(V_{1}(E)\) for energies from 0.05 eV, with spectral radius of 278 A, to 23 eV, with radius 0.49 A. ## 7 Conclusion The non-relativistic Lamb shift can be interpreted as due to the interaction of an atom with the fluctuating electromagnetic field of the quantum vacuum. We introduce the concept of a spectral shift density which is a function of frequency \(\omega\) or energy \(E=\hbar\omega\) of the vacuum field. The integral of the spectral density from E=0 to the rest mass energy of an electron, 511 keV, gives the radiative shift. We report on calculations of the spectral density of the level shifts for 1S, 2S and 2P states based on a group theoretical analysis and compare the results to the spectral densities implicit in previous calculations of the Lamb shift. The group theoretical calculation provides an explicit form for the spectral density over the entire spectral range. Bethe's approach requires a summation over an infinite number of states, all bound and all scattering, to obtain a comparable spectral density. We compare all approaches for asymptotic cases, for very large and very small energies E. Figure 10: This loglog plot shows the spectral volume \(V_{1}(E)\) as a function of \(E\). The spectral volume \(V_{1}(E)\) contains the free field vacuum energy at energy value \(E\) that corresponds to the ground state shift spectral density at the same energy \(E\). The calculations of the shift spectral density provide a new perspective on radiative shifts. The group theory approach as well as the approaches of Bethe, Power, and Welton all show the same \(1/E\) high frequency behavior for S states above about \(E=\hbar\omega\)= 1000 eV to E=511 keV, namely a spectral density for S states equal to \((4/3\pi)(\alpha(Z\alpha)^{4}mc^{2}/n^{3})(1/E)\) for states with principal quantum number \(n\). 
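The 1 eV spectral-volume figure quoted above can be cross-checked directly from Eqs. 46, 51 and 54. The sketch below uses the low-frequency form of the 1S spectral density as an approximation at 1 eV together with \(\hbar c\approx 1973\) eV·Å; it is a minimal numerical check, and it reproduces the quoted volume and radius to better than one percent:

```python
from math import pi

hbar_c = 1973.27      # eV * Angstrom

def rho0(E):
    """Free-field vacuum spectral energy density, Eq. 51, in eV per (eV * Angstrom^3)."""
    return E**3/(2*pi**2*hbar_c**3)

def dshift_1s(E):
    """Low-frequency 1S spectral density of Eq. 46 (dimensionless), valid for small E."""
    return 8.25e-8*(1 - 0.0551*E)

E = 1.0                                  # eV
V1 = dshift_1s(E)/rho0(E)                # Eq. 54
radius = (3*V1/(4*pi))**(1/3)
print(f"V1(1 eV) ~ {V1:.0f} A^3, sphere radius ~ {radius:.1f} A   (text: 11850 A^3, ~14 A)")
```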
Since our group theory calculation shows that about 76% of the ground state 1S shift is contributed by E above 1000 eV, this is essentially why all the approaches give approximately the same result for the 1S Lamb shift. Only the Bethe and group theory calculations have the correct low frequency behavior. We find that for S states the spectral density increases linearly as E approaches zero. Its maximum value is at E=0 and for S states equals \((2\alpha/3\pi)(Z\alpha)^{2}/n^{2}\). This maximum value is about \(1/(Z\alpha)^{2}\) or about \(2\times 10^{4}\) larger than the high frequency spectral density at E=510 keV. Thus low energies contribute much more to the shift for a given spectral interval than the high energies. Energies below 13.6 eV contribute about 2.5 %. Because of the huge spectral range contributing to the shift, contributions to the shift from high energies are very important. Half the contribution to the 1S shift is from energies above 9050 eV. The 2P shift has a very different spectral density from an S state: it is negative and has an asymptotic behavior that goes as \(1/E^{2}\) rather than as \(1/E\). Below about 20 eV, the absolute value of the 2P spectral density is much larger than the 2S spectral density and it dominates the 2S-2P shift spectral density, yet the total 2P shift is only about 0.3% of the total 2S shift. ## Appendix A : Eigenstates \(|nlm;a\rangle\) of \(1/Z\alpha\) To obtain an equation for these basis states \(|nlm;a\rangle\) we write Schrodinger's equation for a charged non-relativistic particle with energy \(E=-\frac{a^{2}}{2m}\)[14][15] in a Coulomb potential \[\left[p^{2}+a^{2}-\frac{2m\hbar cZ\alpha}{r}\right]|a\rangle=0. \tag{1}\] There are solutions for \(|a\rangle\) for certain critical values of the energy \(E_{n}=-\frac{a_{n}^{2}}{2m}\) or equivalently when \(a=a_{n}\) where \(\frac{a_{n}}{mcZ\alpha}=\frac{1}{n}\). These are the usual energy eigenstates which we label as \(|nlm;a_{n}\rangle\). Conversely we can let \(a\) be fixed in value and let \(Z\alpha\) have different values. If it has certain eigenvalues \(Z\alpha_{n}\) then for any value of \(a\) we can have another set of eigenvectors corresponding to eigenvalues \(\frac{a}{mcZ\alpha_{n}}=\frac{1}{n}\). To demonstrate this we start by inserting factors of \(1=\sqrt{ar}\frac{1}{\sqrt{ar}}\) in Schrodinger's equation Eq. 1 obtaining \[\left(\sqrt{ar}(p^{2}+a^{2})\sqrt{ar}-2amZ\alpha\right)\frac{1}{\sqrt{ar}}|a \rangle=0. \tag{2}\] Figure 11: This plot shows the log of the radius in Angstroms of the spherical spectral volume \(V_{1}(E)\) as a function of the vacuum field energy E from 0.05 eV to 23 eV. We can rewrite this equation, multiplying successively from the left by \(\frac{1}{\sqrt{ar}},\frac{1}{p^{2}+a^{2}}\), and \(\frac{1}{\sqrt{ar}}\), and then multiplying by \(a^{2}\), and dividing by \(mcZ\alpha\), multiplying by \(\sqrt{n\hbar}\) obtaining \[\left(\frac{a}{mcZ\alpha}-K_{1}(a)\right)\sqrt{\frac{n\hbar}{ar}}|a\rangle=0 \tag{111}\] where \[K_{1}(a)=\frac{1}{\sqrt{ar}}\frac{2a^{2}\hbar}{p^{2}+a^{2}}\frac{1}{\sqrt{ar}} \tag{112}\] There are solutions to this equation for eigenvalues of \(1/Z\alpha\) such that \(\frac{a}{mcZ\alpha_{n}}=\frac{1}{n}\): \[\left(\frac{1}{n}-K_{1}(a)\right)|nlm;a\rangle=0 \tag{113}\] where \[\sqrt{\frac{n\hbar}{ar}}|nlm;a\rangle=|nlm;a\rangle\] The \(n\hbar\) in the square root insures the new states are also normalized to 1. 
The kernel \(K_{1}(a)\) is bounded and Hermetian with respect to the eigenstates \(|nlm;a\rangle\) of \(1/Z\alpha\), therefore these eigenstates of \(1/Z\alpha\) form a complete orthonormal basis for the hydrogen atom. Because the kernel is bounded, there are no continuum states in this representation. To show they have the same quantum numbers as the usual states, we note when \(a=a_{n}\) then the eigenstates of \(K_{1}(a_{n})\) becomes \(|nlm;a_{n}\rangle\) and these corresponds to the usual energy eigenstates \(|nlm;a_{n}\rangle\). We can change the value of \(a\) in Eq. 113 to obtain these eigenstates using the dilation operator \(D(\lambda)=e^{iS\lambda}\) where the dimensionless operator S, which is also a generator of transformations of SO(4,2), is \[S=\frac{1}{2\hbar}(\mathbf{p}\cdot\mathbf{r}+\mathbf{r}\cdot\mathbf{p}). \tag{114}\] When S operates on the canonical variables we obtain \[D(\lambda)\mathbf{p}D^{-1}(\lambda)=e^{-\lambda}\mathbf{p}\] \[D(\lambda)\mathbf{r}D^{-1}(\lambda)=e^{\lambda}\mathbf{r}.\] Operating on \(K_{1}(a)\) with \(D(\lambda)\) we find \[D(\lambda)K_{1}(a)D^{-1}(\lambda)=K_{1}(ae^{\lambda}).\] We can pick \(\lambda\) as \[\lambda_{n}=ln(a_{n}/a)\] so that \(ae^{\lambda_{n}}=a_{n}\). Thus operating with \(D(\lambda_{n})\) on Eq. 113 we obtain \[\left(\frac{1}{n}-K_{1}(a_{n})\right)D(\lambda_{n})|nlm;a\rangle=0. \tag{115}\] This is the equation for the usual Schrodinger energy eigenstates so \[D(\lambda_{n})|nlm;a\rangle=|nlm;a_{n}\rangle=\sqrt{\frac{n\hbar}{a_{n}r}}|nlm; a_{n}\rangle. \tag{116}\] Thus the usual Schrodinger energy eigenstates \(|nlm;a_{n}\rangle\) can be expressed in terms of the eigenstates of \(1/Z\alpha\) as \[|nlm;a_{n}\rangle=\sqrt{\frac{a_{n}r}{n\hbar}}D(\lambda_{n})|nlm;a\rangle. \tag{10}\] The relationship shows that complete basis functions \(|nlm;a\rangle\) of \(1/Z\alpha\) are proportional to the ordinary bound state energy wavefunctions and therefore have the same quantum numbers as the ordinary bound states[14][15]. A comparable set of \(1/Z\alpha\) eigenstates useful for momentum space calculations is derived in [14]. ## Appendix B : Derivation of Group Theoretical Formula for the Shift Spectral Density The group theoretical approach is based solely on the Schrodinger and Klein-Gordon equations of motion in the non-relativistic dipole approximation. We obtain a result [14] \[\Delta E_{NL}=\frac{2\alpha}{3\pi(mc)^{2}}\int_{0}^{\hbar\omega_{c}}dE\langle NL |p_{i}\frac{H-E_{N}}{H-(E_{N}-E)-i\epsilon}p_{i}|NL\rangle. \tag{11}\] where \(E=\hbar\omega\), \(H=\frac{p^{2}}{2m}-\frac{Z\alpha\hbar c}{r}\) and the states \(|NL\rangle\) are the usual H atom energy eigenstates. \(\omega_{\rm C}\) is a cutoff frequency for the integration that we will take as \(\hbar\omega_{c}=mc^{2}\). If we insert a complete set of states in this expression we obtain Bethe's result Eq. 6, a step we avoid with the group theoretical approach. If we add and subtract \(E\) from the numerator in Eq. 11, we find the real part of the shift is \[\Delta E_{NL}=\frac{2\alpha}{3\pi(mc)^{2}}Re\int_{0}^{\hbar\omega_{c}}dE[ \langle NL|p^{2}|NL\rangle-E\Omega_{NL}] \tag{12}\] where \[\Omega_{NL}=\langle NL|p_{i}\frac{1}{H-E_{N}+\hbar\omega-i\epsilon}p_{i}|NL\rangle. \tag{13}\] We want to convert the matrix element \(\Omega_{NL}\) to a matrix element of a function of SO(4,2) generators taken between a new set of basis states \(|nlm;a\rangle\), which are complete with no scattering states, where \(a=\sqrt{2m|E|}\), and n,l,m have their usual meaning and values. 
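Before introducing the group theoretical machinery, we note one matrix element that is used repeatedly below: \(\langle NLM|p^{2}|NLM\rangle=a_{N}^{2}\), the virial-theorem input that later cancels the \(p^{2}\) term in the shift. A minimal symbolic check for the 1S state, written with the wavefunction \(\propto e^{-ar/\hbar}\) so that \(a_{1}=mcZ\alpha\) (sympy is used here purely as a cross-check):

```python
import sympy as sp

r, a, hbar = sp.symbols('r a hbar', positive=True)
# Normalized 1S wavefunction with length scale hbar/a, i.e. psi ~ exp(-a r/hbar)
psi = sp.sqrt(a**3/(sp.pi*hbar**3))*sp.exp(-a*r/hbar)

norm = sp.integrate(4*sp.pi*r**2*psi**2, (r, 0, sp.oo))
p2 = sp.integrate(4*sp.pi*r**2*(hbar*sp.diff(psi, r))**2, (r, 0, sp.oo))

print(sp.simplify(norm))   # 1
print(sp.simplify(p2))     # a**2, i.e. <p^2> = a_1^2 with a_1 = m c Z alpha
```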
The new basis states \(|nlm;a\rangle\) are eigenstates of \((Z\alpha)^{-1}\)[14][15]. Sometimes we simply write them as \(|nlm\rangle\) with the \(a\) implicit. We define a generator of SO(4,1) as \(\Gamma_{0}=1/K_{1}(a)=(1/2)(\frac{\sqrt{r}p^{2}\sqrt{r}}{a}+ar)\) so \[(\Gamma_{0}-n)|nlm;a\rangle=0. \tag{14}\] This is Schrodinger's equation in the language of SO(4,2). We need to define several more generators. Since the algebra of SO(4,2) generators closes, commutators of generators must also be generators. To find \(\Gamma_{4}\), we calculate \(\Gamma_{4}=-i[S,\Gamma_{0}]\), obtaining \[\Gamma_{4}=\frac{1}{2\hbar}\left(\frac{\sqrt{r}p^{2}\sqrt{r}}{a}-ar\right) \qquad\Gamma_{0}=\frac{1}{2\hbar}\left(\frac{\sqrt{r}p^{2}\sqrt{r}}{a}+ar\right) \tag{15}\] where the generator \(S\) is defined in Appendix A. The generators \((\Gamma_{4},S,\Gamma_{0})=(j_{1},j_{2},j_{3})\) form a O(2,1) subgroup of SO(4,2) and \(S=i[\Gamma_{4},\Gamma_{0}],\Gamma_{0}=-i[S,\Gamma_{4}]\) and for our representations \(\Gamma_{0}^{2}-\Gamma_{4}^{2}-S^{2}=\mathbf{L^{2}}=\mathbf{l}(l+1)\). The scale change S transforms \(\Gamma_{0}\equiv\Gamma_{0}(a)\) according to the equation \[e^{i\lambda S}\Gamma_{0}(a)e^{-i\lambda S}=\Gamma_{0}(e^{\lambda}a)=\Gamma_{ 0}\cosh\lambda-\Gamma_{4}\sinh\lambda \tag{16}\] and similarly \[e^{i\lambda S}\Gamma_{4}(a)e^{-i\lambda S}=\Gamma_{4}(e^{\lambda}a)=\Gamma_{ 4}\cosh\lambda-\Gamma_{0}\sinh\lambda. \tag{17}\] Finally we define a three vector of generators proportional to the momentum \[\Gamma_{i}=\frac{1}{\hbar}\sqrt{\tau}p_{i}\sqrt{\tau}. \tag{100}\] The quantity \(\mathbf{\Gamma}=(\Gamma_{0},\Gamma_{1},\Gamma_{2},\Gamma_{3},\Gamma_{4})\) is a five vector of generators under transformations generated by SO(4,2). For the representation of SO(4,2) based on the states \(|nlm\rangle\), all generators are Hermetian, and \(\mathbf{\Gamma}^{2}=\Gamma_{A}\Gamma^{A}=-\Gamma_{0}^{2}+\Gamma_{1}^{2}+ \Gamma_{2}^{2}+\Gamma_{3}^{2}+\Gamma_{4}^{2}=1\) for our representation, and \(g_{AB}=(-1,1,1,1,1)\) for \(A,B=0,1,2,3,4\). The commutators of the components of the five vector are also generators of \(SO(4,2)\) transformations. Inserting factors of \(1=\sqrt{a\tau}\frac{1}{\sqrt{a\tau}}\) and using the definitions of the generators we can transform Eq. 101 to \[\Omega_{NL}=\frac{m\nu}{N^{2}}(NL|\Gamma_{i}\frac{1}{\Gamma n(\xi)-\nu}\Gamma _{i}|NL) \tag{101}\] where \[n^{0}(\xi)=\frac{2+\xi}{2\sqrt{1+\xi}}=\cosh\phi\qquad n^{i}=0\qquad n^{4}( \xi)=-\frac{\xi}{2\sqrt{1+\xi}}=-\sinh\phi \tag{102}\] and \[\xi=\frac{\hbar\omega}{|E_{N}|}\qquad\nu=\frac{N}{\sqrt{1+\xi}}=Ne^{-\phi}. \tag{103}\] From the definitions we see \(\phi=\frac{1}{2}ln(1+\xi)>0\) and \(n_{A}(\xi)n^{A}(\xi)=-1\). The contraction over \(i\) in \(\Omega_{NL}\) may be evaluated using the group theoretical formula [14]: \[\sum_{B}\Gamma_{B}f(n\Gamma)\Gamma^{B}=\frac{1}{2}(n\Gamma+1)^{2}f(n\Gamma+1)+ \frac{1}{2}(n\Gamma-1)^{2}f(n\Gamma-1)-(n\Gamma)^{2}f(n\Gamma). \tag{104}\] We apply the contraction formula to the the integral representation \[f(n\Gamma)=\frac{1}{\Gamma n-\nu}=\int_{0}^{\infty}dse^{\nu s}e^{-n\Gamma s} \tag{105}\] and obtain the result \[\Gamma_{A}\frac{1}{\Gamma n-\nu}\Gamma^{A}=-2\nu\int_{0}^{\infty}ds\,e^{\nu s }\frac{d}{ds}(\sinh^{2}\frac{s}{2}\,e^{-n\Gamma s}). \tag{106}\] Applying this to our expression Eq. 
101 for \(\Omega_{NL}\) gives \[\begin{split}\Omega_{NL}&=-2\frac{m\nu^{2}}{N^{2}} \int_{0}^{\infty}dse^{\nu s}\frac{d}{ds}\left(\sinh^{2}\frac{s}{2}M_{NL}(s) \right)\\ &-m\frac{\nu}{N^{2}}(NL|\Gamma_{4}\frac{1}{\Gamma n(\xi)-\nu} \Gamma_{4}|NL)+m\frac{\nu}{N^{2}}(NL|\Gamma_{0}\frac{1}{\Gamma n(\xi)-\nu} \Gamma_{0}|NL)\end{split} \tag{107}\] where \[M_{NL}(s)=(NL|e^{-\Gamma n(\xi)s}|NL). \tag{108}\] In order to evaluate the last two terms in Eq. 107 we use \(\Gamma_{0}|NL)=N|NL)\) and express the action of \(\Gamma_{4}\) on our states as \(\Gamma_{4}=N-(1/\sinh\phi)(\Gamma n(\xi)-\nu)\). This expression for \(\Gamma_{4}\) is derived from Eqs. 101 and 102: \(\Gamma n(\xi)-\nu=\Gamma_{0}\cosh\phi-\Gamma_{4}\sinh\phi-\nu\), and then substituting Eq.103, \(\nu=Ne^{-\phi}\). Using the virial theorem \((NLM|p^{2}|NLM)=a_{N}^{2}\), we find that the term in \(p^{2}\) in Eq. 107 exactly cancels the last two terms in \(\Omega_{NL}\), yielding the result for the level shift \[\Delta E_{NL}=\frac{4mc^{2}\alpha(Z\alpha)^{4}}{3\pi N^{4}}\int_{0}^{\phi_{c} }d\phi\sinh\phi e^{\phi}\int_{0}^{\infty}ds\,e^{\nu s}\frac{d}{ds}\left(\sinh^{ 2}\frac{s}{2}M_{NL}(s)\right) \tag{109}\] where \[\phi_{c}=\frac{1}{2}ln\left(1+\frac{\hbar\omega_{c}}{|E_{N}|}\right)=\frac{1}{2} ln\left(1+\frac{2N^{2}}{(Z\alpha)^{2}}\right). \tag{108}\] We can derive a generating function for the shifts for any eigenstate characterized by \(N\) and \(L\) if we multiply Eq.107 by \(N^{4}e^{-\beta N}\) and sum over all \(N,N\geq L+1\). To simplify the right side of the resulting equation, we use the definition Eq. 106 and the fact that \(\Gamma_{4},\ S,\) and \(\Gamma_{0}\) form an O(2,1) algebra so we have: \[\sum_{N=L+1}^{\infty}e^{-\beta N}M_{NL}=\sum_{N=L+1}^{\infty}(NL|e^{-j\cdot \Psi}|NL), \tag{109}\] where \[e^{-j\cdot\Psi}\equiv e^{-\beta\Gamma_{0}}e^{-s\Gamma n(\xi)}. \tag{110}\] We perform a \(j\) transformation generated by \(e^{i\phi\xi}\), such that \(e^{-j\cdot\Psi}\to e^{-j_{3}\Psi}=e^{-\Gamma_{0}\psi}\). The trace is invariant with respect to this transformation so we have \[\sum_{N=L+1}^{\infty}e^{-\beta N}M_{NL}=\sum_{N=L+1}^{\infty}(NL|e^{-j_{3}\Psi }|NL)=\sum_{N=L+1}^{\infty}e^{-N\psi}=\frac{e^{-\psi(L+1)}}{1-e^{-\psi}}, \tag{111}\] where we have used \((NL|\Gamma_{0})|NL)=N\). In order to find a particular \(M_{NL}\), we must expand the right hand side of the equation in powers of \(e^{-\beta}\) and equate the coefficients to those on the left hand side. Using the isomorphism between \(j\) and the Pauli \(\sigma\) matrices \((\Gamma_{4},S,\Gamma_{0})\rightarrow(j_{1},j_{2},j_{3})\rightarrow(\frac{i}{ 2}\sigma_{1},\ \frac{i}{2}\sigma_{2},\ \frac{1}{2}\sigma_{3})\) gives the result \[\cosh\frac{\psi}{2}=\cosh\frac{\beta}{2}\cosh\frac{s}{2}+\sinh\frac{\beta}{2} \sinh\frac{s}{2}\cosh\phi. \tag{112}\] Rewriting this equation gives \[e^{+\frac{1}{2}\psi}=de^{\frac{1}{2}\beta}+be^{-\frac{1}{2}\beta}-e^{-\frac{1 }{2}\psi} \tag{113}\] where \[\begin{array}{l}d=\cosh\frac{s}{2}+\sinh\frac{s}{2}\cosh\phi\\ b=\cosh\frac{s}{2}-\sinh\frac{s}{2}\cosh\phi\end{array}. \tag{114}\] Let \(\beta\) become very large, which implies large \(\psi\), and iterate the equation for \(e^{-\frac{1}{2}\psi}\) to obtain the result \[e^{-\psi}=Ae^{-\beta}\left[1+A_{1}e^{-\beta}+A_{2}e^{-2\beta}+\ldots\right] \tag{115}\] where \(A=1/d^{2}\) and \(A_{1}=-(2/d)(b-d^{-1})\). To obtain \(M_{NL}\), we expand the right side of Eq. 111 in powers of \(\psi\) \[\frac{e^{-\psi(L+1)}}{1-e^{-\psi}}=\sum_{m=1}^{\infty}e^{-\psi(m+L)}. 
\tag{116}\] For large \(\beta\) it follows from Eqs. 111, 115, and 116 that \[\sum_{N=L+1}^{\infty}e^{-\beta N}M_{NL}=\sum_{m=1}^{\infty}\left[e^{-\beta}A (1+A_{1}e^{-\beta}+A_{2}e^{-2\beta}+...\right]^{m+L}. \tag{117}\] Using the multinomial theorem [18] the right side of Eq. 117 becomes \[\sum_{m=1}^{\infty}A^{m+L}\sum_{r,s,t,...}\frac{(m+L)!}{r!s!t!...}\,A_{1}^{s}A _{2}^{t}...e^{-\beta(m+L+s+2t+...)} \tag{118}\] where \(r+s+t+...=m+L\). To obtain the expression for \(M_{NL}\), we note \(N\) is the coefficient of \(\beta\) so \(N=m+L+s+2t+...=r+2s+3t+...\) Accordingly we find \[M_{NL}=\sum_{r,s,t,}A^{(r+s+t+...)}\frac{(r+s+t+...)!}{r!s!t!}A_{1}^{s}A_{2}^{t}. \tag{100}\] where r+s+t+...=N and r+s+t+...>L. For the 1S shift, as Eq. 101 indicates, we want the matrix element \(M_{10}\) which corresponds to \(e^{-\beta}\) so \(M_{10}=A\). For the 2S shift we have \(M_{20}=A^{2}+AA_{1}\), and for the 2P shift \(M_{21}=A^{2}\). Therefore the radiative shift for the 1S ground state is \[Re\Delta E_{10}=\frac{4mc^{2}\alpha(Z\alpha)^{4}}{3\pi}\int_{0}^{\phi_{c}}d\phi e ^{\phi}\sinh\phi\int_{0}^{\infty}dse^{\varepsilon e^{-\phi}}\frac{d}{ds}\frac{ 1}{\left(\coth\frac{s}{2}+\cosh\phi\right)^{2}}. \tag{101}\] The shift for the 2S-2P level is \[Re(\Delta E_{20}-\Delta E_{21})=\frac{m\alpha(Z\alpha)^{4}}{6\pi}\int_{0}^{ \phi_{c}}d\phi e^{\phi}\sinh^{3}\phi\int_{0}^{\infty}dse^{2\varepsilon e^{- \phi}}\frac{d}{ds}\frac{1}{\left(\coth\frac{s}{2}+\cosh\phi\right)^{4}}. \tag{102}\] ## Funding This research received no external funding. The author has no conflicts of interest. ## Acknowledgements I thank Prof. Peter Milonni for many insightful and enjoyable discussions, particularly about the resonant behavior of the index of refraction and the volume of vacuum energy corresponding the the spectral density, and I thank Prof. Lowell S. Brown for his observations, especially about the 1/ frequency asymptotic behavior.
2309.06101
Tuning of Ray-Based Channel Model for 5G Indoor Industrial Scenarios
This paper presents an innovative method that can be used to produce deterministic channel models for 5G industrial internet-of-things (IIoT) scenarios. Ray-tracing (RT) channel emulation can capture many of the specific properties of a propagation scenario, which is highly beneficial when facing various industrial environments and deployment setups. But the complexity of the environment, composed of many metallic objects of different sizes and shapes, pushes the RT tool to its limits. In particular, the scattering or diffusion phenomena can bring significant components. Thus, in this article, the Volcano RT channel simulation is tuned and benchmarked against field measurements found in the literature at two frequencies relevant to 5G industrial networks: 3.7 GHz (mid-band) and 28 GHz (millimeter-wave (mmWave) band), to produce a calibrated ray-based channel model. Both specular and diffuse scattering contributions are calculated. Finally, the tuned RT data is compared to measured large-scale parameters, such as the power delay profile (PDP) and the cumulative distribution function (CDF) of delay spreads (DSs), in both line-of-sight (LoS) and non-LoS (NLoS) situations, and relevant IIoT channel properties are further explored.
Gurjot Singh Bhatia, Yoann Corre, Marco Di Renzo
2023-09-12T10:08:00Z
http://arxiv.org/abs/2309.06101v1
# Tuning of Ray-Based Channel Model for 5G Indoor Industrial Scenarios ###### Abstract This paper presents an innovative method that can be used to produce deterministic channel models for 5G industrial internet-of-things (IIoT) scenarios. Ray-tracing (RT) channel emulation can capture many of the specific properties of a propagation scenario, which is incredibly beneficial when facing various industrial environments and deployment setups. But the environment's complexity, composed of many metallic objects of different sizes and shapes, pushes the RT tool to its limits. In particular, the scattering or diffusion phenomena can bring significant components. Thus, in this article, the Volcano RT channel simulation is tuned and benchmarked against field measurements found in the literature at two frequencies relevant to 5G industrial networks: 3.7 GHz (mid-band) and 28 GHz (millimeter-wave (mmWave) band), to produce calibrated ray-based channel model. Both specular and diffuse scattering contributions are calculated. Finally, the tuned RT data is compared to measured large-scale parameters, such as the power delay profile (PDP), the cumulative distribution function (CDF) of delay spreads (DSs), both in line-of-sight (LoS) and non-LoS (NLoS) situations and relevant IIoT channel properties are further explored. channel models, 5G, benchmark, ray-tracing, mmWave. ## I Introduction The 5G mobile communication was developed to enhance mobile networks' broadband capabilities and supply improved wireless access to a wide range of industry verticals, such as the manufacturing, automotive, and agricultural sectors [1]. Industrial environments are considered severe from the point of view of electromagnetic (EM) wave propagation. A large number of obstructions and fading fluctuations may cause degraded signal and system reliability. Hence, radio channel characterization is critical for designing radio communication systems for future smart factories. A channel model is known as an abstract and simplified approach to mathematically or computationally reproduce the main characteristics of an actual channel and evaluate the impact on the performance of a specific wireless technology. Empirical channel models rely on wide-band channel measurements to characterize propagation by statistically assessing wide-band channel properties and then formulating mathematical relationships and equations to derive important variables like path loss, DS, angular spread, and so forth. Some popular empirical path loss models for indoor scenarios are: Log-Normal (Large-scale) Shadowing model, Alpha Beta Gamma (ABG) model, and Close-in (CI) free space path loss model. The 3rd Generation Partnership Project (3GPP) in the technical report (TR) 38.901 version 16.1.0 [2] also supports the ABG model and CI free space path loss model for indoor scenarios, such as Indoor Hotspot - Office (InH) and Indoor Factory (InF) scenarios. Non-geometric Stochastic Channel Models (NGSCMs) illustrate and determine physical parameters entirely stochastically by dictating underlying probability distribution functions without assuming an underlying geometry (examples are the Saleh-Valenzuela or the Zwick model). Whereas, Geometry-based Stochastic Channel Models (GSCMs) rely on some geometrical assumptions, and the propagation parameters are at least partially stochastic and specified by probability distributions [3]. The COST models, the 3GPP Spatial Channel Model (SCM), and Wireless World Initiative New Radio (WINNER) are some examples of GSCMs. 
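For reference, the two empirical path loss forms mentioned above can be written in a few lines. The sketch below implements the CI and ABG equations; the exponent and offset values are purely illustrative and are not calibrated to any of the measurement campaigns discussed here:

```python
from math import log10

def pl_ci(f_ghz, d_m, n, shadow_db=0.0):
    """Close-in (CI) free-space reference model: FSPL(f, 1 m) + 10*n*log10(d)."""
    return 32.4 + 20*log10(f_ghz) + 10*n*log10(d_m) + shadow_db

def pl_abg(f_ghz, d_m, a, b, g, shadow_db=0.0):
    """Alpha-Beta-Gamma (ABG) model: 10*a*log10(d) + b + 10*g*log10(f/GHz)."""
    return 10*a*log10(d_m) + b + 10*g*log10(f_ghz) + shadow_db

# Illustrative (uncalibrated) parameters for a 20 m link at 3.7 GHz
print(pl_ci(3.7, 20.0, n=2.0))                  # CI with a free-space-like exponent
print(pl_abg(3.7, 20.0, a=2.0, b=32.4, g=2.0))  # ABG reduces to the same value here
```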
Stochastic channel models (SCMs) do not aim to reproduce channel responses at a particular site but are plausible and realistic given an imaginary environment. SCMs leverage radio measurements such as wide-band channel measurements to derive statistical distributions for large-scale parameters and distribution laws for multi-path components (MPCs). Deterministic channel modeling is another approach to simulate radio propagation based on Maxwell's equations. RT is a commonly-used technique in deterministic channel modeling. Its algorithm starts by finding all the possible geometrical rays between a transmitter and a receiver for a given number of allowed interactions. Then, the calculation of the rays' (EM field) contributions is based on the geometrical optics (GO), uniform theory of diffraction for diffraction (UTD), and effective roughness theory (ER), assuming that the far-field conditions are met. The RT can provide path loss data, angle-of-arrival (AoA), angle-of-departure (AoD), time delay, optical visibility (LoS or NLoS), etc. RT, similar to the finite element method, can require a lot of computational resources to produce precise results, especially for complex problems. This is particularly consequential in industrial environments with many complicated structures and objects with complex geometries. As a result, RT simulations can become computationally intensive for InP scenarios. Recently, 3GPP in the TR 38.901 version 16.1.0 [2], has proposed models for typical smart factory environments called InF and IIoT channel models. These channel models rely significantly on measurement data collected in typical propagation conditions. Several measurement campaigns in InF scenarios can be found in the literature. In [4], the authors try to assess the channel propagation at 28 and 60 GHz frequencies for light and heavy industrial layouts. In [5], the authors demonstrate massive multi-input multi-output (mMIMO) channel characteristics based on channel sounding measurements carried out in an industrial environment at frequency ranges from 26 to 30 GHz. In [6], the authors perform directional wide-band channel measurements at 28 GHz in an industrial environment. The work in [7] explains a wide-band channel measurement campaign at 3.7 and 28 GHz with direction-of-arrival information at 28 GHz. Such campaigns typically use empirical models and analyze channel parameters such as path loss, PDP, AoA, AoD, LoS probability, and Root Mean Square (RMS) DS, among other relevant parameters. Stochastic and empirical channel models rely heavily on wide-band field measurements [6, 7, 8]. Field measurements, such as channel-sounding measurements, are used to get precise data about the radio channel. They pose several difficulties in complexity, cost, and representability, which need efficient radio propagation models to be developed as workable replacements. Another challenge for those models is that propagation in InF scenarios is more site-specific than in usual residential or office environments [4, 7]. For instance, in InF scenarios, metallic machines are one of the most common objects in the environment. The huge bodies of such machines can become major blockers in the NLoS case. The smooth metallic surface creates many reflections, and the large body prevents the signal from propagating directly. The InF environment with a large number of machines complicates the radio propagation in the factory. 
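The specular part of a RT engine reduces, interaction by interaction, to simple geometry. As an illustration of the principle (not of any particular tool), the sketch below uses the image method to find a first-order reflection off a single flat wall and returns the reflected path length, its excess delay relative to the direct path, and a TE Fresnel reflection coefficient for a lossless dielectric; the geometry and permittivity are arbitrary example values:

```python
import numpy as np

c = 299_792_458.0

def single_wall_reflection(tx, rx, wall_y=0.0, eps_r=5.0):
    """First-order specular reflection off an infinite wall at y = wall_y (image method).
    Returns the reflected path length (m), its excess delay vs. the direct path (ns),
    and the TE Fresnel reflection coefficient of a lossless dielectric half-space."""
    tx, rx = np.asarray(tx, float), np.asarray(rx, float)
    img = tx.copy()
    img[1] = 2*wall_y - img[1]                     # mirror the TX across the wall
    t = (wall_y - img[1])/(rx[1] - img[1])         # where the image-RX line hits the wall
    p_spec = img + t*(rx - img)                    # specular reflection point
    length = float(np.linalg.norm(rx - img))       # unfolded reflected path length
    los = float(np.linalg.norm(rx - tx))
    d_in = (p_spec - tx)/np.linalg.norm(p_spec - tx)
    cos_i = abs(d_in[1])                           # cosine of incidence angle to the wall normal
    root = np.sqrt(eps_r - (1 - cos_i**2))
    gamma_te = (cos_i - root)/(cos_i + root)
    return length, (length - los)/c*1e9, float(gamma_te)

print(single_wall_reflection(tx=(0.0, 2.0), rx=(10.0, 1.5)))
```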
Given the drawback of field measurements, the site-specific nature of InF scenarios, and the recent interest in mmWave frequency bands for wireless networks, many channel and network performance metrics for mmWave communications have been lately generated through RT [9, 10]. Hence, ray-based channel models can be considered an interesting radio propagation and characterization approach in InF scenarios. In this paper, we present an approach that can be used to produce a site-specific calibrated ray-based channel model for a typical 5G InF scenario. This paper aims to reduce the dependency on complicated field measurements. This will allow a tool to create site-specific channel data in factory scenarios and a database of channel samples that may be used for research studies. There have been earlier attempts to calibrate ray-based models for urban scenarios using measurement data like in [10]. Still, to the best of our knowledge, there has not been any such attempt for industrial environments yet. This paper is structured as follows. Section II briefly describes the various benefits of RT for InF scenarios. Section III describes the approach used for the validation of RT against measurements. Section IV portrays some ideas for exploiting the calibrated RT tool. Finally, Section V presents the conclusion and future perspective of this work. ## II Benefits of Ray-tracing for InF scenarios The RT approach presents some benefits to realize the next-generation channel models for future smart factories. These benefits can provide a new paradigm to study the signal propagation characteristics and model the wireless channel in industrial scenarios. * RT is a powerful technique for predicting "specular MPCs" (SMPCs: specular reflections and diffractions) and "dense MPCs" (DMPCs: diffuse scattering) in complex industrial settings. RT enables deterministic modeling of ray paths, making it valuable for channel modeling, network planning, and optimization, particularly at mmWave frequencies where there are a limited number of such paths. * RT requires detailed modeling of the propagation environment, incorporating precise dimensions and dielectric properties of objects. Hence, RT models can be tailored to specific scenarios, effectively capturing the site-specific characteristics of indoor industrial environments. * The RT model can be calibrated [10]. Once a RT model is calibrated and validated for a specific scenario, it can predict channel parameters for different positions of the base station (BS) and the user terminal (UT), and predict coverage maps and correlation properties. It can also predict MIMO channel properties, spectral efficiency, and data rates. * The validated RT model then can provide received power and other large-scale parameters at arbitrary locations for radio coverage planning and network optimization of a communication system. * Spatial and time variability of the channel in terms of the movement of the transmitter, receiver, and other objects (such as mobile robots, automated guided vehicles (AGVs), drones, and humans.) in the environment can also be included. * It is relatively easy to include beyond 5G enabling technologies such as reconfigurable intelligent surfaces (RIS) into RT models. For instance, the effective roughness theory used to model diffuse scattering interactions can be used to model the anomalous reflections from RIS [11]. 
## III Ray-tracing validated against measurements This section will focus on the benchmarking attempt against the measurement results found in the literature. In [7], the authors present a wide-band channel measurement campaign at 3.7 and 28 GHz with direction-of-arrival information at 28 GHz. Large-scale channel parameters are evaluated using empirical channel models, and the results are compared to the 3GPP TR 38.901 InF model. It is an example of an industrial measurement campaign and a typical use case for InF scenarios. Hence, it will be a reference to benchmark and tune our ray-based model. The 3D digital model of the measurement scenario, as shown in Fig. 1, was created just from the 2D floor plan available in the literature. We did not have access to a detailed description, photos of the measurement site, or exact specifications of the measurement setup. Thus, the chosen objects, their material, and their dimensions are a guess based on the reference paper and analysis of typical objects found in such factories. The exact location of different receiver positions was also unknown. ### _The 3D digital scenario and simulation settings_ The BS antenna used in simulations is a single element, \(\frac{\lambda}{2}\) vertically polarized omni-directional antenna. The transmit ted power was kept at 0 dBm to analyze only the propagation channel's impact. The UT antenna is also a single element, \(\frac{\lambda}{2}\) vertically polarized omni-directional antenna. The BS is deployed at a fixed position at a height of 1.85 m. The UTs are deployed at 75 different positions: 1-38 LoS (step size = 1 m) and 39-75 NLoS (step size = 1 m), as shown in Fig. 1. All the UTs are at a height of 1.44 m. The digital model is 74.4 m long, 24.4 m wide, and 4.6 m high. Table I gives the specifications of the 3D digital scenario, while table II gives an overview of the various objects in the 3D digital scenario. After creating a digital model of the measurement site, point-to-multi-points (P2MP) simulations were realized for positions 1-38 for the LoS case and 39-75 for the NLoS case, using Volcano Flex [12] RT. Volcano Flex is a time-efficient propagation model based on the ray-launching (also known as the shooting-and-bouncing) approach, capable of predicting deterministic path-loss in any small-scale urban or indoor scenario. It can provide channel properties and 3D multi-path trajectories. By default, two reflections and one diffraction are allowed for each ray. For P2MP simulation results, various channel parameters, such as received power, PDP, channel impulse response (CIR), transfer function (TF), Azimuth AoA, Elevation AoD, and many more, can be calculated. After the initial analysis, the simulation results of position 1 (LoS) and 39 (NLoS) were compared with the measured results. The multi-paths richness was observed to be strongly underestimated, and low-power components at higher delays were missing. Hence, the number of allowed interactions was changed. The P2MP analysis was repeated with a maximum of three reflections, one diffraction allowed for each ray, and diffuse scattering from walls and machines were activated [10]. The PDPs showed significant improvement for most of the positions when the number of interactions was increased, but this increased the computation times as well. 
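Once the rays are available, the large-scale parameters compared later follow from straightforward post-processing of the PDP. For example, the power-weighted RMS DS with a noise-floor cut can be computed as below; the delay and power values are synthetic, for illustration only:

```python
import numpy as np

def rms_delay_spread_ns(delays_ns, powers_dbm, noise_floor_dbm=-145.0):
    """Power-weighted RMS delay spread of a PDP; paths below the noise floor are discarded."""
    delays_ns = np.asarray(delays_ns, float)
    powers_dbm = np.asarray(powers_dbm, float)
    keep = powers_dbm > noise_floor_dbm
    p, tau = 10**(powers_dbm[keep]/10), delays_ns[keep]
    tau_bar = np.sum(p*tau)/np.sum(p)
    return float(np.sqrt(np.sum(p*(tau - tau_bar)**2)/np.sum(p)))

# Synthetic PDP (delays in ns, powers in dBm), for illustration only
print(rms_delay_spread_ns([20, 35, 60, 120, 300], [-60, -72, -80, -95, -120]))
```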
A point-to-point (P2P) link simulation with three reflections, one diffraction, and diffuse scattering from walls and machines took around two and a half hours compared to about thirty minutes if diffuse scattering was disabled. This time further decreases to a few seconds for the case of two reflections and one diffraction for each ray, but the results worsen significantly. Hence, the results could further be improved by increasing the number of interactions, but this would significantly increase the computation times. ### _Calibration of channel simulation results_ The calibration process aims to adjust the contribution of different types of ray interactions and minimize the difference between the simulated and measured results. As mentioned earlier, metallic machines are a common object in InF situations. Their enormous size makes them a major NLoS obstacle and scatterer for low-elevation user devices. Machines are not represented with all details but by a large simple block. They are associated with the material properties of a typical metal [13]. After visual inspection of predicted rays and initial channel parameters, it could be seen that the SMPCs (specular reflections and diffractions) from the machines were over-estimated. Hence, to decrease the impact of SMPCs just from the machines, the default metal properties were swapped to \(\acute{\varepsilon}_{\mathrm{r}}^{{}^{\prime}}=3\), \(\acute{\varepsilon}_{\mathrm{r}}^{{}^{\prime\prime}}=0.1\) at 3.7 GHz and \(\acute{\varepsilon}_{\mathrm{r}}^{{}^{\prime}}=3\), \(\acute{\varepsilon}_{\mathrm{r}}^{{}^{\prime\prime}}=0.09\) at 28 GHz, with a thickness of 40 cm. This would decrease the weight of the specular reflections and diffractions, and allow for some (minor) transmission, while the strength of the diffuse scattering remains unchanged. These equivalent material properties offer significant improvement. However, we did not exhaustively analyze, and further optimization remains. Then we used the same principle as in [10] with ray-path classification and calibration to better fit the measurements. PDPs (maximum power and power decay trend), DSs, and angular distributions are analyzed for this study. Furthermore, the diffraction components were decreased by a diffraction offset of 10 dB. The contribution of the DMPCs (diffuse scattering) from the channel simulations was Fig. 1: 3D digital model of the measurement site in channel emulator. weaker than the measured results. Hence, the DMPCs were increased by a diffuse scattering offset of 12 dB. This coarse correction has no apparent physical justification but shows promising results and will permit the realization of realistic studies. ### _Comparison of calibrated and measured results_ #### Iv-C1 Power Delay Profile Fig. 3 (top) [7] shows the instantaneous PDP (IPDP) and averaged PDP (APDP) from the measurement data at 3.7 and 28 GHz, respectively, for the LoS case. The LoS PDP corresponds to a measurement point with a BS-UT distance of 5.2 m, resulting in a time of flight (ToF) delay of 17.5 ns. At 3.7 GHz, the measured LoS component was received with a power of -58.8 dBm, and at 28 GHz, with a power of -78.1 dBm. Fig. 3 (bottom) also shows the simulated LoS PDP from the RT model. It corresponds to a BS-UT distance of 6.1 m, resulting in a ToF delay of 20.3 ns. This component was received with a power of -59.6 dBm, and at 28 GHz, with a power of -77.1 dBm. In LoS case for both measured and simulated results, several SMPCs can be seen together with DMPCs for both frequencies. 
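These simulated LoS values are consistent with simple free-space expectations: with 0 dBm transmit power and unit-gain antennas, the Friis equation at 6.1 m gives about -59.5 dBm at 3.7 GHz and -77.1 dBm at 28 GHz, and the ToF is 20.3 ns. A minimal check:

```python
from math import log10, pi

c = 299_792_458.0

def tof_ns(d_m):
    return d_m/c*1e9

def friis_rx_dbm(d_m, f_hz, ptx_dbm=0.0):
    """Received power on a pure free-space LoS link with 0 dBi antennas."""
    return ptx_dbm - 20*log10(4*pi*d_m*f_hz/c)

print(tof_ns(6.1), friis_rx_dbm(6.1, 3.7e9), friis_rx_dbm(6.1, 28e9))
# ~20.3 ns, ~-59.5 dBm and ~-77.1 dBm, matching the simulated LoS values above
print(tof_ns(16.5))   # ~55.0 ns, consistent with the 54.9 ns NLoS delay quoted below
```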
For measurement results, they reach the noise floor of -145 dBm at a delay of about 450 ns at 28 GHz and 550 ns at 3.7 GHz. Thanks to calibration, the average trend of the simulated PDP is similar to the measurement until 180 ns. Beyond this value, the simulated power decrease accelerates; barely any propagation path exceeds the noise floor at delays greater than 320 ns and 400 ns at 28 GHz and 3.7 GHz, respectively. Fig. 4 (top) [7] shows the IPDP and APDP from the measurement data at 3.7 and 28 GHz, respectively, for NLoS case. The NLoS PDP corresponds to a measurement point with a BS-UT distance of 9.9 m, resulting in a time of flight delay of 33 ns. Fig. 4 (bottom) also shows the simulated LoS PDP. It corresponds to a BS-UT distance of 16.5 m, resulting in a ToF delay of 54.9 ns. For measurement results, only a few strong SMPCs can be seen at 3.7 GHz with a delay of 40 ns and a power of -79.4 dBm. It is difficult to distinguish between specular and diffuse components, even in a short delay range. Besides, in the simulation, we observe some dominant specular ray-paths at 65 and 80 ns with a power close to -80 dBm. The maximum predicted power is consistent with the measurement. However, the density of components detected at this level is underestimated. The reason may come from the simplification of the machinery representation. As shown in Fig. 3 and 4, the LoS and NLoS PDPs from measurement and simulated results follow the same trend with some disagreements that can also be attributed to the lack of detailed information about the measurement setup and scenario. Ray-based channel models, especially for the industrial environment, are highly scenario specific. Hence, the lack of proper details about various industrial objects' size, shape, position, and other aspects can change RT results, especially for NLoS cases and mmWave propagation, where DMPCs play a significant role. #### Iv-C2 RMS DS and Angular Spread As shown in Fig. 5, the CDF of DS was calculated using the calibrated RT results and compared with the measurement results at 3.7 GHz. For LoS case, the DS for the simulated scenario is smaller than the measured DS, due to underestimated power at higher delays. For the NLoS case, the observed difference is the opposite; this can be partly attributed to the fact the BS-UT distances in the digital scenario are almost 1.5 times larger than in the Fig. 4: Comparison between the measured PDP (top picture, taken from [7]) and the simulated PDP (bottom), for NLoS case. Fig. 3: Comparison between the measured PDP (top picture, taken from [7]) and the simulated PDP (bottom), for LoS case. Fig. 2: Some of the predicted specular ray components for UT 1 (LoS) after initial calibration. actual measurement scenario. The comparison also reinstates that it is hard to distinguish between specular and diffuse components, even in a short delay range. This complicates choosing the optimal offset for different interactions and can undermine the multi-path richness of the channel. The horizontal (azimuth) AoA power spectrum was also calculated and compared to the measurements, as shown in Fig. 6. ## IV Exploitation After its calibration, the RT tool can be exploited for various applications. We give some examples here. Fig. 7 shows the coverage map of the factory floor at 3.7 GHz with a resolution of 2 m. The coverage analysis can be performed with a finer resolution, but this would cost higher computation times. 
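Producing such a map amounts to evaluating the calibrated prediction on a regular grid of UT positions. The sketch below only shows the structure of that loop on the 74.4 m × 24.4 m floor with the 2 m resolution used here; the per-pixel prediction is replaced by a free-space placeholder and the BS position is illustrative, since the actual values would come from the calibrated RT engine rather than from this formula:

```python
import numpy as np

c, f = 299_792_458.0, 3.7e9
bs = np.array([10.0, 12.0])                      # illustrative BS position (m)

# 2 m grid over the 74.4 m x 24.4 m digital model
xs, ys = np.arange(1.0, 74.4, 2.0), np.arange(1.0, 24.4, 2.0)

def predicted_rx_dbm(p, ptx_dbm=0.0):
    """Placeholder per-pixel prediction: free-space loss only. In the real workflow
    this number would be returned by the calibrated ray-tracing engine."""
    d = max(float(np.linalg.norm(p - bs)), 1.0)
    return ptx_dbm - 20*np.log10(4*np.pi*d*f/c)

coverage = np.array([[predicted_rx_dbm(np.array([x, y])) for x in xs] for y in ys])
print(coverage.shape, float(coverage.max()), float(coverage.min()))
```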
Such a map is interesting to complement the channel analysis; it provides a more extensive and comprehensible evaluation of the factory's power distribution beyond the scope of what can be observed solely through measurements. For instance, we can see different NLoS situations with higher or lower power degradation. The calibrated model can also easily analyze channel properties between new BS-UT positions, as shown in Fig. 8. This makes the ray-based model a flexible tool to complement the measurement-based channel characterization. Finally, Fig. 9 (top) shows a factory scenario that the authors in [14] use to investigate various channel emulation use cases, like simplifying the propagation model to improve the time efficiency. Based on a real factory setup, three distinct zones are identified. These zones differ from each other in terms of shape, size, object material, and clutter density. Zone A is the biggest. It is 79.5 m long and is characterized by machines and storage containers. Zone B is 78.6 m long and is empty primarily to facilitate the movement of factory workers and their loads, with two big lobbies connecting a few rooms. Zone C is 51.6 m long, with metallic lockers, wooden benches, and metallic housing units. This scenario requires a separate in-depth study and analysis, but the first simulations conducted with the approach mentioned in this paper are reported here. Fig. 9 (bottom) shows that the CDFs of DS vary in a significant manner from one zone to another. These results re-emphasize that the IIoT channel models are site-specific, and any change in the factory scenario can strongly impact the channel properties. Fig. 5: Comparison between the measured CDF of RMS DS (top picture, taken from [7]) and the simulated CDF of RMS DS (bottom), for LoS and NLoS case at 3.7 GHz. Fig. 8: Simulation results corresponding to the new BS 2 and UT 82 at 3.7 GHz. Fig. 6: Comparison between the measured (top picture, taken from [7]) and the simulated horizontal AoA power profile at 28 GHz. Fig. 7: Coverage map of the factory floor at 3.7 GHz. Such a problem involving producing channel samples for InF scenarios can be easier to address using a RT tool. The approach suggested in this paper can be used to reduce the dependency on complicated channel measurements and provide more flexibility to produce channel models for InF and IIoT scenarios. ## V Conclusions and future perspective This work tested and calibrated a ray-based channel model in an industrial scenario. The calibration steps helped improve the average trend of simulated PDPs, DS, and angular distribution. Several aspects need further analysis, such as optimized material properties and extent of simplification of representation of various objects in the environment, more ways to increase the multi-path richness of the channel, and optimized diffraction and diffuse scattering offset. This work is undoubtedly one of the first studies directed toward producing a site-specific calibrated ray-based channel model for typical 5G InF scenarios. It is worth noting that the calibrated ray-based channel model can be used to predict several other channel and network parameters for the given factory scenario, such as coverage maps, correlation properties, MIMO channel properties, spectral efficiency, signal-to-noise ratio, data rates, and so forth. Besides, it is relatively easy to include beyond 5G enabling technologies such as mMIMO and reconfigurable intelligent surfaces (RIS) into RT models. 
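As an example of the link-level figures just listed (SNR, spectral efficiency, data rates), a predicted received power can be converted into SNR and a Shannon spectral-efficiency bound with a simple noise budget; the bandwidth, noise figure and received power used below are illustrative values only:

```python
from math import log10, log2

def snr_db(rx_dbm, bandwidth_hz=100e6, noise_figure_db=9.0):
    """Link SNR from a predicted received power, with thermal noise
    -174 dBm/Hz + 10*log10(B) + NF."""
    noise_dbm = -174 + 10*log10(bandwidth_hz) + noise_figure_db
    return rx_dbm - noise_dbm

def spectral_efficiency_bps_hz(rx_dbm, **kw):
    """Shannon bound log2(1 + SNR) for the same link."""
    return log2(1 + 10**(snr_db(rx_dbm, **kw)/10))

print(snr_db(-75.0), spectral_efficiency_bps_hz(-75.0))   # 10 dB SNR, ~3.46 bit/s/Hz
```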
Considering the challenges presented by InF and IIoT scenarios, the ray-based model offers an intriguing supplementary or alternative option to empirical and stochastic methodologies. The produced channel samples will be completed with additional scenarios and made publicly available with free access soon. ## Acknowledgment This work is part of a project that has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska Curie grant agreement No. 956670. We thank the Fraunhofer Heinrich Hertz Institute's support in understanding the measurement setup.
2309.16312
Gravity Mediated Entanglement between Oscillators as Quantum Superposition of Geometries
Protocols for observing gravity induced entanglement typically comprise the interaction of two particles prepared either in a superposition of two discrete paths, or in a continuously delocalized (harmonic oscillator) state of motion. An important open question has been whether these two different approaches allow one to draw the same conclusions on the quantum nature of gravity. To answer this question, we analyse, using the path-integral approach, a setup that contains both features: a superposition of two highly delocalized center of mass states. We conclude that the two usual protocols are of similar epistemological relevance. In both cases the appearance of entanglement, within linearised quantum gravity, is due to gravity being in a highly non-classical state: a superposition of distinct geometries.
Ofek Bengyat, Andrea Di Biagio, Markus Aspelmeyer, Marios Christodoulou
2023-09-28T10:07:43Z
http://arxiv.org/abs/2309.16312v1
# Gravity Mediated Entanglement between Oscillators as Quantum Superposition of Geometries ###### Abstract Protocols for observing gravity induced entanglement typically comprise the interaction of two particles prepared either in a superposition of two discrete paths, or in a continuously delocalized (harmonic oscillator) state of motion. An important open question has been whether these two different approaches allow to draw the same conclusions on the quantum nature of gravity. To answer this question, we analyse using the path-integral approach a setup that contains both features: a superposition of two highly delocalized center of mass states. We conclude that the two usual protocols are of similar epistemological relevance. In both cases the appearance of entanglement, within linearised quantum gravity, is due to gravity being in a highly non-classical state: a superposition of distinct geometries. Feynman argued that detecting the gravitational pull of a mass in superposition would be direct evidence of the quantum nature of gravity [1], as the description of such an experiment would require the assignment of quantum mechanical probability amplitudes to different configurations of the gravitational field. Due to the weakness of gravity this has remained a gedankenexperiment since the last 65 years. Partly motivated by experimental progress there has been a revival of this fundamental question and several proposals have been put forward that analyze Feynman's gravity-mediated entanglement (GME) in more detail [2; 3; 4]. Given the increase in quantum control of mesoscopic solid masses [5; 6; 7] and advances in gravity measurements at small scales [8; 9; 10; 11], these experiments are expected to become feasible in our times [12; 13]. The literature is split into two kinds of experimental protocols for observing gravity induced entanglement. We refer to them as the _path protocol_ and the _oscillator protocol_. The path protocol [2; 3] envisages witnessing entanglement production due to the gravitational interaction between two particles, each in a quantum superposition of centre of mass location. The quantum delocalization around each position is neglected. The oscillator protocol [4; 14; 15] takes a different route: entanglement arises due to gravitational interaction between two continuously delocalized particles, initially prepared as quantum ground states of harmonic oscillators. Both protocols would serve as a test for the predictions of (linearized) quantum gravity. The question remains whether for both protocols one can draw the same conclusion about the quantum nature of gravity. Specifically, that neither case can be described by our existing (classical) theory of gravity, but requires linearized quantum gravity. In [16; 17; 18; 19] it was argued that during the path protocol the entanglement is caused by spacetime existing in a superposition of causally evolving, diffeomorphically inequivalent geometries. In this paper, we extend this analysis to the oscillator protocol. We assume linearised gravity and compute the evolution with the path integral, which has the advantage of keeping spacetime locality explicit. We then consider a state that generalises both the path and oscillator protocols: a superposition of two center of mass states with significant quantum spread of each particle. 
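To make the path-protocol mechanism concrete before turning to the continuous case, one can attach a Newtonian phase \(Gm^{2}T/\hbar d_{ij}\) to each pair of branches of two path-superposed particles and evaluate the concurrence of the resulting pure two-qubit state. The sketch below does this numerically; the mass, separations and interaction time are illustrative and are not the parameters analysed in this paper:

```python
import numpy as np

G, hbar = 6.674e-11, 1.055e-34                 # SI units
m, d, beta, T = 1e-14, 250e-6, 100e-6, 2.0     # kg, m, m, s -- illustrative only

# Branch separations when both superpositions lie along the line joining the particles
seps = [abs(d + j*beta - i*beta) for i in (0, 1) for j in (0, 1)]
phases = [G*m**2*T/(hbar*r) for r in seps]      # Newtonian phase per branch pair

# Pure two-qubit state (1/2) * sum_ij exp(i*phi_ij) |i>|j>
a00, a01, a10, a11 = [np.exp(1j*p)/2 for p in phases]
concurrence = 2*abs(a00*a11 - a01*a10)          # Wootters concurrence of a pure state

print("branch phases (rad):", [round(p, 3) for p in phases])
print("concurrence after T:", round(float(concurrence), 3))   # ~0.1 for these numbers
```

For small splittings the result scales as \((\beta/d)^{2}\), in line with the path-protocol limit recovered at the end of the paper.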
We calculate the generalised concurrence for this state--an entanglement measure for continuous variable systems proposed in [20]-- in the limit of an instantaneous interaction (zeroth order in \(1/c\)). We recover results from the literature: in the limit where the width of the wavepackets is small with respect to the size of the path superposition, we recover the entanglement rate for the path protocol. In the converse limit, we recover the entanglement rate for the oscillator protocol. We then discuss the interesting case of an initially very localised state, which, perhaps counterintuitively, results in much _faster_ entanglement generation. We calculate the leading order relativistic correction for this case, which is a quantitatively larger correction than the relativistic corrections discussed in [17] for the path protocol, as well as qualitatively different as it arises solely due to the fast spreading of the wavepacket. We conclude that while the physics of the oscillator protocol can in fact be richer, observing gravity mediated entanglement from either the path or oscillator protocol is due to superposition of spacetime geometries. The computations are analytic but lengthy, involving thousands of terms at times, and so require non-trivial use of a symbolic computation software. The code used is provided as Supplementary Material. _Protocol --_ The protocol we study is depicted in Fig. 1. The initial distance between the center of mass of the particles is denoted \(d\). At time \(t_{1}\), each of the two particles \(A\) and \(B\) is in a well-localized state. This can be achieved by applying a strong trapping potential at times \(t<t_{1}\), which is switched off at time \(t_{1}\). Local operations are applied to each particle to produce some desired wavefunction \(\psi_{2}(t_{2})\), which is assumed separable in \(A\) and \(B\). The particles are left to interact for a time \(T=t_{3}-t_{2}\) that is much larger than \(t_{2}-t_{1}\) and \(t_{4}-t_{3}\). At time \(t_{3}\), the particles are in some entangled state. Local operations are applied again to the particles to make them well localised (and thus approximately disentangled from the field) by the time \(t_{4}\). That is, entanglement produced at times \(t\notin[t_{2},t_{3}]\) is taken negligible. Entanglement is then measured by measuring center of mass momentum and position correlations. _Action --_ We assume linearized gravity. For brevity, the metric perturbation \(h_{\mu\nu}\) at a fixed time is denoted as \(\varphi\) and a 'path' \(h_{\mu\nu}(t)\) as \(\mathcal{G}\). The action of the joint system of field and the two particles \(A\) and \(B\) is of the form \[S\left[x_{a},\mathcal{G}\right]=S_{\mathrm{G}}\left[x_{a},\mathcal{G}\right]+ \sum_{a\in\{A,B\}}S_{a}\left[x_{a}\right] \tag{1}\] where \(a=A,B\), \(S_{a}\) the action of a particle with mass \(m_{a}\) in flat spacetime, and \(S_{\mathrm{G}}\) the gravitational action which includes the kinetic terms for \(\varphi\) and its coupling to the energy momentum tensor, which in turn describes point particles with trajectories \(x_{a}^{\mu}=(t,\mathbf{x}_{a})\). \(S_{a}\), \(S_{\mathrm{G}}\) and \(T_{\mu\nu}\) are given in the Appendix. Note that without the term \(S_{\mathrm{G}}\) there can be no entanglement generation since \(e^{i\sum_{a}S_{a}\left[x_{a}\right]/\hbar}=\prod_{a}e^{iS_{a}\left[x_{a} \right]/\hbar}\). 
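A useful scale for the free evolution over the time \(T\) is the rate at which the initial wavepackets spread. With the nominal frequency \(\omega=\hbar/m\alpha^{2}\) introduced further below, a Gaussian of width \(\alpha\) grows by the factor \(\sqrt{1+(\omega T)^{2}}\); the values used here are illustrative and only indicate that strongly localised initial states spread appreciably within the interaction time while broad ones barely do:

```python
hbar = 1.055e-34      # J s
m, T = 1e-14, 2.0     # kg, s -- illustrative values only

for alpha in (1e-7, 1e-10):                 # initial Gaussian width in metres
    omega = hbar/(m*alpha**2)               # nominal frequency, as defined later in the text
    growth = (1 + (omega*T)**2)**0.5        # free-particle spreading factor
    print(f"alpha = {alpha:.0e} m: omega*T = {omega*T:.2e}, width grows by x{growth:.2f}")
```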
_Path integral --_ We model the system by a quantum state \(\ket{\Psi(t)}\in\mathcal{H}_{A}\otimes\mathcal{H}_{B}\otimes\mathcal{H}_{ \mathrm{G}}\), where \(\mathcal{H}_{A}\) and \(\mathcal{H}_{B}\) are associated with the centre of mass of the particles and \(\mathcal{H}_{\mathrm{G}}\) contains the states of the gravitational field. The specific assignment of a state to the gravitational field is a delicate point which we discuss further below. The state at time \(t_{1}\) is assumed to be of the form \[\ket{\Psi(t_{1})}=\ket{\psi_{1}^{A}}\ket{\psi_{1}^{B}}\ket{\varphi_{1}}, \tag{2}\] that is, a separable state of the two particles and the field. We also assume that the state at time \(t_{4}\) is \[\ket{\Psi(t_{4})}=U\ket{\Psi(t_{1})}=\ket{\psi_{4}}\ket{\varphi_{1}}, \tag{3}\] where \(U\) is the unitary evolution operator and now \(\ket{\psi_{4}}\) is a possibly non-separable state for the two particle, allowing for gravity mediated entanglement. Note that we have assumed that the state of the gravitational field is the same at the beginning and at the end of the experiment, for two reasons. First, because the particles are light and slow-moving, we assume that the amplitude to excite radiation is negligible; in other words, we approximate the final state as its zero graviton component. Secondly, we assume that the particles start and end in well-localised states at the same position. We take \(\ket{\varphi_{1}}\) to be the classical-like state peaked on the classical field \(\varphi_{1}\): the newtonian field1 of two localised particles at \(\mathbf{Y}_{A}\) and \(\mathbf{Y}_{B}\). Footnote 1: We take the particles to be localised at the same position \(\mathbf{Y}_{a}\) at the beginning and end of the experiment. If they would end up in a different place, the final gravitational state would be peaked around a different newtonian field. Nothing but convenience rests on this. We introduce the path integral by positing that \[\bra{\mathbf{y}_{4}^{a},\varphi_{1}}U\ket{\mathbf{y}_{1}^{a},\varphi_{1}}= \int_{\mathbf{y}_{1}^{a},\varphi_{1}}^{\mathbf{y}_{4}^{a},\varphi_{1}}\mathrm{ D}x_{a}^{\prime}\mathrm{D}\mathcal{G}^{\prime}e^{iS[x_{a}^{\prime}, \mathcal{G}^{\prime}]}, \tag{4}\] where \(\ket{\mathbf{y}^{a}}\) is a position eigenstate for particle \(a\). Therefore the wavefunction for the final state \(\psi_{4}(\mathbf{y}_{4}^{a}):=\bra{\mathbf{y}_{4}^{a}}\psi_{4}\) is given in terms of the initial wavefunctions \(\psi_{1}^{a}(\mathbf{y}_{1}^{a}):=\bra{\mathbf{y}_{1}^{a}}\psi_{1}^{a}\) as \[\psi_{4}(\mathbf{y}_{4}^{a})=\int\mathrm{d}\mathbf{y}_{1}^{a}\psi_{1}^{a}( \mathbf{y}_{1}^{a})\int_{\mathbf{y}_{1}^{a},\varphi_{1}}^{\mathbf{y}_{4}^{a},\varphi_{1}}\mathrm{D}x_{a}^{\prime}\mathrm{D}\mathcal{G}^{\prime}e^{iS[x_{ a}^{\prime},\mathcal{G}^{\prime}]}. \tag{5}\] We now take the stationary phase approximation for \(\mathcal{G}\). The action is to be evaluated on the classical gravitational field \(\mathcal{G}[x_{a}]\) sourced by a particle moving as \(x_{a}(t)\). \[\psi_{4}(\mathbf{y}_{4}^{a})\approx\int\mathrm{d}\mathbf{y}_{1}^{a}\psi_{1}( \mathbf{y}_{1}^{a})\int_{\mathbf{y}_{1}^{a}}^{\mathbf{y}_{4}^{a}}\mathrm{D}x_{ a}^{\prime}e^{iS[\mathcal{G}[x_{a}^{\prime}]]}. \tag{6}\] We have introduced the notation \(S[\mathcal{G}[x_{a}^{\prime}]]=S[x_{a}^{\prime},\mathcal{G}[x_{a}^{\prime}]]\) for brevity. Note that, in general, Figure 1: A sketch of the protocol. The particles start in a separable, well-localised state at \(t_{1}\). 
A more general separable state is prepared by local operations (\(t_{2}\)). The particles evolve freely for a period \(T\), resulting in a possibly entangled state. Finally, local operations on the particles make them localised again, without changing the entanglement. \(\mathcal{G}[x_{a}^{\prime}](t_{1})\neq\varphi_{1}\neq\mathcal{G}[x_{a}^{\prime}](t_{ 4})\), therefore the path \(\mathcal{G}[x_{a}^{\prime}]\) is _not_ one of the paths integrated over in (5). However, we assume for the moment that the error is not too significant, because the initial state will be very localized. The validity of this assumption will be corroborated by our results. A second stationary phase approximation for the trajectories is taken by keeping only the contribution on the solution of the classical equations of motion for the particles for initial and final positions \(\mathbf{y}_{1}^{a}\) and \(\mathbf{y}_{4}^{a}\), denoted as \(x_{a}\). We obtain that \[\psi_{4}(\mathbf{y}_{4}^{a})\approx\int\mathrm{d}\mathbf{y}_{1}^{a}\psi_{1}( \mathbf{y}_{1}^{a})e^{iS[\mathcal{G}[x_{a}]]}. \tag{7}\] Since \(\psi_{1}(\mathbf{y}_{1}^{a})\) is separable in \(A\) and \(B\), entanglement generation will arise from the \(S_{G}\) part of \(S\). The gravitational contribution at each \(\mathbf{y}_{4}^{a}\) corresponds to a phase \(S_{G}[\mathcal{G}[x_{a}]]\) with \(x_{a}(t_{1})=\mathbf{y}_{1}^{a}\) and \(x_{a}(t_{4})=\mathbf{y}_{4}^{a}\). Each such pair of trajectories sources a gravitational field, and integrating over \(\mathbf{y}_{1}^{a}\) corresponds to'summing' the contributions from the gravitational interaction of all these pairs of trajectories for \(A\) and \(B\). We note two points that will be further discussed later. First, (7) indicates that the gravitational field is not in a classical-like state during the experiment. Second, the trajectories \(x_{a}\) can be taken to be straight lines even when going to the leading order in \(1/c\), which indicates no radiation due to no acceleration. This justifies our original assumption of no radiation in \(|\varphi_{1}\rangle\). State evolution --We have yet to use the assumptions that the'splitting' and'recombination' phases are fast with respect to \(T\), which imply we only need to consider the free evolution for \(t\in[t_{2},t_{3}]\). It can be shown that, to leading order in \(1/c\), \[\psi_{4}(\mathbf{y}_{4}^{a})\approx \int\mathrm{d}\mathbf{y}_{3}^{a}e^{iS_{3}[\mathcal{G}[x_{2}^{a}] ]}\cdot\int\mathrm{d}\mathbf{y}_{2}^{a}e^{iS_{2}[\mathcal{G}[x_{2}^{a}]]}\] \[\cdot\int\mathrm{d}\mathbf{y}_{1}^{a}\psi_{1}(\mathbf{y}_{1}^{a} )e^{iS_{1}[\mathcal{G}[x_{1}^{a}]]}. \tag{8}\] \(x_{i}^{a}\) are the trajectories of the particles at the time interval \([t_{i},t_{i+1}]\). Above, the actions are approximated up to leading order in \(1/c\) (which is \(1/c^{2}\)). We caution that due to retardation it is not straightforward to arrive at (8), see the Appendix. Evolution from \(\psi_{2}\) to \(\psi_{3}\) is done by the second integral in (8). The assumptions made previously imply that the evolution done with the actions \(S_{1}\) and \(S_{3}\) will not contribute to the entanglement measured for \(\psi_{4}\). We thus assume that the state \(\psi_{2}=\int\mathrm{d}\mathbf{y}_{1}^{a}\psi_{1}e^{iS_{1}}\) has been prepared separable in \(A\) and \(B\) and write \[\psi_{3}(\mathbf{y}_{3}^{a})\approx\int\mathrm{d}\mathbf{y}_{2}^{a}e^{iS_{2}[ \mathcal{G}[x_{2}^{a}]]}\psi_{2}(\mathbf{y}_{2}^{a}). 
\tag{9}\] Once \(\psi_{3}(\mathbf{y}_{3}^{a})\) is calculated, we proceed to compute the entanglement in \(\psi_{3}\), which in the approximation we work in will be the same as that in \(\psi_{4}\). Two gaussians in path superposition --We now fix the states \(\psi_{2}^{a}\) and apply the above to the generalised case shown in Fig. 2 to arrive at an analytical expression for the entanglement at zeroth order in \(1/c\). That is, in this (and the next) section the gravitational interaction is approximated as an instantaneous interaction through a newtonian potential. The state \(\psi_{2}=\otimes_{a}\psi_{2}^{a}\) is separable in \(A\) and \(B\) and describes two quantum particles of mass \(m_{a}=m\), each prepared initially in a spatial superposition of two gaussians. The initial wavefunction of each particle in the position basis is \[\psi_{2}^{a}(\mathbf{y}^{a})\propto\exp\left(-\frac{(\mathbf{y}^{a}-\beta/2)^{ 2}}{2\alpha^{2}}\right)+\exp\left(-\frac{(\mathbf{y}^{a}+\beta/2)^{2}}{2 \alpha^{2}}\right), \tag{10}\] where \(\alpha\) is the delocalization of the position of each path and \(\beta\) is the separation between the two paths. The normalisation will drop out from the entanglement. Thinking of the gaussians as ground states of a harmonic potential (which was turned off at \(t=t_{2}\)), we introduce the (nominal) frequency \(\omega=\hbar/m\alpha^{2}\). We can calculate \(\psi_{3}\) using (9). To simplify the difficult calculation, we assume \(d\gg\beta,\alpha,\alpha\omega t\), and expand to second order in the dimensionless parameters \(\alpha/d\) and \(\beta/d\). This is a realistic assumption for most realistic experimental scenarios. We also expand to zeroth order in the dimensionless parameter \(\omega d/c\). The above make the integrals over \(\mathrm{d}\mathbf{y}_{2}^{a}\) in (9) computationally tractable. The integrands become sums of gaussians multiplied by polynomials, a case for which closed formulas can be found. The code used for the long analytical calculation is provided as Supplementary Material. To quantify the amount of entanglement generated by gravity in \(\psi_{3}\), we compute the generalised entanglement measure \(\mathcal{E}\) recently introduced in [20] and given in the Appendix. This is a generalization to the case of continuous sets of states of the concurrence, a well known entanglement measure for discrete sets of states. We find \[\mathcal{E}= \frac{2Gm^{2}t}{\hbar d}\frac{1}{d^{2}}\Bigg{[}\frac{1}{4}\beta^{ 4}+\alpha^{2}\beta^{2}\left(1+\frac{1}{4}(\omega t)^{2}\right) \tag{11}\] \[+\alpha^{4}\Bigg{[}f_{0}\left(\frac{\beta}{\alpha}\right)+f_{2} \left(\frac{\beta}{\alpha}\right)\frac{(\omega t)^{2}}{3}+f_{4}\left(\frac{ \beta}{\alpha}\right)\frac{(\omega t)^{4}}{9}\Bigg{]}\Bigg{]}^{1/2}.\] Figure 2: The initial state of matter. We take the convention that the two particles have opposite positive directions to reduce the number of sign changes in what follows. Note that \(\omega\equiv\hbar/m\alpha^{2}\). The functions \(f_{0},f_{2}\), and \(f_{4}\) are of order unity and given in the appendix. As is shown below, the faster than linear scaling with time is due to the growth in uncertainty. Path and oscillator protocols --When \(\beta\gg\alpha\) each of the particles begins in a superposition of two gaussians with negligible overlap. This will remain true during the experiment so long as \(\beta\gg\alpha\omega T\). Then, (11) reduces to the path protocol entanglement [13] \[\mathcal{E}_{\beta\gg\alpha}=\frac{Gm^{2}T}{\hbar d}\left(\frac{\beta}{d} \right)^{2}. 
\tag{12}\] When \(\beta\ll\alpha\) each particle state is approximately a single gaussian and (11) reduces to \[\mathcal{E}_{\beta\ll\alpha}=\frac{Gm^{2}T}{\hbar d}\left(\frac{\sqrt{2} \alpha}{d}\right)^{2}\sqrt{1+\frac{(\omega T)^{2}}{3}+\frac{(\omega T)^{4}}{9 }}. \tag{13}\] We see that the distance \(\beta\) between the paths in the path protocol is replaced by the initial gaussian width \(\alpha\). The oscillator protocol has an additional amplification term that depends on \(\omega T\). When \(\omega T\) is large, we obtain \[\mathcal{E}_{\beta\ll\alpha}^{\omega T\gg 1}=\frac{2}{3}\frac{Gm^{2}T}{\hbar d }\left(\frac{\alpha\omega T}{d}\right)^{2}, \tag{14}\] which is in accordance with previous literature on the oscillator protocol [4]. Recall that \(\omega\propto\alpha^{-2}.\) This means that, while in the small \(\omega T\) limit, larger \(\alpha\) leads to larger entanglement, in the large \(\omega T\) limit, smaller spread leads to larger entanglement. This was already remarked in [4]. Relativistic correction --We now report the leading correction in \(1/c\) for the oscillator protocol when \(\omega T\gg 1\). The newtonian potential is now replaced by a Lienard-Wiechert type [17], approximated to leading order in \(1/c\). The tedious calculation is given in the Supplementary Material. As in the previous section, we expand the action to second order in \(\alpha/d\), but, this time keep the leading order in \(\omega d/c\). The state at \(t_{2}\) is now \[\psi_{2}(\mathbf{y}_{a})=\otimes_{a}\psi_{2}^{a}(\mathbf{y}_{a})\propto \otimes_{a}\exp\left(-\frac{\mathbf{y}^{a^{2}}}{2\alpha^{2}}\right). \tag{15}\] The entangled state \(\psi_{3}\) is computed again through (9) and is given in the Supplementary Material. The entanglement is \(\mathcal{E}_{\beta\ll\alpha}^{\omega T\gg 1}+\Delta\mathcal{E}_{\beta\ll \alpha}^{\omega T\gg 1}\) with \[\Delta\mathcal{E}_{\beta\ll\alpha}^{\omega T\gg 1}=-\frac{3}{2}\left(\frac{d}{ ct}\right)^{2}\mathcal{E}_{\beta\ll\alpha}^{\omega T\gg 1}. \tag{16}\] Using (14), \(\Delta\mathcal{E}_{\beta\ll\alpha}^{\omega T\gg 1}\) scales with \((\omega d/c)^{2}\). Note that there will be _less_ entanglement due to this relativistic effect. The correction for general values of \(\omega T\) is given in the Appendix. There is a physical explanation for this, namely, the fact that when \(\omega t\gg 1\) the entanglement generation is not due to the initial spread per se, but, due to the fast growth of the spread with free evolution. The particles see each other with retardation and so they see a smaller spread of the other particle, which results in less entanglement. The amount of entanglement reduction is controlled by the ratio of the distance of the center of mass and the light crossing time. Gravitational attraction --Feynman imagined detecting the gravitational field of a mass in superposition by measuring the displacement of another mass. However, surprisingly, gravity mediated entanglement can arise with no measurable displacement. This has been the main insight from the path protocol [2; 3]. Owing to our path integral formalism, we can conclude that this is the case for the oscillator protocol too. To see this, we compare the results of using (i) straight lines trajectories with (ii) the accelerated trajectories of the Kepler problem for the particles. One gets the exact same results at leading order in \(1/c\) both for the entanglement (11) and all derived results, as well as for the relativistic correction (16). 
Therefore at this level of approximation, the entanglement in the oscillator protocol, like that in the path-protocol, is not due to the gravity induced displacement between the particles. Superposition of geometries --Let us now return to (7) and discuss the state of the field. For the scope of this computation, we assume that the state of the field is the same at the beginning and at the end of the experiment. In the evaluation of the path integral, we take both the particles and the fields to be on-shell. The picture that arises, to this approximation, is that of a superposition of point particles on constant-velocity trajectories sourcing a corresponding gravitational field. Different trajectories of the particles will source diffeomorphically inequivalent geometries. Additionally, since the trajectories of the particles can be taken to not be accelerated, these geometries do not contain radiation. Therefore we may say that during the experiment, spacetime is in a superposition of a continuum of different, radiation-free, geometries. We may approximate the state of the particles and field at times \(t\in[t_{2},t_{3}]\) as \[\ket{\Psi(t)}\approx\int\mathrm{d}\mathbf{y}^{a}\,\psi(\mathbf{y}^{a})\ket{ \mathbf{y}^{a}}\ket{\varphi(\mathbf{y}^{a})}, \tag{17}\] where \(\psi(\mathbf{y}^{a})\) is given by (9), and \(\ket{\varphi(\mathbf{y}^{a})}\) are semiclassical states of the fields, peaked on classical fields \(\varphi(\mathbf{y}^{a})\) sourced by particles at positions \(\mathbf{y}^{a}\). Conclusions --We considered a protocol (see Figs. 1 and 2) that generalises the path and oscillator protocols for gravity mediated entanglement. We calculated the newtonian limit entanglement for this protocol in (11) and saw that the path and oscillator protocol entanglement are recovered as physical subregimes in (12) and (13). This is the first indication that entanglement in the oscillator and path protocols is due to superposition of spacetime geometries. The oscillator and path protocols amount to specific choices for the \(\psi_{2}^{a}\) wavefunctions for particles \(a=A,B\) in (9) (a sum of Dirac deltas for the path protocol and gaussians for the oscillator protocol). Note that (9) and the general setup and procedure used is applicable to more general (continuous or discrete) prepared wavefunctions \(\psi_{2}^{a}\). We see that the final matter state will be entangled in \(A\) and \(B\) because different trajectories \(x_{a}\) of the particles source a different classical field and give rise to a different phase in (9). Our computation relies on two approximations typical for GME experiments. One of them is \(\alpha/d\ll 1\), meaning that the size of each wavefunction is much smaller than the distance between the two. Another approximation is that \(\omega d/c\ll 1\), which means that the experiment's time scales \(T,1/\omega\) are long enough relative to the distance scales \(d,\alpha\) such that relativistic effects are negligible. The saddle-point approximation is expected to be valid as long as typical values of the action are much larger than \(\hbar\). Another assumption in our computation is that the initial and final states of the particles are very localised, and that therefore they are disentangled from the field. Of course, even though much harder to identify experimentally, detecting gravitationally induced decoherence due to entanglement with the field [21] would in itself be a great feat. 
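A minimal numerical sketch (ours, not the analytical code referred to in the Supplementary Material) can make the role of the phases in (9) concrete: a separable two-gaussian state (10) picks up the newtonian phase pair by pair, and the entanglement is then read off with the measure (28) on a grid. The 1D geometry, the choice of freezing the particle positions during \([t_{2},t_{3}]\) (the path-protocol regime \(\beta\gg\alpha\), \(\omega T\ll 1\)), and the parameter values are illustrative assumptions, not values from the text.

```python
import numpy as np

# Toy evaluation of the entangling phase in Eq. (9) at zeroth order in 1/c,
# followed by the entanglement measure of Eq. (28) on a discretised grid.
# Assumptions (for illustration only): 1D motion, particle positions frozen
# during [t2, t3] at their t2 values, and hypothetical parameter values
# chosen so the effect is numerically visible.

G, hbar = 6.674e-11, 1.055e-34
m, T, d = 1e-14, 1.0, 20e-6           # kg, s, m  (illustrative)
alpha, beta = 0.5e-6, 2.0e-6          # gaussian width and path separation (m)

y = np.linspace(-3*beta, 3*beta, 201)
dy = y[1] - y[0]

def two_gaussians(y):                 # Eq. (10), unnormalised
    return (np.exp(-(y - beta/2)**2/(2*alpha**2))
            + np.exp(-(y + beta/2)**2/(2*alpha**2)))

psi2 = np.outer(two_gaussians(y), two_gaussians(y)).astype(complex)
psi2 /= np.sqrt(np.sum(np.abs(psi2)**2) * dy**2)

# Pair separation with the Fig. 2 sign convention (opposite positive axes);
# only the A-B newtonian term is kept, since single-particle phases do not entangle.
yA, yB = np.meshgrid(y, y, indexing="ij")
r_AB = d - yA - yB
psi3 = np.exp(1j * G * m**2 * T / (hbar * r_AB)) * psi2     # Eq. (9), newtonian limit

rho_A = psi3 @ psi3.conj().T * dy                           # inner integral of Eq. (28)
E = np.sqrt(max(0.0, 2.0 - 2.0*np.sum(np.abs(rho_A)**2) * dy**2))
print(f"E (numerical)     : {E:.4f}")
print(f"E path limit (12) : {G*m**2*T*beta**2/(hbar*d**3):.4f}")
```

The printed value should be of the same order as the closed-form path-protocol estimate (12); close agreement is only expected within the assumed regime.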
Using the path integral formalism, rather than treating the problem as a direct Hamiltonian interaction term as in [15; 4; 13], allows us to work with covariant quantities: the actions that feature in the entangling phases in (7) and (9). Thus, in this setting, entanglement is generated in a spacetime-local manner. It follows that if the entire protocol lasts for about time \(T\) and \(T\ll d/c\) the particles will not become entangled. This was shown by some of us for the path protocol in [17]. In this work, we applied a similar computational technique to more general wavefunctions. We reported in (16) the first correction in \(1/c\) for the oscillator entanglement (13) for the case \(\omega t\gg 1\). In this case the entanglement is generated due to the fast rate of growth of the spread and scales inversely with the initial spread \(\alpha\), as noticed in [4]. The relativistic correction we calculated here intuitively corresponds to the particles seeing each other in the past due to retardation and thus interacting with a gaussian with smaller spread. This gives rise to a negative correction with respect to the newtonian calculation, that is, less entanglement. The relative error of the correction versus the newtonian entanglement is \(-\frac{3}{2}\left(d/cT\right)^{2}\). This relativistic correction is very difficult to detect for gravity, but, it could potentially be observed in an electromagnetic version of the experiment. Our results for the oscillator protocol without relativistic corrections match those of [4]. Relativistic corrections to the entanglement in the oscillator protocol were also computed in [15], but for a slightly different setup. While we compute the entanglement of an initially separable state as a function of interaction time, they compute the entanglement in the joint ground state of gravity and optically-trapped massive particles. As a result, their entanglement does not depend on time. The time-dependence shown here could be exploited in an experimental configuration that starts in two separable oscillator states and introduces the wanted interaction at a later time2, e.g. by bringing the particles more closely together. Footnote 2: This also fits better the scope of the discussions of a theory independent argument for the non-classicality of gravity [22; 23; 24]. It has been previously argued by some of us in [16; 17; 18; 18; 25] that the path protocol would show that the gravitational field has been indirectly detected in a quantum superposition of macroscopically distinct geometries. An analogous understanding for continuous matter wavefunctions sourcing the gravitational field has thus far been missing. We have shown here that the entanglement in the oscillator protocol can also be attributed to a superposition of geometries, specifically a superposition of a continuum of different, radiation-free geometries, and the different phases these accrue. A clear takeaway from our analysis is that if linearised quantum gravity is assumed, whatever the field state is taken to be, it cannot be a classical-like state that approximately solves Einstein's equations. ###### Acknowledgements. We acknowledge support of the ID# 61466 and ID# 62312 grants from the John Templeton Foundation, as part of the "Quantum Information Structure of Spacetime (QISS)" project (qiss.fr), and support from the Research Network Quantum Aspects of Spacetime (TURIS). OB acknowledges support from the Blaumann foundation. 
We thank Caslav Brukner, Anton Zasedatelev, Sougato Bose, Richard Howl, Anupam Mazumdar and Carlo Rovelli for useful discussions. ## Appendix _Action --_ The metric tensor is approximated as \(g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}\) where \(h_{\mu\nu}\) is the metric perturbation that satisfies \(|h_{\mu\nu}|\ll 1\) and \(\eta_{\mu\nu}\) is the flat metric. The signature is \((-,+,+,+)\) and we have: \(g=\det g_{\mu\nu}\), \(t\) the coordinate time, \(R\) the Ricci scalar, \(G\) Newton's gravitational constant, \(c\) the speed of light, and \(x_{a}^{\mu}=(ct,\mathbf{x}_{a}(t))\) the trajectory of particle \(a\). We have \[S_{a}[x_{a}]=-m_{a}c\int\mathrm{d}t\sqrt{-\eta_{\mu\nu}\dot{x}_{a}^{\mu}\dot{x}_{a}^{\nu}}, \tag{18}\] \[S_{\mathrm{G}}\left[x_{a},g\right]= \int\mathrm{d}^{4}x\frac{c^{3}}{16\pi G}[\sqrt{-g}R]_{\mathcal{O}(h_{\mu\nu}^{2})} \tag{19}\] \[-\int\mathrm{d}^{4}x\frac{1}{2c}h_{\mu\nu}T^{\mu\nu}\big{|}_{g_{\mu\nu}=\eta_{\mu\nu}}+O(h_{\mu\nu}^{3}),\] and \[T^{\mu\nu}\big{|}_{g=\eta}=\sum_{a}\gamma_{a}(t)m_{a}\dot{x}_{a}^{\mu}\dot{x}_{a}^{\nu}\delta^{(3)}(\mathbf{x}-\mathbf{x}_{a}). \tag{20}\] Note that taking only the interaction term \(\sim T_{\mu\nu}h^{\mu\nu}\) of \(S_{\mathrm{G}}\) would give a wrong numerical result by a factor of two, as the contribution of the kinetic terms of \(h_{\mu\nu}\) also needs to be taken into account. The on-shell action \(S_{\mathrm{G}}[\mathcal{G}[x_{a}^{\prime}]]\) reads [17] \[S_{\mathrm{G}}[\mathcal{G}[x_{a}^{\prime}]]= \sum_{a\neq b}\int_{0}^{t}\mathrm{d}t^{\prime}\frac{Gm_{a}m_{b}\bar{V}_{a}^{\mu\nu}(t_{ab})V_{b\mu\nu}/c^{4}}{d_{ab}^{\mu}\dot{x}_{a\mu}^{\prime}(t_{ab})/c}, \tag{21}\] with \[V_{a}^{\mu\nu} =\dot{x}_{a}^{\mu}\dot{x}_{a}^{\nu}/\sqrt{-\dot{x}_{a}^{\mu}\dot{x}_{a}^{\nu}\eta_{\mu\nu}/c^{2}}, \tag{22}\] \[\bar{V}^{\mu\nu} =V^{\mu\nu}-\frac{1}{2}\eta^{\mu\nu}\eta_{\alpha\beta}V^{\alpha\beta},\] (23) \[d_{ab}^{\mu} \equiv x_{b}^{\mu}-x_{a}^{\mu}(t_{ab})=(ct-ct_{ab},\mathbf{x}_{b}-\mathbf{x}_{a}(t_{ab})), \tag{24}\] where \(t_{ab}\) is the retarded time, defined implicitly as \(d_{ab}^{2}=0\). The omitted time argument in \(V_{a\mu\nu},d_{ab},\mathbf{x}_{a}\) and \(x_{a}^{\mu}\) implies dependence on the non-retarded time \(t^{\prime}\). _Path integral at leading order in \(1/c\) --_ We split the path integral at times \(t_{2},t_{3}\) as \[\psi_{4}(\mathbf{y}_{4}^{a})= \int\mathrm{d}\mathbf{y}_{3}^{a}\int\mathrm{d}\mathbf{y}_{2}^{a}\int\mathrm{d}\mathbf{y}_{1}^{a}\psi_{1}(\mathbf{y}_{1}^{a}) \tag{25}\] \[\int_{\mathbf{y}_{1}^{a}}^{\mathbf{y}_{2}^{a}}\mathrm{D}x_{1}^{a}e^{iS_{1}[\mathcal{G}[x_{1}^{a}]]}\int_{\mathbf{y}_{2}^{a}}^{\mathbf{y}_{3}^{a}}\mathrm{D}x_{2}^{a}e^{iS_{2}[\mathcal{G}[x_{1}^{a},x_{2}^{a}]]}\] \[\int_{\mathbf{y}_{3}^{a}}^{\mathbf{y}_{4}^{a}}\mathrm{D}x_{3}^{a}e^{iS_{3}[\mathcal{G}[x_{1}^{a},x_{2}^{a},x_{3}^{a}]]},\] where the path \(x_{i}\) connects \(y_{i}\) and \(y_{i+1}\). Note that, because of the retardation of the field, \(S_{2}\) and \(S_{3}\) depend on the path in the previous time interval. This dependence can be disregarded to leading order in \(1/c\), since the time interval influenced by the previous trajectory is then of order \(d/c\), which implies that the contribution of the previous trajectories will be one order higher in \(1/c\). 
Schematically, we approximate the action in interval \(i\) as \[S_{i} =\int_{t_{i}}^{t_{i+1}}L_{[-\infty,t^{\prime}]}\mathrm{d}t^{\prime}\approx\int_{t_{i}}^{t_{i+1}}L_{[t^{\prime}-d/c,t^{\prime}]}\mathrm{d}t^{\prime} \tag{26}\] \[=\int_{t_{i}}^{t_{i}+d/c}L_{[t^{\prime}-d/c,t^{\prime}]}\mathrm{d}t^{\prime}+\int_{t_{i}+d/c}^{t_{i+1}}L_{[t^{\prime}-d/c,t^{\prime}]}\,\mathrm{d}t^{\prime},\] where \(L_{[t^{\prime},t^{\prime\prime}]}\) depends on the particle trajectories between times \(t^{\prime}\) and \(t^{\prime\prime}\). Now, notice that in the first term the integration is over a time interval of length \(d/c\). This implies that the contribution from that term will be one order higher in \(1/c\) than the second term, meaning we can neglect it. Therefore, after a simple change of variables, we may write \[S_{i}\approx\int_{t_{i}}^{t_{i+1}-d/c}L_{[t^{\prime},t^{\prime}+d/c]}\,\mathrm{d}t^{\prime}. \tag{27}\] Thus \(S_{i}\) only depends on the particle trajectories between \(t_{i},t_{i+1}\). This way, we can neglect the dependence of \(S_{2}\) and \(S_{3}\) on the past of the trajectories and write \(S_{3}[\mathcal{G}[x_{1}^{a},x_{2}^{a},x_{3}^{a}]]\approx S[\mathcal{G}[x_{3}^{a}]]\) and \(S_{2}[\mathcal{G}[x_{1}^{a},x_{2}^{a}]]\approx S[\mathcal{G}[x_{2}^{a}]]\). _Entanglement measure --_ The definition of \(\mathcal{E}\) for a normalized bi-partite pure state \(|\psi\rangle\) is [20] \[\mathcal{E}^{2}=2-2\int\mathrm{d}y_{1}\mathrm{d}y_{1}^{\prime}\left|\int\mathrm{d}y_{2}\psi(y_{1},y_{2})\psi^{*}(y_{1}^{\prime},y_{2})\right|^{2}. \tag{28}\] _The functions \(f_{0}\), \(f_{2}\), \(f_{4}\) --_ The functions from Eq. (11) are of order unity and given by \[f_{0}(x) =\frac{-x^{4}-4x^{2}+4e^{x^{2}/2}-2e^{x^{2}/4}(x^{4}+2x^{2}-4)+4}{4(e^{x^{2}/4}+1)^{2}} \tag{29}\] \[f_{2}(x) =\frac{-12x^{2}+8e^{x^{2}/2}+(-3x^{4}-12x^{2}+16)e^{x^{2}/4}+8}{8(e^{x^{2}/4}+1)^{2}}\] \[f_{4}(x) =\frac{(-x^{2}/2+e^{x^{2}/4}+1)^{2}}{(e^{x^{2}/4}+1)^{2}}\] They are plotted in the Supplementary Material. _Relativistic correction --_ The relativistic correction to (13) for general values of \(\omega T\) was found to be \[\Delta\mathcal{E}_{\beta\ll\alpha}= -\frac{1}{2}\left(\frac{\omega d}{c}\right)^{2}\frac{Gm^{2}T}{\hbar d}\left(\frac{\sqrt{2}\alpha}{d}\right)^{2} \tag{30}\] \[\times\frac{\left(\omega T\right)^{2}/3-1}{\sqrt{1+\frac{\left(\omega T\right)^{2}}{3}+\frac{\left(\omega T\right)^{4}}{9}}}.\]
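For reference, the closed-form results above can be evaluated directly; the sketch below implements (11) with the \(f_{0},f_{2},f_{4}\) of (29) and compares it with the limits (12)-(13) and the correction (30). All parameter values are illustrative placeholders, not values used in the paper.

```python
import numpy as np

# Direct evaluation of the closed-form entanglement, Eq. (11), with the
# f_0, f_2, f_4 of Eq. (29), compared against the limiting expressions
# (12)-(13) and the relativistic correction (30).

G, hbar, c = 6.674e-11, 1.055e-34, 2.998e8

def f0(x):
    e = np.exp(x**2/4)
    return (-x**4 - 4*x**2 + 4*e**2 - 2*e*(x**4 + 2*x**2 - 4) + 4) / (4*(e + 1)**2)

def f2(x):
    e = np.exp(x**2/4)
    return (-12*x**2 + 8*e**2 + (-3*x**4 - 12*x**2 + 16)*e + 8) / (8*(e + 1)**2)

def f4(x):
    e = np.exp(x**2/4)
    return (-x**2/2 + e + 1)**2 / (e + 1)**2

def entanglement(m, d, T, alpha, beta):       # Eq. (11), with omega = hbar/(m alpha^2)
    w = hbar / (m*alpha**2)
    bracket = (beta**4/4
               + alpha**2*beta**2*(1 + (w*T)**2/4)
               + alpha**4*(f0(beta/alpha) + f2(beta/alpha)*(w*T)**2/3
                           + f4(beta/alpha)*(w*T)**4/9))
    return 2*G*m**2*T/(hbar*d) / d**2 * np.sqrt(bracket)

m, d, T = 1e-14, 200e-6, 1.0                  # illustrative values only

a_, b_ = 50e-9, 250e-9                        # beta >> alpha: path protocol
print("Eq.(11):", entanglement(m, d, T, a_, b_),
      " vs Eq.(12):", G*m**2*T/(hbar*d)*(b_/d)**2)

a_, b_ = 250e-9, 10e-9                        # beta << alpha: oscillator protocol
w = hbar/(m*a_**2)
amp = np.sqrt(1 + (w*T)**2/3 + (w*T)**4/9)
print("Eq.(11):", entanglement(m, d, T, a_, b_),
      " vs Eq.(13):", G*m**2*T/(hbar*d)*(np.sqrt(2)*a_/d)**2 * amp)
print("Eq.(30):", -0.5*(w*d/c)**2 * G*m**2*T/(hbar*d)*(np.sqrt(2)*a_/d)**2
      * ((w*T)**2/3 - 1)/amp)
```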
2310.00413
SSIF: Learning Continuous Image Representation for Spatial-Spectral Super-Resolution
Existing digital sensors capture images at fixed spatial and spectral resolutions (e.g., RGB, multispectral, and hyperspectral images), and each combination requires bespoke machine learning models. Neural Implicit Functions partially overcome the spatial resolution challenge by representing an image in a resolution-independent way. However, they still operate at fixed, pre-defined spectral resolutions. To address this challenge, we propose Spatial-Spectral Implicit Function (SSIF), a neural implicit model that represents an image as a function of both continuous pixel coordinates in the spatial domain and continuous wavelengths in the spectral domain. We empirically demonstrate the effectiveness of SSIF on two challenging spatio-spectral super-resolution benchmarks. We observe that SSIF consistently outperforms state-of-the-art baselines even when the baselines are allowed to train separate models at each spectral resolution. We show that SSIF generalizes well to both unseen spatial resolutions and spectral resolutions. Moreover, SSIF can generate high-resolution images that improve the performance of downstream tasks (e.g., land use classification) by 1.7%-7%.
Gengchen Mai, Ni Lao, Weiwei Sun, Yuchi Ma, Jiaming Song, Chenlin Meng, Hongxu Ma, Jinmeng Rao, Ziyuan Li, Stefano Ermon
2023-09-30T15:23:30Z
http://arxiv.org/abs/2310.00413v1
# SSIF: Learning Continuous Image Representation for Spatial-Spectral Super-Resolution ###### Abstract Existing digital sensors capture images at fixed spatial and spectral resolutions (e.g., RGB, multispectral, and hyperspectral images), and each combination requires bespoke machine learning models. Neural Implicit Functions partially overcome the spatial resolution challenge by representing an image in a resolution-independent way. However, they still operate at fixed, pre-defined spectral resolutions. To address this challenge, we propose Spatial-Spectral Implicit Function (SSIF), a neural implicit model that represents an image as a function of both continuous pixel coordinates in the spatial domain and continuous wavelengths in the spectral domain. We empirically demonstrate the effectiveness of SSIF on two challenging spatio-spectral super-resolution benchmarks. We observe that SSIF consistently outperforms state-of-the-art baselines even when the baselines are allowed to train separate models at each spectral resolution. We show that SSIF generalizes well to both unseen spatial resolutions and spectral resolutions. Moreover, SSIF can generate high-resolution images that improve the performance of downstream tasks (e.g., land use classification) by 1.7%-7%. ## 1 Introduction While the physical world is continuous, most digital sensors (e.g., cell phone cameras, multispectral or hyperspectral sensors in satellites) can only capture a discrete representation of continuous signals in both spatial and spectral domains (i.e., with a fixed number of spectral bands, such as red, green, and blue). In fact, due to the limited energy of incident photons, fundamental limitations in achievable signal-to-noise ratios (SNR), and time constraints, there is always a trade-off between spatial and spectral resolution (Mei et al., 2020; Ma et al., 2021)1. High spatial resolution and high spectral resolution can not be achieved at the same time, leading to a variety of spatial and spectral resolutions used in practice for different sensors. However, ML models are typically bespoke to certain resolutions, and models typically do not generalize to spatial or spectral resolutions they have not been trained on. This calls for image super-resolution methods. Footnote 1: Given a fixed overall sensor size and exposure time, higher spatial resolution and higher spectral resolution require the per pixel sensor to be smaller and bigger at the same time, which are contradicting each other. The goal of image super-resolution (SR) (Ledig et al., 2017; Lim et al., 2017; Zhang et al., 2018; Haris et al., 2018; Zhang et al., 2020; Yao et al., 2020; Mei et al., 2020; Saharia et al., 2021; Ma et al., 2021; He et al., 2021) is to increase the spatial or spectral resolution of a given single low-resolution image (Galliani et al., 2017). It has become increasingly important for a wide range of tasks including object recognition and tracking (Pan et al., 2003; Uzair et al., 2015; Xiong et al., 2020), medical image processing (Lu and Fei, 2014; Johnson et al., 2007), remote sensing (He et al., 2021; Bioucas-Dias et al., 2013; Melgani and Bruzzone, 2004; Zhong et al., 2018; Wang et al., 2022) and astronomy (Ball et al., 2019). Traditionally image SR has been classified into three tasks according to the input and output image resolutions:2 Spatial Super-Resolution (spatial SR), Spectral Super-Resolution (spectral SR) and Spatio-Spectral Super-Resolution (SSSR). 
Spatial SR (Zhang et al., 2018; Hu et al., 2019; Zhang et al., 2020; Niu et al., 2020; Wu et al., 2021; Chen et al., 2021; He et al., 2021) focuses on increasing the spatial resolution of the input images (e.g., from \(h\times w\) pixels to \(H\times W\) pixels) while keeping the spectral resolution (_i.e._, number of spectral bands/channels) unchanged. In contrast, spectral SR (Galliani et al., 2017; Zhang, 2021) focuses on increasing the spectral resolution of the input images (e.g., from \(c\) to \(C\) channels) while keeping the spatial resolution fixed. SSSR (Mei et al., 2020; Ma et al., 2021) focuses on increasing both the spatial and spectral resolution of the input images. Here, \(h,w\) (or \(H,W\) ) indicates the height and width of the low-resolution, LR, (or high-resolution, HR) images while \(c\) and \(C\) indicates the number of bands/channels of the low/high spectral resolution images. For video signal, SR can also be done along the time dimension, but we don't consider it here and leave it as future work. Footnote 2: A related task, Multispectral and Hyperspectral Image Fusion (Zhang et al., 2020; Yao et al., 2020), takes a high spatial resolution multispectral image and a low spatial resolution hyperspectral image as inputs and generates a high-resolution hyperspectral image. In this paper, we focus on the single image-to-image translation problem and leave this task as the future work. The diversity in input-output image resolutions (both spatial and spectral) significantly increases the complexity of developing deep neural network (DNN)-based SR models. Instead of jointly learning representations from images with different spatial and spectral resolutions, most SR research develops separate DNN models for each input-output image resolution pairs with a specific spatial and spectral resolution (Lim et al., 2017; Zhang et al., 2018; Ma et al., 2021; Mei et al., 2020). For example, convolution-based SR models such as RCAN (Zhang et al., 2018), SR3(Saharia et al., 2021), SSJSR (Mei et al., 2020) and (He et al., 2021) need to be trained separately for each input-output image resolution settings3. This practice has two limitations: 1) For some SR settings with much less training data, these models can yield suboptimal results or lead to overfitting; 2) It prevents generalizing trained SR models to unseen spatial/spectral resolutions. Footnote 3: Figure 5a in Appendix A.1 illustrates this separate training practice. Inspired by the recent progress in 3D reconstruction with implicit neural representation (Park et al., 2019; Mescheder et al., 2019; Chen and Zhang, 2019; Sitzmann et al., 2020; Mildenhall et al., 2020), image neural implicit functions (NIF) (Dupont et al., 2021; Chen et al., 2021; Yang et al., 2021; Zhang, 2021) partially overcome the aforementioned problems (especially the second one) by learning a continuous function that maps an arbitrary pixel spatial coordinate to the corresponding visual signal value; so in principle, they can generate images at any spatial resolution. For example, LIIF (Chen et al., 2021) is capable of generating images at any arbitrary resolution in the spatial domain. Figure 1: Spatial-Spectral Implicit Function (SSIF). Given an input low-resolution multispectral (LR-MSI) image, SSIF can perform both spatial (blue arrows) and spectral (red arrows) super-resolution simultaneously (illustrated with a specific pixel A). 
Unlike all the other neural implicit functions SSIF can generate images with any number of bands including ”Inf” – a continuous function. We call them _Spatial Implicit Functions (SIF)_. However, all current implicit function representations only focus on generalization in the spatial domain, and each SIF model is trained separately to target a specific spectral resolution (i.e., a fixed number of spectral bands). In this work, we propose Spatial-Spectral Implicit Function (\(SSIF\)), which generalizes the idea of neural implicit representations to the spectral domain. \(SSIF\) represents an image as a continuous function on both pixel spatial coordinates in the spatial domain and wavelengths in the spectral domain. As shown in Figure 1, given an input low-resolution multispectral (or RGB) image, a single \(SSIF\) model can generate images with different spatial resolutions and spectral resolutions. Note that extending the idea of implicit representations to the spectral domain is a non-trivial task. LIIF and other NIF models have an equal distance assumption in the spatial domain, meaning that pixels in the target HR image are assumed to be equally spaced. However, this equal distance assumption does not necessarily hold in the spectral domain. For many RGB or multispectral images, each band may have different spectral widths, i.e., wavelength intervals of different lengths. Moreover, the wavelength intervals of different bands may overlap with each other. The "Spectral Signature of Pixel A" of the image \(\mathbf{I}_{lr-m}\) in Figure 1 shows one example of such cases. To tackle this problem, we predict each spectral band value of each target pixel separately as the integral of the correlation between the pixel's radiance function and the current band's spectral response function over the desired spectral interval. Our contributions are as follows: 1. We propose Spatial-Spectral Implicit Function (\(SSIF\)) which represents an image as a continuous function on both pixel coordinates in the spatial domain and wavelengths in the spectral domain. \(SSIF\) can handle SR tasks with different spatial and spectral resolutions simultaneously. 2. We demonstrate the effectiveness of \(SSIF\) on two challenging spatio-spectral super-resolution benchmarks - CAVE (the indoor scenes) and Pavia Centre (Hyperspectral Remote Sensing images). We show that SSIF consistently outperforms state-of-the-art SR baseline models even when the baselines are trained separately at each spectral resolution (and spatial resolution), thus solving an easier task. Moreover, SSIF generalizes well to both unseen spatial resolutions and spectral resolutions. 3. We test the fidelity of the generated high resolution images on the downstream task of land use classification. Compared with the baselines, the images generated by \(SSIF\) have much higher classification accuracy with 1.7%-7% performance improvements. ## 2 Related Work Multispectral and Hyperspectral Image Super-ResolutionAs an ill-posed single image-to-image translation problem, super-resolution (SR) aims at increasing the spatial or spectral resolution of a given image such that it can be used for different downstream tasks. It has been widely used on natural imagesZhang et al. (2018); Hu et al. (2019); Zhang et al. (2020); Saharia et al. (2021); Chen et al. (2021), screen-shot images Yang et al. (2021), omnidirectional images Deng et al. (2021); Yoon et al. (2021) medical images Isaac and Kulkarni (2015), as well as multispectral He et al. 
(2021) and hyperspectral remote sensing imagesMei et al. (2017); Ma et al. (2021); Mei et al. (2020); Wang et al. (2022). It can be classified into three categories: spatial SR, spectral SR, and spatiospectral SR (SSSR). In this work, we focus on the most challenging task, SSSR, which subsumes spatial SR and spectral SR. Implicit Neural RepresentationRecently, we have witnessed an increasing amount of work using implicit neural representations for different tasks such as image regression Tancik et al. (2020) and compressionDupont et al. (2021); Strumpler et al. (2021), 3D shape regression/reconstruction Mescheder et al. (2019); Tancik et al. (2020); Chen and Zhang (2019), 3D shape reconstruction via image synthesis Mildenhall et al. (2020), 3D magnetic resonance imaging (MRI) reconstruction Tancik et al. (2020), 3D protein reconstruction Zhong et al. (2020), spatial feature distribution modeling Mai et al. (2020); Zhou et al. (2022); Zhou et al. (2023), remote sensing image classification Mai et al. (2020), geographic question answering Mai et al. (2020), etc. The core idea is to learn a continuous function that maps spatial coordinates (e.g., pixel coordinates, 3D coordinates, and geographic coordinates) to the corresponding signals (e.g., point cloud intensity, MRI intensity, visual signals, etc.). A common setup is to input the spatial coordinates in a deterministic or learnable Fourier feature mapping layer Tancik et al. (2020) (consisting of sinusoidal functions with different frequencies), which converts the coordinates into multi-scale features. Then a multi-layer perceptron takes this multi-scale feature as input, and its output is used for downstream tasks. In parallel, implicit neural functions (INF) such as LIIF (Chen et al., 2021), ITSRN (Yang et al., 2021), Zhang (2021) are proposed for image super-resolution, which map pixel spatial coordinates to the visual signals in the high spatial resolution images. One outstanding advantage is that they can jointly handle SR tasks at an arbitrary spatial scale. However, all the existing implicit functions learn continuous image representations in the spatial domain while still operating at fixed, pre-defined spectral resolutions. Our proposed SSIF overcomes this problem and generalizes INF to both spatial and spectral domains. ## 3 Problem Statement The spatial-spectral image super-resolution (SSSR) problem over various spatial and spectral resolutions can be conceptualized as follows. Given an input low spatial/spectral resolution (LR-MSI) image \(\mathbf{I}_{lr-m}\in\mathbb{R}^{h\times w\times c}\), we want to generate a high spatial/spectral resolution (HR-HSI) image \(\mathbf{I}_{hr-h}\in\mathbb{R}^{H\times W\times C}\). Here, \(h,w,c\) and \(H,W,C\) are the height, width and channel dimension of image \(\mathbf{I}_{lr-m}\) and \(\mathbf{I}_{hr-h}\), and \(H>h\), \(W>w\), \(C>c\). The spatial upsampling scale \(p\) is defined as \(p=H/h=W/w\). Without loss of generality, let \(\Lambda_{hr-h}=[\Lambda_{0}^{T},\Lambda_{1}^{T},...,\Lambda_{C}^{T}]\in\mathbb{R}^{C\times 2}\) be the wavelength interval matrix, which defines the spectral bands in the target HR-HSI image \(\mathbf{I}_{hr-h}\). Here, \(\Lambda_{i}=[\lambda_{i,s},\lambda_{i,e}]\in\mathbb{R}^{2}\) is the wavelength interval for the \(i\)th band of \(\mathbf{I}_{hr-h}\) where \(\lambda_{i,s},\lambda_{i,e}\) are the start and end wavelength of this band. 
\(\Lambda_{hr-h}\) can be used to fully express the spectral resolution of the target HR-HSI image \(\mathbf{I}_{hr-h}\). In this work, we do not use \(C/c\) to represent the spectral upsampling scale because bands/channels of image \(\mathbf{I}_{lr-m}\) and \(\mathbf{I}_{hr-h}\) might not be equally spaced (See Figure 1). So \(\Lambda_{hr-h}\) is a very flexible representation for the spectral resolution, capable of representing situations when different bands have different spectral widths or their wavelength intervals overlap with each other. When \(\mathbf{I}_{hr-h}\) has equally spaced wavelength intervals, such as those of most of the hyperspectral images, we use its band number \(C\) to represent the spectral scale. The spatial-spectral super-resolution (SSSR) can be represented as a function \[\mathbf{I}_{hr-h}=H_{sr}(\mathbf{I}_{lr-m},p,\Lambda_{hr-h}) \tag{1}\] where \(H_{sr}(\cdot)\) takes as input the image \(\mathbf{I}_{lr-m}\), the desired spatial upsampling scale \(p\), and the target sensor wavelength interval matrix \(\Lambda_{hr-h}\), and generates the HR-HSI image \(\mathbf{I}_{hr-h}\in\mathbb{R}^{H\times W\times C}\). In other words, we aim at learning **one single function**\(H_{sr}(\cdot)\) that can take any input images \(\mathbf{I}_{lr-m}\) with a fixed spatial and spectral resolution, and generate images \(\mathbf{I}_{hr-h}\) with diverse spatial and spectral resolutions specified by different \(p\) and \(\Lambda_{hr-h}\). Note that none of the existing SR models can achieve this. Most classic SR models have to learn separate \(H_{sr}(\cdot)\) for different pairs of \(p\) and \(\Lambda_{hr-h}\) such as RCAN (Zhang et al., 2018), SR3(Sahara et al., 2021), SSJSR (Mei et al., 2020), He et al. (2021). As for Spatial Implicit Functions (SIF) such as LIIF(Chen et al., 2021), SADN (Wu et al., 2021), ITSRN (Yang et al., 2021), Zhang (2021), they can learn one \(H_{sr}(\cdot)\) for different \(p\) but with a fixed \(\Lambda_{hr-h}\). ## 4 Spatial-Spectral Implicit Function ### Sensor principles To design SSIF, we follow the physical principles of spectral imaging. Let \(\mathbf{s}_{l,i}\) be the pixel density value of a pixel \(\mathbf{x}_{l}\) at the spectral band \(b_{i}\) with wavelength interval \(\Lambda_{i}\). It can be computed by an integral of the **radiance function**\(\gamma_{\mathbf{I}}(\mathbf{x}_{l},\lambda)\) and **response function**\(\rho_{i}(\lambda)\) of a sensor at band \(b_{i}\). \[\mathbf{s}_{l,i}=\int_{\Lambda_{i}}\rho_{i}(\lambda)\gamma_{\mathbf{I}}( \mathbf{x}_{l},\lambda)\,\mathrm{d}\lambda \tag{2}\] where \(\lambda\) is wavelength. So for each pixel \(\mathbf{x}_{l}\), the radiance function is a neural field that describes the radiance curve as a function of the wavelength. Note that unlike recent NeRF where only three discrete wavelength intervals (i.e., RGB) are considered, we aim to learn a _continuous_ radiance curve for each pixel. The spectral response function (Zheng et al., 2020) describes the sensitivity of the sensor to different wavelengths and is usually sensor-specific. For example, the red sensor in commercial RGB cameras has a strong response (i.e., high pixel density) to red light. The spectral response functions of many commercial hyperspectral sensors (e.g., AVIRIS's ROSIS-034, EO-1 Hyperion) are very complex due to atmospheric absorption. 
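As a toy numerical illustration of Eq. (2), a band value can be obtained by integrating a radiance curve against a band response function. The radiance curve and the Gaussian-shaped responses in the sketch below are made-up stand-ins (anticipating the Gaussian/uniform approximation discussed next), not a learned SSIF function or a real sensor response.

```python
import numpy as np

# Toy version of Eq. (2): the value recorded for a band is the integral of
# the pixel's radiance curve against that band's spectral response function.

lam = np.linspace(400.0, 700.0, 3001)                 # wavelength grid (nm)

def radiance(lam):                                    # hypothetical gamma_I(x_l, lambda)
    return 0.6 + 0.4*np.sin(2*np.pi*(lam - 400.0)/180.0)

def response(lam, lam_s, lam_e):                      # rho_i for band [lam_s, lam_e]
    mu, sigma = (lam_s + lam_e)/2, (lam_e - lam_s)/6
    rho = np.exp(-0.5*((lam - mu)/sigma)**2)
    return rho / np.trapz(rho, lam)                   # normalise (one possible choice)

bands = [(450, 520), (500, 600), (590, 700)]          # overlapping, unequal-width intervals
s = [np.trapz(response(lam, a, b)*radiance(lam), lam) for a, b in bands]
print(np.round(s, 4))                                 # one pixel value per band, as in Eq. (2)
```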
A common practice adopted by many studies (Barry et al., 2002; Brazile et al., 2008; Cundill et al., 2015; Crawford et al., 2019; Chi et al., 2021) is to approximate the response function of individual spectral bands as a Gaussian distribution or a uniform distribution. In this work, we adopt this practice and show that this inductive bias enforced via physical laws improves generalization. In the following, we will discuss the design of our SSIF which allows us to train a single SR model for different \(p\) and \(\Lambda_{hr-h}\). The whole model architecture of SSIF is illustrated in Figure 2b. ### SSIF Architecture Following previous SIF works (Chen et al., 2021; Yang et al., 2021), SSIF first uses an image encoder \(E_{I}(\cdot)\) to convert the input image \(\mathbf{I}_{lr-m}\in\mathbb{R}^{h\times w\times c}\) into a 2D feature map \(\mathbf{S}_{lr-m}=E_{I}(\mathbf{I}_{lr-m})\in\mathbb{R}^{h\times w\times d_{I}}\) which shares the same spatial shape as \(\mathbf{I}_{lr-m}\) but with a larger channel dimension. \(E_{I}(\cdot)\) can be any convolution-based image encoder such as EDSR (Lim et al., 2017) or RDN (Zhang et al., 2018). SSIF approximates the mathematical integral shown in Equation 2 as a weighted sum over the predicted radiance values of \(K\) wavelengths \(\{\lambda_{i,1},\lambda_{i,2},...,\lambda_{i,K}\}\) sampled from a wavelength interval \(\Lambda_{i}=[\lambda_{i,s},\lambda_{i,e}]\in\Lambda_{hr-h}\) at location \(\mathbf{x}_{l}\) (see Equation 3). Here, \(\rho_{i}(\lambda_{i,k})\) is the response function value, i.e., weight, of each wavelength \(\lambda_{i,k}\) given the current response function for band \(b_{i}\). \(\gamma_{\mathbf{I}}(\mathbf{x}_{l},\lambda_{i,k})\) is the radiance value of \(\lambda_{i,k}\) at location \(\mathbf{x}_{l}\) which can be computed by a neural implicit function \(G_{x,\lambda}\). Basically, \(G_{x,\lambda}\) maps an arbitrary pixel location \(\mathbf{x}_{lc}[-1,1]\odot[-1,1]\) of \(\mathbf{I}_{hr-h}\) and a wavelength \(\lambda_{i,k}\in\Lambda_{i}\) into the radiance value of the target image \(\mathbf{I}_{hr-h}\) at the corresponding location and wavelength, i.e., \(\gamma_{\mathbf{I}}(\mathbf{x}_{l},\lambda_{i,k})=G_{x,\lambda}(\mathbf{S}_{ lr-m},\mathbf{x}_{l},\lambda_{i,k})\). Here, \(\odot\) is the Cartesian product. \[\mathbf{s}_{l,i}=\sum_{k=1}^{K}\rho_{i}(\lambda_{i,k})\gamma_{\mathbf{I}}( \mathbf{x}_{l},\lambda_{i,k})=\sum_{k=1}^{K}\rho_{i}(\lambda_{i,k})G_{x, \lambda}(\mathbf{S}_{lr-m},\mathbf{x}_{l},\lambda_{i,k}) \tag{3}\] \(G_{x,\lambda}\) can be decomposed into three neural implicit functions - a pixel feature decoder \(F_{\mathbf{x}}\), a spectral encoder \(E_{\lambda}\), and a spectral decoder \(D_{\mathbf{x},\lambda}\). The pixel feature decoder takes the 2D feature map of the input image \(\mathbf{S}_{lr-m}\) as well as one arbitrary pixel location \(\mathbf{x}_{l\in[-1,1]\odot[-1,1]}\) of \(\mathbf{I}_{hr-h}\) and maps them to a pixel hidden feature \(\mathbf{h}_{l}\in\mathbb{R}^{d}\) where \(d\) is the hidden pixel feature dimension (see Equation 4). Here, \(F_{\mathbf{x}}\) can be any spatial implicit function such as LIF Chen et al. (2021) and ITSRN (Yang et al., 2021). \[\mathbf{h}_{l}=F_{\mathbf{x}}(\mathbf{S}_{lr-m},\mathbf{x}_{l}) \tag{4}\] The spectral encoder \(E_{\lambda}(\lambda_{i,k})\) encodes a wavelength \(\lambda_{i,k}\) sampled from any wavelength interval \(\Lambda_{i}=[\lambda_{i,s},\lambda_{i,e}]\in\Lambda_{hr-h}\) into a spectral embedding \(\mathbf{b}_{i,k}\in\mathbb{R}^{d}\). 
We can implement \(E_{\lambda}\) as any position encoder (Vaswani et al., 2017; Mai et al., 2020). Please refer to Appendix A.2 for a detailed description. \[\mathbf{b}_{i,k}=E_{\lambda}(\lambda_{i,k}) \tag{5}\] Finally, the spectral decoder \(D_{\mathbf{x},\lambda}(\mathbf{b}_{i,k};\mathbf{h}_{l})\) is a multilayer perceptron whose weights are modulated by the image feature embedding \(\mathbf{h}_{l}\). \(D_{\mathbf{x},\lambda}\) maps the spectral embedding \(\mathbf{b}_{i,k}\) into a radiance value of \(\lambda_{i,k}\) at location \(\mathbf{x}_{l}\), i.e., \(\mathbf{s}_{l,i,k}=D_{\mathbf{x},\lambda}(\mathbf{b}_{i,k};\mathbf{h}_{l})\). So we have \[\mathbf{s}_{l,i}=\sum_{k=1}^{K}\rho_{i}(\lambda_{i,k})G_{x,\lambda}(\mathbf{S} _{lr-m},\mathbf{x}_{l},\lambda_{i,k})=\sum_{k=1}^{K}\rho_{i}(\lambda_{i,k})D_{ \mathbf{x},\lambda}(\mathbf{b}_{i,k};\mathbf{h}_{l})=\sum_{k=1}^{K}\rho_{i}( \lambda_{i,k})\mathbf{s}_{l,i,k} \tag{6}\] The response function \(\rho_{i}(\lambda_{i,k})\) can be a learnable function or a predefined function based on the knowledge of the target HSI sensor. To make the learning easier, we pick a predefined function, e.g. a Gaussian distribution or a uniform distribution, for each band \(b_{i}\) by following Chi et al. (2021). Figure 2b illustrates the model architecture of SSIF. The prediction \(\mathbf{s}_{l,i}\in\mathbb{R}^{C}\) is compared with the ground truth \(\mathbf{s}^{\prime}_{l,i}\). A L1 reconstruction loss is used: \[\mathcal{L}=\sum_{(\mathbf{I}_{lr-m},\mathbf{I}_{hr-h})\in\mathcal{D}}\sum_{( \mathbf{x}_{l},\mathbf{s}_{hr-h},\Lambda_{hr-h})\in\mathbf{L}_{hr-h}}\sum_{ \Lambda_{i}\in\Lambda_{hr-h}}\parallel\mathbf{s}_{l,i}-\mathbf{s}^{\prime}_{ l,i}\parallel_{1}, \tag{7}\] where \(\mathcal{D}\) indicates all the low-res and high-res image pairs for the SSSR task. ### Super-Resolution Data Preparation Figure 1(a) illustrates the data preparation process of SSIF. Given a training image pair which consists of a high spatial-spectral resolution image \(\mathbf{I}_{hr-h}\in\mathbb{R}^{H\times W\times C_{max}}\) and an image with high spatial resolution but low spectral resolution \(\mathbf{I}_{hr-m}\in\mathbb{R}^{H\times W\times c}\), we perform downsampling in both the spectral domain and spatial domain. For the spectral downsampling process (the blue box in Figure 1(a)), we downsample \(\mathbf{I}_{hr-h}^{\prime}\) in the spectral domain to obtain \(\mathbf{I}_{hr-h}\in\mathbb{R}^{H\times W\times C}\) where the band number \(C\) is sampled uniformly between the min and max band number \(C_{min},C_{max}\). For the spatial downsampling (the orange box in Figure 1(b)), we spatially downsample \(\mathbf{I}_{hr-m}\) into \(\mathbf{I}_{lr-m}\in\mathbb{R}^{h\times w\times c}\) which serves as the input for \(SSIF\). Here, the downsampling scale \(p\) is sampled uniformly from the min and max spatial scale \(p_{min}\), \(p_{max}\). See Appendix A.3 for a detailed description. ## 5 Experiments To test the effectiveness of the proposed SSIF, we evaluate it on two challenging spatial-spectral super-resolution benchmark datasets - the CAVE dataset (Yasuma et al., 2010) and the Pavia Centre dataset5. Both datasets are widely used for super-resolution tasks on hyperspectral images. Please refer to Appendix A.5 for detailed description of both datasets. 
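To summarise the prediction path of Section 4.2, the following NumPy sketch evaluates Equations (3)-(6) for a single query pixel. The pixel feature \(\mathbf{h}_{l}\), the sinusoidal spectral encoder frequencies, and the decoder weights are random placeholders standing in for the trained \(F_{\mathbf{x}}\), \(E_{\lambda}\) and \(D_{\mathbf{x},\lambda}\); the feature-wise modulation is just one simple way to condition the decoder on \(\mathbf{h}_{l}\). It is a shape-level illustration, not the released implementation.

```python
import numpy as np

# Shape-level sketch of the per-pixel SSIF prediction, Eqs. (3)-(6):
# a Gaussian response-weighted sum over K sampled wavelengths per band,
# each mapped to a radiance value by a tiny decoder modulated by h_l.
# All weights are random and untrained.

rng = np.random.default_rng(0)
d_feat, K = 64, 8

h_l = rng.normal(size=d_feat)                              # Eq. (4), assumed given

def encode_wavelength(lam_nm):                             # Eq. (5)
    freqs = 2.0 ** np.linspace(0.0, 8.0, d_feat // 2) * np.pi / 1000.0
    return np.concatenate([np.sin(freqs*lam_nm), np.cos(freqs*lam_nm)])

W1 = rng.normal(size=(d_feat, d_feat)) / np.sqrt(d_feat)
w2 = rng.normal(size=d_feat) / np.sqrt(d_feat)

def decode(b, h):                                          # radiance value s_{l,i,k}
    return float(w2 @ np.tanh((W1 @ b) * h))               # h_l-modulated hidden layer

def band_value(lam_s, lam_e):                              # Eqs. (3)/(6)
    mu, sigma = (lam_s + lam_e)/2, (lam_e - lam_s)/6
    lams = rng.normal(mu, sigma, size=K)                   # sample K wavelengths in the band
    rho = np.exp(-0.5*((lams - mu)/sigma)**2)
    rho = rho / rho.sum()                                  # discretised response weights
    return sum(r * decode(encode_wavelength(l), h_l) for r, l in zip(rho, lams))

print([round(band_value(a, b), 4) for a, b in [(450, 520), (520, 600), (600, 700)]])
```

In the full model this computation is batched over all query pixel coordinates and over all bands of \(\Lambda_{hr-h}\), which is what lets one trained model serve different spatial and spectral resolutions.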
Footnote 5: [http://www.ahu.sus/cowintco/index.php/Hyperspectral_Bamcta_Sensing_Scenes](http://www.ahu.sus/cowintco/index.php/Hyperspectral_Bamcta_Sensing_Scenes) ### Baselines and SSIF Model Variants Compared with spatial SR and spectral SR, there has been much less work on spatiospectral super-resolution. So we mainly compare our model with 7 baselines: **RCAN + AWAN**, **AWAN + RCAN**, **AWAN + SSPSR**, **RC/AW + MoG-DCN**, **SSJSR**, **US3RN**, and **LIIF**. Please refer to Appendix A.4 for a detailed description of each baseline. For the first 6 baselines, we have to train separate SR models for different spatial and spectral resolutions of the output images. LIIF can use one model to generate output images with different spatial resolutions. However, we still need to train separate models when the output image \(\mathbf{I}_{hr-m}\) has different band numbers \(C\). In contrast, our \(SSIF\) model is able to handle different spatial and spectral resolutions with one model. Based on the response functions we use (Gaussian or uniform) and the wavelength sampling methods, we have 4 SSIF variants: **SSIF-RF-GS**, **SSIF-RF-GF**, **SSIF-RF-US**, and **SSIF-RF-UF**. Both SSIF-RF-GS and SSIF-RF-GF use a Gaussian distribution \(\mathcal{N}(\mu_{i},\,\sigma_{i}^{2})\) as the response function for each band \(b_{i}\) with wavelength interval \(\Lambda_{i}=[\lambda_{i,s},\lambda_{i,e}]\), where \(\mu_{i}=\frac{\lambda_{i,s}+\lambda_{i,e}}{2}\) and \(\sigma_{i}=\frac{\lambda_{i,e}-\lambda_{i,s}}{6}\). The difference is that SSIF-RF-GS uses \(\mathcal{N}(\mu_{i},\,\sigma_{i}^{2})\) to sample \(K\) wavelengths from \(\Lambda_{i}\) while SSIF-RF-GF uses fixed \(K\) wavelengths with equal intervals in \(\Lambda_{i}\). Similarly, both SSIF-RF-US and SSIF-RF-UF use a uniform distribution \(\mathcal{U}(\lambda_{i,s},\lambda_{i,e})\) as the response function for each band \(b_{i}\). SSIF-RF-US uses \(\mathcal{U}(\lambda_{i,s},\lambda_{i,e})\) to sample \(K\) wavelengths for each \(\Lambda_{i}\) while SSIF-RF-UF uses fixed \(K\) wavelengths with equal intervals. We also consider one additional SSIF variant - **SSIF-M** - which only uses the band middle point \(\mu_{i}=\frac{\lambda_{i,s}+\lambda_{i,e}}{2}\) for each band, i.e., \(K=1\). ### SSSR on the CAVE dataset Table 1 shows the evaluation result of the SSSR task across different spatial scales \(p\) on the original CAVE dataset with 31 bands. We use three evaluation metrics - PSNR, SSIM, and SAM - which measure the quality of generated images from different perspectives. We evaluate different baselines as well as \(SSIF\) under different spatial scales \(p=\{2,4,8,10,12,14\}\). Since \(p_{min}=1\) and \(p_{max}=8\), \(p=\{2,4,8\}\) indicates "in-distribution" results while \(p=\{10,12,14\}\) indicates "out-of-distribution" results for \(p\) not seen by LIIF or \(SSIF\) during training. We can see that 1. All 5 \(SSIF\) variants outperform or are comparable to the 7 baselines across all tested spatial scales even though the first 6 baselines are trained separately on each \(p\). 2. SSIF-RF-UF achieves the best or 2nd best results across all spatial scales and metrics. 3. A general pattern we can see across all spatial scales is that the order of the model performances is SSIF-RF-* \(>\) SSIF-M \(>\) LIIF \(>\) the other six baselines. More interesting results emerge when we compare the performance of different models on different spectral resolutions, i.e., different \(C\). 
Figure 2(a) and 2(b) compare model performance under different \(C\) with a fixed spatial scale (\(p=4\) and \(p=8\) respectively). We can see that 1. Both Figure 2(a) and 2(b) show that SSIF-RF-UF achieves the best performances in two spatial scales and three metrics on "in-distribution" spectral resolutions. 2. However, the performance of SSIF-RF-UF, SSIF-RF-GF, and SSIF-M drop significantly when \(C>31\) while the performances of SSIF-RF-US and SSIF-RF-GS keep nearly unchanged for \(C>31\). This is because the first three SSIF use a fixed set of wavelengths during training while SSIF-RF-US and SSIF-RF-GS also sample novel wavelengths for each forward pass. This makes these two models have higher generalizability in "out-of-distribution" spectral scales. 3. A general pattern can be observed is that the order of model performance is SSIF-RF-* \(>\) SSIF-M \(>\) LIIF \(>\) other six baselines. 1. Except for \(p=2\), all SSIF can outperform all baselines on different spatial scales. 2. The performances of 4 SSIF-RF-* models are very similar across different spatial scales while SSIF-RF-US is the winner in most cases. They can outperform LIIF in most settings. Figure 3(a) and 3(b) compare different models across different spectral resolutions, i.e., \(C\) for a fixed spatial scale (\(p=4\) and \(p=8\) respectively). We can see that 1. The performances of 4 SSIF-RF-* models can outperform SSIF-M which is better than LIIF, and the other 6 baselines. 2. All 4 4 SSIF-RF-* show good generalizability for "out-of-distribution" spectral scales, especially when \(C>102\) while SSIF-M suffers from performance degradation. Figure 3: The evaluation result of the SSSR task across different \(C\) on the CAVE (Yasuma et al., 2010a) dataset. Here, the x axis indicates the number of bands \(C\) of \(\mathbf{L}_{tr-h}\). (a) and (b) compare the performances of different models across different \(C\) in two spatial scales \(p=4\) or \(p=8\). Since our \(SSIF\) can generalize to different \(p\) and \(C\), the evaluation metrics of each \(SSIF\) are generated by one trained model. In contrast, we trained separated LIIF models for different \(C\). The gray area in these plots indicates “out-of-distribution” performance in which \(SSIF\) are evaluated on \(Cs\) which have not been used for training. The ablation studies on \(K\) and the generated remote sensing images can be seen in Appendix A.8. ### Land Use Classification on the Pavia Centre Dataset To test the fidelity of the generated high spatial-spectral resolution images, we evaluate them on land use classification task. We train the state-of-the-art land use classification model, A2S2K-ResNet (Roy et al., 2020), on the training dataset of Pavia Centre and evaluate its performance on the testing area - both the ground truth HSI image as well as the generated images from LIIF and different SSIF models. Table 3 compares the performance of A2S2K-ResNet on different generated images across different spatial scales. We can see that although SSIF-M shows good performance on the SSSR task on both datasets, the generated images are less useful - the land use classification accuracy on its generated images is much worse than other models, even far behind LIIF. SSIF-RF-GS shows the best performance across different spatial scales and can outperform LIIF by 1.7%-7%. Please refer to Appendix A.9 for a detailed description of the dataset, model, training detailed. 
**Discussions of what the spectral encoder learned** To understand how the spectral encoder represents a given wavelength we plot each dimension of the spectral embedding against the wavelength (Figure 10 in Appendix A.10). We find that they generally resemble piecewise-linear PL basis functions (Paul and Koch, 1974) or the continuous PK basis functions (Melal, 1976). This makes sense because PL and PK are classical methods to represent a scalar function - i.e., \(G_{x,\lambda}(\mathbf{S}_{lr-m},\mathbf{x}_{l},\ \cdot\ )\) in our case. We can think that the weights of these bases are provided by the image encoder and the SIF network given an image \(\mathbf{S}_{lr-m}\) and location \(\mathbf{x}_{l}\). Having a spectral encoder with learnable parameters should provide better representation than fixed basis functions.

Figure 4: Evaluation across different \(C\) on the Pavia Centre dataset. The set-up is the same as Figure 3. Note that some of the baseline models do not appear in some of those plots because the performances of these models are very low and cannot be shown in the current metric range.

\begin{table} \begin{tabular}{l|c|c|c|c} \hline Model & \multicolumn{4}{c}{Land Use Classification Accuracy (\%)} \\ \hline Band \(C\) & \multicolumn{4}{c}{102} \\ \hline Scale \(p\) & 2 & 3 & 4 & 8 \\ \hline LIIF (Chen et al., 2021) & 41.69 & 41.29 & 37.87 & 37.38 \\ \hline SSIF-M & 25.48 & 25.38 & 22.56 & 14.91 \\ \hline SSIF-RF-GS & 43.44 & **46.86** & **44.97** & **44.82** \\ SSIF-RF-GF & 35.37 & 37.91 & 37.20 & 38.08 \\ SSIF-RF-US & 40.15 & 38.48 & 34.86 & 30.20 \\ SSIF-RF-UF & **45.32** & 44.00 & 41.87 & 36.34 \\ \hline Acc Imp. & 1.75 & 5.57 & 7.10 & 7.44 \\ \hline HSI (Upper Bound) & \multicolumn{4}{c}{72.66} \\ \hline \end{tabular} \end{table} Table 3: The evaluation of the generated images using A2S2K-ResNet (Roy et al., 2020) on the Pavia Centre land use classification task. “HSI” is the accuracy on the ground truth test image which is the upper bound. “Acc Imp.” is the accuracy improvement from LIIF to SSIF-RF-GS.

## 6 Conclusion In this work, we propose Spatial-Spectral Implicit Function (SSIF), a neural implicit model that represents an image as a continuous function of both pixel coordinates in the spatial domain and wavelengths in the spectral domain. This enables SSIF to handle SSSR tasks with different output spatial and spectral resolutions simultaneously with one model. In contrast, all previous works have to train separate super-resolution models for different spectral resolutions. We demonstrate the effectiveness of SSIF on the SSSR task with two datasets - CAVE and Pavia Centre. We show that SSIF can outperform all baselines across different spatial and spectral scales even when the baselines are allowed to be trained separately at each spectral resolution, thus solving an easier task. We demonstrate that SSIF generalizes well to unseen spatial and spectral resolutions. In addition, we test the fidelity of the generated images on a downstream task - land use classification. We show that SSIF can outperform LIIF by a large margin (1.7%-7%). In the current study, the effectiveness of SSIF is mainly shown on hyperspectral image SR, while SSIF is flexible enough to handle multispectral images with irregular wavelength intervals. This will be studied in future work. Moreover, the data limitation of the hyperspectral images poses a significant challenge to SR model training. We also plan to construct a large dataset for hyperspectral image super-resolution. 
Ethics StatementAll datasets we use in this work, including the CAVE and Pavia Centre datasets, are publicly available datasets. No human subject study is conducted in this work. We do not find specific negative societal impacts of this work. Reproducibility StatementOur source code has been uploaded as a supplementary file to reproduce our experimental results. The implementation details of the spectral encoder are described in Appendix A.2. The SSIF model training details are described in Appendix A.6.
2309.12174
Band Flattening and Overlap Fermion
We show that, for each symmetry class based on the tenfold way classification, the effective Dirac operator obtained by integrating out the additional bulk direction takes a value in the corresponding classifying space, from which we obtain the flat band Hamiltonian. We then obtain the overlap Dirac operator for each symmetry class and establish the Ginsparg--Wilson relation associated with $\mathcal{C}$ and $\mathcal{T}$ symmetries, and also the mod-two index theorem.
Taro Kimura, Masataka Watanabe
2023-09-21T15:33:03Z
http://arxiv.org/abs/2309.12174v1
# Band Flattening and Overlap Fermion ###### Abstract We show that, for each symmetry class based on the tenfold way classification, the effective Dirac operator obtained by integrating out the additional bulk direction takes a value in the corresponding classifying space, from which we obtain the flat band Hamiltonian. We then obtain the overlap Dirac operator for each symmetry class and establish the Ginsparg-Wilson relation associated with \(\mathcal{C}\) and \(\mathcal{T}\) symmetries, and also the mod-two index theorem. ###### Contents * 1 Introduction * 2 Preliminaries * 2.1 Lagrangian vs Hamiltonian * 2.2 Symmetry and classification * 3 Bulk extension * 3.1 Wigner-Dyson class * 3.1.1 Class A * 3.1.2 Class AI, AII * 3.2 Chiral class * 3.3 BdG class * 3.3.1 Class D * 3.3.2 Class C * 3.3.3 Class DIII * 3.3.4 Class CI Overlap Dirac operator * 4.1 Ginsparg-Wilson relation * 4.1.1 Chiral symmetry * 4.1.2 \(\mathcal{C}\) and \(\mathcal{T}\) symmetries * 4.2 Index theorem * A Proof of Lemma 3.13 ## 1 Introduction Study of topological phases of matter, which has been originated in condensed-matter physics, now provides an interdisciplinary arena of research involving various domains of theoretical and experimental physics and also mathematics. A topological insulator phase is a primary example of the topological phases, that exhibits a gapless surface state, while the interior behaves as a gapped insulator. This gapless surface state is topologically protected, and the topological insulator cannot be transformed continuously to a topologically trivial insulator. For a topological band insulator, one can consider the topological invariant associated with its band structure, which plays an essential role in characterization of topological property. For example, the TKNN number [1, 2, 3] is the topological invariant associated with the two-dimensional class A system, which is given by integrating the Berry curvature (the first Chern class of the Bloch bundle) over the two-dimensional Brillouin torus. Since the topological property does not depend on the detail of the band structure, we often use the band flattened system to simplify the argument (see, e.g., [4, 5]). In this paper, we provide a systematic methodology, that we call the bulk extension, to obtain the band flattened Hamiltonian from a generic gapped Hamiltonian of free fermion. Here is the summary of the prescription. 1. Consider a \(d\)-dimensional gapped free fermion Hamiltonian \(H\). 2. Construct a \((d+1)\)-dimensional Dirac operator \(D\) by adding an extra direction. 3. Compute the functional determinant \(\det D\) while imposing the periodic boundary condition in the extra direction together with the Pauli-Villars regulator. 4. Read off the effective Dirac operator \(\overline{D}\) from the determinant, and convert it to the band flattened Hamiltonian in the bulk limit, \(\overline{H}=H/\sqrt{H^{2}}=:\operatorname{sgn}(H)\). We apply this formalism to generic symmetry classes of topological insulators and superconductors [4, 5] based on the Altland-Zirnbauer (AZ) tenfold way classification [6]. Here is the first result of this paper. **Theorem 1.1** (Theorem 3.6).: Let \(\gamma\) be (one of) the mass matrix of the gapped Hamiltonian \(H\). Then, we obtain the band flattened Hamiltonian from the effective Dirac operator under the periodic boundary condition, \(\overline{H}=\gamma\overline{D}\), in the bulk limit. 
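A quick numerical illustration of step 4 of the prescription and of the band flattening \(\overline{H}=H/\sqrt{H^{2}}=\operatorname{sgn}(H)\): for any gapped hermitian matrix, the flattened Hamiltonian keeps the eigenvectors of \(H\) while squashing its spectrum to \(\pm 1\). The matrix below is a generic random stand-in, not a Hamiltonian of a particular symmetry class.

```python
import numpy as np

# Band flattening H -> sgn(H) = H / sqrt(H^2) via eigendecomposition.
# A random gapped hermitian "Hamiltonian" is flattened to eigenvalues +-1
# while keeping its eigenvectors (and hence its topological data).

rng = np.random.default_rng(1)
n = 8
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2                      # hermitian, generically gapped

evals, U = np.linalg.eigh(H)
H_flat = U @ np.diag(np.sign(evals)) @ U.conj().T          # sgn(H)

print(np.allclose(H_flat @ H_flat, np.eye(n)))             # flattened bands: sgn(H)^2 = 1
print(np.allclose(H_flat, H @ np.linalg.inv(
    U @ np.diag(np.sqrt(evals**2)) @ U.conj().T)))         # equals H (H^2)^(-1/2)
```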
In fact, for class \(\mathscr{C}\) system, the effective Dirac operator \(\overline{D}\) takes a value in the classifying space \(S_{\mathscr{C}}\in\{C_{0,1},R_{0,\ldots,7}\}\) (Proposition 3.8), which plays an essential role in the tenfold way classification: The homotopy group of \(S_{\mathscr{C}}\) characterizes topological property of the topological insulator/superconductor [4, 5]. See Table 1. The formalism of the bulk extension presented in this paper is motivated by the overlap Dirac operator, showing an exact chiral symmetry on a lattice, that was originally formulated for the class A system [7, 8, 9]. We extend the original construction of the overlap operator to generic AZ tenfold way symmetry classes. **Proposition 1.2** (Proposition 3.5).: The overlap Dirac operator of class \(\mathscr{C}\) is given by \[D_{\rm ov}=\frac{1}{2}(1+V)\,,\qquad V\in S_{\mathscr{C}}\,. \tag{1.1}\] It has been known [7, 8, 9, 10, 11] that the overlap Dirac operator of class A obeys Ginsparg-Wilson (GW) relation [12] (recovering dependence on the lattice constant \(a\)),1 Footnote 1: Another realization of GW relation is achieved by the perfect action [13]. \[\gamma D+D\gamma=aD\gamma D\,, \tag{1.2}\] which is interpreted as a non-linear deformation of the chiral symmetry relation, \(\{\gamma,D\}=\gamma D+D\gamma=0\). Applying the same argument to \(\mathcal{C}\) and \(\mathcal{T}\) symmetries, we obtain the corresponding GW relation. **Theorem 1.3** (Theorem 4.4).: Let \(C\) and \(T\) be the unitary operators defined in (2.10). For the system with \(\mathcal{C}\) and \(\mathcal{T}\) symmetries, the overlap Dirac operator obeys, \[CD+D^{\rm T}C=aD^{\rm T}CD\,,\qquad TD+D^{*}T=aD^{*}TD\,. \tag{1.3}\] These relations immediately imply an anomaly under \(\mathcal{C}\) and \(\mathcal{T}\) transformations similarly to the parity anomaly [14] in the overlap formalism, which is related to the anomalous behavior of Majorana(-Weyl) fermion (hence, \(\mathcal{C}\) transformation) [15, 16, 17, 18, 19], and of the \(\mathcal{T}\)-invariant topological system [20, 21]. We remark that GW relation with respect to an additional symmetry has been also discussed in [22] in the context of topological crystalline insulators/superconductors [23]. The overlap formalism also provides a concise description of the index theorem. It has been established that the \(\mathbb{Z}\)-valued index of the overlap Dirac operator, \(\operatorname{ind}(D_{\rm ov})=\dim\ker D_{\rm ov}-\dim\operatorname{coker} D_{\rm ov}\), is given as follows. **Proposition 1.4** (Hasenfratz-Laliena-Niedermayer [10], Luscher [11], Adams [24]).: Let \(\eta(A)\) be the eta invariant of a self-adjoint operator \(A\). Then, the \(\mathbb{Z}\)-valued index of the overlap Dirac operator is given by \[\operatorname{ind}(D_{\rm ov})=-\frac{1}{2}\operatorname{tr}\operatorname{ sgn}H=-\frac{1}{2}\eta(H)\,. \tag{1.4}\] This index agrees with the bulk topological invariant associated with the gapped Hamiltonian \(H\)[4, 5]. In fact, this agreement also holds for the \(\mathbb{Z}_{2}\)-topological invariant and the mod-two index of the overlap Dirac operator. **Theorem 1.5** (Theorem 4.5).: The mod-two index of overlap Dirac operator, \(\nu=\operatorname{ind}(D_{\text{ov}})=\dim\ker(D_{\text{ov}})\), is given by \[(-1)^{\nu}=\det V\,. \tag{1.5}\] The use of determinant signature to define the mod-two index has been proposed specifically for \((8n+2)\)-dimensional Majorana-Weyl fermion [15, 16]. 
We remark that the mod-two index has recently been formulated in the domain-wall fermion formalism [25], which has a similar expression using the sign factor appearing in the Dirac operator determinant. See also recent approaches to the index theorem on a lattice [26, 27]. ### Organization of the paper The remaining part of this paper is organized as follows. In Sec. 2, we discuss preliminary facts, including the relation between the Hamiltonian formalism and the Lagrangian formalism, and the symmetry classification. In Sec. 3, we apply the formalism that we call the bulk extension to obtain the band flattened Hamiltonian. For each symmetry class, we prove that the effective Dirac operator takes a value in the corresponding classifying space. In Sec. 4, we explore the overlap Dirac operator obtained through the bulk extension with the open boundary condition. We prove that the overlap operator obeys GW relation with respect to \(\mathcal{C}\) and \(\mathcal{T}\) symmetries, and discuss the anomalous behavior under \(\mathcal{C}\) and \(\mathcal{T}\) transformations. We also establish the mod-two index of the overlap Dirac operator. ### Acknowledgements We would like to thank Mikio Furuta for insightful comments on the preliminary version of the draft. The work of TK was in part supported by EIPHI Graduate School (No. ANR-17-EURE-0002) and Bourgogne-Franche-Comte region. MW is supported in part by Grant-in-Aid for JSPS Fellows (No. 22J00752). ### Note added While completing this manuscript, we became aware of a recent preprint by Clancy-Kaplan-Singh [28] also addressing the overlap fermion associated with \(\mathcal{C}\) and \(\mathcal{T}\) symmetries, and the mod-two index discussed in Sec. 4. ## 2 Preliminaries **Notations** * For \(x\in\mathbb{K}\), let \(x^{*}\) be its \(\mathbb{K}\)-conjugate. We denote the conjugate matrix of \(M\) by \(M^{\dagger}:=M^{*\mathrm{T}}\). We define the set of self-conjugate matrices (real symmetric for \(\mathbb{R}\), complex hermitian for \(\mathbb{C}\), quaternion self-dual for \(\mathbb{H}\)) of size \(n\) by \[\mathsf{H}(n,\mathbb{K})=\{M\in\mathbb{K}^{n\times n}\mid M^{\dagger}=M\}\,.\] (2.1) * We define the set of skew-conjugate matrices of size \(n\) by \[\widetilde{\mathsf{H}}(n,\mathbb{K})=\{M\in\mathbb{K}^{n\times n}\mid M^{\dagger}=-M\}\,.\] (2.2) * We denote a compact symplectic group by \(\mathrm{Sp}(n)=\mathrm{Sp}(2n,\mathbb{C})\cap\mathrm{U}(2n)\). * We denote a commutator and an anti-commutator by \([a,b]=ab-ba\) and \(\{a,b\}=ab+ba\). * We denote \(\mathbb{Z}_{n}=\mathbb{Z}/n\mathbb{Z}\). ### Lagrangian vs Hamiltonian Let \(d\) be the spatial dimension, so that the spacetime dimension is \(d+1\). Let \(\{\gamma_{\mu}\}_{\mu=0,\ldots,d}\) be the Euclidean gamma matrices, which are hermitian and obey the relation \(\{\gamma_{\mu},\gamma_{\nu}\}=\gamma_{\mu}\gamma_{\nu}+\gamma_{\nu}\gamma_{\mu}=2\delta_{\mu,\nu}\). The free Dirac Lagrangian in the \((d+1)\)-dimensional Euclidean spacetime is given by \[\mathscr{L}=\bar{\psi}(\gamma^{\mu}\partial_{\mu}+m)\psi=:\bar{\psi}D\psi\,, \qquad D=\gamma^{\mu}\partial_{\mu}+m\,, \tag{2.3}\] where the associated Dirac operator \(D\) is non-hermitian in general.2 In fact, a non-hermitian Hamiltonian, as discussed in, e.g., [29], has a direct interpretation as a Dirac operator. We remark that the Dirac operator becomes anti-hermitian in the case \(m=0\), \(D^{\dagger}=-D\).
Considering the \(0\)-direction as a "time" direction, and putting \(\bar{\psi}=\psi^{\dagger}\gamma_{0}\), we then obtain the hermitian Hamiltonian as follows: Footnote 2: The Dirac operator becomes hermitian in the Lorentzian signature. \[\mathscr{L} =\psi^{\dagger}\Big{(}\partial_{0}+\gamma_{0}\vec{\gamma}\cdot \vec{\partial}+m\gamma_{0}\Big{)}\psi=:\psi^{\dagger}(\partial_{0}+H)\psi\,, \tag{2.4a}\] \[\mathscr{H} =\psi^{\dagger}H\psi=\psi^{\dagger}\left(\gamma_{0}\vec{\gamma} \cdot\vec{\partial}+m\gamma_{0}\right)\psi=\psi^{\dagger}\left(-i\vec{\gamma} \cdot\vec{\partial}+m\gamma_{0}\right)\psi \tag{2.4b}\] where \(\tilde{\gamma}_{j}=i\gamma_{0}\gamma_{j}\) is a hermitian gamma matrix for \(j=1,\ldots,d\). Hence, we have the relation between the Dirac operator and the Hamiltonian, \[D=\gamma_{0}H+\gamma_{0}\partial_{0}\,. \tag{2.5}\] In other words, we may identify the mass matrix in the Hamiltonian with the zero-th gamma matrix \(\gamma_{0}\), so that the mass term is proportional to the identity matrix in the Dirac operator. **Example 2.1** (\(\boldsymbol{d=2}\)).: Let \(\{\sigma_{i}\}_{i=1,2,3}\) be the Pauli matrices. The momentum space representation of the massive Dirac Hamiltonian of class A in \(d=2\) is given by \[H=p_{1}\sigma_{1}+p_{2}\sigma_{2}+m\sigma_{3}\,. \tag{2.6}\] In this case, we identify the mass matrix, \(\gamma_{0}=\sigma_{3}\). The corresponding Dirac operator in \(2+1\) dimensions is given by \[D=p_{1}(\sigma_{3}\sigma_{1})+p_{2}(\sigma_{3}\sigma_{2})+ip_{0}\sigma_{3}+m=ip _{1}\sigma_{2}-ip_{2}\sigma_{1}+ip_{0}\sigma_{3}+m\,, \tag{2.7}\] which is not hermitian. In the massless case \(m=0\), \(D^{\dagger}=-D\), and it shows the parity symmetry. **Example 2.2** (\(\boldsymbol{d=3}\)).: The massive Dirac Hamiltonian of class A in \(d=3\) is given as follows: \[H=\vec{p}\cdot(\vec{\sigma}\otimes\sigma_{3})+m(\mathbb{1}\otimes\sigma_{2})= \begin{pmatrix}\vec{p}\cdot\vec{\sigma}&-im\\ +im&-\vec{p}\cdot\vec{\sigma}\end{pmatrix}\,. \tag{2.8}\] Then, having the mass matrix \(\gamma_{0}=\mathbb{1}\otimes\sigma_{2}\), the Dirac operator is given by \[D=-i\vec{p}\cdot(\vec{\sigma}\otimes\sigma_{1})+ip_{0}(\mathbb{1}\otimes \sigma_{2})+m(\mathbb{1}\otimes\mathbb{1}\,)=\begin{pmatrix}m&+p_{0}-i\vec{p} \cdot\vec{\sigma}\\ -p_{0}-\vec{p}\cdot\vec{\sigma}&m\end{pmatrix}\,. \tag{2.9}\] If \(m=0\), it shows the chiral symmetry \(\{D,\Gamma\}=0\) where \(\Gamma=\mathbb{1}\otimes\sigma_{3}\). ### Symmetry and classification Let us introduce the discrete symmetries, \(\mathcal{C}\) and \(\mathcal{T}\), which play an essential role in the classification of Hamiltonian and Dirac operator. **Definition 2.3**.: Let \(C\) and \(T\) be unitary operators, which act on a Hamiltonian as follows, \[CHC^{-1}=-H^{*}\,,\qquad THT^{-1}=+H^{*}\,. \tag{2.10}\] In the momentum space representation, we have \(CH(p)C^{-1}=-H(-p)^{*}\) and \(TH(p)T^{-1}=+H(-p)^{*}\). We define the complex conjugation operator \(K\), \(KXK=X^{*}\) for any operator \(X\). Then, we define anti-unitary operators, that we call charge conjugation operator \(\mathcal{C}\) and time reversal operator \(\mathcal{T}\), \[\mathcal{C}=CK\,,\qquad\mathcal{T}=TK\,. \tag{2.11}\] If there exist \(\mathcal{C}\) and \(\mathcal{T}\) operators for a given Hamiltonian \(H\), we say that the Hamiltonian \(H\) has \(\mathcal{C}\) and \(\mathcal{T}\) symmetry, respectively. 
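As a small numerical illustration of Definition 2.3 (our own toy example, not one of the examples above; the two-band BdG Bloch Hamiltonian, its parameters, and the sampled momenta are assumptions made for illustration), the sketch below verifies the momentum-space relation \(CH(p)C^{-1}=-H(-p)^{*}\) for a candidate unitary \(C\), and checks that the resulting anti-unitary operator squares to \(+1\).

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def H_bdg(p, t=1.0, mu=0.5, delta=0.7):
    """Toy two-band BdG Bloch Hamiltonian (hypothetical, not from the paper)."""
    return (t * np.cos(p) - mu) * s3 + delta * np.sin(p) * s2

C = s1                                   # candidate unitary C of Definition 2.3
for p in np.linspace(-np.pi, np.pi, 9):
    lhs = C @ H_bdg(p) @ np.linalg.inv(C)
    rhs = -H_bdg(-p).conj()
    assert np.allclose(lhs, rhs)         # C H(p) C^{-1} = -H(-p)^*

# The anti-unitary operator is C K with (C K)^2 = C C^*; here it equals +1,
# i.e. this toy Hamiltonian has a particle-hole type C-symmetry squaring to +1.
print(np.allclose(C @ C.conj(), np.eye(2)))
```

The same pattern of checks applies to a candidate \(\mathcal{T}=TK\); together with the signs of \(\mathcal{C}^{2}\) and \(\mathcal{T}^{2}\) this is what fixes the entry of the tenfold way table discussed next.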
_Remark 2.4_.: If the Hamiltonian \(H\) has both \(\mathcal{C}\) and \(\mathcal{T}\) symmetries, it also has the chiral symmetry, i.e., there exists an unitary operator \(\Gamma\propto CT\), which anti-commutes with \(H\), \(\{\Gamma,H\}=0\). There are two possible realizations of \(\mathcal{C}\) and \(\mathcal{T}\) operators, such that \[\mathcal{C}^{2}=\pm 1\,,\qquad\mathcal{T}^{2}=\pm 1\,, \tag{2.12}\] from which we obtain the AZ tenfold way classification [6]. We provide the summary of the classification in Table 1. The left-most column shows the symmetry class \(\mathscr{C}\): We both use the Cartan notation and the classifying space notation. There are two complex and eight real classes. Then, we show the classifying space and the space of time-evolution operator \(U_{\mathscr{C}}=e^{iH}\) for each symmetry class. We observe that the classifying space of class \(\mathscr{C}_{p}\) agrees with the space of \(U_{\mathscr{C}_{p+1}}\), where \(p\in\mathbb{Z}_{2}\) (\(\mathscr{C}=C\)) and \(p\in\mathbb{Z}_{8}\) (\(\mathscr{C}=R\)). The right-most column shows \(\mathcal{C}\) and \(\mathcal{T}\) symmetries of each class. ## 3 Bulk extension Utilizing both formalisms of Lagrangian and Hamiltonian, we introduce the process of the bulk extension, which gives rise to the band flattened Hamiltonian. We start with a \(d\)-dimensional gapped Hamiltonian of class \(\mathscr{C}\) denoted by \(H\). Applying the Lagrangian formalism, we then obtain a \((d+1)\)-dimensional Dirac operator by adding the \(0\)-direction, \(D=\gamma_{0}H+\gamma_{0}\partial_{0}\). We apply lattice discretization to deal with this direction. \begin{table} \begin{tabular}{c c c c c c} \hline \hline Symmetry class & \(\mathscr{C}\) & Classifying space \(S_{\mathscr{C}}\) & T-evolution operator \(U_{\mathscr{C}}\) & \(\mathcal{T}^{2}\) & \(\mathcal{C}^{2}\) & \(\chi\) \\ \hline A & \(C_{0}\) & \(\mathrm{U/U\times U}\) & U & 0 & 0 & 0 \\ AIII & \(C_{1}\) & U & \(\mathrm{U/U\times U}\) & 0 & 0 & 1 \\ \hline AI & \(R_{0}\) & \(\mathrm{O/O\times O}\) & U/O & \(+1\) & 0 & 0 \\ BDI & \(R_{1}\) & O & \(\mathrm{O/O\times O}\) & \(+1\) & \(+1\) & 1 \\ D & \(R_{2}\) & O/U & O & 0 & \(+1\) & 0 \\ DIII & \(R_{3}\) & U/Sp & O/U & \(-1\) & \(+1\) & 1 \\ AII & \(R_{4}\) & \(\mathrm{Sp/Sp\times Sp}\) & U/Sp & \(-1\) & 0 & 0 \\ CII & \(R_{5}\) & Sp & \(\mathrm{Sp/Sp\times Sp}\) & \(-1\) & \(-1\) & 1 \\ C & \(R_{6}\) & Sp/U & Sp & 0 & \(-1\) & 0 \\ CI & \(R_{7}\) & U/O & Sp/U & \(+1\) & \(-1\) & 1 \\ \hline \hline \end{tabular} \end{table} Table 1: The AZ tenfold way classification of the classifying spaces and the associated time-evolution operators with respect to \(\mathcal{T}\), \(\mathcal{C}\), and chiral (\(\chi\)) symmetries. **Definition 3.1**.: We define the shift operator in the \(0\)-direction denoted by \(\nabla_{0}\), such that \(\nabla_{0}\psi_{n_{0}}=\psi_{n_{0}+1}\), where \(\psi_{n_{0}}\) is the field operator with the \(0\)-direction coordinate \(n_{0}\in\{1,\dots,N\}\) with \(N\) the size of the \(0\)-direction. We do not explicitly write \(d\)-dimensional dependence of the field \(\psi\) for simplicity. Then, we define the \((d+1)\)-dimensional Wilson-Dirac operator as follows. **Definition 3.2**.: Let \(H\) be a \(d\)-dimensional gapped Hamiltonian of class \(\mathscr{C}\). Let \(\gamma\equiv\gamma_{0}\) and let \(a\) be the lattice spacing constant in the \(0\)-direction. 
Denoting the projection operators by \(P_{\pm}=\frac{1}{2}(1\,\pm\gamma)\), we define the \((d+1)\)-dimensional Wilson-Dirac operator, \[D=\gamma H-\frac{1}{a}P_{+}\nabla_{0}-\frac{1}{a}P_{-}\nabla_{0}^{\dagger}+\frac{1}{a}\,. \tag{3.1}\] _Remark 3.3_.: If we do not impose the Wilson term, we instead have \[D=\gamma H-\frac{1}{2a}\left(\nabla_{0}-\nabla_{0}^{\dagger}\right)\,, \tag{3.2}\] which involves additional contributions of species doublers in the low-energy regime. We evaluate the functional determinant of the Wilson-Dirac operator with the following boundary conditions in the \(0\)-direction, \[\nabla_{0}\psi_{N}=\begin{cases}0&(\text{open})\\ +\psi_{1}&(\text{periodic})\\ -\psi_{1}&(\text{anti-periodic})\end{cases} \tag{3.3}\] **Definition 3.4**.: Denoting the functional determinant with the boundary condition by \(\det D_{\rm bc}\) (\({\rm bc}\in\{\rm op\ (open),p\ (periodic),ap\ (anti-periodic)\}\)), we define the effective Dirac determinant, \[\det\widetilde{D}_{\rm op}=\frac{\det D_{\rm op}}{\det D_{\rm ap}}\,,\qquad\det\widetilde{D}_{\rm p}=\frac{\det D_{\rm p}}{\det D_{\rm ap}}\,. \tag{3.4}\] The denominator, computed with the anti-periodic boundary condition, is known to play the role of the Pauli-Villars regulator. Then, we have the following. **Proposition 3.5**.: Taking the large scale limit \(Na\to\infty\), and then the continuum limit \(a\to 0\) in the \(0\)-direction, we have the \(d\)-dimensional effective Dirac operator given by \[\overline{D}_{\rm op}:=\lim_{a\to 0}\lim_{Na\to\infty}\widetilde{D}_{\rm op}=\frac{1}{2}(1+V)\,,\qquad\overline{D}_{\rm p}:=\lim_{a\to 0}\lim_{Na\to\infty}\widetilde{D}_{\rm p}=V\,, \tag{3.5}\] where we define \[V=\gamma\,{\rm sgn}\,H=\gamma\frac{H}{\sqrt{H^{2}}}\,. \tag{3.6}\] Hence, from the effective Dirac operator with the periodic boundary condition, we obtain the band flattened Hamiltonian \(\overline{H}=\operatorname{sgn}(H)\). **Theorem 3.6**.: We have \[\overline{H}=\gamma\overline{D}_{\mathrm{p}}\,. \tag{3.7}\] Proof.: It immediately follows from Proposition 3.5. On the other hand, the effective Dirac operator with the open boundary condition provides the so-called overlap Dirac operator \(\overline{D}_{\mathrm{op}}=D_{\mathrm{ov}}\)[7, 8, 9]. We will discuss it in more detail in Sec. 4. _Remark 3.7_.: Redefining the \(V\)-operator \(V\to-V\), and changing the normalization, we have the determinant of \(\overline{D}_{\mathrm{op}}\) as follows, \[\det(1-V)=\sum_{i=0}^{\operatorname{rk}V}(-1)^{i}\operatorname{tr}\wedge^{i}V\,, \tag{3.8}\] which is interpreted as an equivariant analogue of the Euler characteristic. See also Remark 3.18. Moreover, the ratio of determinants used in Definition 3.4 also implies a K-theory formulation, which involves the difference of vector bundles associated with each boundary condition. The operator \(V\) is unitary, \(V^{\dagger}=V^{-1}\), since \(\gamma\) and \(\operatorname{sgn}H\) are hermitian, and \(\gamma^{2}=(\operatorname{sgn}H)^{2}=1\). In fact, it has been known that the non-hermitian point-gap Hamiltonian is topologically equivalent to a unitary operator [30]. We remark that such a unitary operator is also discussed in the context of Floquet systems (see, e.g., [31, 32]). For each symmetry class based on the AZ tenfold way classification, we have the following.
**Proposition 3.8**.: For class \(\mathscr{C}\) system, the unitary operator \(V\), hence the effective Dirac operator \(\overline{D}_{\mathrm{p}}\) takes value in the corresponding classifying space \(S_{\mathscr{C}}\) in the \(d\)-dimensional bulk limit. The remaining part of this Section is devoted to a proof of Proposition 3.5 and Proposition 3.8 for each symmetry class \(\mathscr{C}\). **Corollary 3.9**.: For class \(\mathscr{C}_{p}\) system, the operator \(H_{V}\) defined by \(V=e^{iH_{V}}\in S_{\mathscr{C}_{p}}\) belongs to class \(\mathscr{C}_{p+1}\). In other words, the symmetry of \(H_{V}\) agrees with that of the Hamiltonian of class \(\mathscr{C}_{p}\) in the gapless limit. Proof.: This follows from that the classifying space \(S_{\mathscr{C}_{p}}\) agrees with the space of time-evolution operators of class \(\mathscr{C}_{p+1}\) as shown in Table 1. In the gapless limit, the mass matrix \(\gamma\) plays a role of the additional symmetry operator, which changes the symmetry class \(\mathscr{C}_{p}\) to \(\mathscr{C}_{p+1}\). ### Wigner-Dyson class We first apply the bulk extension formalism to the Wigner-Dyson class (class A, AI, AII; threefold way). We in particular discuss the class A case (\(\mathbb{C}\)-hermitian Hamiltonian with no symmetry). The class AI and AII cases can be discussed in parallel by replacing the \(\mathbb{C}\)-Hamiltonian with those for \(\mathbb{R}\) and \(\mathbb{H}\). #### 3.1.1 Class A The class A Hamiltonian is given by a \(\mathbb{C}\)-hermitian matrix with no additional symmetry. **Definition 3.10**.: We consider a \(d\)-dimensional gapped system of class A, which is described by the following size \(k\) Hamiltonian, \[H=\begin{pmatrix}\mathsf{A}&\mathsf{C}\\ \mathsf{C}^{\dagger}&\widetilde{\mathsf{A}}\end{pmatrix}\in\mathsf{H}(k, \mathbb{C})\,, \tag{3.9}\] where \(k=k_{1}+k_{2}\) and \[\mathsf{A}\in\mathsf{H}(k_{1},\mathbb{C})\,,\qquad\widetilde{ \mathsf{A}}\in\mathsf{H}(k_{2},\mathbb{C})\,,\qquad\mathsf{C}\in\mathbb{C}^{k_ {1}\times k_{2}}\,. \tag{3.10}\] This block matrix structure is taken with respect to the mass matrix, \[\gamma\equiv\gamma_{0}=\begin{pmatrix}\mathbb{1}_{k_{1}}&0\\ 0&-\mathbb{1}_{k_{2}}\end{pmatrix}\,, \tag{3.11}\] and hence we have the projection operators, \[P_{+}=\frac{1+\gamma}{2}=\begin{pmatrix}\mathbb{1}_{k_{1}}&0\\ 0&0\end{pmatrix}\,,\qquad P_{-}=\frac{1-\gamma}{2}=\begin{pmatrix}0&0\\ 0&\mathbb{1}_{k_{2}}\end{pmatrix}\,. \tag{3.12}\] In this case, applying Definition 3.2, the Wilson-Dirac operator is given by \[aD=\begin{pmatrix}A&C\\ -C^{\dagger}&B\end{pmatrix}-P_{+}\nabla_{0}-P_{-}\nabla_{0}^{\dagger} \tag{3.13}\] where \[A=\mathbb{1}_{k_{1}}+a\mathsf{A}\,,\qquad B=\mathbb{1}_{k_{2}}- a\widetilde{\mathsf{A}}\,,\qquad C=a\mathsf{C}\,. \tag{3.14}\] The next step is to compute the determinant of size \(Nk=N(k_{1}+k_{2})\) to consider the effective Dirac operator, \[\det aD=\begin{vmatrix}A&C&0&&&&Y&0\\ -C^{\dagger}&B&0&-\mathbb{1}_{k_{2}}&&&0&0\\ -\mathbb{1}_{k_{1}}&0&A&C&0&&&\\ &0&-C^{\dagger}&B&0&-\mathbb{1}_{k_{2}}&&&\\ &&-\mathbb{1}_{k_{1}}&0&A&C&\ddots&&\\ &&&&&\ddots&\ddots&\ddots&&\\ 0&0&&&&&&&A&C\\ 0&X&&&&&&&-C^{\dagger}&B\end{vmatrix} \tag{3.15}\] where we take \(X\) and \(Y\) depending on the boundary condition in the \(0\)-direction, \[X,Y=\begin{cases}0&\text{(open)}\\ -\mathbb{1}&\text{(periodic)}\\ +\mathbb{1}&\text{(anti-periodic)}\end{cases} \tag{3.16}\] In order to write down the determinant, we define the following operator. 
**Definition 3.11**.: We define the hermitian \(T\)-operator (transfer matrix) as follows, \[T=\begin{pmatrix}CB^{-1}C^{\dagger}+A&CB^{-1}\\ B^{-1}C^{\dagger}&B^{-1}\end{pmatrix}\,. \tag{3.17}\] _Remark 3.12_.: The determinant of the \(T\)-operator is given by \[\det T=\det\left(CB^{-1}C^{\dagger}+A-CB^{-1}\cdot B\cdot B^{-1}C^{\dagger} \right)\det\left(B^{-1}\right)=\frac{\det A}{\det B}\,. \tag{3.18}\] **Lemma 3.13**.: The Wilson-Dirac operator determinant is given as follows, \[\det aD =(-1)^{n}\det A^{N}\det\left(\begin{pmatrix}\mathbb{1}_{k_{1}}& \\ &-X\end{pmatrix}-T^{-N}\begin{pmatrix}-Y&\\ &\mathbb{1}_{k_{2}}\end{pmatrix}\right)\] \[=\begin{cases}(-1)^{n}\det A^{N}\det\frac{1}{2}\Big{(}1-T^{-N}+ \big{(}1+T^{-N}\big{)}\gamma\Big{)}&\text{(open)}\\ (-1)^{n}\det A^{N}\det\big{(}1-T^{-N}\big{)}&\text{(periodic)}\\ (-1)^{n}\det A^{N}\det\big{(}1+T^{-N}\big{)}\gamma&\text{(anti-periodic)} \end{cases} \tag{3.19}\] where \(n=(N-1)k_{2}^{2}+Nk_{2}\). A proof of Lemma 3.13 is given in Appendix A. From this expression, we obtain the effective Dirac operator determinant (Definition 3.4). **Lemma 3.14**.: Define the effective Hamiltonian \(\mathcal{H}\) through the \(T\)-operator \(T=:e^{a\mathcal{H}}\). Then, the Wilson-Dirac operator determinant is given by \[\det\widetilde{D}_{\mathrm{op}} =\frac{\det D_{\mathrm{op}}}{\det D_{\mathrm{ap}}}=\det\frac{1}{2} \bigg{(}1+\gamma\frac{1-T^{-N}}{1+T^{-N}}\bigg{)}=\det\frac{1}{2}\bigg{(}1+ \gamma\tanh\bigg{(}\frac{Na}{2}\mathcal{H}\bigg{)}\bigg{)}\,, \tag{3.20a}\] \[\det\widetilde{D}_{\mathrm{p}} =\frac{\det D_{\mathrm{p}}}{\det D_{\mathrm{ap}}}=\det\bigg{(} \gamma\frac{1-T^{-N}}{1+T^{-N}}\bigg{)}=\det\bigg{(}\gamma\tanh\bigg{(}\frac{ Na}{2}\mathcal{H}\bigg{)}\bigg{)}\,, \tag{3.20b}\] from which we obtain the effective Dirac operator, \[\widetilde{D}_{\mathrm{op}}=\frac{1}{2}\bigg{(}1+\gamma\tanh \bigg{(}\frac{Na}{2}\mathcal{H}\bigg{)}\bigg{)}\,,\qquad\widetilde{D}_{ \mathrm{p}}=\gamma\tanh\bigg{(}\frac{Na}{2}\mathcal{H}\bigg{)}\,. \tag{3.20c}\] Proof.: This follows from Lemma 3.13. In order to prove Proposition 3.5 in this case, we take the following limits. 1. Large scale limit: \(Na\to\infty\) In this limit, the \(\tanh\) function behaves as \[\lim_{Na\to\infty}\tanh\bigg{(}\frac{Na}{2}\mathcal{H}\bigg{)}= \operatorname{sgn}\mathcal{H}=\frac{\mathcal{H}}{\sqrt{\mathcal{H}^{2}}}\,,\] (3.21) where the spectrum is given by \(\operatorname{Spec}(\operatorname{sgn}\mathcal{H})=\{\pm 1\}\). 2. Continuum limit: \(a\to 0\) In this limit, we have the expansion, \[T=\mathbb{1}_{\,k}+a\mathcal{H}+O(a^{2})\,.\] (3.22) On the other hand, we also have \[T=\begin{pmatrix}a\mathsf{C}(\mathbb{1}_{\,k_{1}}-a\widetilde{ \mathsf{A}})^{-1}a\mathsf{C}^{\dagger}+\mathbb{1}_{\,k_{1}}+a\mathsf{A}&a \mathsf{C}(\mathbb{1}_{\,k_{2}}-a\widetilde{\mathsf{A}})^{-1}\\ (\mathbb{1}_{\,k_{2}}-a\widetilde{\mathsf{A}})^{-1}a\mathsf{C}^{\dagger}&( \mathbb{1}_{\,k_{2}}-a\widetilde{\mathsf{A}})^{-1}\end{pmatrix}=\mathbb{1}_{\,k }+aH+O(a^{2})\,,\] from which we obtain \[\lim_{a\to 0}\mathcal{H}=H\,.\] (3.24) **Proposition 3.15**.: Proposition 3.5 holds for class A. Proof.: Taking the large scale limit, and then the continuum limit, we have \[\lim_{a\to 0}\lim_{Na\to 0}\gamma\tanh\bigg{(}\frac{Na}{2} \mathcal{H}\bigg{)}=\gamma\operatorname{sgn}H=V\,. \tag{3.25}\] Then, it follows from Lemma 3.14. **Proposition 3.16**.: Proposition 3.8 holds for class A. Proof.: We first remark that the unitary operator \(V\) obeys \(\gamma V\gamma=\operatorname{sgn}(H)\gamma=V^{\dagger}\). 
Hence, parametrizing \(V=e^{X}\), we obtain \(X^{\dagger}=-X\) and also \(\{\gamma,X\}=0\), which implies that \(V\) takes a value in the complex Grassmannian, which becomes the classifying space of class A in the inductive limit, \[V\in\bigcup_{k_{1}+k_{2}=k}\frac{\operatorname{U}(k)}{\operatorname{U}(k_{1}) \times\operatorname{U}(k_{2})}\xrightarrow{k\to\infty}C_{0}\,. \tag{3.26}\] This large \(k\) limit corresponds to the thermodynamic limit of the \(d\)-dimensional bulk system (bulk limit). The classifying space plays an important role to discuss the topological property of the system. The zero-th homotopy group of \(C_{0}\) is given by \(\pi_{0}(C_{0})=\mathbb{Z}\), and in general we have \(\pi_{d}(C_{0})=\mathbb{Z}\) for \(d\in 2\mathbb{Z}_{\geq 0}\). In this case, we obtain the topological invariant of the \(d\)-dimensional gapped system \(\nu\in\mathbb{Z}\). **Proposition 3.17**.: Identifying the case \(\nu=0\) as a topologically trivial case, we parametrize \(k_{1}=n-\nu\), \(k_{2}=n+\nu\). Then, we obtain \[\nu=-\frac{1}{2}\operatorname{tr}\operatorname{sgn}H=-\frac{1}{2}\eta(H)\,, \tag{3.27}\] where the eta invariant \(\eta(H)\) is defined by \[\eta(H)=\operatorname{tr}\operatorname{sgn}H\,. \tag{3.28}\] This bulk topological invariant agrees with the index of the overlap Dirac operator, \(\operatorname{ind}(D_{\operatorname{ov}})=\nu\)[11, 24]. See Sec. 4. We also remark that the eta invariant appears in the formalism of the domain-wall fermion from the Dirac determinant together with the Pauli-Villars regularization, which is analogous to the definition of \(\widetilde{D}_{\operatorname{p}}\) and \(\overline{D}_{\operatorname{p}}\). See, e.g., a recent review [33] for details. _Remark 3.18_.: Writing the mass matrix \(\gamma=(-1)^{F}\) and the \(V\)-operator \(V=e^{iH_{V}}\), the overlap operator index (bulk topological invariant) is written in the form of the equivariant Witten index, \[\nu=-\frac{1}{2}\operatorname{tr}\left[(-1)^{F}e^{iH_{V}}\right]\,. \tag{3.29}\] See also Remark 3.7. #### Wilson-Dirac fermion in \(\boldsymbol{d=2}\) Let us demonstrate the bulk extension formalism for a gapped class A system in \(d=2\). We consider the following Wilson-Dirac Hamiltonian, \[H=\begin{pmatrix}m+2-(\cos p_{1}+\cos p_{2})&\sin p_{1}-i\sin p_{2}\\ \sin p_{1}+i\sin p_{2}&-m-2+(\cos p_{1}+\cos p_{2})\end{pmatrix}\,, \tag{3.30}\] and the two-dimensional part of the corresponding Wilson-Dirac operator, \[D=\gamma H=\begin{pmatrix}m+2-(\cos p_{1}+\cos p_{2})&\sin p_{1}-i\sin p_{2}\\ -\sin p_{1}-i\sin p_{2}&m+2-(\cos p_{1}+\cos p_{2})\end{pmatrix}\,, \tag{3.31}\] where the mass matrix is given by \(\gamma=\sigma_{3}\). We define the intermediate Hamiltonian and the \(V\)-operator of the finite size \(N\) as follows, \[H_{N}=\tanh\left(\frac{N}{2}\mathcal{H}\right)\,,\qquad V_{N}=\gamma H_{N}\,. \tag{3.32}\] The band spectra \(E\) of the Hamiltonian \(H_{N}\) with \(m=-1\) are presented in Fig. 1. The case \(N=0\) shows the spectrum of the Hamiltonian (3.30) itself. We see that the spectrum becomes flat as \(N\) becomes large. The complex spectrum of the Wilson-Dirac operator (3.31) for \(m=-1\) is given in Fig. 2: The horizontal and vertical axes are for the real part and imaginary part of the spectrum. We show the spectra of the \(V\)-operator at finite \(N\) denoted by \(V_{N}\) in Fig. 3. The spectrum approaches to a unit circle as \(N\) becomes large. #### 3.1.2 Class AI, AII Let us consider the other Wigner-Dyson classes, class AI and AII. 
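Before turning to the real and quaternionic cases, a short numerical sketch of the class A demonstration above may be useful (our own sketch; the momentum grid, the use of the continuum-limit identification \(\mathcal{H}\to H\), and all variable names are assumptions made for illustration). It builds the Wilson-Dirac Hamiltonian (3.30) at \(m=-1\), forms \(H_{N}=\tanh(\tfrac{N}{2}H)\) and \(V_{N}=\gamma H_{N}\) as in (3.32), and reproduces the behavior of Figs. 1 and 3: the band spectrum flattens towards \(\pm 1\) and the spectrum of \(V_{N}\) approaches the unit circle as \(N\) grows.

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def H_wd(p1, p2, m=-1.0):
    """Two-dimensional Wilson-Dirac Hamiltonian of Eq. (3.30)."""
    return ((m + 2 - np.cos(p1) - np.cos(p2)) * s3
            + np.sin(p1) * s1 + np.sin(p2) * s2)

def flatten(H, N):
    """H_N = tanh(N/2 H) and V_N = gamma H_N as in Eq. (3.32), using H in place
    of the effective Hamiltonian (continuum-limit identification)."""
    w, U = np.linalg.eigh(H)
    H_N = U @ np.diag(np.tanh(0.5 * N * w)) @ U.conj().T
    return H_N, s3 @ H_N                     # mass matrix gamma = sigma_3

ps = np.linspace(-np.pi, np.pi, 41)
for N in (1, 4, 16):
    band_dev, circle_dev = 0.0, 0.0
    for p1 in ps:
        for p2 in ps:
            H_N, V_N = flatten(H_wd(p1, p2), N)
            band_dev = max(band_dev, np.max(np.abs(np.abs(np.linalg.eigvalsh(H_N)) - 1)))
            circle_dev = max(circle_dev, np.max(np.abs(np.abs(np.linalg.eigvals(V_N)) - 1)))
    print(N, band_dev, circle_dev)           # both deviations shrink as N grows
```

Both printed deviations decrease monotonically with \(N\), which is the numerical counterpart of the flat-band and unit-circle limits shown in the figures.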
In these cases, we can apply the same analysis after replacing the \(\mathbb{C}\)-hermitian Hamiltonian of class A (3.9) with the \(\mathbb{R}\)-symmetric and the \(\mathbb{H}\)-self-dual matrices for class AI and AII, respectively, \[\text{(AI)}\quad\mathsf{A}\in\mathsf{H}(k_{1},\mathbb{R})\,,\qquad\widetilde{\mathsf{A}}\in\mathsf{H}(k_{2},\mathbb{R})\,,\qquad\mathsf{C}\in\mathbb{R}^{k_{1}\times k_{2}}\,, \tag{3.33a}\] \[\text{(AII)}\quad\mathsf{A}\in\mathsf{H}(k_{1},\mathbb{H})\,,\qquad\widetilde{\mathsf{A}}\in\mathsf{H}(k_{2},\mathbb{H})\,,\qquad\mathsf{C}\in\mathbb{H}^{k_{1}\times k_{2}}\,. \tag{3.33b}\] Figure 1: The band spectra \(E\) of the Wilson–Dirac Hamiltonian \(H_{N}\) with \(m=-1\). The case \(N=0\) shows the spectrum of the Hamiltonian (3.30). **Proposition 3.19**.: Proposition 3.5 and Proposition 3.8 hold for class AI and AII. Proof.: In the real and quaternion cases, we should replace the imaginary unit \(i=\sqrt{-1}\) with the gamma matrix \(\gamma\) obeying \(\gamma^{2}=-1\). Since we have not used it explicitly in Sec. 3.1.1, we may apply the same argument to these cases as before. The \(V\)-operator takes a value in the real and quaternion Grassmannians, which become the classifying spaces of class AI and AII in the bulk limit, \[V\in\begin{cases}\bigcup_{k_{1}+k_{2}=k}\frac{\mathrm{O}(k)}{\mathrm{O}(k_{1})\times\mathrm{O}(k_{2})}&\xrightarrow{k\to\infty}\ R_{0}\quad\text{(AI)}\\ \bigcup_{k_{1}+k_{2}=k}\frac{\mathrm{Sp}(k)}{\mathrm{Sp}(k_{1})\times\mathrm{Sp}(k_{2})}&\xrightarrow{k\to\infty}\ R_{4}\quad\text{(AII)}\end{cases} \tag{3.34}\] For class AI and AII, we have \(\pi_{0}(R_{0})=\pi_{0}(R_{4})=\mathbb{Z}\). We can similarly obtain the bulk topological invariant (3.27), which agrees with the index of the overlap Dirac operator. ### Chiral class Let us then consider the chiral class. We focus on the complex case (class AIII) for the moment. The other classes (class BDI, CII) are discussed in the same way by replacing the \(\mathbb{C}\)-matrix with \(\mathbb{R}\)- and \(\mathbb{H}\)-matrices. Figure 2: The complex spectrum \(\lambda\) of the \(d=2\) Wilson–Dirac operator with \(m=-1\). **Definition 3.20**.: We define the \(d\)-dimensional gapped class AIII Hamiltonian of size \(2n\) by \[H=\begin{pmatrix}0&\mathsf{C}\\ \mathsf{C}^{\dagger}&0\end{pmatrix}\,,\qquad\mathsf{C}\in\mathbb{C}^{n\times n}\,. \tag{3.35}\] This Hamiltonian is obtained by taking \(\mathsf{A}\), \(\widetilde{\mathsf{A}}\to 0\) of the class A Hamiltonian (3.9) with \(k_{1}=k_{2}\equiv n\) (\(k=2n\)). This Hamiltonian possesses the chiral symmetry, \(\{\Gamma,H\}=0\) with \(\Gamma=\sigma_{3}\otimes\mathbb{1}_{n}\), and the mass matrix is taken to be \(\gamma\equiv\gamma_{0}=\sigma_{1}\otimes\mathbb{1}_{n}\). Due to this chiral symmetry, all the non-zero eigenvalues make a pair, \(\pm\lambda\in\mathrm{Spec}(H)\). Since we assume that the Hamiltonian is gapped, the matrix size must be even, and the Hamiltonian takes the form given in Definition 3.20. Figure 3: The complex spectra of \(V_{N}\). The horizontal and vertical axes are for the real part and imaginary part of the spectrum. In order to apply the same analysis as in Sec. 3.1.1 to the current case, we apply an orthogonal transformation. We define an orthogonal matrix, \[O=\frac{1}{\sqrt{2}}\begin{pmatrix}1&1\\ 1&-1\end{pmatrix}\otimes\mathbb{1}_{n}\,,\qquad O^{2}=\mathbb{1}_{2n}\,, \tag{3.36}\]
which converts the gamma matrices, \[\tilde{\Gamma}=O\Gamma O=\sigma_{1}\otimes 1_{n}\,,\qquad\tilde{\gamma}=O\gamma O =\sigma_{3}\otimes 1_{n}\,, \tag{3.37}\] and the Hamiltonian, \[\tilde{H}=OHO=\frac{1}{2}\begin{pmatrix}\mathsf{C}+\mathsf{C}^{\dagger}&- \mathsf{C}+\mathsf{C}^{\dagger}\\ \mathsf{C}-\mathsf{C}^{\dagger}&-\mathsf{C}-\mathsf{C}^{\dagger}\end{pmatrix}\,. \tag{3.38}\] Applying the bulk extension formalism to this case, we obtain the band flattened Hamiltonian \(\overline{H}\) from the unitary operator \(V=\tilde{\gamma}\operatorname{sgn}(\tilde{H})\) having the following properties. **Lemma 3.21**.: The unitary operator \(V=\tilde{\gamma}\operatorname{sgn}(\tilde{H})\) obeys \[V^{\dagger}=V^{-1}\,,\qquad\tilde{\gamma}V\tilde{\gamma}=V^{\dagger}\,,\qquad \tilde{\Gamma}V\tilde{\Gamma}=V\,. \tag{3.39}\] Proof.: The first two relations are straightforward. The third relation can be shown using \(\{\tilde{\gamma},\tilde{\Gamma}\}=0\) and \(\{\tilde{\Gamma},\tilde{H}\}=0\). **Proposition 3.22**.: Proposition 3.5 and Proposition 3.8 hold for the chiral classes AIII, BDI, and CII. Proof.: Proposition 3.5 can be shown in the same way as class A. For Proposition 3.8, we parametrize the \(V\)-operator as \(V=e^{X}\) for class AIII. We can fix it from the relations in Lemma 3.21 as follows, \[X=\begin{pmatrix}0&Y\\ Y&0\end{pmatrix}\,,\qquad Y^{\dagger}=-Y\,, \tag{3.40}\] which transforms under the unitary transformation, \(X\to UXU^{\dagger}\) with \(U\in\mathrm{U}(n)\times\mathrm{U}(n)/\mathrm{U}(n)=\mathrm{U}(n)\). In other words, \(X\in\operatorname{Lie}(\mathrm{U}(n)\times\mathrm{U}(n)/\mathrm{U}(n))= \mathfrak{u}(n)\). Hence, the \(V\)-operator takes a value in the unitary group, which becomes the classifying space of class AIII in the bulk limit, \[V\in\mathrm{U}(n)\xrightarrow{n\to\infty}C_{1}\quad\text{(AIII)}\,. \tag{3.41}\] For the other chiral classes (class BDI and CII), we can show by replacing the \(\mathbb{C}\)-matrix with \(\mathbb{R}\)- and \(\mathbb{H}\)-matrices that the \(V\)-operator takes a value in the corresponding classifying space, \[V\in\begin{cases}\mathrm{O}(n)&\xrightarrow{n\to\infty}R_{1}\quad\text{(BDI) }\\ \mathrm{Sp}(n)&\xrightarrow{n\to\infty}R_{5}\quad\text{(CII)}\end{cases} \tag{3.42}\] We recall that \(\pi_{0}(R_{1})=\mathbb{Z}_{2}\) and \(\pi_{0}(R_{5})=0\), and the mod-two bulk topological invariant of class (B)DI system denoted by \(\nu\) is determined by the determinant of \(V\) (see, e.g., [5]), \[(-1)^{\nu}=\det V\,, \tag{3.43}\] which would be identified with the mod-two index of the corresponding overlap Dirac operator \(\operatorname{ind}(D_{\text{ov}})\in\mathbb{Z}_{2}\). See Sec. 4.2. ### BdG class There are four Bogoliubov-de Gennes (BdG) classes (class D, DIII, C, CI) described by the following Hamiltonian. **Definition 3.23**.: We define the \(d\)-dimensional gapped Hamiltonian of size \(2n\), \[H=\begin{pmatrix}\mathsf{A}&\mathsf{C}\\ \mathsf{C}^{\dagger}&-\mathsf{A}^{\mathrm{T}}\end{pmatrix} \tag{3.44}\] with \[\mathsf{A}\in\mathsf{H}(n,\mathbb{C})\,,\qquad\mathsf{C}\in \mathbb{C}^{n\times n}\,, \tag{3.45}\] which describes four BdG classes, \[\mathrm{D}: \quad\mathsf{C}^{\mathrm{T}}=-\mathsf{C}\,,\qquad\mathrm{DIII}: \quad\mathsf{C}^{\mathrm{T}}=-\mathsf{C}\,,\ \mathsf{A}=0\,, \tag{3.46a}\] \[\mathrm{C}: \quad\mathsf{C}^{\mathrm{T}}=+\mathsf{C}\,,\qquad\quad\mathrm{ CI}:\quad\mathsf{C}^{\mathrm{T}}=+\mathsf{C}\,,\ \mathsf{A}=0\,. 
\tag{3.46b}\] #### 3.3.1 Class D We consider the gapped Hamiltonian of class D in the form of (3.44) with the condition \(\mathsf{C}^{\mathrm{T}}=-\mathsf{C}\). We define a unitary matrix, \[U=\frac{1}{\sqrt{2}}\begin{pmatrix}1&1\\ i&-i\end{pmatrix}\otimes\mathbb{1}_{n}\in\mathrm{U}(2n)\,, \tag{3.47}\] which converts the mass matrix \(\gamma=\sigma_{3}\otimes\mathbb{1}_{n}\) to \(\tilde{\gamma}=U\gamma U^{\dagger}=\sigma_{2}\otimes\mathbb{1}_{n}\), and the Hamiltonian, \[\tilde{H}=UHU^{\dagger}=i\begin{pmatrix}\alpha_{I}+\beta_{I}&- \alpha_{R}+\beta_{R}\\ \alpha_{R}+\beta_{R}&\alpha_{I}-\beta_{I}\end{pmatrix}\,, \tag{3.48}\] where we denote \(\mathsf{A}=\alpha_{R}+i\alpha_{I}\), \(\mathsf{C}=\beta_{R}+i\beta_{I}\) with \(\alpha_{R},\alpha_{I},\beta_{R},\beta_{I}\in\mathbb{R}^{n\times n}\). We remark that \(\alpha_{R}^{\mathrm{T}}=\alpha_{R}\), \(\alpha_{I}^{\mathrm{T}}=-\alpha_{I}\), \(\beta_{R}^{\mathrm{T}}=-\beta_{R}\), \(\beta_{I}^{\mathrm{T}}=-\beta_{I}\), and hence \(M:=-i\tilde{H}\in\tilde{\mathsf{H}}(2n,\mathbb{R})=\mathfrak{o}(2n)\). **Proposition 3.24**.: Proposition 3.5 and Proposition 3.8 hold for class D. Proof.: The proof of Proposition 3.5 is the same as before. Applying the bulk extension formalism for class D, we obtain the flat band Hamiltonian from \(V=U(\gamma\operatorname{sgn}H)U^{\dagger}=\tilde{\gamma}\operatorname{sgn} \tilde{H}\), which is an orthogonal matrix \(V^{\mathrm{T}}=V^{-1}\) with the property \(\tilde{\gamma}V\tilde{\gamma}=V^{-1}\). Parametrizing \(V=e^{X}\), the matrix \(X\) is given in the form of \[X=\begin{pmatrix}\alpha&\beta\\ \beta&-\alpha\end{pmatrix} \tag{3.49}\] where \(\alpha^{\rm T}=-\alpha\), \(\beta^{\rm T}=-\beta\). On the other hand, a generic \(\mathbb{R}\)-skew-symmetric matrix \(Z\in\mathfrak{o}(2n)\) has a decomposition, \[Z=\begin{pmatrix}\alpha+\delta&\beta+\beta^{\prime}\\ \beta-\beta^{\prime}&-\alpha+\delta\end{pmatrix}=\begin{pmatrix}\alpha&\beta \\ \beta&-\alpha\end{pmatrix}+\begin{pmatrix}\delta&\beta^{\prime}\\ -\beta^{\prime}&\delta\end{pmatrix}\,, \tag{3.50}\] where \(\delta^{\rm T}=-\delta\), \(\beta^{\prime\rm T}=\beta^{\prime}\). Writing the second matrix as \(\mathbb{1}_{2}\otimes\delta+i\sigma_{2}\otimes\beta^{\prime}\), it is isomorphic to an anti-hermitian matrix, which is an element of the Lie algebra \(\mathfrak{u}(n)\). Hence, we have \(X\in\mathrm{Lie}(\mathrm{O}(2n)/\mathrm{U}(n))\), which shows that the \(V\)-operator takes a value in the classifying space of class D in the bulk limit, \[V\in\frac{\mathrm{O}(2n)}{\mathrm{U}(n)}\ \xrightarrow{n\to\infty}\ R_{2}\quad \text{(class D)}\,. \tag{3.51}\] _Remark 3.25_.: Recalling \(\pi_{0}(R_{2})=\mathbb{Z}_{2}\), the mod-two topological invariant is given in the same way as class BDI (3.43). #### 3.3.2 Class C Let us discuss the class C system described by the BdG Hamiltonian (3.44) with \(\mathsf{C}^{\rm T}=\mathsf{C}\). We apply the same basis change matrix (3.47), and define an \(\mathbb{H}\)-matrix of size \(n\) as follows, \[\check{H}_{jk}=i\begin{pmatrix}\alpha_{I,jk}-i\beta_{I,jk}&-\alpha_{R,jk}+i \beta_{R,jk}\\ \alpha_{R,jk}+i\beta_{R,jk}&\alpha_{I,jk}+i\beta_{I,jk}\end{pmatrix}\in i \mathbb{H}\,,\quad j,k=1,\ldots,n\,, \tag{3.52}\] where \(\alpha_{R}^{\rm T}=\alpha_{R}\), \(\alpha_{I}^{\rm T}=-\alpha_{I}\), \(\beta_{R}^{\rm T}=\beta_{R}\), \(\beta_{I}^{\rm T}=\beta_{I}\). We then define \(M:=-i\check{H}\in\mathbb{H}^{n\times n}\). In fact, \(M\in\mathfrak{sp}(n)\). **Proposition 3.26**.: Proposition 3.5 and Proposition 3.8 hold for class C. 
Proof.: The proof of Proposition 3.5 is the same as before. In this case, we have the \(\mathbb{H}\)-valued \(V\)-operator \(V=\tilde{\gamma}\operatorname{sgn}\check{H}\), and hence the flat band Hamiltonian is given by \(\overline{H}=\tilde{\gamma}V\). Parametrizing \(V=e^{X}\), each element of \(X\) is given by \[X_{jk}=i\begin{pmatrix}\alpha_{jk}&\beta_{ji}\\ \beta_{jk}&-\alpha_{jk}\end{pmatrix}\in\mathbb{H} \tag{3.53}\] where \(\alpha=(\alpha_{jk})_{j,k=1,\ldots,n}\) and \(\beta=(\beta_{jk})_{j,k=1,\ldots,n}\) are \(\mathbb{R}\)-symmetric matrices. Compared with a generic \(\mathfrak{sp}(n)\) element \[Z_{jk}=\begin{pmatrix}\delta_{jk}+i\alpha_{jk}&\beta^{\prime}_{jk}+i\beta_{jk} \\ -\beta^{\prime}_{jk}+i\beta_{jk}&\delta_{jk}-i\alpha_{jk}\end{pmatrix}=i\begin{pmatrix} \alpha_{jk}&\beta_{jk}\\ \beta_{jk}&-\alpha_{jk}\end{pmatrix}+\begin{pmatrix}\delta_{jk}&\beta^{ \prime}_{jk}\\ -\beta^{\prime}_{jk}&\delta_{jk}\end{pmatrix}\,, \tag{3.54}\] with \(\delta^{\mathrm{T}}=-\delta\) and \(\beta^{\prime\mathrm{T}}=\beta^{\prime}\), we have \(X\in\mathrm{Lie}(\mathrm{Sp}(n)/\mathrm{U}(n))\). Hence, the \(V\)-operator takes a value in the classifying space of class C in the bulk limit, \[V\in\frac{\mathrm{Sp}(n)}{\mathrm{U}(n)}\ \xrightarrow{n\to\infty}\ R_{6}\quad \text{(class C)}\,. \tag{3.55}\] #### 3.3.3 Class DIII For class DIII, the Hamiltonian is given by \[H=\begin{pmatrix}0&\mathsf{C}\\ \mathsf{C}^{\dagger}&0\end{pmatrix}\,,\qquad\mathsf{C}^{\mathrm{T}}=-\mathsf{C }\,, \tag{3.56}\] which has \(\mathcal{C}\) and \(\mathcal{T}\) symmetries, such that \(\mathcal{C}^{2}=+1\), \(\mathcal{T}^{2}=-1\) (See Table 1). Provided that the Hamiltonian has a gap, we consider the matrix \(\mathsf{C}\) of size \(2n\), \(\mathsf{C}\in\mathbb{C}^{2n\times 2n}\). Hence, in this case, we may apply the following form of the symmetry matrices, \[C=\sigma_{1}\otimes\sigma_{3}\otimes\mathbb{1}_{n}\,, T=i\sigma_{2}\otimes\sigma_{3}\otimes\mathbb{1}_{n}\,, \tag{3.57a}\] \[\Gamma=\sigma_{3}\otimes\mathbb{1}_{2}\otimes\mathbb{1}_{n}\,, \gamma=\sigma_{1}\otimes\sigma_{1}\otimes\mathbb{1}_{n}\,. \tag{3.57b}\] **Lemma 3.27**.: We have the \(V\)-operator, \(V=\gamma\operatorname{sgn}H\), which behaves as follows, \[CVC^{-1}=TVT^{-1}=V^{*}\,,\quad\Gamma V\Gamma=V\,,\quad\gamma V\gamma=V^{ \dagger}\,. \tag{3.58}\] Proof.: It follows from Definition 2.3 together with the relations \(\{C,\gamma\}=0\), \([T,\gamma]=0\), \([\Gamma,\gamma]=0\). **Proposition 3.28**.: Proposition 3.5 and Proposition 3.8 hold for class DIII. Proof.: The proof of Proposition 3.5 is the same as before. We parametrize \(V=e^{X}\). From the behavior under the \(\Gamma\)-matrix shown in Lemma 3.27, we have \(X=i\begin{pmatrix}Y&0\\ 0&\tilde{Y}\end{pmatrix}\) where \(Y\), \(\tilde{Y}\in\mathsf{H}(2n,\mathbb{C})\). We denote \(\Sigma_{i}=\sigma_{i}\otimes\mathbb{1}_{n}\) for \(i=1,2,3\). Then, from the other relations, we have \[\begin{cases}\Sigma_{3}Y\Sigma_{3}=-\tilde{Y}^{*}\\ \Sigma_{3}\tilde{Y}\Sigma_{3}=-Y^{*}\end{cases}\,,\qquad\begin{cases}\Sigma_ {1}Y\Sigma_{1}=-\tilde{Y}\\ \Sigma_{1}\tilde{Y}\Sigma_{1}=-Y\end{cases}\,. \tag{3.59}\] Hence, we have \(\Sigma_{2}Y\Sigma_{2}=Y^{*}\) and \(\Sigma_{2}\tilde{Y}\Sigma_{2}=\tilde{Y}^{*}\), from which we deduce that they are isomorphic to \(\mathbb{H}\)-self-conjugate matrices. 
Recalling \(\mathsf{H}(n,\mathbb{H})=\mathrm{Lie}(\mathrm{U}(2n)/\mathrm{Sp}(n))\), the \(V\)-operator takes a value in the classifying space of class DIII in the bulk limit, \[V\in\frac{\mathrm{U}(2n)}{\mathrm{Sp}(n)}\xrightarrow{n\to\infty}\ R_{3}\,. \tag{3.60}\] #### 3.3.4 Class CI For class CI, we again have the Hamiltonian in the form (3.56) with a symmetric \(\mathsf{C}\), \(\mathsf{C}^{\mathrm{T}}=\mathsf{C}\). It has \(\mathcal{C}\) and \(\mathcal{T}\) symmetries, such that \(\mathcal{C}^{2}=-1\), \(\mathcal{T}^{2}=+1\) (See Table 1), and we consider the matrix \(\mathsf{C}\) of size \(2n\), \(\mathsf{C}\in\mathbb{C}^{2n\times 2n}\). In this case, we may apply the following form of the symmetry matrices, \[C=i\sigma_{2}\otimes\sigma_{1}\otimes\mathbb{1}_{n}\,, T=\sigma_{1}\otimes\sigma_{1}\otimes\mathbb{1}_{n}\,, \tag{3.61a}\] \[\Gamma=\sigma_{3}\otimes\mathbb{1}_{2}\otimes\mathbb{1}_{n}\,, \gamma=\sigma_{2}\otimes\sigma_{2}\otimes\mathbb{1}_{n}\,. \tag{3.61b}\] **Proposition 3.29**.: Proposition 3.5 and Proposition 3.8 hold for class CI. Proof.: The proof of Proposition 3.5 is the same as before. Having \(V=\gamma\operatorname{sgn}H\), we have the same relations as shown in Lemma 3.27 with the symmetry matrices (3.61). As in the case of class DIII, under the parametrization \(V=e^{X}\), we have \(X=i\begin{pmatrix}Y&0\\ 0&\tilde{Y}\end{pmatrix}\) where \(Y\), \(\tilde{Y}\in\mathsf{H}(2n,\mathbb{C})\). From the other relations, we have \[\begin{cases}\Sigma_{1}Y\Sigma_{1}=-\tilde{Y}^{*}\\ \Sigma_{1}\tilde{Y}\Sigma_{1}=-Y^{*}\end{cases}\,,\qquad\begin{cases}\Sigma_{2}Y\Sigma_{2}=-\tilde{Y}\\ \Sigma_{2}\tilde{Y}\Sigma_{2}=-Y\end{cases}\,, \tag{3.62}\] from which we deduce that \(Y,\tilde{Y}\in\mathsf{H}(2n,\mathbb{R})\). Recalling \(\mathsf{H}(n,\mathbb{R})=\operatorname{Lie}(\mathrm{U}(n)/\mathrm{O}(n))\), the \(V\)-operator takes a value in the classifying space of class CI in the bulk limit, \[V\in\frac{\mathrm{U}(2n)}{\mathrm{O}(2n)}\xrightarrow{n\to\infty}\ R_{7}\,. \tag{3.63}\] ## 4 Overlap Dirac operator In this Section, we discuss the symmetry of the overlap Dirac operator of class \(\mathscr{C}\), \[D\equiv D_{\mathrm{ov}}=\frac{1}{a}(1+V)\,,\qquad V=\gamma\operatorname{sgn}(H)\in S_{\mathscr{C}}\,, \tag{4.1}\] where we change the normalization of the operator for simplicity: We denote the lattice spacing parameter by \(a\) with mass dimension \([a]=-1\). ### Ginsparg-Wilson relation First of all, it is clear from the unitarity of the \(V\)-operator, \(V^{\dagger}=V^{-1}\), that the overlap operator obeys the following relation, which we call the Ginsparg-Wilson (GW) relation. **Proposition 4.1** (Bietenholz-Nishimura [14]).: The overlap Dirac operator obeys Ginsparg-Wilson relation, \[D+D^{\dagger}=aD^{\dagger}D=aDD^{\dagger}\,. \tag{4.2}\] _Remark 4.2_.: GW relation was originally formulated as "a remnant of chiral symmetry" [12], and hence the relation shown in (4.3) is usually called GW relation. We remark that the RHS of (4.2) is suppressed in the continuum limit \(a\to 0\), from which we deduce a simplified relation, \(D+D^{\dagger}=0\). Namely, \(D\) becomes anti-hermitian, \(D^{\dagger}=-D\) in this limit, which is a generic property of gapless Dirac operators. From this point of view, GW relation (4.2) is interpreted as a non-linear deformation of the anti-hermiticity of the gapless Dirac operator. #### 4.1.1 Chiral symmetry We then discuss GW relation of the overlap operator with additional symmetries. We use the parametrization \(V=e^{X}\) again.
For the class with the chiral symmetry in the gapless limit (e.g., class A), we have \(\{\gamma,X\}=0\), which gives rise to the \(\gamma\)-hermiticity, \(\gamma D\gamma=D^{\dagger}\). Hence, we may rewrite GW relation (4.2) as follows. **Proposition 4.3** (Neuberger [9]).: The overlap operator for the class having the chiral symmetry in the gapless limit obeys Ginsparg-Wilson relation, \[\gamma D+D\gamma=aD\gamma D\,. \tag{4.3}\] This was shown originally for class A. As discussed before, this is interpreted as a non-linear deformation of the chiral symmetry, which reproduces \(\{\gamma,D\}=0\) in the limit \(a\to 0\). From GW relation, we can discuss a non-linear deformation of chiral transformation. We may rewrite the relation (4.3) as follows, \[\gamma D+D\hat{\gamma}=0\,,\qquad\hat{\gamma}=\gamma(1-aD)=\gamma V\,. \tag{4.4}\] Then, the Dirac Lagrangian (2.3) is invariant under the following transformation, \[\psi\longrightarrow\hat{\gamma}\psi\,,\qquad\bar{\psi} \longrightarrow\bar{\psi}\gamma\,, \tag{4.5}\] which, on the other hand, gives rise to a non-trivial contribution to the Jacobian providing the chiral anomaly [34, 35, 11]. We remark that this is not a unique way to write down the transformation: In general, we may rewrite GW relation (4.3) as \((1-abD)\gamma D+D\gamma(1-ab^{\prime}D)=0\) where \(b+b^{\prime}=1\). #### 4.1.2 \(\mathcal{C}\) and \(\mathcal{T}\) symmetries For the system with \(\mathcal{C}\), \(\mathcal{T}\) symmetry, we have \(\mathcal{C}\), \(\mathcal{T}\) analog of GW relation as follows. **Theorem 4.4**.: For the class with \(\mathcal{C}\), \(\mathcal{T}\) symmetry in the gapless limit, the overlap Dirac operator obeys \(\mathcal{C}\) and \(\mathcal{T}\) analog of Ginsparg-Wilson relation, \[CD+D^{\rm T}C=aD^{\rm T}CD\,,\qquad TD+D^{*}T=aD^{*}TD\,. \tag{4.6}\] Proof.: We parametrize \(V=e^{iH_{V}}\) with \(H_{V}^{\dagger}=H_{V}\). For the class with \(\mathcal{C}\) symmetry in the gapless limit, we have \(CH_{V}C^{-1}=-H_{V}^{*}\), which gives rise to \(CVC^{-1}=V^{*}\). Noticing \(D^{\dagger}=(CDC^{-1})^{\rm T}\), we obtain GW relation with respect to \(\mathcal{C}\) symmetry, \[CD+D^{\rm T}C=aD^{\rm T}CD\,. \tag{4.7}\] For the class with \(\mathcal{T}\) symmetry in the gapless limit, we instead have \(TVT^{-1}=V^{\rm T}\) and \(D^{\dagger}=(TDT^{-1})^{*}\), from which we obtain the corresponding GW relation, \[TD+D^{*}T=aD^{*}TD\,. \tag{4.8}\] They are again interpreted as a non-linear deformation of \(\mathcal{C}\) and \(\mathcal{T}\) symmetries of the gapless Dirac operator. Let us discuss the corresponding non-linear transformations. We may rewrite GW relations (4.6) as follows, \[CD+D^{\rm T}\hat{C}=0\,,\quad TD+D^{*}\hat{T}=0\,,\qquad\hat{C}=CV\,,\;\hat{T}= TV\,. \tag{4.9}\] The corresponding non-linear \(\mathcal{C}\) and \(\mathcal{T}\) transformations are given by \[\mathcal{C}\ :\ \psi\longrightarrow\hat{C}\bar{\psi}^{\rm T}\,,\quad\bar{\psi} \longrightarrow\psi^{\rm T}C^{-1}\,,\qquad\mathcal{T}\ :\ \psi\longrightarrow\hat{T}\psi\,,\quad\bar{\psi} \longrightarrow\bar{\psi}T^{-1}\,. \tag{4.10}\] Hence, under these transformations, the fermion path integral measure behaves as \[\mathrm{d}\psi\mathrm{d}\bar{\psi}\longrightarrow(\det V)^{-1}\,\mathrm{d} \psi\mathrm{d}\bar{\psi}\,. \tag{4.11}\] This Jacobian factor is related to the anomalous behavior of Majorana(-Weyl) fermion (hence, \(\mathcal{C}\) transformation) [15, 16, 17, 18, 19], and of the \(\mathcal{T}\)-invariant system [20, 21]. 
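As a concrete sanity check of Proposition 4.1 and the \(\mathcal{C}\) part of Theorem 4.4, the following sketch (our own toy example; the Kitaev-type chain, its parameters, the choice \(a=1\), and all names are assumptions made for illustration, and only the \(\mathcal{C}\) relation is tested) builds a small particle-hole symmetric BdG matrix, forms \(V=\gamma\operatorname{sgn}(H)\) and \(D=\tfrac{1}{a}(1+V)\), and verifies the GW relations numerically.

```python
import numpy as np

# Hypothetical toy model (not from the paper): a finite BdG matrix
# H = [[h, Delta], [Delta^dagger, -h^T]] from a short Kitaev-type chain.
# With C = tau_1 (x) 1 it satisfies C H C^{-1} = -H^* (Definition 2.3), and we
# take the mass matrix gamma = tau_3 (x) 1, as in the class D discussion of Sec. 3.3.1.
N, t, mu, delta = 6, 1.0, 2.5, 0.7
h = -t * (np.eye(N, k=1) + np.eye(N, k=-1)) - mu * np.eye(N)
Delta = delta * (np.eye(N, k=1) - np.eye(N, k=-1))        # antisymmetric pairing
H = np.block([[h, Delta], [Delta.conj().T, -h.T]])

zero, one = np.zeros((N, N)), np.eye(N)
C = np.block([[zero, one], [one, zero]])                   # tau_1 (x) 1
gamma = np.block([[one, zero], [zero, -one]])              # tau_3 (x) 1
assert np.allclose(C @ H @ C, -H.conj())                   # C-symmetry of H

# Overlap operator D = (1/a)(1 + V) with V = gamma sgn(H), Eq. (4.1)
w, U = np.linalg.eigh(H)
V = gamma @ (U @ np.diag(np.sign(w)) @ U.conj().T)
a = 1.0
D = (np.eye(2 * N) + V) / a

# Proposition 4.1 and the C-analogue of Theorem 4.4
assert np.allclose(D + D.conj().T, a * D.conj().T @ D)
assert np.allclose(D + D.conj().T, a * D @ D.conj().T)
assert np.allclose(C @ D + D.T @ C, a * D.T @ C @ D)
print("GW relations verified")
```

Algebraically, the \(\mathcal{C}\) relation reduces to \(C=V^{\rm T}CV\), i.e. \(CVC^{-1}=V^{*}\), which is exactly the symmetry property of the \(V\)-operator used in the proof above, so the numerical check is a direct restatement of that step.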
We also remark that a similar discussion is applied for the parity anomaly [14]. These arguments are consistent with that the mod-two bulk topological invariant is given by the sign of \(\det V\) as shown in (3.43). ### Index theorem It has been known that the overlap formalism provides a concise way to understand the index theorem. As mentioned in Proposition 1.4, the \(\mathbb{Z}\)-valued index coincides with the bulk topological invariant of the corresponding system. We have the following result for the mod-two index. **Theorem 4.5**.: The mod-two index of overlap Dirac operator, \(\nu=\operatorname{ind}(D)=\dim\ker(D)\), is given by \[(-1)^{\nu}=\det V\,. \tag{4.12}\] We may have a non-trivial mod-two index when \(\pi_{d}(R_{p})=\mathbb{Z}_{2}\). For \(d=0\), we have \(\pi_{0}(R_{1})=\pi_{0}(R_{2})=\mathbb{Z}_{2}\) corresponding to class BDI and class D. In fact, all the cases in \(d>0\) are reduced to these two classes via the dimensional reduction. Hence, we focus on the case \(d=0\) to prove this Theorem. We first consider a simplified situation. **Lemma 4.6**.: Let \(V\in\operatorname{O}(2)\) and \(D=1+V\). Then, the mod-two index \(\nu=\dim\ker(D)\) is given by \[(-1)^{\nu}=\det V\,. \tag{4.13}\] Proof.: We consider the following two elements of \(\operatorname{O}(2)\), \[V_{+}=\begin{pmatrix}\cos\lambda&\sin\lambda\\ -\sin\lambda&\cos\lambda\end{pmatrix}\,,\qquad V_{-}=\begin{pmatrix}\cos \lambda&\sin\lambda\\ \sin\lambda&-\cos\lambda\end{pmatrix}\,, \tag{4.14}\] with the determinant \(\det V_{\pm}=\pm 1\). Then, we have \[\dim\ker(1+V_{+})=\begin{cases}0&(\lambda\neq\pi)\\ 2&(\lambda=\pi)\end{cases}\,,\quad\dim\ker(1+V_{-})=1\,. \tag{4.15}\] Hence, the mod-two index of \(D\) depends only on the sign of \(\det V\). Then, we apply this result to prove Theorem 4.5. Proof of Theorem 4.5.: We consider the case \(V\in\operatorname{O}(n)\) for the moment. Let \(v_{i}\in\operatorname{O}(2)\) (\(i=1,\ldots,m\), \(m\leq n/2\)) and \(\sigma_{j}\in\{\pm 1\}=\operatorname{O}(1)\) (\(j=2m+1,\ldots,n\)). Then, there exists an orthogonal matrix \(O\), such that \[OVO^{\rm T}=\begin{pmatrix}v_{1}&&&&\\ &\ddots&&0&\\ &&v_{m}&&\\ &&&\sigma_{2m+1}&&\\ &0&&\ddots&\\ &&&&\sigma_{n}\end{pmatrix}\,. \tag{4.16}\] Hence, in this basis, we can apply Lemma 4.6 for each block to obtain the index of \(D_{\text{ov}}\), \(\nu=\text{ind}(D)=\dim\ker(D)\) as follows, \[(-1)^{\nu}=\det V\,. \tag{4.17}\] For the case \(V\in\text{O}(2n)/\text{U}(n)\), we may apply the same argument as in the case \(V\in\text{O}(2n)\). Taking the inductive limit \(n\to\infty\), the \(V\)-operator takes a value in the corresponding classifying space. ## Appendix A Proof of Lemma 3.13 We follow the approach discussed in [9, 36]. 
We first define the permutation matrix, \[\mathbf{P}=\left(\begin{array}{cc}0&\mathbb{1}_{k_{1}}\\ \mathbb{1}_{k_{2}}&0\end{array}\right)\,,\qquad\det\mathbf{P}=(-1)^{k_{1}k_{2} }\,.\] (A.1) Defining \[\Pi=\text{diag}\left(\mathbf{P},\mathbf{P},\ldots,\mathbf{P}\right)\,,\qquad \det\Pi=(-1)^{Nk_{1}k_{2}}\,,\] (A.2) we have \[\det aD=\det(aD\Pi)\det\Pi^{-1}\] \[=\begin{vmatrix}C&A&0&&0&Y\\ B&-C^{\dagger}&-\mathbb{1}_{k_{2}}&0&&0&0\\ 0&-\mathbb{1}_{k_{1}}&C&A&0&&\\ &0&B&-C^{\dagger}&-\mathbb{1}_{k_{2}}&\ddots&\\ &&\ddots&\ddots&\ddots&\ddots&0\\ 0&0&&&-\mathbb{1}_{k_{1}}&C&A\\ X&0&&&0&B&-C^{\dagger}\end{vmatrix}\times(-1)^{Nk_{1}k_{2}}\] \[=\begin{vmatrix}A&0&&&&Y&C\\ -C^{\dagger}&-\mathbb{1}_{k_{2}}&0&&&0&B\\ -\mathbb{1}_{k_{1}}&C&A&0&&&\\ 0&B&-C^{\dagger}&-\mathbb{1}_{k_{2}}&0&&&\\ &\ddots&\ddots&\ddots&\ddots&\ddots&\\ &&&-\mathbb{1}_{k_{1}}&C&A&0\\ &&&0&B&-C^{\dagger}&X\end{vmatrix}\times(-1)^{(N-1)k_{2}^{2}}\,.\] (A.3) We then define the following matrices, \[\alpha=\begin{pmatrix}A&0\\ -C^{\dagger}&-\mathbb{1}_{k_{2}}\end{pmatrix}\,,\quad\tilde{\alpha}=\begin{pmatrix} A&0\\ -C^{\dagger}&X\end{pmatrix}\,,\quad\beta=\begin{pmatrix}-\mathbb{1}_{k_{1}}&C\\ 0&B\end{pmatrix}\,,\quad\tilde{\beta}=\begin{pmatrix}Y&C\\ 0&B\end{pmatrix}\,,\] (A.4) from which we deduce a simple form, \[\det aD=\begin{vmatrix}\alpha&&&\tilde{\beta}\\ \beta&\alpha&&\\ &\ddots&\ddots&\\ &&&\beta&\tilde{\alpha}\end{vmatrix}\times(-1)^{(N-1)k_{2}^{2}}\,.\] (A.5) Noticing that \(\det\alpha=(-1)^{k_{2}}\det A\), we evaluate the Dirac determinant as follows, \[\det aD =(-1)^{(N-1)k_{2}^{2}}\det\alpha^{N}\det\left(\alpha^{-1}\tilde{ \alpha}-(-\alpha^{-1}\beta)^{N}\beta^{-1}\tilde{\beta}\right)\] \[=(-1)^{n}\det A^{N}\det\left(\begin{pmatrix}\mathbb{1}_{k_{1}}&0 \\ 0&-X\end{pmatrix}-T^{-N}\begin{pmatrix}-Y&0\\ 0&\mathbb{1}_{k_{2}}\end{pmatrix}\right)\] (A.6) where \(n=(N-1)k_{2}^{2}+Nk_{2}\) and the \(T\)-operator is defined in Definition 3.11. This is the expression shown in (3.19). \(\square\)
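Finally, returning to the mod-two index of Sec. 4.2, the following short sketch (our own illustration; the sampled angles and the kernel tolerance are assumptions) reproduces Lemma 4.6: for \(V_{\pm}\in\mathrm{O}(2)\) the parity of \(\dim\ker(1+V)\) is fixed by \(\det V\).

```python
import numpy as np

def ker_dim(M, tol=1e-10):
    """Dimension of the kernel of M, counted from its singular values."""
    return int(np.sum(np.linalg.svd(M, compute_uv=False) < tol))

for lam in (0.3, 2.0, np.pi):
    c, s = np.cos(lam), np.sin(lam)
    V_plus = np.array([[c, s], [-s, c]])      # rotation, det = +1, Eq. (4.14)
    V_minus = np.array([[c, s], [s, -c]])     # reflection, det = -1, Eq. (4.14)
    for V in (V_plus, V_minus):
        nu = ker_dim(np.eye(2) + V)           # mod-two index of D = 1 + V
        assert np.isclose((-1.0) ** nu, np.linalg.det(V))
        print(round(np.linalg.det(V)), nu)
```

In particular \(\dim\ker(1+V_{+})\) jumps from \(0\) to \(2\) only at \(\lambda=\pi\) while \(\dim\ker(1+V_{-})=1\) for every angle, so the parity, and hence \((-1)^{\nu}=\det V\), is indeed stable.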
2309.04070
Anneal-free ultra-low loss silicon nitride integrated photonics
Heterogeneous and monolithic integration of the versatile low loss silicon nitride platform with low temperature materials such as silicon electronics and photonics, III-V compound semiconductors, lithium niobate, organics, and glasses, has been inhibited by the need for high temperature annealing as well as the need for different process flows for thin and thick waveguides. New techniques are needed to maintain the state-of-the-art losses, nonlinear properties, and CMOS compatible processes while enabling this next generation of 3D silicon nitride integration. We report a significant advance in silicon nitride integrated photonics, demonstrating the lowest losses to date for an anneal-free process at a maximum temperature of 250 C, with the same deuterated silane based fabrication flow, for nitride and oxide, for an order of magnitude range in nitride thickness without requiring stress mitigation or polishing. We report record low losses for anneal-free nitride core and oxide cladding, enabling 1.77 dB/m loss and 14.9 million Q for 80 nm nitride core waveguides, more than half an order magnitude lower loss than previously reported 270 C processes, and 8.66 dB/m loss and 4.03 million Q for 800 nm thick nitride. We demonstrate laser stabilization with over 4 orders of magnitude frequency noise reduction using a thin nitride reference cavity. And using a thick nitride micro-resonator, we demonstrate parametric gain and Optical Parametric Oscillation (OPO) with the lowest reported OPO threshold per unit resonator length for low temperature fabricated nitride, and supercontinuum generation over two octaves. These results represent a significant step towards a uniform ultra-low loss silicon nitride homogeneous and heterogeneous platform for both thin and thick waveguides capable of linear and nonlinear photonic circuits and integration with low temperature materials and processes.
Debapam Bose, Mark W. Harrington, Andrei Isichenko, Kaikai Liu, Jiawei Wang, Zachary L. Newman, Daniel J. Blumenthal
2023-09-08T02:02:47Z
http://arxiv.org/abs/2309.04070v3
# Anneal-free ultra-low loss silicon nitride integrated photonics ###### Abstract Heterogeneous and monolithic integration of the versatile low loss silicon nitride platform with low temperature materials such as silicon electronics and photonics, III-V compound semiconductors, lithium niobate, organics, and glasses, has been inhibited by the need for high temperature annealing as well as the need for different processes for thin and thick waveguides. New techniques are needed to maintain the state-of-the-art losses, nonlinear properties, and CMOS compatible processes while enabling this next level of 3D integration. We report a significant advance in silicon nitride integrated photonics, demonstrating the same anneal-free process, with a maximum temperature of 250 \({}^{\circ}\)C, for an order of magnitude range in nitride thickness without requiring stress mitigation and polishing, using inductively coupled plasma-plasma enhanced chemical vapor deposition with a deuterated silane precursor gas. We report 1.77 dB/m loss and 14.9 million Q for thin waveguides, over half an order of magnitude lower loss than previous low temperature processes, and 8.66 dB/m loss and 4.03 million Q for thick nitride, the highest reported Q for a low temperature process with similar device area. Our thick nitride devices demonstrate anomalous dispersion and over two-octave supercontinuum generation, from 650 nm to 2.7 \(\upmu\)m, and four-wave mixing parametric gain, with the lowest threshold per unit resonator length of 15.2 mW/mm, for a low temperature process. These results represent a significant step towards a uniform ultra-low loss silicon nitride homogeneous and heterogeneous platform for both thin and thick waveguides capable of linear and nonlinear photonic circuits and integration with low temperature materials and processes. ## Introduction Ultra-low loss silicon nitride photonic integrated circuits[1] (PICs) can reduce the size, weight, and cost, and improve the reliability of a wide range of applications spanning the visible to infrared, including quantum computing and sensing[2, 3, 4, 5], atomic clocks[6, 7], atomic navigation[8], metrology[9], and fiber optic communications[10] as well as enabling new portable applications[11]. In addition to replacing costly systems such as lasers and optical frequency combs that are relegated to bulky table-top systems, there is the potential to improve performance for precision sciences, for example by reducing the frequency noise that is important for the manipulation and interrogation of atoms, ions, and qubits[12, 13]. In this integration platform, by varying waveguide parameters such as nitride core thickness and optical confinement, it is possible to trade off characteristics such as loss, dispersion, nonlinearity, and device footprint[14, 15, 16] to realize a wide range of linear and nonlinear components including ultra-low linewidth lasers[17, 18, 19, 20, 21], optical frequency combs[22], optical modulators[23, 24], and atom and ion beam emitters[2, 25, 26, 27].
Today, a major transformation in silicon nitride photonics is needed, where the state of the art in ultra-low loss and wafer-scale CMOS foundry compatible processes is maintained while uniformizing the processing of linear and nonlinear waveguides with the added functionality of gain, high bandwidth modulation, electronics, and engineered thermal properties, through heterogeneous integration with materials that cannot withstand high annealing temperatures, such as silicon photonic circuits[28], GaAs and InP semiconductor circuits[29, 30], and nonlinear materials such as lithium niobate[31] and tantalum pentoxide (tantala)[32] as well as materials for thermal engineering such as quartz substrates[33]. However, integration of these materials with both ultra-low loss thin and thick core nitride waveguides is inhibited by their incompatibility with the high temperature nitride growth and the high-temperature post-deposition annealing of the oxide cladding that is used to achieve ultra-low losses. Additionally, this problem is compounded by the fact that high performance thin and thick nitride waveguides use different fabrication processes with added process complexity to mitigate stress related issues in thick nitrides as well as costly chemical mechanical polishing (CMP). Therefore, new techniques are needed that can provide the functionality of heterogeneous and monolithic integration to the silicon nitride platform without annealing, while maintaining the ultra-low loss advantage and providing the same wafer-scale processing steps for both thin and thick silicon nitride waveguides. Heterogeneous and monolithic integration of ultra-low loss linear and nonlinear silicon nitride circuits requires a uniform anneal-free fabrication process that is compatible with a wide range of nitride core thickness, over an order of magnitude range (e.g. 20 nm to 800 nm), while maintaining the loss and other planar and high performance platform properties. Today, state of the art thin (<100 nm) waveguide silicon nitride photonics achieve losses as low as 0.034 dB/m in the infrared[34, 35] and sub-dB/m losses in the visible[36]. These losses are achieved using Low Pressure Chemical Vapor deposited (LPCVD) silicon nitride core waveguides patterned on top of low absorption thermal silicon dioxide lower cladding, on silicon, and a Tetraethyl orthosilicate-plasma enhanced chemical vapor deposition (TEOS-PECVD) deposited fully annealed upper cladding, with other fabrication loss reduction techniques[34, 35]. The LPCVD nitride growth requires temperatures as high as 850 \({}^{\circ}\)C[37], and a post process annealing temperature of 1150 \({}^{\circ}\)C is used to drive out hydrogen from the LPCVD silicon nitride and the upper cladding[34, 38], due to SiN-H bond absorption[39, 40]. Efforts to reduce processing temperatures have yielded less than 1 dB/m losses using an LPCVD nitride core annealed at 1050 \({}^{\circ}\)C with an unannealed deuterated upper cladding oxide[41]. State of the art thick (> 650 nm) nitrides measure losses of 0.4 dB/m[42] but require structures for stress mitigation and complicated chemical-mechanical polishing (CMP) steps, in addition to annealing temperatures as high as 1050 \({}^{\circ}\)C, similar to state-of-the-art thin nitrides. The losses in these thick nitride devices are tightly coupled to the confinement factor and the device area. 
To date, the lowest losses achieved using sputtered silicon nitride waveguide cores measure 2.4 dB/m at 1550 nm using an etchless liftoff technique[43] and 50 dB/m for etched nitride structures[44]. Prior efforts with deuterated Inductively Coupled Plasma-Plasma Enhanced Chemical Vapor Deposition (ICP-PECVD) processes for the nitride core have focused on 270 \({}^{\circ}\)C processes with thick core (> 650 nm) high confinement waveguides for Kerr comb generation, as ICP-PECVD nitride exhibits less stress compared to LPCVD, achieving losses in the range of 6-30 dB/m and intrinsic Qs up to 5.3 million[45, 46, 39], and did not explore low confinement thin (< 100 nm) core waveguides in which the effect of scattering losses is reduced. ICP-PECVD based thick film nitride nonlinear resonators also show 1 THz free-spectral-range (FSR) 900 nm bandwidth modulation-instability microresonator Kerr combs and octave-spanning supercontinuum generation[45], with optical parametric oscillation (OPO) thresholds down to 13.5 mW[39] and threshold per unit length down to 23.6 mW/mm[46]. For heterogeneous and monolithic integration, it is important to limit the process temperature to under 400 \({}^{\circ}\)C, for example to prevent crystallization in low loss and nonlinear tantala waveguides[32], processing ultra-low loss waveguides directly on preprocessed silicon electronic or silicon photonic circuits[47], processing on thin film lithium niobate[31], and III-V semiconductors[29, 30, 48]. Further limiting the processing temperature to 250 \({}^{\circ}\)C enables a much broader class of heterogeneous and monolithic co-integration of ultra-low loss thin and thick waveguide silicon nitride photonics with organic electronics[49], and direct processing on organic polymers like polyimide (Kapton)[50] which are ubiquitous in consumer electronics, as the electrical and mechanical properties of these materials degrade exponentially and irreversibly above this temperature. Temperatures as low as 250 \({}^{\circ}\)C are also beneficial to minimize thermal stress when processing on prepackaged electronics[51] or on fragile substrates like quartz for athermalization[33, 52]. While sputtering and conventional Plasma Enhanced Chemical Vapor Deposition (PECVD) can be done at low temperatures[53, 54], both methods suffer from high particle counts, which cause high scattering losses, and conventional PECVD-grown silicon nitride has high hydrogen content, due to using ammonia and silane precursors[55], that causes high absorption losses. ICP-PECVD with deuterated silane as a precursor gas is an emerging lower temperature method to grow silicon nitride and silicon dioxide that eliminates absorption due to hydrogen bonds and has low particle counts[39, 45, 46]. This method needs only nitrogen gas instead of ammonia as a precursor to grow silicon nitride, as the concentrated Inductively Coupled Plasma (ICP) power is able to dissociate N\({}_{2}\) which conventional parallel plate PECVD cannot[56]. In this work we report a significant advance in ultra-low loss linear and nonlinear silicon nitride integrated photonics, where the exact same anneal-free fabrication process, with a maximum oxide and nitride temperature of 250 \({}^{\circ}\)C, is used to fabricate waveguides with an order of magnitude variation in thickness, 80 nm and 800 nm, without requiring complex steps such as stress mitigation and CMP. 
The maximum processing temperature we use is 20 \({}^{\circ}\)C lower than the lowest temperatures currently demonstrated for making ultra-low loss waveguides[46]; however, this temperature difference is exponentially significant for processing organic materials[49, 50]. We achieve 1.77 dB/m with intrinsic Qs of almost 15 million for thin 80 nm thick core waveguides, over half an order of magnitude lower loss than previous low temperature processes[46, 77], and record-low 8.66 dB/m loss with 4.03 million intrinsic Q for 800 nm thick core waveguides, 39% higher Q than the previous thick nitride low temperature process with similar device area while being 7.5 times smaller in area than the record high Q low temperature fabricated device, which has similar Qs[46]. Our thin nitride waveguides are 5.36 cm long, almost 20X longer than the longest low temperature processed waveguide reported to date[46]. Importantly, we confirm the superb quality of the thick nitride waveguides and resonators by demonstrating two key nonlinear processes with our 800 nm nitride devices, namely: 1) resonant optical parametric oscillation (OPO) and Kerr-comb formation in microresonators and 2) non-resonant supercontinuum generation in linear waveguides. We demonstrate anomalous dispersion with over two-octave supercontinuum generation from 650 nm to 2.7 \(\upmu\)m as well as four-wave mixing parametric gain with the near-lowest reported threshold of 16.7 mW for silicon nitride waveguides made with a low temperature process, and a threshold per unit resonator length of 15.2 mW/mm, lower than the previous record for such a process[39, 46]. The waveguide losses are comparable with those of unannealed LPCVD nitride core waveguides of the same geometry (Supplementary Section S7). We illustrate examples of heterogeneous and monolithic integration (Fig. 1) that can be enabled using our anneal-free process. These include, but are not limited to, deposition of ultra-low loss waveguides on III-V semiconductors (Fig. 1a) for high performance lasers and compound semiconductor photonic integrated circuits[58, 59], preprocessed electronic circuits and silicon photonics[23, 60] (Fig. 1b), organic material based integrated circuits[61] for cointegration with silicon nitride PICs and biophotonics[62] (Fig. 1c), thin film lithium niobate[31] (Fig. 1d), and materials like quartz for athermalization of resonators and reference cavities[33] (Fig. 1e). Additionally, this process can be used to realize sophisticated multi-level silicon nitride photonic circuits[63], homogeneously and monolithically integrated with other materials, to combine high-performance thin-waveguide components like spectrally-pure Brillouin lasers[17] and thick waveguide nonlinear components including optical frequency combs[45, 46] (Fig. 1f).

## Results

### Anneal Free Fabrication Process and Waveguide Design

The high level flow of our process is shown in Fig. 2a. We demonstrate that this same process can be used to fabricate buried silicon nitride core channel waveguides that have over an order of magnitude variation in nitride core thickness. We demonstrate 80 nm resonators (Fig. 2b, c) and 800 nm resonators and spirals with lengths of up to 35 cm (Fig. 2d, e). The process independence with respect to waveguide thickness demonstrates the potential for co-integration of thin and thick nitride core devices (Fig. 1f) and 3D monolithic and homogeneous integration [63, 64] as well as monolithic and heterogeneous integration on a variety of other platforms. 
The anneal-free process (Fig. 2a) starts with a 1 mm thick silicon wafer substrate with pre-processed 15 \(\upmu\)m thick thermal oxide lower cladding. A uniform silicon nitride layer (e.g., 80 nm, 800 nm) is then deposited using a deuterated silane pre-cursor ICP-PECVD process at 250 \({}^{\circ}\)C. The nitride layer is then patterned and etched at 50 \({}^{\circ}\)C using an Inductively Coupled Plasma Reactive Ion Etch (ICP-RIE). A final silicon dioxide cladding layer is deposited using the same deuterated silane pre-cursor ICP-PECVD process at 250 \({}^{\circ}\)C. In the future, the lower cladding can also be deposited using our 250 \({}^{\circ}\)C process for co-integration with other materials and platforms. For the thin nitride, we use a standard silicon nitride bus-coupled ring resonator to assess the loss and compare it with devices made using unannealed LPCVD grown silicon nitride processes (see Supplementary Section S7). The thin nitride waveguide design is a 6 \(\upmu\)m wide, 80 nm thick Si\({}_{3}\)N\({}_{4}\) waveguide core with a 15 \(\upmu\)m thick thermal oxide SiO\({}_{2}\) lower cladding layer and 5 \(\upmu\)m thick ICP-PECVD oxide upper cladding layer (Fig. 2c) for both the ring and bus waveguides. The bus-ring coupling gap is 3.5 \(\upmu\)m and the ring radius is 8530.8 \(\upmu\)m. This waveguide is designed to support one quasi-TE and one quasi-TM mode for all of the process variations (see Supplementary Section S4). This design is the same as that processed using our fully annealed LPCVD nitride and TEOS-PECVD SiO\({}_{2}\) process[65] to provide a fair comparison. Our thick nitride devices use an 800 nm thick nitride core with a 15 \(\upmu\)m thick thermal oxide SiO\({}_{2}\) lower cladding layer and a 4 \(\upmu\)m thick ICP-PECVD oxide upper cladding layer (Fig. 2d). For nonlinear applications the waveguides and resonators are designed for the lowest parametric oscillation threshold, including parameters such as high extinction ratio, high intrinsic Qs, and small modal area[15]. An optimized design, with a 2 \(\upmu\)m waveguide width and a 300 nm ring-to-bus waveguide gap, was determined by measuring design splits with waveguide widths varying from 1.4 to 2.4 \(\upmu\)m for both ring resonator and bus waveguides, ring radii varying from 165 to 177 \(\upmu\)m, and ring-bus coupling gaps varying from 200 to 600 nm, as well as spirals with lengths of up to 35 cm (Fig. 2e). We take Scanning Electron Microscopy (SEM) images of our fabricated devices before upper cladding deposition, as in Fig. 3a, which shows no signs of abnormalities in our etched 800 nm thick film, as well as of our thin nitride devices (Fig. 3b). We also take a cross-sectional SEM of the thick nitride waveguide (Supplementary Section S2).

Figure 1: **Examples of different applications of the no-anneal silicon nitride process.** Cointegration with **a** compound semiconductors for high performance lasers, **b** preprocessed silicon circuits and silicon photonics, **c** organic electronics/photonics, and **d** thin film lithium niobate. **e** Thermal and substrate engineering such as with quartz substrates. **f** Homogeneous integration of thick ( \(>\) 650 nm) and thin nitride core devices, each used for different applications.

Figure 2: **Anneal-free process and devices.** **a** Fabrication flow. **b** Thin nitride ring resonator chip and **c** waveguide geometry. **d** Thick nitride waveguide geometry and **e** spiral chip. 
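As a quick cross-check of the geometries above, the ring circumference and free spectral range (FSR) follow directly from the quoted radii. The short sketch below is illustrative only (it is not code from this work) and assumes the group indices reported in Supplementary Section S6; it reproduces the 5.36 cm thin-ring round-trip length and a thick-ring FSR close to the 133.5 GHz used in the Methods.

```python
# Illustrative sanity check of the quoted ring geometries (not the paper's analysis code).
import math

c = 2.998e8  # speed of light, m/s

# Thin nitride ring: radius quoted as 8530.8 um.
L_thin = 2 * math.pi * 8530.8e-6
print(f"thin-ring round trip: {L_thin*100:.2f} cm")   # ~5.36 cm, matching the waveguide length quoted later

# Thick nitride ring: radius 175 um; TE group index ~2.025 (Supplementary Section S6).
n_g = 2.025
L_thick = 2 * math.pi * 175e-6
fsr = c / (n_g * L_thick)
print(f"thick-ring FSR: {fsr/1e9:.0f} GHz")           # ~135 GHz, consistent with the 133.5 GHz quoted in the Methods
```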
### Loss/Q results

The losses and Qs are calculated by measuring resonances using a calibrated Mach-Zehnder interferometer (MZI) technique[17, 20, 35], described in further detail in the Methods section, for the fundamental Transverse Magnetic (TM) mode only. For each resonator design, we characterize 3 different devices, and this measurement is repeated for all devices from 1520 to 1630 nm in steps of 10 nm for the TM mode only (Fig. 4a). For the TM mode, we measure losses at 1550 nm down to 1.77 dB/m for an intrinsic Q of 14.9 million from an over-coupled resonance with a loaded Q of 4.0 million and FWHM of 49.1 MHz (Fig. 4b). The median intrinsic Q and loss for the TM mode throughout the wavelength range measured are 7.77 million and 3.26 dB/m respectively, while the average intrinsic Q and loss are 7.55 million and 4.31 dB/m respectively. The same calibrated MZI method is used to measure the loss and Q for an 800 nm nitride ring resonator between 1550 nm and 1630 nm (Fig. 5a). Example measured resonances for this 2 \(\upmu\)m wide waveguide, 175 \(\upmu\)m radius ring resonator are shown in Figs. 5b, c for the TE and TM modes, yielding losses as low as 8.66 dB/m and 16.4 dB/m respectively, and intrinsic Qs as high as 4.03 million and 2.19 million for the TE and TM modes respectively. The loaded Qs of these TE and TM modes are 2.30 million and 1.11 million respectively, with FWHMs of 82.5 MHz and 172 MHz respectively. The median and average intrinsic Q and loss for both of these modes in the wavelength range measured are given in Table TS3 in Supplementary Section S6.

Figure 4: **Thin nitride Q and loss measurements.** **a** TM mode loss and intrinsic Q variation vs wavelength for 3 different devices. **b** Transverse Magnetic (TM) mode resonance Q measurement.

Figure 3: **Scanning Electron Microscopy (SEM) characterizations.** Top-down Scanning Electron Microscopy (SEM) images of fabricated ring resonators near their coupling gap before upper cladding deposition for **b** an 800 nm thick nitride waveguide with a width of 1.4 \(\upmu\)m and gap of 400 nm on mask, with a measured gap of 0.44 \(\upmu\)m and measured waveguide widths of 1.32 \(\upmu\)m and 1.27 \(\upmu\)m respectively; and **c** an 80 nm thick nitride waveguide with a width of 6 \(\upmu\)m and gap of 3.5 \(\upmu\)m on mask. The measured gap is 3.44 \(\upmu\)m, and the waveguide widths are 6.01 and 5.96 \(\upmu\)m respectively.

### Thick Nitride OPO and Kerr-Comb Formation, and Supercontinuum Generation

Nonlinear integrated photonics plays an enabling role in technologies including next-generation optical atomic clocks [7], next-generation telecommunications sources and data communications [66, 67], trace gas sensing [68], and quantum computing [2, 3, 4]. Nonlinear photonic waveguides with wavelength-scale dimensions (\(\sim\)1 \(\upmu\)m) offer the high optical confinement needed to increase the effective nonlinearity. Additionally, the high confinement allows for geometric tuning of the waveguide dispersion and arbitrary control of phase matching. We demonstrate two key nonlinear processes with our anneal-free 800 nm nitride waveguides and resonators (Fig. 6a, b), namely: 1) resonant optical parametric oscillation (OPO) and Kerr-comb formation in microresonators and 2) non-resonant supercontinuum generation in linear waveguides. We first report OPO and Kerr-comb formation in a 175 \(\upmu\)m radius microring resonator with cross-sectional dimensions of 800 x 2000 nm, for a resonance at 1566.7 nm with Q\({}_{\mathrm{L}}\)\(\sim\) 1.6 million and Q\({}_{\mathrm{i}}\)\(\sim\) 2.0 million [6]. 
Figure 6b shows an optical micrograph of one such device. Fig. 6c shows OPO at a low on-chip pump power of 25 mW. As the pump power is increased, Turing pattern modulation-instability comb states [22] are also observed (see Supplementary Section S8). We measure a threshold power, P\({}_{\mathrm{th}}\), for OPO of \(\sim\)16.7 mW, corresponding to an effective nonlinear index n\({}_{2}\)\(\sim\) 1.5x10\({}^{-19}\) m\({}^{2}\)/W (see Methods section for more details), which is only slightly lower than measurements of n\({}_{2}\) for stoichiometric nitride devices [69, 70], and to the lowest threshold power per unit length, 15.2 mW/mm, for any low temperature silicon nitride process. Next we report broadband supercontinuum generation in 4 mm long, 800 nm thick, straight waveguides (Fig. 6d) with widths ranging from 1.6 to 2.4 \(\upmu\)m. Fig. 6d shows supercontinuum spectra obtained by coupling light from a 1550 nm, 100 MHz repetition rate mode-locked laser with 100 fs pulse duration and on-chip pulse energies of \(\sim\)200-400 pJ into the waveguides. The resulting supercontinuum emission covers two octaves, from \(\sim\)650 nm to \(\sim\)2.7 \(\upmu\)m; CO\({}_{2}\) absorption lines in the spectrum analyzer are evident at the long wave side of the spectrum. While the dispersion of these initial devices is not favorable for mid-infrared supercontinuum generation, we have measured absorption spectra of our deuterated nitride and oxide layers[41] and, in principle, our films should support waveguiding and supercontinuum generation out to 4 \(\upmu\)m.

Figure 5: **Thick nitride Q and loss measurements.** **a** Loss and intrinsic Q variation vs wavelength for the Transverse Electric (TE) and Transverse Magnetic (TM) modes. **b** TE mode resonance Q measurement at 1581 nm. **c** TM mode resonance Q measurement at 1560 nm. All measurements shown here are for 175 \(\upmu\)m radius ring resonators with 2 \(\upmu\)m wide waveguides.

## Discussion

In this paper, we have demonstrated for the first time, to the best of our knowledge, the lowest loss integrated waveguides and highest Q ring resonators fabricated completely with the identical anneal-free silicon nitride photonics low temperature process for both thin and thick waveguides, with a maximum processing temperature of 250 \({}^{\circ}\)C for all steps. We demonstrate that our process can be used to fabricate a diverse range of geometries of waveguides and ring resonators for linear and nonlinear applications with 10X dynamic range in nitride thicknesses and without any process modification, stress mitigation, or chemical mechanical polishing (CMP). This low temperature and uniformity of process will enable a wide range of systems-on-chip applications and novel integration approaches including direct processing on organics, circuit cards, silicon photonic and III-V compound semiconductors, and lithium niobate, as well as enabling 3D integration stacking geometries that combine circuits with different nitride core thickness[63, 71]. We report a loss of 1.77 dB/m and intrinsic Q of almost 15 million for thin 80 nm thick core waveguides and record-low 8.66 dB/m loss with 4.03 million intrinsic Q for 800 nm thick core waveguides. 
The thin core losses are over half an order of magnitude lower than previous low temperature processes, while our thick core devices have 39% higher Q than previous thick nitride low temperature processes with similar device area, and are 7.5 times smaller in area than the record high Q low temperature fabricated devices, which have similar Qs[46]. Our thin nitride waveguides are 5.36 cm long, almost 20X longer than the longest low temperature processed waveguide reported to date[46]. The losses in the thin nitride devices are thought to be limited by absorption loss from the unannealed lower cladding, which can be further improved by depositing deuterated SiO\({}_{2}\) for the lower cladding. The small amount of hydrogen present in the deuterated silane precursor also increases the absorption loss, as evidenced by the increase in waveguide loss towards 1520 nm (Fig. 4a), which is near the 1st overtone of the SiN-H bond absorption. Towards 1630 nm, the loss increase is most likely due to overtones of the SiO-D bond in the upper cladding[41]. We additionally see that these losses are comparable to devices of the same geometry made with unannealed LPCVD nitride (Supplementary Section S7), confirming that our losses are competitive with respect to process temperature. At the same time, the more tightly confined modes in 800 nm thick devices with etched sidewalls have higher scattering losses than their thin nitride counterparts, and could be improved by using a hard mask with a smaller grain size such as those made with Atomic Layer Deposition (ALD)[72, 73] or RF sputtering[74]. It should also be noted that most of our highest intrinsic Q thick nitride resonances exhibit splitting (see Supplementary Section S6). A summary of published losses and intrinsic Qs near the C-band as a function of maximum processing temperature and nitride processing method is given in Fig. 7 and compared to this work. Our reported lowest losses fall in an "optimum" region between loss and process temperature. It should be noted also that the record low loss thick nitride devices had a width of 10 \(\upmu\)m[42]. To demonstrate the quality of our thick film nitride waveguides, we report Kerr-comb formation (Supplementary Section S8) and supercontinuum generation in the 800 nm thick devices. We report a record-low OPO threshold relative to resonator length and the span of supercontinuum generation, and confirm MI comb formation and anomalous dispersion. These results demonstrate the compatibility of this process between thin ultra-low loss waveguides and nonlinear LPCVD waveguides without requiring fabrication steps and features for stress mitigation[75] that increase fabrication complexity and increase the difficulty of monolithic and heterogeneous integration with other material systems.

Figure 6: **Nonlinear application results of thick 800 nm waveguides and resonators.** **a** Large field of view image of a thick nitride chip with a broad scan of ring resonator designs and straight waveguides. The green and blue highlighted regions correspond to devices tested in (**c**) and (**d**). **b** Dark-field optical micrograph of the ring resonator device used for Kerr-comb measurement. **c** Optical spectrum of the ring resonator output showing the onset of optical parametric oscillation. **d** Broadband supercontinuum spectra from the 2.2 \(\upmu\)m width waveguide highlighted in light blue in (**a**). 
The platform is shown to be capable of generating resonant solitons and other mode-locked pulses, and has the potential to support guiding and supercontinuum generation into the mid-IR wavelengths. In summary, our anneal-free process, with a maximum processing temperature of 250 \({}^{\circ}\)C and uniformity for core thickness spanning an order of magnitude, is fully CMOS-compatible and will pave the way to monolithic and heterogeneous integration of ultra-low loss silicon nitride photonics with material systems not possible before, such as III-V semiconductors[30, 48], lithium niobate[31], preprocessed silicon circuits and photonics[47], and organic electronic materials[50], with applications in metrology[9], navigation[8], telecommunications[10], quantum information sciences[2, 3, 4], and consumer electronics where organic electronics is widely used[76]. This process could also be used to monolithically and homogeneously integrate both thin low confinement and thick high confinement silicon nitride waveguides, enabling 3D integration with optimized device footprint and linear and nonlinear performance. In the future, the temperature of our process has the potential to be reduced to as low as 50 \({}^{\circ}\)C with further process development on our ICP-PECVD tool (which supports 50 \({}^{\circ}\)C processes), enabling the monolithic integration of ultra-low loss photonic integrated circuits on most organic electronic materials.

## Methods

### Fabrication Process

The thick and thin SiN core and SiO\({}_{2}\) upper cladding depositions are performed using an Unaxis VLR ICP-PECVD tool with the same processes used for all core thicknesses and devices. Before any deposition on a device wafer, we run a deposition on a test 100 mm silicon wafer and measure the particle counts with a KLA/Tencor Surfscan, as well as the film thickness and refractive index with a Woollam Ellipsometer. The deposition on the device wafer is performed only if the particle counts increase by less than 300. The fabrication starts with the 250 \({}^{\circ}\)C silicon nitride deposition on Si wafers with 15 \(\mu\)m of thermal oxide, with the thick nitride depositions merely being done for longer than the thin nitride deposition, in a single step. After the nitride deposition step, only the thick nitride wafers receive 40 nm of DC-sputtered ruthenium (Ru).

Figure 7: **Q and loss vs temperature in the C-band for different published works based on their silicon nitride growth methods and processing compared to this work.** Our lowest losses (thin nitride) are near an "optimum", denoted by the oval, between low loss and temperature, the current record low loss also being a thin nitride device. Our thick nitride structures have double the Qs of the current record calculated for low temperature fabricated devices with similar areas[46] as marked with the C, and very similar loss overall to the absolute record, while having an area 7.5 times smaller. The different works compared include Inductively Coupled Plasma-Plasma Enhanced Chemical Vapor Deposition (ICP-PECVD) processes such as (I) this work, (II) Y. Xie et al.[46], and (III) J. Chiles et al.[45]; sputtering, such as (IV) A. Frigg[4]; Plasma Enhanced Chemical Vapor Deposition (PECVD) in conjunction with Chemical-Mechanical Polishing (CMP), (V) X. Ji et al.[77]; Pulsed Laser Deposition, (VI) N. Golshani et al.[78]; and Low Pressure Chemical Vapor Deposition (LPCVD) together with annealing, such as (VII) Z. Ye et al.[79], (VIII) X. Ji et al.[42], and (IX) K. Liu et al.[34]. 
Both the thick and thin nitride wafers are then patterned in a 248 nm DUV stepper, using the same lithography parameters. The thin nitride is then etched in an ICP-RIE using a CF4/CHF3/O2 chemistry, after which it is ashed in an O2 plasma in an Inductively Coupled Plasma (ICP) tool to remove etch byproducts. Any remaining photoresist is stripped by sonicating in a hot N-methyl-2-pyrrolidone (NMP) solution and rinsing in isopropanol. We additionally perform a standard piranha clean at 100 \({}^{\circ}\)C followed by a base piranha clean at 70 \({}^{\circ}\)C, both in freshly prepared solutions, making the thin nitride wafers ready for upper cladding deposition. For the thick nitride fabrication, the Ru on the thick nitride is also etched in an ICP-RIE, using a Cl2/O2 chemistry, to create a hard mask. The thick nitride wafer is then stripped of photoresist the same way as the thin one, using hot NMP solution and isopropanol. It is then etched in an ICP-RIE using CF4 only, after which the same O2 plasma ash as for the thin nitrides is performed. Any remaining Ru is stripped in a wet etch, and then the same piranha cleans done for the thin nitrides are performed. The requisite amount of ICP-PECVD SiO2 upper cladding is then deposited at 250 \({}^{\circ}\)C on both the thin and thick nitride wafers. The flow diagrams of these fabrication processes can be found in Supplementary Section S5.

### Quality factor measurements and calculation

The loaded quality factors of the ring resonators are measured using three different calibrated unbalanced fiber MZIs with MZI fringe widths of 5.87 MHz, 18 MHz, and 200 MHz. We have seen in our previous works that Q values measured with this method match well with cavity ring-down measurements [80]. Two Newport Velocity TLB-6700 tunable lasers are used, one with a tuning range of 1520 to 1570 nm, and another with a tuning range from 1550 to 1630 nm. These lasers are tuned in wavelength with piezo actuators, by applying a ramp signal to the piezos. A polarization controller is present before the input to the thin nitride devices, which are edge-coupled to a single mode cleaved fiber, while a polarization beam splitter is present before the input to the thick nitride devices. The full setup for the thin nitride measurements is shown in Supplementary Fig. S6. Loaded and intrinsic quality factors are extracted by fitting the resonance transmission to Lorentzian (thin nitride) or coupled-Lorentzian (thick nitride) curves. Coupling and loss parameters are determined by measuring the ring-to-bus couplings on independent ring-bus coupling structures as well as simulating the same [35]. Additional details can be found in Supplementary Section S6.

### Threshold power for optical parametric oscillation

We determine the effective nonlinear index for our deuterated nitride by measuring the threshold power for OPO, P\({}_{\mathrm{th}}\), according to the following[15]: \[n_{2}=\frac{\pi\,n\,\nu_{0}\,A_{\mathrm{eff}}}{8\,P_{\mathrm{th}}\,\nu_{\mathrm{FSR}}\,Q_{\mathrm{i}}^{2}}\,\frac{(1+K)^{3}}{K}\] where \(n\) is the effective refractive index, \(A_{\mathrm{eff}}\) is the effective mode area, \(\nu_{\mathrm{FSR}}\) = 133.5 GHz is the resonator free spectral range, \(\nu_{0}\) is the pump frequency, \(Q_{\mathrm{i}}\) is the resonator intrinsic Q, and \(K\) is a resonator coupling constant, \(K=Q_{\mathrm{i}}/Q_{\mathrm{c}}\), where \(Q_{\mathrm{c}}\) is the resonator coupling Q. We extract values of \(Q_{\mathrm{i}}\) and \(Q_{\mathrm{c}}\) through the Lorentzian curve fitting method described above. 
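As a minimal numeric sketch (not the authors' analysis code), the relation above can be evaluated directly with the device values quoted in this paper: the \(\sim\)16.7 mW threshold, the 133.5 GHz FSR, the loaded and intrinsic Qs of the pumped resonance, and the mode area and effective index obtained from Lumerical MODE and given in the next paragraph. The prefactor and coupling dependence are assumed to be exactly as written in the relation above.

```python
# Minimal sketch (not the authors' code): evaluate the OPO-threshold relation above
# with the values reported in this paper to recover n2 and the threshold per length.
import math

c      = 2.998e8        # m/s
lam_p  = 1566.7e-9      # pump wavelength, m
nu_0   = c / lam_p      # pump frequency, Hz
n      = 1.85           # effective index (Lumerical MODE, this paper)
A_eff  = 1.35e-12       # effective mode area, m^2 (this paper)
nu_fsr = 133.5e9        # free spectral range, Hz
P_th   = 16.7e-3        # measured OPO threshold, W
Q_i    = 2.0e6          # intrinsic Q of the pumped resonance
Q_L    = 1.6e6          # loaded Q of the pumped resonance
Q_c    = 1 / (1/Q_L - 1/Q_i)   # coupling Q from loaded and intrinsic Q
K      = Q_i / Q_c             # coupling constant K = Qi/Qc

n2 = (math.pi * n * nu_0 * A_eff) / (8 * P_th * nu_fsr * Q_i**2) * (1 + K)**3 / K
print(f"n2 ~ {n2:.2e} m^2/W")   # ~1.6e-19, consistent with the reported (1.5 +/- 0.2)e-19

# Threshold per unit resonator length for the 175 um radius ring:
L_ring = 2 * math.pi * 175e-6   # m
print(f"P_th / L ~ {P_th / (L_ring * 1e3) * 1e3:.1f} mW/mm")   # ~15.2 mW/mm
```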
We then use the software Lumerical MODE to calculate Aeff and n as a function of wavelength (in this case 1.35 \(\mu\)m\({}^{2}\) and 1.85 respectively). Based on our analysis, we determine n2 \(\sim\) (1.5 \(\pm\) 0.2) x 10\({}^{-19}\) m\({}^{2}\)/W. Measurement uncertainty is propagated from measurement resolution of the threshold power and the one standard deviation error of the curve fitting parameters which determine Q values. ## Data Availability The data that support the plots within the paper and other findings of this study are available from the corresponding author upon reasonable request. ## References * (1) Blumenthal, D. J., Heideman, R., Geuzebroek, D., Leinse, A. & Roeloffzen, C. Silicon Nitride in Silicon Photonics. _Proceedings of the IEEE_**106**, 2209-2231 (2018). * [2] Niffenegger, R. J. _et al._ Integrated multi-wavelength control of an ion qubit. _Nature_**586**, 538-542 (2020). * [3] Elshaari, A. W., Pernice, W., Srinivasan, K., Benson, O. & Zwiller, V. Hybrid integrated quantum photonic circuits. _Nat. Photonics_**14**, 285-298 (2020). * [4] Wang, J., Sciarrino, F., Laing, A. & Thompson, M. G. Integrated photonic quantum technologies. _Nat. Photonics_**14**, 273-284 (2020). * [5] Meyer, D. H., Castillo, Z. A., Cox, K. C. & Kunz, P. D. Assessment of Rydberg atoms for wideband electric field sensing. _J. Phys. B: At. Mol. Opt. Phys._**53**, 034001 (2020). * [6] Bloom, B. J. _et al._ An optical lattice clock with accuracy and stability at the 10\({}^{-18}\) level. _Nature_**506**, 71-75 (2014). * [7] Newman, Z. L. _et al._ Architecture for the photonic integration of an optical atomic clock. _Optica, OPTICA_**6**, 680-685 (2019). * [8] Petrov, A. A. _et al._ Features of magnetic field stabilization in caesium atomic clock for satellite navigation system. _J. Phys.: Conf. Ser._**1038**, 012032 (2018). * [9] Ye, J., Kimble, H. J. & Katori, H. Quantum State Engineering and Precision Metrology Using State-Insensitive Light Traps. _Science_**320**, 1734-1738 (2008). * [10] Brodnik, G. M. _et al._ Optically synchronized fibre links using spectrally pure chip-scale lasers. _Nat. Photon._**15**, 588-593 (2021). * [11] Ely, T. A., Burt, E. A., Prestage, J. D., Seubert, J. M. & Tjoelker, R. L. Using the Deep Space Atomic Clock for Navigation and Science. _IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control_**65**, 950-961 (2018). * [12] Dick, G. J. Local Oscillator Induced Instabilities In Trapped Ion Frequency Standards. Proceedings of the 19th Annual Precise Time and Time Interval Systems and Applications Meeting, Redondo Beach, California, December 1987, pp. 133-147. * [13] Audoin, C., Candelier, V. & Diamarcq, N. A limit to the frequency stability of passive frequency standards due to an intermodulation effect. _IEEE Transactions on Instrumentation and Measurement_**40**, 121-125 (1991). * [14] Huffman, T. A. _et al._ Integrated Resonators in an Ultralow Loss Si3N4/SiO2 Platform for Multifunction Applications. _IEEE Journal of Selected Topics in Quantum Electronics_**24**, 1-9 (2018). * [15] Briles, T. C., Yu, S.-P., Drake, T. E., Stone, J. R. & Papp, S. B. Generating Octave-Bandwidth Soliton Frequency Combs with Compact Low-Power Semiconductor Lasers. _Phys. Rev. Appl._**14**, 014006 (2020). * [16] Corato-Zanarella, M. _et al._ Widely tunable and narrow-linewidth chip-scale lasers from near-ultraviolet to near-infrared wavelengths. _Nat. Photon._ 1-8 (2022) doi:10.1038/s41566-022-01120-w. * [17] Gundavarapu, S. 
_et al._ Sub-hertz fundamental linewidth photonic integrated Brillouin laser. _Nature Photon_**13**, 60-67 (2019). * [18] Jin, W. _et al._ Hertz-linewidth semiconductor lasers using CMOS-ready ultra-high-Q microresonators. _Nat. Photonics_**15**, 346-353 (2021). * [19] Liu, K. _et al._ Photonic integrated cascade-inhibited Brillouin laser with sub-100-mHz fundamental linewidth. in _Conference on Lasers and Electro-Optics_ SF2K.1 (Optica Publishing Group, 2022). doi:10.1364/CLEO_SI.2022.SF2K.1. * [20] Chauhan, N. _et al._ Visible light photonic integrated Brillouin laser. _Nat Commun_**12**, 4685 (2021). * [21] Isichenko, A., Chauhan, N., Liu, K., Harrington, M. W. & Blumenthal, D. J. Chip-Scale, Sub-Hz Fundamental Sub-kHz Integral Linewidth 780 nm Laser through Self-Injection-Locking a Fabry-Perot laser to an Ultra-High Q Integrated Resonator. Preprint at [https://doi.org/10.48550/arXiv.2307.04947](https://doi.org/10.48550/arXiv.2307.04947) (2023). * [22] Kippenberg, T. J., Gaeta, A. L., Lipson, M. & Gorodetsky, M. L. Dissipative Kerr solitons in optical microresonators. _Science_**361**, eaan8083 (2018). * [23] Alexander, K. _et al._ Nanophotonic Pockels modulators on a silicon nitride platform. _Nat Commun_**9**, 3444 (2018). * [24] Wang, J., Liu, K., Harrington, M. W., Rudy, R. Q. & Blumenthal, D. J. Silicon nitride stress-optic microresonator modulator for optical control applications. _Opt. Express, OE_**30**, 31816-31827 (2022). * [25] Hummon, M. T. _et al._ Photonic chip for laser stabilization to an atomic vapor with \(10^{-11}\) instability. _Optica, OPTICA_**5**, 443-449 (2018). * [26] Spektor, G. _et al._ Universal visible emitters in nanoscale integrated photonics. _Optica, OPTICA_**10**, 871-879 (2023). * [27] Isichenko, A. _et al._ Photonic integrated beam delivery for a rubidium 3D magneto-optical trap. _Nat Commun_**14**, 3080 (2023). * [28] Tran, M. A. _et al._ Ring-Resonator Based Widely-Tunable Narrow-Linewidth Si/InP Integrated Lasers. _IEEE Journal of Selected Topics in Quantum Electronics_**26**, 1-14 (2020). * [29] Verrinder, P. A. _et al._ Gallium Arsenide Photonic Integrated Circuit Platform for Tunable Laser Applications. _IEEE Journal of Selected Topics in Quantum Electronics_**28**, 1-9 (2022). * [30] Nicholes, S. C. _et al._ An \(8\times 8\) InP Monolithic Tunable Optical Router (MOTOR) Packet Forwarding Chip. _Journal of Lightwave Technology_**28**, 641-650 (2010). * [31] Shams-Ansari, A. _et al._ Reduced material loss in thin-film lithium niobate waveguides. _APL Photonics_**7**, 081301 (2022). * [32] Jung, H. _et al._ Tantala Kerr nonlinear integrated photonics. _Optica, OPTICA_**8**, 811-817 (2021). * [33] Zhao, Q. _et al._ Low-loss low thermo-optic coefficient Ta2O5 on crystal quartz planar optical waveguides. _APL Photonics_**5**, 116103 (2020). * [34] Liu, K. _et al._ Ultralow 0.034 dB/m loss wafer-scale integrated photonics realizing 720 million Q and 380 \(\mu\)W threshold Brillouin lasing. _Opt. Lett., OL_**47**, 1855-1858 (2022). * [35] Puckett, M. W. _et al._ 422 Million intrinsic quality factor planar integrated all-waveguide resonator with sub-MHz linewidth. _Nat Commun_**12**, 934 (2021). * [36] Chauhan, N. _et al._ Ultra-low loss visible light waveguides for integrated atomic, molecular, and quantum photonics. _Opt. Express, OE_**30**, 6960-6969 (2022). * [37] Sharma, N., Hooda, M. & Sharma, S. K. Synthesis and Characterization of LPCVD Polysilicon and Silicon Nitride Thin Films for MEMS Applications. 
_Journal of Materials_**2014**, 954618 (2014). * [38] Blumenthal, D. J., Heideman, R., Geuzebroek, D., Leinse, A. & Roeloffzen, C. Silicon Nitride in Silicon Photonics. _Proceedings of the IEEE_**106**, 2209-2231 (2018). * [39] Wu, Z. _et al._ Low-noise Kerr frequency comb generation with low temperature deuterated silicon nitride waveguides. _Opt. Express, OE_**29**, 29557-29566 (2021). * [40] Osinsky, A. V. _et al._ Optical loss mechanisms in GeSiON planar waveguides. _Appl. Phys. Lett._**81**, 2002-2004 (2002). * [41] Jin, W. _et al._ Deuterated silicon dioxide for heterogeneous integration of ultra-low-loss waveguides. _Opt. Lett., OL_**45**, 3340-3343 (2020). * [42] Ji, X. _et al._ Ultra-low-loss on-chip resonators with sub-milliwatt parametric oscillation threshold. _Optica, OPTICA_**4**, 619-624 (2017). * [43] Zhao, Q. _et al._ Low-loss D-shape Silicon Nitride Waveguides Using a Dielectric Lift-off Fabrication Process. in _Conference on Lasers and Electro-Optics (2020), paper STh1J.3_ STh1J.3 (Optica Publishing Group, 2020). doi:10.1364/CLEO_SI.2020.STh1J.3. * [44] Frigg, A. _et al._ Optical frequency comb generation using low stress CMOS compatible reactive sputtered silicon nitride waveguides. in _Integrated Photonics Platforms: Fundamental Research, Manufacturing and Applications_ vol. 11364 72-79 (SPIE, 2020). * [45] Chiles, J. _et al._ Deuterated silicon nitride photonic devices for broadband optical frequency comb generation. _Opt. Lett., OL_**43**, 1527-1530 (2018). * [46] Xie, Y. _et al._ Soliton frequency comb generation in CMOS-compatible silicon nitride microresonators. _Photon. Res., PRJ_**10**, 1290-1296 (2022). * [47] Mahajan, R. _et al._ Co-Packaged Photonics For High Performance Computing: Status, Challenges And Opportunities. _Journal of Lightwave Technology_**40**, 379-392 (2022). * [48] Wong, M. S., Nakamura, S. & DenBaars, S. P. Review--Progress in High Performance III-Nitride Micro-Light-Emitting Diodes. _ECS J. Solid State Sci. Technol._**9**, 015012 (2019). * [49] Gumyusenge, A. & Mei, J. High Temperature Organic Electronics. _MRS Advances_**5**, 505-513 (2020). * [50] DuPont(tm) Kapton(r) Summary of Properties. _DuPont(tm)_[https://www.dupont.com/content/dam/dupont/amer/us/en/ei-transformation/public/documents/en/EI-10142_Kapton-Summary-of-Properties.pdf](https://www.dupont.com/content/dam/dupont/amer/us/en/ei-transformation/public/documents/en/EI-10142_Kapton-Summary-of-Properties.pdf). * [51] Lau, J. _Thermal stress and strain in microelectronics packaging_. (Springer US, 1993). doi:10.1007/978-1-4684-7767-2. * [52] He, L. _et al._ Broadband athermal waveguides and resonators for datacom and telecom applications. _Photon. Res., PRJ_**6**, 987-990 (2018). * [53] Frigg, A. _et al._ Low loss CMOS-compatible silicon nitride photonics utilizing reactive sputtered thin films. _Opt. Express, OE_**27**, 37795-37805 (2019). * [54] Huang, H. _et al._ Effect of deposition conditions on mechanical properties of low-temperature PECVD silicon nitride films. _Materials Science and Engineering: A_**435-436**, 453-459 (2006). * [55] Hainberger, R. _et al._ PECVD silicon nitride optical waveguide devices for sensing applications in the visible and \(<\)1\(\upmu\)m near infrared wavelength region. in _Integrated Optics: Design, Devices, Systems, and Applications V_ vol. 11031 40-47 (SPIE, 2019). * [56] John, D. D. Etchless Core-Definition Process for the Realization of Low Loss Glass Waveguides. (University of California, Santa Barbara, 2012). * [57] Bose, D., Wang, J. 
& Blumenthal, D. J. 250C Process for \(<\) 2dB/m Ultra-Low Loss Silicon Nitride Integrated Photonic Waveguides. in _Conference on Lasers and Electro-Optics (2022), paper SF3O.1_ SF3O.1 (Optica Publishing Group, 2022). doi:10.1364/CLEO_SI.2022.SF3O.1. * [58] Blumenthal, D. J. _et al._ Integrated Photonics for Low-Power Packet Networking. _IEEE Journal of Selected Topics in Quantum Electronics_**17**, 458-471 (2011). * [59] Smit, M. _et al._ An introduction to InP-based generic integration technology. _Semicond. Sci. Technol._**29**, 083001 (2014). * [60] Xiang, C. _et al._ High-Performance Silicon Photonics Using Heterogeneous Integration. _IEEE Journal of Selected Topics in Quantum Electronics_**28**, 1-15 (2022). * [61] Koos, C. _et al._ Silicon-Organic Hybrid (SOH) and Plasmonic-Organic Hybrid (POH) Integration. _Journal of Lightwave Technology_**34**, 256-268 (2016). * [62] Kohler, D. _et al._ Biophotonic sensors with integrated Si3N4-organic hybrid (SiNOH) lasers for point-of-care diagnostics. _Light Sci Appl_**10**, 64 (2021). * [63] Moreira, R., Barton, J., Belt, M., Huffman, T. & Blumenthal, D. Optical Interconnect for 3D Integration of Ultra-Low Loss Planar Lightwave Circuits. in _Advanced Photonics 2013 (2013), paper IT2A.4_ IT2A.4 (Optica Publishing Group, 2013). doi:10.1364/IPRSN.2013.IT2A.4. * [64] Huffman, T. Integrated Si3N4 Waveguide Circuits for Single- and Multi-layer Applications. (UC Santa Barbara, 2018). * [65] Zhao, Q. _et al._ Integrated reference cavity with dual-mode optical thermometry for frequency correction. _Optica, OPTICA_**8**, 1481-1487 (2021). * [66] Marin-Palomo, P. _et al._ Microresonator-based solitons for massively parallel coherent optical communications. _Nature_**546**, 274-279 (2017). * [67] Lundberg, L. _et al._ Phase-coherent lightwave communications with frequency combs. _Nat Commun_**11**, 201 (2020). * [68] Hansel, A. & Heck, M. J. R. Opportunities for photonic integrated circuits in optical gas sensors. _J. Phys. Photonics_**2**, 012002 (2020). * [69] Gaeta, A. L., Lipson, M. & Kippenberg, T. J. Photonic-chip-based frequency combs. _Nature Photonics_**13**, 158-169 (2019). * [70] Ikeda, K., Saperstein, R. E., Alic, N. & Fainman, Y. Thermal and Kerr nonlinear properties of plasma-deposited silicon nitride/silicon dioxide waveguides. _Optics express_**16**, 12987-12994 (2008). * [71] Huffman, T. A. Integrated Si3N4 Waveguide Circuits for Single- and Multi-Layer Applications. (University of California, Santa Barbara, 2017). * [72] Aaltonen, T., Alen, P., Ritala, M. & Leskela, M. Ruthenium Thin Films Grown by Atomic Layer Deposition. _Chemical Vapor Deposition_**9**, 45-49 (2003). * [73] Mitchell, W. J., Thibeault, B. J., John, D. D. & Reynolds, T. E. Highly selective and vertical etch of silicon dioxide using ruthenium films as an etch mask. _Journal of Vacuum Science & Technology A_**39**, 043204 (2021). * [74] Maurya, D. K., Sardarinejad, A. & Alameh, K. Recent Developments in R.F. Magnetron Sputtered Thin Films for pH Sensing Applications--An Overview. _Coatings_**4**, 756-771 (2014). * [75] Liu, J. _et al._ High-yield, wafer-scale fabrication of ultralow-loss, dispersion-engineered silicon nitride photonic circuits. _Nat Commun_**12**, 2236 (2021). * [76] Ji, D., Li, T., Hu, W. & Fuchs, H. Recent Progress in Aromatic Polyimide Dielectrics for Organic Electronic Devices and Circuits. _Advanced Materials_**31**, 1806070 (2019). * [77] Ji, X. _et al._ Ultra-Low-Loss Silicon Nitride Photonics Based on Deposited Films Compatible with Foundries. 
_Laser & Photonics Reviews_**17**, 2200544 (2023). * [78] Golshani, N. _et al._ Low-loss, low-temperature PVD SiN waveguides. in _2021 IEEE 17th International Conference on Group IV Photonics (GFP)_ 1-2 (2021). doi:10.1109/GFP51802.2021.9673874. * [79] Ye, Z. _et al._ Foundry manufacturing of tight-confinement, dispersion-engineered, ultralow-loss silicon nitride photonic integrated circuits. _Photon. Res., PRJ_**11**, 558-568 (2023). * [80] Blumenthal, D. J. Photonic integration for UV to IR applications. _APL Photonics_**5**, 020903 (2020). **Supplementary section** **Table of Contents** **Section S1 : ICP-PECVD processes and development.** **Section S2 : Film Material Characterization.** **Section S3 : Refractive indices of materials.** **Section S4 : Waveguide mode and dispersion simulations.** **Section S5 : Fabrication process flow.** **Section S6 : Quality factor measurement and loss extraction/calculation.** **Section S7 : Thin nitride loss comparison with LPCVD nitride.** **Section S8 : Additional measurements/calculations of thick nitride devices.** ## S1 ICP-PECVD processes and development The 250 \({}^{\circ}\)C nitride deposition step uses deuterated silane, nitrogen, and argon. The deuterated silane used is measured to have an isotopic purity of 98%. Before running any actual device wafer, a seasoning process is run with a non-device wafer to coat the chamber. Further, before the nitride deposition step on a device wafer, an Ar preclean is run with said device wafer in the chamber. Particle counts added to wafers after nitride deposition are measured over a 100 mm wafer for sizes between 160 nm and 1.6 \(\upmu\)m and are consistently less than 300, using a KLA/Tencor Surfscan. The nitride film etches at a rate of 7.1 nm/min in a Transene UN2817 buffered HF solution, and the deposition rate of the film in the ICP-PECVD tool is measured to be 42 nm/min using an ellipsometer. For a 336 nm nitride film on a 100 mm silicon wafer, the compressive stress is measured to be 666 MPa using a Tencor Flexus FLX-2320 film stress measurement tool. The 250 \({}^{\circ}\)C oxide deposition step is very similar to the work by Jin et al.[1] and uses deuterated silane, oxygen, and argon. Seasoning and argon preclean steps are also done before oxide deposition on actual wafers, as well as measurement of particle counts, stress, etc. The process is also regularly characterized by the UCSB cleanroom[2]. ## S2 Film Material Characterization We take cross-sectional SEM measurements of our 800 nm thick nitride waveguides, confirming the dimensions and quality of these waveguides, as in Figure S1 below. ## S3 Refractive indices of materials The thin and thick nitride devices are fabricated more than 1.5 years apart in a university cleanroom, and hence the indices of the deposited materials are slightly different even when using the same recipe, as given below in tables TS1 and TS2. All measurements are from a Woollam Ellipsometer. **Table TS1. Refractive indices of different materials for thin ICP-PECVD nitride core devices at 1550 nm** \begin{tabular}{|c|c|c|c|} \hline **Material** & **Si\({}_{3}\)N\({}_{4}\)** & **SiO\({}_{2}\) lower cladding** & **Upper cladding** \\ & & & **ICP-PECVD SiO\({}_{2}\)** \\ \hline **n** & 1.95 & 1.445 & 1.456 \\ \hline \end{tabular}
**Table TS2. Refractive indices of different materials for thick ICP-PECVD nitride core devices at 1550 nm**

\begin{tabular}{|c|c|c|c|} \hline **Material** & **Si\({}_{3}\)N\({}_{4}\)** & **SiO\({}_{2}\) lower cladding** & **Upper cladding** \\ & & & **ICP-PECVD SiO\({}_{2}\)** \\ \hline **n** & 1.963 & 1.445 & 1.459 \\ \hline \end{tabular}

## S4 Waveguide mode and dispersion simulations

Figure S2 below shows mode simulations for the TE and TM modes for the 80 nm x 6 \(\upmu\)m thin and 800 nm x 2 \(\upmu\)m thick nitride core devices respectively, from the Lumerical MODE solver.

## S5 Fabrication process flow

Figure S4 below shows our complete fabrication process flow for our thin nitrides.

## S6 Quality factor measurement and loss extraction/calculation

The calibrated Q setup used to measure the quality factors and loss for the thin nitrides is given in Figure S6 below. The setup for the thick nitrides is similar, except it uses a polarization beam splitter before the Device-Under-Test (DUT). 
The full-width-at-half-maximum resonance width of the single bus ring resonators is measured with the radio frequency calibrated Mach-Zehnder interferometer (MZI) to extract the quality factor. The propagation loss of the waveguide is extracted based on the following equation[4, 5], \[\mathrm{Q}_{Load}=\frac{\lambda_{res}}{FWHM}=\frac{\pi n_{g}L\sqrt{ra}}{ \lambda_{res}\left(1-ra\right)}\] (ES1) where \(\mathrm{Q}_{Load}\) is the loaded quality factor, \(n_{g}\) is the group index of the waveguide, \(L=2\pi\mathrm{R}\) is the perimeter of the ring resonator, \(\lambda_{res}\) is the resonant wavelength, \(r=\sqrt{1-\kappa^{2}}\) is the self-coupling coefficient and \(\kappa^{2}\) is the power coupling coefficient, and \(a\) is the single-pass amplitude transmission, which is related to the power attenuation coefficient \(\alpha\) as \(a^{2}=\exp(-\alpha L)\). The intrinsic Q of the resonator can be calculated with the extraction of the waveguide propagation loss \(\alpha\) using the following equation[6]. \[\mathrm{Q}_{\mathrm{int}}=\frac{2\pi n_{g}}{\lambda_{res}\alpha}\] (ES4) The group indices we use for equation ES4 for loss calculations are from Free Spectral Range (FSR) measurements and are 1.4642 for the 80 nm x 6 \(\upmu\)m thin nitride TM mode[7], and 2.025 and 2.053 in the TE and TM modes for the 2 \(\upmu\)m wide 800 nm thick devices. For the ICP-PECVD thin nitride resonators, the TM resonances for all devices below 1550 nm are undercoupled, while almost all resonances 1550 nm and above are overcoupled. As an example to determine whether a TM mode is under or overcoupled, we take the case of our lowest loss 1.77 dB/m resonance at 1550 nm. The simulated ring-bus field coupling (k) using refractive indices from ellipsometry at 1550 nm (Table TS1) and the actual waveguide dimensions (Fig. 3b) using Lumerical FDTD is 0.2141. The undercoupled solution for this resonance gives a k of 0.14, while the overcoupled solution gives a k of 0.2379, which is within the tolerance of our measurement to the simulated value. The overcoupled value of k here also agrees better with measurements of k from ring-bus coupling structures present on the same chip (Fig. 2b). The TE mode resonances for the thin nitrides are all undercoupled, and are difficult to measure accurately to calculate Qi and loss for all wavelengths for all of the devices because of the low extinction of the resonances. For the unannealed Low Pressure Chemical Vapor Deposited (LPCVD) devices in Section S7, the TM modes are all overcoupled. The resonances for the thick nitride resonators shown are all undercoupled for both the TE and TM modes. These resonances are fit to a modified Lorentzian curve to account for resonance splitting caused by backscattering in the ring[8]. 
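The sketch below is a hedged illustration of this extraction (it is not the authors' fitting code): it inverts Eq. ES1 for the round-trip factor \(ra\) given a measured loaded Q, splits off the coupling using the overcoupled value k = 0.2379 quoted above, and applies Eq. ES4. With the rounded loaded Q of 4.0 million it recovers values close to the quoted 1.77 dB/m and 14.9 million.

```python
# Hedged sketch (not the authors' extraction code): invert Eq. ES1 for the round-trip
# factor ra from a measured loaded Q, split off the coupling using the quoted k,
# then get the propagation loss and intrinsic Q via Eq. ES4.
import math

def loss_and_qi(q_load, lam_res, n_g, ring_radius, k):
    L = 2 * math.pi * ring_radius                 # ring perimeter, m
    # ES1: Q_L = pi*n_g*L*sqrt(ra) / (lam*(1-ra)); solve the quadratic for x = sqrt(ra)
    b = math.pi * n_g * L
    x = (-b + math.sqrt(b**2 + 4 * (q_load * lam_res)**2)) / (2 * q_load * lam_res)
    ra = x**2
    r = math.sqrt(1 - k**2)                       # self-coupling coefficient
    a = ra / r                                    # single-pass amplitude transmission
    alpha = -math.log(a**2) / L                   # power attenuation, 1/m
    q_i = 2 * math.pi * n_g / (lam_res * alpha)   # ES4
    return alpha * 10 * math.log10(math.e), q_i   # loss in dB/m, intrinsic Q

# Thin nitride example from this section: Q_L ~ 4.0 million at 1550 nm,
# n_g = 1.4642, R = 8530.8 um, overcoupled k = 0.2379.
loss_db_per_m, q_i = loss_and_qi(4.0e6, 1550e-9, 1.4642, 8530.8e-6, 0.2379)
print(f"{loss_db_per_m:.2f} dB/m, Qi = {q_i/1e6:.1f} million")
# ~1.72 dB/m and ~15.0 million; the small offset from the quoted 1.77 dB/m / 14.9 million
# comes from using the rounded loaded Q.
```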
\begin{tabular}{|c|c|c|c|c|} \hline **Mode** & **Median of** & **Average of** & **Median of** & **Average of** \\ & **intrinsic Qs** & **intrinsic Qs** & **losses** & **losses** \\ & **(millions)** & **(millions)** & **(dB/m)** & **(dB/m)** \\ \hline TE & 2.59 & 2.60 & 13.9 & 14.8 \\ \hline TM & 1.07 & 1.11 & 32.9 & 34.1 \\ \hline \end{tabular} The loss and Qs of the low temperature thick nitride devices with similar area[9] were calculated using a resonance with a loaded Q of 1.5 million and 9 dB of extinction, at 1560.39 nm, with a Free Spectral Range (FSR) of 150 GHz, yielding an intrinsic Q of 2.9 million and loss of 11.9 dB/m. ## S7 Thin nitride loss comparison with LPCVD nitride We compare the losses of devices made using the thin nitride geometry (80 nm x 6 \(\mathrm{\SIUnitSymbolMicro m}\)) between those using deuterated ICP-PECVD nitride cores to those using unannealed Low Pressure Chemical Vapor Deposited (LPCVD) cores, both using the same deuterated upper cladding, for a fair comparison (Figure S7). We see that at 1550 nm and above, the losses are very much comparable. ## S8 Additional measurements/calculations of thick nitride devices The Four Wave Mixing (FWM) thresholds and thresholds per unit length of various works are calculated and shown below in Table TS4. Kerr comb formation was measured using a widely tunable ECDL amplified by a high power EDFA. The laser frequency was tuned to be slightly blue detuned from a TE mode resonance located at 1566.7 nm (at low optical power), and the resonator output was monitored with an optical spectrum analyzer. As optical power was increased, the laser frequency was slowly tuned to maintain the smallest possible blue detuning between laser and resonator. On-chip power was calculated by subtracting half the total throughput coupling loss of the resonator from the measured input optical power to the chip. At on-chip powers higher than 25 mW, the comb transitions into the modulation instability regime, as seen in Figure S8 below.
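For reference, the loss extraction in Section S6 (equations ES1 and ES4) reduces to a few lines of arithmetic. The sketch below is an illustrative numerical check using values assumed from the text (the intrinsic Q and group index quoted above); small differences from the quoted dB/m figures can arise from the exact group index of each device, and this is not a re-analysis of the measured data.

```python
# Minimal numerical check of equations ES1 and ES4 (assumed example values).
import math

def loaded_q(lambda_res_nm, fwhm_nm):
    """ES1, first equality: Q_loaded = lambda_res / FWHM (same units cancel)."""
    return lambda_res_nm / fwhm_nm

def loss_db_per_m(q_int, n_g, lambda_res_m):
    """Invert ES4: alpha = 2*pi*n_g / (lambda_res * Q_int), then convert to dB/m."""
    alpha = 2 * math.pi * n_g / (lambda_res_m * q_int)   # power attenuation, 1/m
    return 10 * math.log10(math.e) * alpha               # ~4.34 * alpha

# Thick-nitride example from Section S6: Q_int ~ 2.9 million near 1560 nm with
# n_g ~ 2.0 gives a propagation loss on the order of 12 dB/m.
print(loss_db_per_m(2.9e6, 2.025, 1560.39e-9))
```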
2309.12612
WattScope: Non-intrusive Application-level Power Disaggregation in Datacenters
Datacenter capacity is growing exponentially to satisfy the increasing demand for emerging computationally-intensive applications, such as deep learning. This trend has led to concerns over datacenters' increasing energy consumption and carbon footprint. The basic prerequisite for optimizing a datacenter's energy- and carbon-efficiency is accurately monitoring and attributing energy consumption to specific users and applications. Since datacenter servers tend to be multi-tenant, i.e., they host many applications, server- and rack-level power monitoring alone does not provide insight into their resident applications' energy usage and carbon emissions. At the same time, current application-level energy monitoring and attribution techniques are intrusive: they require privileged access to servers and require coordinated support in hardware and software, which is not always possible in cloud. To address the problem, we design WattScope, a system for non-intrusively estimating the power consumption of individual applications using external measurements of a server's aggregate power usage without requiring direct access to the server's operating system or applications. Our key insight is that, based on an analysis of production traces, the power characteristics of datacenter workloads, e.g., low variability, low magnitude, and high periodicity, are highly amenable to disaggregation of a server's total power consumption into application-specific values. WattScope adapts and extends a machine learning-based technique for disaggregating building power and applies it to server- and rack-level power meter measurements in data centers. We evaluate WattScope's accuracy on a production workload and show that it yields high accuracy, e.g., often <10% normalized mean absolute error, and is thus a potentially useful tool for datacenters in externally monitoring application-level power usage.
Xiaoding Guan, Noman Bashir, David Irwin, Prashant Shenoy
2023-09-22T04:13:46Z
http://arxiv.org/abs/2309.12612v1
# WattScope: Non-intrusive Application-level Power Disaggregation ###### Abstract Datacenter capacity is growing exponentially to satisfy the increasing demand for many emerging computationally-intensive applications, such as deep learning. This trend has led to concerns over datacenters' increasing energy consumption and carbon footprint. The most basic prerequisite for optimizing a datacenter's energy- and carbon-efficiency is accurately monitoring and attributing energy consumption to specific users and applications. Since datacenter servers tend to be multi-tenant, i.e., they host many applications, server- and rack-level power monitoring alone does not provide insight into the energy usage and carbon emissions of their resident applications. At the same time, current application-level energy monitoring and attribution techniques are _intrusive_: they require privileged access to servers and necessitate coordinated support in hardware and software, neither of which is always possible in cloud environments. To address the problem, we design WattScope, a system for non-intrusively estimating the power consumption of individual applications using external measurements of a server's aggregate power usage and without requiring direct access to the server's operating system or applications. Our key insight is that, based on an analysis of production traces, the power characteristics of datacenter workloads, e.g., low variability, low magnitude, and high periodicity, are highly amenable to disaggregation of a server's total power consumption into application-specific values. WattScope adapts and extends a machine learning-based technique for disaggregating building power and applies it to server- and rack-level power meter measurements that are already available in data centers. We evaluate WattScope's accuracy on a production workload and show that it yields high accuracy, e.g., often \(<\sim\)10% normalized mean absolute error, and is thus a potentially useful tool for datacenters in externally monitoring application-level power usage. + Footnote †: journal: Performance Evaluation ## 1 Introduction Datacenter capacity is growing exponentially to satisfy the increasing demand for many emerging computationally intensive applications. For example, a recent analysis estimated a 6\(\times\) increase in datacenter capacity from 2010-2018 (or \(\sim\)22% per year) [1] with capacity doubling in the past five years [2]. This capacity increase is being driven by a variety of emerging application classes, such as cryptomining [3], machine learning (ML), and other big-data processing. As one example, over the past decade, the cycles devoted to training state-of-the-art ML models has been doubling every 3.4 months, which is much faster than Moore's Law [4]. Of course, increases in datacenter capacity have also led to increases in their energy consumption despite substantial improvements in their energy-efficiency over the past decade [5; 6; 7; 8; 9]. Unfortunately, datacenter energy usage is poised to increase substantially in the coming decade due to the end of Dennard scaling and limited opportunities for further significant improvements in datacenter energy efficiency. For example, Google datacenters' Power Usage Effectiveness (PUE)--the ratio of their total energy to the energy of IT equipment--is now \(\sim\)1.1, which is already near the optimal value of 1 [10]. The trends above have led to increasing concern and criticism over datacenters' energy consumption and their resulting carbon footprint. 
As a result, many cloud providers and datacenter operators have begun to increase their emphasis on energy-efficient and sustainable operations. Indeed, prominent technology companies, including Google, Amazon, Meta, and Microsoft, have set ambitious goals to become carbon-neutral [11; 12; 13], carbon-free [14], or even carbon-negative [15] within the next 10-20 years. Importantly, _the simplest and most basic prerequisite for optimizing a datacenter's energy- and carbon-efficiency is providing applications visibility into their power consumption, as they cannot optimize a metric they cannot measure_. Datacenters are well-instrumented with external power meters typically attached to rack-level power distribution units (PDUs) and individual servers. However, rack- and server-level power monitoring does not provide insight into the power consumed by individual applications, since servers are multi-tenant and host multiple applications. Even when a server runs a single application, external power usage data is often not exposed to the application. While some servers may support internal hardware-level power monitoring, such as Intel's RAPL [16], they cannot directly monitor power at the granularity of individual applications. In addition, internal hardware-level power monitoring is typically a highly inaccurate measurement of the total system power (with up to 70% error), as we discuss SS2, since it only measures the power of CPU sockets and memory, and thus does not capture the power usage of any other important system components, such as the power supply, motherboard, I/O devices, and GPUs. Notably, these other system components are accounting for an increasingly large share of server power consumption. Prior work has developed techniques for attributing server-level power to applications, which generally involve apportioning a server's total power usage based on each application's resource utilization. One common approach involves training a model that takes per-process hardware performance counters as input and infers a corresponding power usage, e.g., from RAPL. For example, PowerAPI is an open-source toolkit that uses such techniques to monitor application-level power [17; 18; 19]. However, such approaches require access to the hardware performance counters. There are many scenarios, which we summarize below, where access to hardware counters is either not available or too intrusive. (i) Most importantly, prior approaches do not apply well to cloud users. While cloud providers can use prior approaches to track the power consumption of users' virtual machines, users that host multiple applications within each virtual machine cannot attribute power to each application, since they lack privileged access to hardware counters. Thus, existing techniques are not applicable to cloud users. (ii) In addition, process-level power monitoring techniques are intrusive, as they require server resources that scale with the number of processes tracked, as well as the power data resolution. This overhead can be high when tracking many applications at high resolution, and becomes prohibitive at some point. (iii) Further, since hardware interfaces are not standardized, existing _in situ_ techniques are not general and must be tailored to specific hardware platforms. For example, PowerAPI uses RAPL, which is only available on Intel processors. 
Since the set of hardware counters also varies by platform, existing power monitoring toolkits are primarily designed for Intel or AMD platforms, but generally do not support Power-, ARM-, and RISCV-based platforms. The limitations above are the primary reason that fine-grained application-level power monitoring is not offered by cloud providers. This lack of support in-turn prevents cloud applications from optimizing their energy consumption and carbon emissions. Ultimately, the lack of application-level visibility into energy consumption is a key impediment to achieving the ambitious sustainability goals above, as it is impossible to optimize a metric that cannot be effectively measured. To address the problem, we design WattScope, a system for non-intrusively monitoring application-level power consumption using aggregate server-level power measurements. WattScope uses disaggregation techniques to apportion power data from external server- and rack-level power meters, which are typically available in power distribution units (PDUs), into individual application-level power usage without requiring intrusive access to system and application software. WattScope recognizes that datacenters already collect server- and rack-level power data for thermal management and billing purposes, which can be leveraged to also provide application-level power monitoring. Thus, WattScope analyzes power data collected from these external meters to infer each application's power usage. More formally, WattScope disaggregates a time-series of power readings \(P(t)\), over some sampling interval \(\Delta t\), into a separate time-series \(p_{i}(t)\) for each individual application \(i\), such that \(\forall t,P(t)=\sum_{i}p_{i}(t)\). Importantly, WattScope does not require any server-level access or specific hardware/software support, and instead can run externally as part of the facility management system. As a result, WattScope can be deployed in nearly any datacenter facility with PDUs that measure server- and rack-level power. Our key insight is that, based on a large-scale analysis of production traces, the power characteristics of datacenter workloads, e.g., low variability, low magnitude, and high periodicity, are highly amenable to disaggregation. WattScope adapts and extends a deep learning-based technique, originally designed for disaggregating building power, and applies it to servers and racks. We implement WattScope and experimentally evaluate its accuracy on a production workload. Our hypothesis is that disaggregating server- and rack-level power using WattScope can enable highly accurate and non-intrusive application-level power monitoring without requiring any server-level hardware/software support. In evaluating our hypothesis, we make the following contributions. **Production Workload Analysis**. We first analyze the job characteristics of a large-scale production workload from a major cloud provider that includes 5-minute resource usage readings for 2.7 million jobs over a 30 day period encompassing more than 100 million job-hours. Our analysis reveals that job usage patterns exhibit multiple characteristics, including low variability, low magnitude, and high periodicity, that WattScope can potentially exploit for disaggregation. Our analysis also shows that, while server applications can operate arbitrarily and irregularly in general, they have a high degree of regularity in practice. **WattScope Design**. 
We present WattScope's design, which adapts and extends a deep learning-based disaggregation algorithm originally applied to building power data. WattScope's design includes a library of models trained for different classes of applications based on their variability, magnitude, and periodicity. WattScope then integrates with a cluster scheduler to learn the number and type of applications running on each server, i.e., based on their attributes, to select an appropriate model for disaggregation. **Implementation and Evaluation**. Finally, we implement and evaluate a WattScope prototype. We implement WattScope's disaggregation technique by modifying nilmtk-contrib [20], an open-source reference implementation of multiple algorithms for building energy disaggregation, to instead disaggregate server- and rack-level power, and evaluate accuracy across multiple dimensions using our production workload trace. We evaluate WattScope's accuracy on a production workload and show that it yields high accuracy, e.g., often \(<\sim\)10% normalized mean absolute error, and is thus a potentially useful tool for broadly enabling application-level power monitoring in datacenters. ## 2 Motivation and Background A key motivation for our work is that providing application-level visibility and monitoring of power usage is essential to satisfying companies' ambitious sustainability goals. Indeed, the U.S. Securities and Exchange commission may soon require companies, including those using shared server/cloud resources, to report their carbon emissions from energy usage [21]. In addition, while there has long been a strong incentive to optimize computing's energy-efficiency, since energy incurs a cost, optimizing computing's carbon emissions is different, as energy's carbon-intensity varies over time and by region based on the energy mix, e.g., fossil fuels, nuclear, and renewables [22]. As a result, reducing carbon emissions often necessitates monitoring and adapting application power usage over time in response to changes in energy's carbon-intensity. Our work assumes that a datacenter has external power meters deployed at each server (or rack) that are capable of continuously monitoring server (or rack) power \(P(t)\) (in watts) over some interval \(\Delta t\), e.g., every 5 minutes. Datacenter servers generally include external power meters and make them available programmatically in real-time via network protocols, such as IPMI [23] or Redfish [24], to facility management systems. External power monitoring is necessary for basic datacenter operations, such as fault identification and billing. In addition, even if individual servers do not include external power meters, power distribution units (PDUs) that provide power to servers in one or more racks also typically include them. In general, the data above is collected as part of facility management and is not exposed to application- or system-level software. Instead, application- and system-level power monitoring typically uses internal hardware and software support. For example, PowerAPI [17; 18; 19] leverages a model that maps hardware counters and RAPL readings to application-level power consumption. However, since hardware support is not standardized, this approach is not general. In addition, as mentioned in SS1, RAPL, which measures CPU socket and memory power, often does not provide an accurate measurement of full system power. 
To illustrate Figure 1 shows the absolute (a) and percentage (b) error in RAPL power measurements for a traditional server compared to an external power meter that directly measures the server's power. In this case, the server's maximum power at 100% utilization is 175W. The figure illustrates that since RAPL measurements only account for a subset of the server's hardware resources, they capture only between 30-40% of the total power a server actually consumes. In addition, RAPL measurement errors vary widely--from 75-110W (a) or 60-70% (b)--depending on the server's utilization. In addition, these errors would likely be much worse for modern servers with GPUs, since RAPL measurements do not include GPUs. While GPUs often have their own internal interfaces for monitoring power, these are also not standardized or general. Of course, RAPL only measures CPU socket and memory power, and _not_ application power, so even if RAPL measurements were accurate, an additional server-specific model is necessary to map application resource usage, e.g., using hardware counters, to the fraction of power an application consumes. Importantly, WattScope_does not_ assume any hardware or software support on any server, and does not require training a server-specific model. However, after disaggregating a server's power into the power usage of individual applications, mapping the applications to specific application names running on servers is often important. Thus, \(\mathsf{WattScope}\) assumes a minimal interface to integrate with a cluster-level scheduler that provides, in addition to the number of applications running on a server, the names of the applications running on it and a minimal amount of coarse summary usage characteristics, e.g., variability, magnitude, and periodicity. As we discuss, schedulers typically provide the former, while the latter is simple to implement. \(\mathsf{WattScope}\) builds on prior work on energy disaggregation or non-intrusive load monitoring (NILM) for buildings, which infers the average power usage \(p_{i}(t)\) of each building load \(i\) at time \(t\) given the building's average power usage \(P(t)\) measured at an external smart meter over some interval \(\Delta t\). Importantly, the prior work on NILM has shown that disaggregation accuracy varies widely based on loads' power signatures, i.e., their pattern of power usage [20]. For example, disaggregating large loads, such as electric dryers, is more accurate than small ones, since the power signature of such loads is more distinctive in a building's aggregate power usage. Similarly, disaggregating highly periodic loads and those with discrete power states, such as refrigerators or water heaters, is more accurate than noisy loads that are highly variable, such as many electronics. Finally, larger buildings with more loads decrease disaggregation accuracy, as it becomes more difficult for algorithms to extract power signatures for individual loads. Given the importance of power usage characteristics on the effectiveness of disaggregation, we next analyze a production workload trace to understand the resource and power usage characteristics of real cloud applications. Since server applications can exhibit highly variable resource and power usage, there is no guarantee that their characteristics will be amenable to disaggregation, as with electric dryers, water heaters, refrigerators, etc. In addition, unlike buildings, which consist of a common set of appliances, the number and types of server applications is not fixed. 
As a result, it is not clear a priori whether disaggregation methods can be applied to server applications. ## 3 Production Workload Analysis To guide \(\mathsf{WattScope}\)'s design, we conduct a large-scale analysis of production workloads, that provide information for individual virtual machines (VMs), containers, or application tasks, to quantify their regularity, variability, and intensity i.e., magnitude, in resource and power usage. ### Analysis Setup Below, we provide details on the workload traces, power models, and metrics we use in our analysis. Figure 1: _Absolute (a) and percentage (b) error in measuring a traditional server’s power using RAPL, which is Intel’s internal power monitoring platform._ **Workload Traces.** To evaluate WattScope's efficacy, we require a dataset that provides ground truth power information for different VMs or processes running on a server along with the server's aggregate power usage. While external power meters can record server-level power consumption, it is not possible to instrument individual VMs or processes with a physical power meter and record their usage, as they are virtual and not physical. As discussed in SS1, prior work has developed other methods using hardware performance counters to build models that estimate per-VM or per-process power consumption [25]. However, such methods are highly intrusive, incur an overhead, and are thus not widely deployed in practice. As a result, to the best of our knowledge, there is no publicly available dataset that provides power usage information for individual VMs and application processes on servers. Consequently, we construct such a ground-truth trace from publicly-available CPU and memory workload traces and use them to derive server power consumption, i.e., by mapping the usage information to power. To generate power consumption traces for our analysis, and later evaluation of WattScope in SS6, we use two of the most commonly used industry workload traces: Microsoft Azure Traces (V2) [26] and Google Cluster Workload Traces (V3) [27]. The Azure trace provides the minimum, maximum, and average CPU utilization and memory allocation for \(\sim\)2.7 million production VMs every 5 minutes over a 30-day period. The Azure trace has a size of 235GB and contains \(\sim\)1.9 billion readings. The Google cluster workload trace provides average CPU usage, CPU usage histograms, and memory usage information of jobs for each 5 minute period over a 31-day period. The Google trace contains data for \(\sim\)2.5 million jobs from 96.4k physical servers spread across 8 datacenters. In the Azure dataset, we assume each VM hosts a different application or job. For uniformity and ease of exposition, henceforth, we also refer to Azure VMs as jobs. **Power Model.** A traditional server consists of multiple components that consume power including CPUs, memory, and I/O devices, e.g., network card, disk, etc. Prior work shows that a traditional server's power consumption primarily depends on its CPU utilization [25], since the contribution from other components is nearly constant and not dependent on their usage. However, as memory sizes in servers increase to support data-intensive applications and memory technology improves to provide a more dynamic power range, memory power consumption is becoming both a significant part of system-level power and also usage-dependent. As a result, to construct our power usage traces, we use both CPU and memory usage information in the traces above to estimate server power consumption. 
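For concreteness, a usage-to-power mapping of this kind can be fit by ordinary least squares. The sketch below is illustrative only: the cubic dependence on CPU utilization anticipates the fit shown in Figure 2, but the additive memory term, the variable names, and any resulting coefficients are assumptions, and the trace construction described next samples measured points rather than using a single deterministic fit.

```python
# Illustrative sketch: fit power ~ c0 + c1*u + c2*u^2 + c3*u^3 + c4*mem by OLS,
# where u is CPU utilization (%) and mem is memory usage (GB). Inputs are
# assumed to come from replaying trace samples against an external power meter.
import numpy as np

def _design(cpu_util, mem_gb):
    u = np.asarray(cpu_util, dtype=float)
    m = np.asarray(mem_gb, dtype=float)
    return np.column_stack([np.ones_like(u), u, u**2, u**3, m])

def fit_power_model(cpu_util, mem_gb, power_w):
    """Ordinary least squares fit of measured server power (W)."""
    coef, *_ = np.linalg.lstsq(_design(cpu_util, mem_gb),
                               np.asarray(power_w, dtype=float), rcond=None)
    return coef

def predict_power(coef, cpu_util, mem_gb):
    """Estimate power (W) for new utilization readings."""
    return _design(cpu_util, mem_gb) @ coef
```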
To derive our server power consumption traces, we conduct an empirical study that maps a job's CPU utilization and memory usage to its power consumption on an example physical server. We randomly sample 10,000 usage readings from each of the Azure and Google traces. We then replay them on our physical server Figure 2: _Relationship between power consumption of a server and its CPU utilization (at fixed memory usage) and memory usage (at fixed CPU utilization) when replaying production workload traces on a physical server and monitoring power consumption using an external power meter._ connected to an external power meter that records server-level power consumption. To replay traces, we use the stress-ng tool [28] that stresses a server's CPUs, memory, and network interface based on the time-varying resource usage information provided in the workload traces. We only stress CPU and memory as both traces only provide CPU and memory usage information and measure the resulting power consumption of the server under that workload. Figure 2(a) shows the actual measurements of power when only the CPU was stressed, along with a cubic polynomial fit to the data using the ordinary least squares method. We note that even though power and CPU utilization exhibit a clear relationship, it is non-linear and will vary across different types of servers. Figure 2(b) then shows the actual measurement of power at varying memory usage at a fixed CPU utilization. We can see that the power consumption varies both with CPU and memory usage. We sample power data from the results of these experiments to define a power model, which converts usage information in our traces to power consumption. For example, if a job has a 50% CPU utilization and consumes 1GB of memory, we sample a random data point from the previous experiments that were run with these configurations. The variations in power for a given configuration are due to the use of other server components. Figure 2 also illustrates that, in general, servers are not energy-proportional, and thus may consume substantial, roughly static baseload power when idle. In Figure 2, the baseload power (105W) is \(\sim\)60% of peak power (175W). The baseload-to-peak power ratio varies widely across servers, generally between 30-70%. Our work focuses primarily on attributing a server's marginal power, i.e., between its baseload and peak power, to applications based on their resource usage, since attributing baseload power is largely a subjective policy choice. Our dissaggregation approach is compatible with any policy. As we discuss in SS4, we attribute a server's baseload power to applications in proportion to their resource usage (at any give time). However, another policy choice might be to first remove baseload power, and attribute it equally to all applications (regardless of their resource usage). ### Qualitative Analysis Using the power consumption traces we construct above, we analyze the workload characteristics relevant to disaggregation accuracy including power usage _variability_, _regularity_, and _intensity_. **Variability** refers to the extent or degree of fluctuation or variation in the power consumption of a given job over time. Our evaluation results in SS6 show that variability is one of the most important factors in determining disaggregation accuracy. This is intuitive: if a job has a non-variable, or constant, power consumption pattern, it is simple to disaggregate, as a model need only learn this constant pattern. 
We quantify variability using the Coefficient of Variation (CoV), which is defined as the ratio between the standard deviation of a time series over its mean. CoV can have values between 0 and \(\infty\) with a CoV greater than 1 typically considered high, i.e., with a standard deviation greater than the average. To illustrate, Figure 3 shows example time-series of average power (on the \(y\)-axis) for four jobs in the Azure trace with different values of coefficient of variation of 2, 0.5, 0.1, and 0. As expected, high CoV values translate to more frequent and larger variations in power usage, although, as the figure shows, these variations are not necessarily random. For a CoV of 2 (a), the power usage is highly random and does not repeat with any specific pattern. In contrast, a job with CoV of 0.5 (b) exhibits a distinct pattern of variability in power usage and appears to have a regular pattern that repeats over both 24 hour and 7 day intervals. Of course, a lower CoV does not always indicate a regular pattern of usage. Indeed, the job with CoV of 0.1 (c) does not exhibit any repeated pattern of usage, and is more volatile than the job with CoV of 0.5 Finally, a CoV of 0 (d) indicates a constant power consumption, such that jobs with low CoV values can be more easily disaggregated with higher accuracy. CoV is only one metric that correlates with disaggregation accuracy. We next look at the regularity of the power consumption, which is also related to disaggregation accuracy. **Regularity** refers to the degree to which a given job's power consumption follows a repeating _pattern_ over time. Prior work on building energy disaggregation shows that a variable time-series with periodic behavior improves disaggregation accuracy, regardless of its variability. If the pattern of power consumption is perfectly regular and always repeats the same pattern at regular intervals, e.g., every 24 hours, then a model need only learn this pattern to disaggregate a job's power consumption. To quantify regularity, we use time-series decomposition that distills our power usage time-series data into its trend, seasonality, and residual (or noise) components, and then apply period detection to the seasonal component [29]. The seasonal component represents patterns in the data that repeat over time, while the time between these repeated patterns represents the period. Of course, our time-series data is noisy such that similar, but not exact, patterns of power usage may repeat, and at periodic intervals that vary slightly. Thus, given the noise in the power usage data, simply translating it into the frequency domain and applying a threshold or using autocorrelation is not sufficient for accurate period detection, as discussed in prior work [30]. Specifically, application power usage, even when periodic, is often noisy with many interruptions and random load periods; in addition, periodic behavior also often exhibits increasing or decreasing trends with potentially wide variations in the peaks and troughs power usage [30]. Given the size of the dataset, we leverage builtin functions in the Azure Data Explorer [31] tool to detect regular periods in our power consumption, which includes an optimized implementation of the basic time-series decomposition functions above. Specifically, we used the series_periods_detect() function in Azure DataExplorer [32]. 
This period detection algorithm detects time-series periods and assigns them a score in the range \([0,1)\) where higher values indicate more intense and regular periodicity, i.e., with less deviation and noise in both the pattern and interval of repetition. The algorithm reports any periods detected with a non-zero score, and in most cases, it reports many periods for any given time-series. For example, a time-series that has a 4-hour period likely also has a 24-hour period, although variance in the pattern and interval of repetition may cause the 24-hour period to have a different score. In general, we focus our analysis on the most dominant period that has the highest score. Finally, our time-series decomposition analysis also assigns jobs without any periods a score of 0. To illustrate, Figure 4 shows power usage time-series for four jobs that have the same 24-hour period, but with different scores of 0.9, 0.5, 0.1, and 0. The figure shows how high scores translate to both high similarity in the pattern of power usage along with the interval between the patterns. Note that we call the repetitive pattern of power usage the _power signature_. For a score of 0.9 (a), the power signature is relatively simple, and nearly the same each time, and repeats at nearly precise 24 hour intervals. In contrast, a job with a 0.5 score (b) is more variable: there is clearly a repetitive pattern roughly every 24 hours (and also every 4 hours and at even smaller intervals), but the magnitude of the power signature, while often similar, exhibits some distinct variability. In particular, there is a large spike on the second day along with some smaller variations across the other days. Similarly, the job with 0.1 score (c) exhibits even more noise with a less apparent 24-hour period, while the job with 0 score (d) has no discernible period and appears to be random noise. Similarly, Figure 5 shows data from jobs with the same score of 0.5, but for different periods. Figure 3: _Illustrative time-series of power consumption for jobs (a-c) with different coefficient of variation (2, 0.5, 0.1), along with an example of a job with \(0\) coefficient of variation._ This figure demonstrates that time-series decomposition can recognize a range of different period intervals. The figures above show that high periodicity scores translate to power signatures that repeat at a regular periodic interval, such that a higher score represents more similarity and regularity with less noise. As the score decreases, the similarity in the power signatures and strength of the periodicity both decrease, but are still clearly evident, while the noise level, i.e., variability, increases. These empirical observations of periodicity on these and other jobs in our dataset indicate that any positive score represents some periodicity in the signal that is potentially useful in disaggregating application power usage. Likewise, Figure 4(d) shows that a 0 score indicates a random or noisy power with no discernible regular pattern of usage. **Intensity** refers to a job's average magnitude of power consumption. We quantify intensity using a job's average power with a range between 0 and the maximum server power. A high or low value of average power is better for disaggregation. A job with very high or very low intensity is easier to disaggregate compared with a medium-intensity job, since the high/low-intensity jobs have less room for variation in their power consumption. 
That is, if the average power consumption is near the server's maximum or minimum power it means that any deviations from the average must be brief. To illustrate, Figure 6 shows example time-series of power usage (on the \(y\)-axis) for three jobs that have different average power magnitudes of 15W, 60W, and 180W. The figure shows that both high and low magnitudes (a,c) have less room for variation and a less dynamic range of power consumption. As a result, the power consumption patterns at both extremes are, almost by definition, relatively constant. However, the average magnitude values, e.g., 100W (b), can come from highly variable power usage patterns, which makes accurate disaggregation more challenging. Figure 4: _Illustrative time-series of power consumption for jobs with detected periods of length 24-hours (a-c) with different scores (0.9, 0.5, 0.1), along with an example of a job with no period and a score of \(0\)._ Figure 5: _Illustrative time-series of power consumption for jobs with detected periods of length \(6\)-hours, \(3\)-days, and \(7\)-days, all with scores of \(0.5\)._ ### Large-scale Quantitative Analysis In this section, so far, we have defined the various characteristics of real-world workloads that can impact disaggregation accuracy and presented illustrative figures to develop an intuitive understanding of each metric. Below, we present results quantifying the presence of these characteristics in real workloads. Figure 7(a) shows a histogram of the CoV for each quintile between 0 and 1 for 10,000 jobs randomly sampled from the Azure trace, as well as the percentage of jobs with CoV greater than 1. In general, CoV's below 1 are considered low, i.e., with standard deviation less than the mean, and those above 1 are considered high. The graph shows that the vast majority (74.5%) of jobs in the Azure trace have CoV's less than 0.6. While most of these jobs have some variation (63.2% have CoVs between 0.2 and 0.6), it is generally low. In addition, only 12.3% of jobs have high CoV's greater than 1 that would make accurate disaggregation especially challenging. Thus, our large-scale analysis of variability indicates that the vast majority of jobs have low variability that is amenable to accurate disaggregation. Figure 7(b) next shows a histogram of the regularity in job power usage, where the \(x\)-axis represents periodicity scores in deciles, and the \(y\)-axis is the fraction of jobs with their highest periodicity score in that range. The analysis shows that over 91% of jobs exhibit some non-zero periodicity with over half of jobs exhibiting strong periodicity scores above 0.5. In contrast, only 9% of jobs exhibit no detectable periodic behavior in their power usage. Next, Figure 7(c) shows the distribution of average power consumption for the jobs. The graph shows that most jobs have very lower power consumption; 43% have less than 10W power consumption, assuming they run on a server with 200W maximum power, and 84.9% have less than 30W power consumption. These power consumption values correspond to roughly 5% and 15% resource utilization on a 200W server. Finally, in addition to the individual distributions, we show the distribution of CoV, periodicity, and magnitude for a randomly sampled 1000 VMs from the 10,000 jobs in Figure 7(d). In this graph, the magnitude of average power consumption is on the \(x\)-axis, the CoV is on the \(y\)-axis, and the size of each datapoint represents the periodicity score. 
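For reference, the three statistics underlying this analysis (coefficient of variation, a dominant-period score, and average magnitude) can be approximated offline with standard Python tooling. The sketch below is an illustrative stand-in, not the Azure Data Explorer series_periods_detect() implementation used here, which is more robust to noise, trends, and irregular repetition; its autocorrelation-based "score" is only loosely analogous to the 0-1 periodicity score discussed above.

```python
# Rough approximations of the per-job statistics in Figure 7.
import numpy as np

def cov(power):
    """Coefficient of variation: std / mean of a job's power time-series."""
    p = np.asarray(power, dtype=float)
    return float(p.std() / p.mean()) if p.mean() > 0 else 0.0

def dominant_period(power, max_lag=None):
    """Return (lag, score): the strongest autocorrelation peak over lags >= 1.

    With 5-minute samples, a lag of 288 corresponds to a 24-hour period.
    """
    x = np.asarray(power, dtype=float)
    x = x - x.mean()
    if x.std() == 0:
        return None, 0.0
    max_lag = max_lag or len(x) // 2
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf = acf / acf[0]                     # normalize so acf[0] == 1
    lag = 1 + int(np.argmax(acf[1:max_lag]))
    return lag, float(acf[lag])

def magnitude(power):
    """Average power (W) of the job."""
    return float(np.mean(power))
```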
The overall takeaway is that majority of jobs in our trace have low magnitude and CoV, while also frequently exhibiting regular and periodic patterns of power consumption. As a result, real-world workloads are highly amenable to energy disaggregation. In particular, our analysis above indicates that, while server applications do not have to exhibit regularity in their power usage, at production scales they tend to be highly regular with little noise. This is likely due to the fact that most jobs at large scales are deployed to serve a specific purpose and type of workload with a regular pattern of power usage. In addition, the periodic intervals cover a wide range, which indicates that different jobs have widely different patterns of power usage. In addition to distinctive periods, real jobs also tend to have distinctive power signatures during their active periods, i.e., the magnitude and pattern of power usage when active, relative to other jobs. Yet, these distinctive power signatures for each job tend to be highly similar across different active periods. The example jobs in Figures 4 and 5 illustrate both of these characteristics, i.e., distinctive power signatures across different jobs but similar within the same jobs. ### Implications of Analysis Our analysis above demonstrates not only is there significant potential to disaggregate application-level server power, but production workload characteristics suggest that disaggregation may actually be _much more effective_ when disaggregating server power compared to its original use in disaggregating building power into individual electrical loads for numerous reasons. Specifically, while many large electrical loads exhibit periodicity, they are often thermostatically-driven, so the periodic interval varies based on environmental conditions or user behavior. In contrast, from our analysis, many jobs' power usage tend be have deterministic periods, i.e., likely in some cases driven by timers as cron jobs. In addition, job power signatures tend to be Figure 6: **Illustrative time-series of power consumption for jobs with average magnitude of \(10\,\mathbf{W}\), \(100\,\mathbf{W}\), and \(180\,\mathbf{W}\).** highly distinctive and thus identifiable in the aggregate power due to the wide variability in how applications exercise resources. In contrast, electrical loads typically exercise power in highly similar ways, which makes distinguishing them in the aggregate power signal \(P(t)\) more challenging. For example, the large majority of electrical loads consist of either resistive heating elements (e.g., coffee makers, toasters, ovens, etc.), motors (e.g., vacuums, AC compressors, fans, etc.), or both (e.g., electric dryers), which have similar power signatures [33; 34]. Finally, given our analysis above, there will likely be few co-located jobs on any server that either have no periods, or have the same overlapping period interval (which would reduce disaggregation accuracy). Further, even if multiple "noisy" jobs with no period were co-located, their power usage is likely to be low, and thus likely to only minimally affect the accuracy of disaggregating other jobs that oscillate between periods of high power usage and low power usage. ## 4 WattScope Design In this section, we present the design WattScope, our system for non-intrusively monitoring application-level power consumption by disaggregating power data from external server- and rack-level power meters. 
Figure 7: _Distribution of coefficient of variance (a), periodicity score (b), and magnitude (c) from a random sample of 10,000 jobs in Azure workload trace. The analysis shows that job power consumption at large scales is less variable and highly periodic. The average power consumption is also low and consistent across jobs regardless of their CoV or score. Finally, (d) shows the CoV, periodicity score, and magnitude for for 1000 jobs to illustrate most jobs score low on CoV and magnitude._ Below, we describe WattScope's overall system architecture and its different components for training disaggregation models and using them to disaggregate external power data. Figure 8 shows WattScope's architecture for non-intrusive application-level power monitoring. WattScope is implemented as a cluster-level system for monitoring application-level power in datacenters that does not require any hardware or software support on the servers running the applications. The only requirement is a network-accessible external power meter that monitors each server's power, which most datacenters already have. As in a typical datacenter, WattScope assumes users submit their workload to a cluster manager in the form of individual jobs or tasks, which run inside containers or VMs. The cluster manager schedules the jobs on one of the servers depending on resource availability and job placement constraints. The cluster manager, or the node manager, keeps track of the high-level category and placement for each job, such as their scheduling priority, nature of the job (service, batch, or interactive), and user information. All datacenters need such information for scheduling and billing purposes. Optionally, the cluster manager may collect information on the resource usage for all the jobs, such as CPU utilization and memory usage. While such information is _not necessary for disaggregation_, WattScope can opportunistically use it during the training process if available. There is also an external power monitoring server that records the power consumption at the server- or rack-level over a pre-defined sampling interval \(\Delta t\) and exposes that information through an API. WattScope takes the server- or job-level meta information and rack- or server-level aggregate power consumption and reports the server- or job-level power consumption. While WattScope can disaggregate rack-level power consumption into servers, we focus primarily on disaggregating server-level power into application-level power consumption information in the remainder of this section. Importantly, WattScope assumes that resource allocations for applications are reserved and not best-effort. If resources are allocated best-effort, then applications' variations in resource usage and power is a function of not only their own behavior, but also the behavior of co-located applications. In this case, the scheduler would dictate applications' resource and power variations rather than their own behavior, which would prevent training our models below. However, our assumption of reserved resources generally holds for production schedulers in industry, such as Borg [35]. While production schedulers do overcommit resources, i.e., by reserving allocating more resources than exist on a server, they attempt to minimize application throttling, i.e., where applications attempt to use their reserved resources but they are not available. Prior work shows that production schedulers rarely throttle applications [36]. 
As a result, a cluster manager's effectiveness at packing jobs onto servers has no impact on the variability of a job's resource usage. In addition, the type of server a job runs on also affects its resource usage characteristics and power, which also affects the efficacy of disaggregation models. In many cases, since jobs are not throttled, changing server types will only alter the magnitude of the resource usage and not its variations. In addition, datacenters often operate large clusters of homogeneous servers, which facilitates training models for each server family. In general, there is a tradeoff between model accuracy and modeling cost. WattScope consists of three key components (or modules): an offline model trainer, an online disaggregator, and an online performance monitor. Below, we describe the function of each module in detail. ### Model Trainer The model trainer module's task is to train a library of models that can then be used by the disaggregator. While the model trainer can be modified to work as an online module, we design it as an offline module that Figure 8: WattScope _overview and its three key components: model trainer, disaggregator, and performance monitor._ trains and stores multiple models. The model trainer takes three inputs: (i) ground truth application-level power usage, (ii) aggregate server-level power usage, and (iii) meta information about the applications. **Inputs.** First, to train an energy disaggregation model, we need ground truth application-level power data. However, as discussed in SS3, physically monitoring per-application power usage is not possible. To solve this problem, we use alternative methods that provide approximate power consumption with varying levels of accuracy. In particular, our approach is to use data collected by an intrusive software-based method for application-level power monitoring, such as PowerAPI [25]. This data can be collected in the same datacenter on a subset of machines running the representative workload or in another datacenter that has similar workload characteristics. However, such fine-grained per-application power monitoring is not deployed in practice. Thus, another option is to use the resource usage information, such as CPU and memory utilization, as a proxy for estimating power consumption using a power model, such as the one we used in SS3.1. Second, we need the aggregate power consumption information for the server, which is typically collected in datacenters for power management, billing, and cooling purposes. Third, the metadata information about the servers and applications is used as a key for distinguishing trained models, which can later be used by the disaggregator to select a model depending on the characteristics of the workload running on the server. This information can include job type, hashed user information, job priority, or any other information that can be used to identify a given job or class of jobs. **Training.** Given our problem's similarity to energy disaggregation in buildings, we examined numerous existing approaches from the domain of building energy disaggregation [37]. In general, building energy disaggregation techniques require a per-load model that captures characteristics of each load's pattern of energy usage. These models were initially simple and pre-configured, e.g., by specifying a small number of discrete power states for each load, based on _a priori_ knowledge of each load. 
However, recent approaches have instead captured loads using machine learning models, e.g., neural networks, trained on datasets of buildings where each load's power is separately monitored to provide ground truth [37, 20]. While we evaluate many of these approaches in SS5, we adapt and extend a recently proposed sliding window approach that uses deep neural networks (DNNs) as the basis for WattScope[38]. As we show in SS5, this technique provides the highest accuracy, in part, because it is best suited for the characteristics of loads like server applications that have multiple (or continuous) power states. Specifically, our sliding window approach takes a window \(w\) of data points as input that represent the past \(w\) samples of a server's aggregate power, e.g., \([P(t-100),P(t-99),...,P(t-1),P(t)]\). As discussed in SS3, since we use the server's aggregate power, rather than its marginal power, our approach implicitly attributes a server's baseload power to applications in proportion to their resource usage (at any give time). This input feeds into a convolutional layer, which in-turn feeds into two bidirectional GRU (Gated Recurrent Unit) layers and two dense layers, such that dropout units are inserted between these layers. Each of the layers uses an ReLU (rectified linear unit) activation function except the last dense layer, which uses a linear activation function. Ultimately, the output is the disaggregated power usage \(p_{i}(t)\) for load \(i\) at time \(t\). Prior work has shown that replacing the GRU layers with LSTM layers results in similar accuracy across a wide range of loads, but requires both more memory and computation for training. The model inserts dropout units, which probabilistically drops outputs, to both prevent over-fitting and improve robustness with respect to missing values. The window size is generally set to \(50-100\) datapoints, although the optimal window \(w\) may vary for different loads. We discuss our model's specific implementation details in SS5, including the configuration and hyperparameters of each layer. Note that a specific application's load model is trained using data from multiple servers, which may operate different sets of applications with different characteristics. **Model library.** The set of background applications running on a server can significantly vary, over time and across different servers, in terms of the number of applications and their characteristics. As a result, it is not possible to train a model for each combination. If our power usage trace provides information on the co-location of various applications, we train models for the most common co-location scenarios. There is a trade-off between the number of models trained and disaggregation accuracy; the higher the number of models, the higher the accuracy and vice versa. If the power usage trace does not provide co-location information, such as in our Azure-based power trace, we randomly select different applications on a server and train models for wide range of application combinations. All of the models are indexed by the metadata information about the applications whose data was used to train a given model. For example, a model trained with applications that have low variability, low regularity, and medium intensity is saved with this information as a label, enabling it to be selected by the disaggregator's model selector for applications with similar characteristics. 
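To make the training step above concrete, the sliding-window network can be written compactly in Keras. The sketch below follows the layer ordering described here and the sizes reported in the implementation (SS5): a window of 100 aggregate-power samples, a convolutional front-end with 16 filters of size 4, bidirectional GRU layers of size 64 and 128 with dropout of 0.5, and dense layers of size 128 and 1. It is an illustrative reconstruction rather than the authors' code, and the optimizer and loss choices are assumptions.

```python
# Illustrative sketch of the sliding-window disaggregation network.
import numpy as np
from tensorflow.keras import layers, models

WINDOW = 100  # past aggregate-power samples per input (SS5)

def build_disaggregator(window=WINDOW):
    model = models.Sequential([
        # Convolutional front-end over the window of aggregate power values.
        layers.Conv1D(16, 4, activation="relu", padding="same",
                      input_shape=(window, 1)),
        # Two bidirectional GRU layers with dropout between layers.
        layers.Bidirectional(layers.GRU(64, return_sequences=True),
                             merge_mode="concat"),
        layers.Dropout(0.5),
        layers.Bidirectional(layers.GRU(128), merge_mode="concat"),
        layers.Dropout(0.5),
        # Dense head: ReLU layer, then a linear output giving one job's power.
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1),
    ])
    # Optimizer/loss are illustrative assumptions, not the paper's choices.
    model.compile(optimizer="adam", loss="mae")
    return model

def make_windows(aggregate, job_power, window=WINDOW):
    """Build training pairs: sliding windows of server power -> one job's power."""
    X = np.stack([aggregate[t - window:t] for t in range(window, len(aggregate))])
    y = np.asarray(job_power)[window:len(aggregate)]
    return X[..., np.newaxis], y

# Training would then look roughly like:
#   model.fit(X, y, epochs=50, batch_size=1024)   # per the settings in SS5
```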
### Disaggregator Formally, our disaggregation problem can be stated as follows: given a certain number of applications running on a server, as indicated by the cluster-level scheduler, and the server's aggregate power consumption \(P(t)\) at time \(t\), we need to infer the average power consumption \(p_{i}(t)\) (over a sampling interval \(\Delta t\)) attributable to each application \(i\). WattScope operates in real-time by inferring each application's average power usage over \(\Delta t\) immediately after the external power meter reports a new power sample for the server. While we assume a 5-minute sampling interval \(\Delta t\) to match the resolution of our production trace, our approach is applicable to any sampling interval on the order of seconds-to-minutes. Note that our approach can also disaggregate average power usage (or equivalently energy) over coarser time intervals than the sampling interval by simply averaging the inferred power usage over the coarser interval, e.g., to infer an application's energy usage over a month for billing purposes. In general, the longer the interval, the more accurate the disaggregation. The model selector component of the disaggregator selects a model from the library for disaggregation depending on the characteristics of the applications running on the server. As the metadata information provides high-level information for the applications, collectively used as a label for a model during the training phase, the model selector uses that information to pick an appropriate model. As the current combination of applications on the server may not exactly match any of the trained models, the model selector chooses the closest model to use for disaggregation. The metric used to quantify "closeness" is subjective and depends on the operator's choice. ### Performance Monitor The performance monitor module keeps track of the currently deployed disaggregation model and sends feedback to the model selector if the accuracy of the current model starts to degrade. To quantify the performance of a given model, it compares the total allocated power to the applications with the ground truth aggregate power consumption. Under normal circumstances, the error should be under some pre-defined threshold. However, the error will increase if the number of applications or their characteristics change. If a high error persists for more than a specified period of time, it sends a signal to the model selector to select a new model for disaggregation. ## 5 Implementation We implemented WattScope and seeded it with multiple job models trained using data from large-scale Azure and Google production workload traces described in SS3. The Google trace includes job co-location information, i.e., which jobs are co-located on the same server, while the Azure trace does not. Since the Azure trace only includes job-level average resource usage statistics, every 5 minutes and not server-level placement information or power data, we use the trace to construct our own synthetic ground truth power data for training. Specifically, we assume all jobs run within VMs (or containers), and each is given an equal number of resources on a server and not throttled. To provide some context, we ascribe a maximum power of 200W to each job based on its resource allotment, such that each VM can independently consume up to 200W when operating at full utilization, as in prior work [39]. We also assume that power consumption is related to utilization based on the function from Figure 2. 
We then construct training datasets by simulating the mixing of different jobs together on the same server, such that the server's total power is 200W\(\times n\), where \(n\) is the number of jobs. So, for example, a server in our training data that runs 5 jobs has a peak power of 1,000W (or 5\(\times\)200W). Our contextual parameterization of 200W is arbitrary, and only relevant to experiments that cite power values. In most cases, we report normalized results that are not dependent on server power. Note that, since we derive our ground truth power data from a resource-power model, our experimental results using this data do not incorporate inaccuracies due to this model, but only quantify inaccuracy due to disaggregation. There is substantial prior work on accurately modeling the relationship between resource usage and power [25]. While we leverage this work, our approach is orthogonal to it. Our approach above is general, and can apply to servers at any level of power consumption. Also, note that the approach above would not mimic reality if the jobs running on a server consumed the entire CPU, since at that point, they would conflict with each other in a way that would affect their utilization and power consumption. However, prior work has shown that such conflicts are exceedingly rare, and cluster schedulers include sophisticated algorithms to avoid them even when overcommitting resources [36]. For the Azure trace, we use the approach above to generate a large number of synthetic training datasets for different specific jobs and job classes running on servers with a range of different other jobs and job classes. As discussed in SS4, the training data set for a particular job takes a prior window \(w\) of aggregate server-level power as input, and produces the job's disaggregated power. In SS6, we evaluate disaggregation accuracy with models trained in a variety of different ways, from more-to-less specific. We implement a number of disaggregation models by modifying nilmtk-contrib [20], an open-source toolkit implementation of numerous algorithms for energy disaggregation of buildings. In particular, we replace nilmtk-contrib's existing training data sets with the synthetic training data above, and also eliminate pre-existing configuration meta-data that is specific to particular building loads, e.g., refrigerators, ACs, etc., to make our implementation generic. The toolkit includes numerous other benchmark algorithms, which we evaluate below. We use two primary metrics for evaluating WattScope's accuracy: Mean Absolute Error (MAE) and Normalized Mean Absolute Error (NMAE), shown below. MAE is simply the average of the absolute difference between the inferred power \(\hat{p}_{i}(t)\) of VM \(i\) and its actual power \(p_{i}(t)\) over all times \(t\), while the NMAE is the MAE normalized by the job's mean power.

\[MAE_{i}=\sum_{t=1}^{T}\frac{|\hat{p}_{i}(t)-p_{i}(t)|}{T} \tag{1}\]

\[NMAE_{i}=\frac{MAE_{i}}{\frac{1}{T}\sum_{t=1}^{T}p_{i}(t)} \tag{2}\]

The MAE is in units of watts (W) and shows the absolute error in WattScope's inferred power, while the NMAE quantifies the error as a percentage of a job's mean power. In general, low power jobs tend to have higher NMAEs even when their MAE is low in an absolute sense, especially since low power jobs are more challenging to disaggregate from server power that may be much higher. Likewise, high power jobs may have low NMAEs even when their MAE is comparatively high.
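Both metrics translate directly into code. The short sketch below is illustrative (array and variable names are assumptions); it also includes a sanity check that the inferred per-job powers sum back to the measured server power, per the disaggregation constraint \(P(t)=\sum_{i}p_{i}(t)\).

```python
# Illustrative implementations of Equations 1 and 2.
import numpy as np

def mae(p_hat, p):
    """Mean Absolute Error (W): average |inferred - actual| job power."""
    return float(np.mean(np.abs(np.asarray(p_hat) - np.asarray(p))))

def nmae(p_hat, p):
    """Normalized MAE: MAE divided by the job's mean actual power."""
    return mae(p_hat, p) / float(np.mean(p))

def aggregate_residual(server_power, inferred_jobs):
    """How far the sum of inferred job powers is from the measured P(t)."""
    total = np.sum([np.asarray(v) for v in inferred_jobs.values()], axis=0)
    return mae(total, server_power)
```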
Thus, in our evaluation, we contextualize these results relative to standard benchmark approaches. In particular, we compare with the NMAE and MAE for a baseline approach that always infers a job's power to be its mean power over the time interval. We call this the 'Mean' model. We implemented WattScope, and trained and evaluated our models, on an Intel Xeon Silver 4214R CPU with 12 Cores at 2.4 GHz and 128 GB RAM. WattScope's model trainer leverages a neural network that takes as input a sliding window of aggregate power values to infer a job's power, as discussed in §4.1. We train WattScope's neural network for 50 epochs with a batch size of 1024. We also optimized the training by fine-tuning the hyperparameters based on prior work [38]. Specifically, our sliding window model uses a window size \(w\) of the 100 previous datapoints, i.e., aggregate power values that first feed into a convolutional layer with 16 filters of size 4 with stride and a rectified linear unit (ReLU) activation function; this layer feeds into a bidirectional gated-recurrent unit (GRU) with size 64 and a concat merge mode followed by a dropout unit with weight 0.5; this layer then feeds into another layer similar to the one before, but of size 128; this layer finally feeds into two dense layers of size 128 (with ReLU activation function) and 1 (with linear activation function), respectively, with another dropout unit of weight 0.5 between them. We compared WattScope's approach above with a wide range of different disaggregation models implemented by nilmtk-contrib [20]. \begin{table} \begin{tabular}{||c|c|c|c|c|c|c|c|c|c|c|c|c||} \hline \hline \multirow{2}{*}{**Models**} & \multicolumn{6}{c|}{**MAE (W)**} & \multicolumn{6}{c||}{**NMAE (\%)**} \\ \cline{2-13} & **job1** & **job2** & **job3** & **job4** & **job5** & \begin{tabular}{c} Aver \\ -aged \\ \end{tabular} & **job1** & **job2** & **job3** & **job4** & **job5** & \begin{tabular}{c} Aver \\ -aged \\ \end{tabular} \\ \hline \hline Mean & 20.88 & 4.95 & 20.33 & 41.80 & 2.78 & 18.15 & 36.93 & 27.46 & 38.99 & 29.55 & 29.71 & 32.53 \\ \hline CO & 56.44 & 23.00 & 53.85 & 134.27 & 9.33 & 55.38 & 99.81 & 127.63 & 103.28 & 94.91 & 99.70 & 105.07 \\ \hline \begin{tabular}{c} Exact \\ -FHMM \\ \end{tabular} & 16.40 & 5.75 & 18.63 & 41.28 & 3.36 & 17.08 & 29.00 & 31.90 & 35.73 & 29.18 & 35.96 & 32.35 \\ \hline DAE & 16.79 & 5.00 & 18.37 & 39.54 & 2.73 & 16.49 & 29.70 & 27.76 & 35.23 & 27.95 & 29.22 & 29.97 \\ \hline RNN & 17.85 & 4.89 & 19.72 & 39.43 & 2.74 & 16.93 & 31.57 & 27.11 & 37.83 & 27.87 & 29.33 & 30.74 \\ \hline Seq2Seq & 18.30 & 4.51 & 18.09 & 37.66 & 2.63 & 16.24 & 32.37 & 25.03 & 34.70 & 26.62 & 28.12 & 29.37 \\ \hline Seq2Point & 16.31 & 4.39 & 17.29 & 35.08 & 2.58 & 15.13 & 28.84 & 24.35 & 33.16 & 24.80 & 27.55 & 27.74 \\ \hline _WattScope_ & **11.02** & **3.76** & **13.10** & **29.61** & **2.39** & **11.98** & **19.49** & **20.87** & **25.12** & **20.93** & **25.56** & **22.40** \\ \hline \hline \end{tabular} \end{table} Table 1: _Errors in disaggregating the power of five different representative jobs running on the same physical server._ Table 1 shows that WattScope's approach generally has the lowest or near-lowest MAE, and is also consistent across five different job types. For this experiment, all five of these jobs ran on the same physical server, and we trained each model over 7 days and then tested its accuracy over the remaining length of our trace. By contrast, the other models have more variable accuracy.
For example, Combinatorial Optimization (CO) has poor accuracy on job 4, but much better relative accuracy on job 5. Table 1 similarly shows the normalized MAE for the same experiment. In all cases, WattScope yields the lowest MAE and NMAE when inferring each job's disaggregated power. The table also shows the MAE and NMAE between the actual aggregate power and the inferred aggregate power computed based on the sum of the inferred power of each job. ## 6 Evaluation In this section, we evaluate WattScope for its accuracy in non-intrusively disaggregating total server power consumption into job-level power consumption. We first present qualitative results illustrating the high disaggregation accuracy of WattScope and provide some intuition for our quantitative metrics (§6.1). We next evaluate how job characteristics such as variability, regularity, and intensity affect disaggregation accuracy (§6.2). We then present quantitative results for disaggregating server-level power to job-level power based on actual co-location information in a production trace, which demonstrates how WattScope would work in practice (§6.3). Finally, we evaluate WattScope's disaggregation approach for its scalability, robustness, and generalization (§6.4). Note that for most experiments we use application-specific disaggregation models, i.e., trained on the application's power data. We quantify the inaccuracy due to using a general model that is not application-specific in §6.4. In evaluating WattScope, we assume that our system knows the characteristics of jobs on a server and uses them to select the appropriate model for disaggregation. Evaluating the performance of our model selector or performance monitor is outside the scope of this paper. Figure 9: _Time-series of actual and inferred (disaggregated) power usage of a job for the four representative jobs with different detected periods with high scores, similar coefficients of variation, and different intensities. Each panel's caption states the period, mean absolute error (MAE), and normalized mean absolute error (NMAE)._ ### Qualitative Results To provide an intuitive meaning to the quantitative results in the following sections, we present the time series of four representative jobs from the Azure trace. Figure 9 shows the time series of the actual and inferred (or disaggregated) power for each of the four jobs, along with the actual and inferred average power. The graphs illustrate that WattScope's disaggregated power is highly accurate and the actual and inferred power closely match for all of the jobs, as do the actual and inferred average power. While we use the more intuitive NMAE metric for the rest of this section, a high NMAE value does not necessarily mean poor disaggregation performance. NMAE can be quite high for jobs with low intensity: the 12h job shown in Figure 9(b) has an NMAE of 38% due to its average power of less than 5W. ### Effect of Job Characteristics As discussed in §3, a job's characteristics impact disaggregation accuracy. We next conduct experiments that decouple the effect of different job characteristics on disaggregation accuracy. In particular, we evaluate the impact of _variability_, _regularity_, and _intensity_. To do so, we sample jobs from the Azure trace with desired characteristics and synthesize servers with desired co-location of jobs. For example, for the left-most bar in Figure 10(a), we select 50 jobs that have CoV between 0 and 0.2, periodicity score of less than 0.2, and magnitude of greater than 100W.
We next split the 50 jobs into 10 servers, each hosting 5 jobs. We then disaggregate the power of all individual jobs at once and report the average values as well as the confidence interval. We describe the choice of jobs and their co-location settings when evaluating each factor. _Effect of Variability._ Figure 10(a) shows the disaggregation error (in NMAE) on the \(y\)-axis and the coefficient of variation (CoV) on the \(x\)-axis when the two other variables are controlled. The graph shows that, as the coefficient of variation increases, the error increases. The effect of CoV is less prominent for both low and high intensity settings since, at low and high power consumption, variability is bounded by the lower limit of 0 and the higher limit of the server's maximum power, respectively. At medium intensity, an increase in CoV results in a significant increase in variability and, thus, power disaggregation error increases. _Effect of Regularity._ Figure 10(b) shows the disaggregation error (in NMAE) on the \(y\)-axis and the periodicity score on the \(x\)-axis when the variability and intensity are controlled. As the periodicity score increases, we observe a downward trend in the disaggregation error, which is expected as more regular jobs are easier to disaggregate. However, we observe a very high error for the left-most bar, where we have high variability (high CoV) and high intensity (high average power). This happens because periodicity is not the only factor that affects disaggregation accuracy: when high variability combines with high intensity, it is challenging for our disaggregator to infer the power consumption of 5 jobs that have random and high power usage. _Effect of Intensity._ Figure 10(c) shows the disaggregation error (in NMAE) on the \(y\)-axis and intensity (power usage magnitude) on the \(x\)-axis when the variability and regularity are controlled. The effect of magnitude is only visible at the medium and low variability settings, as at high variability (yellow bar) the effect of variability dominates and results in a high error, with a slightly higher error at the medium magnitudes. Figure 10: _Effect of job characteristics: (a) effect of variability, quantified as CoV, when both regularity and intensity are controlled, (b) effect of regularity, quantified as periodicity score, when both variability and intensity are controlled, and (c) effect of intensity, quantified as average power, when both variability and regularity are controlled. Each bar represents the average across 50 jobs spread across 10 servers and error bars show the 90th percentile confidence interval across jobs. In total, each subfigure shows the disaggregation accuracy for 750 distinct jobs._ **Key Point.** _The results above show that disaggregation accuracy is a function of a job's variability, regularity, and intensity. In general, variability tends to be the dominant metric in dictating disaggregation accuracy, with regularity being the next most important metric, followed by intensity._ ### Large-scale Job-level Disaggregation In the previous section, we looked at the individual characteristics of jobs by synthesizing servers with five jobs each, which allowed for controlled experiments. However, actual production environments have a larger number of jobs on each server and may not always place similar jobs on the same server to avoid resource contention.
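Before turning to these large-scale results, the following sketch shows one way the three job characteristics used above can be computed from a job's power trace. The CoV and intensity definitions are standard; the periodicity score below is an assumed autocorrelation-based proxy, since the paper's exact score (introduced in §3) is not restated here.

```python
import numpy as np

def coefficient_of_variation(power):
    """Variability: standard deviation divided by the mean."""
    power = np.asarray(power, dtype=float)
    return float(power.std() / power.mean())

def intensity(power):
    """Intensity: mean power magnitude in watts."""
    return float(np.mean(power))

def periodicity_score(power, period):
    """Regularity proxy: autocorrelation at a candidate period (assumed
    stand-in for the paper's periodicity score, not the authors' metric)."""
    x = np.asarray(power, dtype=float)
    x = x - x.mean()
    den = float(np.sum(x * x))
    if den == 0.0 or period >= len(x):
        return 0.0
    return float(np.sum(x[:-period] * x[period:]) / den)

# Example: a job with a clean daily cycle (288 five-minute samples per day)
# has a high periodicity score and a moderate coefficient of variation.
t = np.arange(288 * 7)
daily_job = 100 + 50 * np.sin(2 * np.pi * t / 288)
print(coefficient_of_variation(daily_job), periodicity_score(daily_job, 288))
```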
To evaluate the performance of WattScope in real-world settings, we use a power consumption trace based on the Google Cloud trace, which provides the actual co-location information, i.e., which jobs run on the same physical servers. We randomly selected 1,100 servers from the trace, where each server hosts 40 jobs on average. On each server, we select one job at a time to disaggregate, while treating the others as background jobs. As our disaggregator takes \(<\)1ms to disaggregate a single job for a single timestep, our method can scale to a large number of jobs. Figure 11 shows the error in disaggregating power consumption of a given job on 1,100 different servers that differ in the number and characteristics of the jobs they run. We have ordered the servers by the Coefficient of Variation (CoV) for the disaggregated job from low (left) to high (right). We make two key observations from this experiment. First, most of the jobs (760 out of 1,100 or \(\sim\)69%) have a very low error of 10% or less, and a very small number of jobs (86 out of 1,100 or \(\sim\)7.81%) have a higher than 20% error. The worst-performing job has an NMAE of 90%, but less than 3W of mean absolute error (MAE). This shows that WattScope is highly accurate in disaggregating the power consumption of jobs even in the presence of a large number of jobs on the server in practical settings. The average error is 9.20%, which is very small considering the variations across servers and jobs. Second, overall, the value of NMAE increases as the CoV increases, indicating poorer disaggregation accuracy for jobs with high variability in their power consumption. However, the trend is not smooth as other factors, such as the regularity and the intensity of the power consumption for a job, also affect the power disaggregation accuracy. Figure 12 shows the mean absolute error in disaggregating power consumption of a given job for the same set of servers as in Figure 11. Most jobs (1,034 out of 1,100) have less than 10W of error. This leads to a very small average MAE of 3.69W. Even the worst-performing job has an MAE value of 42W, which is around 20% of the maximum server power in our experimental setup. **Key Point.** _Disaggregation accuracy is high for the vast majority of jobs in production due to their low variability and high regularity._ Figure 11: **Normalized Mean Absolute Error (NMAE) in disaggregating a job’s power consumption on the \(y\)-axis for 1,100 servers from the Google trace on the \(x\)-axis. The average error across all the servers is 9.26%. Each server runs 40 jobs on average. Servers are sorted in the order of increasing Coefficient of Variation (CoV) for the disaggregated job from 0.01 (left most) to 3.75 (right most).** Figure 12: **Mean Absolute Error (MAE) in disaggregating a job’s power consumption on the \(y\)-axis for 1,100 servers from Figure 11 on the \(x\)-axis. The average error across all the servers is 3.69W.** ### Scalability, Robustness, and Generalization In this section, we evaluate WattScope's ability to scale to a large number of jobs, robustness to the number of samples used for training, and generalization in using a model trained for a given job to disaggregate another job with similar characteristics but in a totally different environment. **Scalability.** Figure 13(a) shows the average power disaggregation error as the number of jobs on the server increases. Remember, in this experiment, we disaggregate a single job in the presence of a varying number of background jobs.
Each bar represents an average across 10 experiments. The decreasing trend of disaggregation error with an increasing number of jobs is the result of statistical multiplexing of power usage from background jobs. When the number of jobs on a server is small, the background jobs show significant variation in their usage, making the disaggregation of the desired job harder. As the number of jobs on the server increases, the variability of the background jobs decreases due to statistical multiplexing, and the aggregate of background jobs becomes easier to separate from the desired job. However, it must be noted that the model used changes with the number of background jobs. WattScope needs to train multiple models with different numbers of background jobs and select an appropriate model for disaggregation at runtime, which creates a trade-off between the disaggregation accuracy and the training cost. **Robustness.** We next examine how the length of the training period for each job's model affects the power disaggregation accuracy. Figure 13(b) shows the length of the training period for the job's model (ranging from 500 samples to over 4,000 samples) on the \(x\)-axis and the average NMAE on the \(y\)-axis across all the jobs. In this case, we have on average 40 jobs co-located on each server as present in the Google trace and we are trying to disaggregate all the jobs one at a time. As expected, as the training period increases, the error tends to decrease. However, the reduction in disaggregation error is marginal once 1,500 samples have been used for training. In our case, each sample is collected over 5 minutes and the 500 samples roughly correspond to 2 days while 4,000 samples correspond to 16 days. Since these jobs are long running (31 days), using up to 6 days (1,500 samples) is feasible. It must also be noted that the wallclock time in days is purely a function of data collection granularity. If data is collected every minute instead of every 5 minutes, the same level of accuracy can be achieved using training data collected in one day. **Generalization.** In our scalability experiments, we mentioned that we need to train a model for each number of background jobs, which can be costly in terms of training time and resources. However, the cost of training can be significantly reduced if we are able to use a single model for a similar set of jobs. To test the generalizability of WattScope's disaggregator, we train a model on a job with a given CoV and use it to disaggregate a job with CoV in the same range but running on a different server. Furthermore, the other server does not have the same number of background jobs as the server used for training. Figure 13(c) presents the results for our experiments where the \(x\)-axis is the coefficient of variation and the \(y\)-axis is the average NMAE. The left bar (yellow, slanted pattern) represents the accuracy of the trained model on the same job while the right bar (red, horizontal pattern) represents the accuracy when the model is used on a different server to disaggregate a similar job but with a different number and characteristics of background jobs. The overall results show a high disaggregation accuracy that degrades with the increase in CoV. This indicates that the variability of the power usage trace is a stronger factor in determining the disaggregation accuracy than any other factor, even generalization.
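These generalization results, combined with the model-selector design in §4, suggest that a simple characteristic-based lookup may suffice in practice. The sketch below assumes CoV distance as the "closeness" metric, which is only one possible choice; the paper deliberately leaves the metric to the operator, and the model names here are placeholders.

```python
import numpy as np

def coefficient_of_variation(power):
    power = np.asarray(power, dtype=float)
    return float(power.std() / power.mean())

def select_model(job_power_trace, model_library):
    """Pick the trained model whose CoV label is closest to the new job's CoV.

    `model_library` maps a representative CoV value to a trained disaggregation
    model; using CoV distance as the closeness metric is an assumption here.
    """
    cov = coefficient_of_variation(job_power_trace)
    closest = min(model_library, key=lambda label: abs(label - cov))
    return model_library[closest]

# Placeholder example: strings stand in for trained disaggregation models.
library = {0.1: "model_low_cov", 0.5: "model_mid_cov", 1.5: "model_high_cov"}
trace = np.array([90.0, 110.0, 95.0, 105.0])
print(select_model(trace, library))  # -> "model_low_cov"
```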
**Key Point.** _Our experimental results show that WattScope i) scales well as more jobs run on each server, ii) is robust as the amount of training data decreases, and iii) enables the use of generalized models trained on similar applications with medium-to-low CoVs at similar accuracy._ Figure 13: WattScope Performance: (a) error in disaggregating a single job as the number of background jobs increases, (b) the effect of the size of the training data, in number of samples used for training, and (c) generalization to disaggregating similar jobs on different servers. In (c), the baseline bar represents the error when using the model to disaggregate the same job that was used for training, and the swapped bar represents the error when using the model to disaggregate a similar job on another server. Each bar in the generalization results represents the average across 20 experiments. ### Production Experiments In this section, we evaluate WattScope's performance in disaggregating power consumption for jobs that run in a conventional physical datacenter cluster. Our cluster comprises 40 Dell PowerEdge R430s with 16-core Intel Xeon processors and 64GB of memory. We randomly sample 30 servers from the Google trace and replay all of the jobs on our servers using stress-ng [28] for 3 weeks. To get the ground-truth power consumption for a given job, we run it in isolation and record its power consumption. Similar to our analysis in §6.1, Figure 14(a) presents the time series of actual and inferred (disaggregated) power usage for the two jobs with the best NMAE (top) and the worst NMAE (bottom) from our production experiments consisting of 30 VMs, along with the actual and inferred average power. The graph illustrates that WattScope's disaggregated power matches well with the actual power consumption observed for the job for both NMAE values. For the worst NMAE scenario, the disaggregated power deviates from the actual power consumption but closely matches the trend. Figure 14(b) and Figure 14(c) present the distribution of errors using NMAE and MAE metrics, respectively. Similar to our large-scale evaluation results, more than 70% of the jobs have an NMAE of less than 10%, and more than 90% of the jobs have an MAE of less than 5W. Our results demonstrate that WattScope performs well on real power consumption traces from jobs running in conventional datacenters and can be deployed in practice. ## 7 Conclusion We design WattScope, a model-based system for non-intrusively estimating the power consumption of individual applications using external measurements of a server's aggregate power usage and without requiring direct access to the server's operating system or applications. WattScope is widely applicable in datacenters, which typically meter individual servers for management and billing. WattScope addresses key problems with traditional application-level power monitoring techniques, which are **intrusive**: they require running privileged software to monitor fine-grained resource utilization and hardware support that is not always available. Our key insight (§3) is that, based on an analysis of production traces, the power characteristics of datacenter workloads, e.g., low variability, low magnitude, and high periodicity, are highly amenable to disaggregation of a server's total power consumption into application-specific values. We present WattScope for disaggregating server- and rack-level power meter measurements, which are already available in data centers, into job- and server-level power information, respectively.
We extensively evaluate WattScope's accuracy on a production workload and show that it yields high accuracy, e.g., often \(<\sim\)10% normalized mean absolute error. Our key insight that enables accurate disaggregation is the generally low variability and high regularity of production applications in industry traces, as shown in §3. This insight is more broadly applicable to general scheduling and resource problems in datacenters, including placing jobs and overcommitting resources. In the future, we plan to explore other implications of this insight. We also plan to explore methods for improving model selection by inferring an application's runtime characteristics, in terms of variability, regularity, and intensity, from its metadata, such as the characteristics and constraints in its resource request. **Acknowledgements.** This research is supported by NSF grants 2213636, 2136199, 2106299, 2102963, 2105494, 2021693, 2020888, 2045641, as well as VMware. Figure 14: WattScope performance in production: (a) time-series of actual and inferred (disaggregated) power usage for the two jobs with the best NMAE (top) and the worst NMAE (bottom) from our production experiments consisting of 30 VMs, (b) NMAE distribution (7.12% average), and (c) MAE distribution for the jobs (4.16W average).
2309.07579
Structure-Preserving Transformers for Sequences of SPD Matrices
In recent years, Transformer-based auto-attention mechanisms have been successfully applied to the analysis of a variety of context-reliant data types, from texts to images and beyond, including data from non-Euclidean geometries. In this paper, we present such a mechanism, designed to classify sequences of Symmetric Positive Definite matrices while preserving their Riemannian geometry throughout the analysis. We apply our method to automatic sleep staging on timeseries of EEG-derived covariance matrices from a standard dataset, obtaining high levels of stage-wise performance.
Mathieu Seraphim, Alexis Lechervy, Florian Yger, Luc Brun, Olivier Etard
2023-09-14T10:23:43Z
http://arxiv.org/abs/2309.07579v7
# Structure-Preserving Transformers for Sequences of SPD Matrices ###### Abstract In recent years, Transformer-based auto-attention mechanisms have been successfully applied to the analysis of a variety of context-reliant data types, from texts to images and beyond, including data from non-Euclidean geometries. In this paper, we present such a mechanism, designed to classify sequences of Symmetric Positive Definite matrices while preserving their Riemannian geometry throughout the analysis. We apply our method to automatic sleep staging on timeseries of EEG-derived covariance matrices from a standard dataset, obtaining high levels of stage-wise performance. Mathieu Seraphim\({}^{\star}\) Alexis Lechervy\({}^{\star}\) Florian Yger\({}^{\dagger\star}\) Luc Brun\({}^{\star}\) Olivier Etard\({}^{\ddagger}\)\({}^{\star}\) Normandie Univ, UNICAEN, ENSICAEN, CNRS, GREYC, 14000 Caen, France \({}^{\dagger}\) LAMSADE, CNRS, PSL Universite Paris-Dauphine, France \({}^{\ddagger}\) Normandie Universite, UNICAEN, INSERM, COMETE, CYCERON, CHU Caen, 14000, Caen, France Transformers, SPD Matrices, Structure-Preserving, Electroencephalography, Sleep Staging ## 1 Introduction When analyzing the relationship between feature vectors or concurrent signals, correlation and covariance matrices are a useful tool. Such matrices are at least Positive Semi-Definite, and often fully Symmetric Positive Definite (SPD). The set of \(n\times n\) SPD matrices (\(SPD(n)\)) is a non-Euclidean, Riemannian (i.e. metric) manifold, and the regular Euclidean operations of most Neural Network (NN)-based models seldom preserve that geometric structure, introducing deformations such as the "swelling effect" [1]. Structure-preserving NN-based approaches have been introduced [2, 3], deriving their layers from one of two geodesic-defining metrics on \(SPD(n)\). Affine invariant metrics offer the best properties, but present computational challenges (e.g. no closed-form formula for averaging) [4]. LogEuclidean metrics are less isotropic, but still prevent swelling while being easier to compute [1]. With \(A,B\in SPD(n)\), we chose this LogEuclidean distance: \[\delta_{LE}(A,B)=\|log_{mat}(A)-log_{mat}(B)\|_{2} \tag{1}\] This metric relies on the matrix logarithm \(log_{mat}(\cdot)\), bijectively and isometrically mapping \(SPD(n)\) onto \(Sym(n)\), the vector space of \(n\times n\) symmetric matrices (with \(exp_{mat}(\cdot)\) being its inverse). Here, \(\|X\|_{2},X\in Sym(n)\) is the \(\mathcal{L}_{2}\) norm applied to the upper triangular of \(X\). LogEuclidean operations are thus the Riemannian equivalent to Euclidean operations on \(Sym(n)\). In this paper, we present a structure-preserving self-attention mechanism applicable to sequences of SPD matrices, derived from the aforementioned LogEuclidean metric. We embed said mechanism into a Transformer-based architecture, and apply it to a biomedical classification problem. Transformer-based technology has exploded in popularity ever since its introduction in [5], with self-attention mechanisms being applied to very different problems. With regards to Riemannian geometry, innovations seem centered around the computation and application of attention maps, specifically. For instance, Konstantinidis et al. [6] combine the standard attention maps with Grassmann and SPD manifold-valued maps, to enrich their computer vision model's descriptive capabilities. By contrast, both He et al. [7] and Li et al. 
[8] developed architectures to analyze 2D-manifold-valued data in 3D space, the former achieving rotational equivariance with respect to surfaces on the manifold and the latter developing two geodesic distances applicable to point clouds, and building attention maps from these distances. More generally, Kratsios et al. [9] provide a mathematical framework to apply attention mechanisms on a variety of constrained sets, including manifolds. While the latter approaches share our interest in preserving geometric information, little to no focus is given to a Transformer's other components. As far as we are aware, ours is the only approach to apply structure-preserving Transformers to SPD manifold-valued data. ## 2 SPD Structure-Preserving Attention Let \(B_{m}=\{e_{i,j}\}_{0<i\leq j}\subset\mathbb{R}^{m\times m}\) be the canonical basis of \(Sym(m)\), with \((e_{i,j})_{i,j}=(e_{i,j})_{j,i}=1\), and all other coefficients at 0. Let the triangular number \(d(m)=\frac{m(m+1)}{2}\) be the dimension of \(Sym(m)\). Any matrix \(M\) of \(Sym(m)\) can be written in the basis \(B_{m}\) as a vector (a.k.a. "token") of coordinates in \(\mathbb{R}^{d(m)}\). Therefore, any linear combination of these tokens would equate to the same linear combination in \(Sym(m)\), and thus to a LogEuclidean weighted sum in \(SPD(m)\), preserving its manifold structure. Figure 1: The SP-MHA architecture. In parentheses are tensor dimensions at every step, with \(N\) the batch size. ### Structure-Preserving Multihead Attention (SP-MHA) In the original Linear Multihead Attention (L-MHA) component of Transformers [5], the input tokens in the Q, K and V tensors are processed in parallel in \(h\) attention heads, then recombined through concatenation. There is no guarantee that any underlying SPD structure in our tokens would survive this concatenation. Echoing similar concerns, Li et al. [8] decided to forego having multiple heads. We chose instead to redefine the bloc, keeping the parallel computation of attention maps without sacrificing our data's structure. Let \(d(m)\) be the dimension of input tokens. As seen in Figure 1, our SP-MHA bloc does the following: \[MHA_{SP}(Q,K,V)=C\left(sm\left(\frac{\mathcal{L}_{Q}(Q)\cdot\mathcal{L}_{K}(K) ^{T}}{\sqrt{d(m)/h}}\right)\right)\cdot V \tag{2}\] with \(\mathcal{L}_{Q}(\cdot)\) and \(\mathcal{L}_{K}(\cdot)\) banks of \(h\) linear maps from \(\mathbb{R}^{d(m)}\) to \(\mathbb{R}^{\frac{d(m)}{h}}\), \(sm(\cdot)\) the softmax function, and \(C(\cdot)\) the weighted linear combination of the \(h\) post-softmax attention maps. Although said attention maps are identical to their L-MHA counterparts, the only operation applied to V is the final matrix multiplication, i.e. linear combinations of V's tokens weighted by the combined attention map, which do not threaten our tokens' vector space geometry. ### Triangular linear maps Let \(Sym(n)\) and \(Sym(m)\) have the canonical bases \(B_{n}\) and \(B_{m}\), respectively. Let \(\mathcal{L}_{n,m}(\cdot)\) be a linear map from \(Sym(n)\) to \(Sym(m)\), represented by the matrix \(W\) in \(\mathbb{R}^{d(m)\times d(n)}\) with respect to the bases (implemented in code through a fully connected NN layer between tokenized matrices). We shall refer to such a map as a "triangular" linear map. Let \(A^{*},B^{*}\) be in \(SPD(n)\), mapped to \(A,B\in Sym(n)\) through \(log_{mat}(\cdot)\).
As \(\mathcal{L}_{n,m}(\cdot)\) is a continuous linear map: \[\|\mathcal{L}_{n,m}(A)-\mathcal{L}_{n,m}(B)\|_{2}\leq\|W\|_{*}\cdot\|A-B\|_{2} \tag{3}\] \[\delta_{LE}(prj_{n,m}(A^{*}),prj_{n,m}(B^{*}))\leq\|W\|_{*}\!\cdot\!\delta_{LE }(A^{*},B^{*}) \tag{4}\] with \(\|\cdot\|_{*}\) the matrix norm induced by the norm \(\|\cdot\|_{2}\), and \(prj_{n,m}(\cdot)=exp_{mat}\circ\mathcal{L}_{n,m}\circ log_{mat}(\cdot)\) mapping \(SPD(n)\) onto \(SPD(m)\). By definition of \(\delta_{LE}\) (Equation 1), Equations 3 and 4 are strictly identical. Hence, applying \(\mathcal{L}_{n,m}(\cdot)\) on our tokens is equivalent to applying \(prj_{n,m}(\cdot)\) on matrices in \(SPD(n)\). The output tokens exhibit the Riemannian structure of \(SPD(m)\), and relations of proximity are preserved. Therefore, so is the overall structure of our data. Note that while other SPD-to-SPD NN-based mappings have been proposed [10, 2], they rely on full-rank weight tensors, whereas \(prj_{n,m}(\cdot)\) does not require special constraints. ## 3 Application to EEG Sleep Staging The study of sleep most often requires the analysis of electrophysiological - including electroencephalographic (EEG) - signals, subdivided into fixed-length windows ("epochs") and manually labeled with the appropriate sleep stages, inferred from properties of the signal in and around each epoch [16]. Figure 2: SPDTransNet global architecture, with \(t=3\) feature tokens per epoch. As seen in a recent survey by Phan et al. [17], state-of-the-art automatic sleep staging models typically use two-step architectures - given a sequence of epochs, epoch-wise features are extracted before being compared at the sequence-wise level, utilizing this contextual information to improve classification. Since epochs often contain markers indicative of multiple stages, two-step architectures tend to subdivide them further, extracting features from subwindows using convolutional NNs [12] and/or recurrent NNs [18, 19] - the latter utilizing RNNs for both steps. Multiple authors have adapted this context-inclusive approach to Transformer-based architectures [20, 21, 15], with auto-attention mechanisms at both the intra- and inter-epoch levels, taking advantage of the high performance they offer when applied to sequence-based data. ### Improving stage-wise classification According to the aforementioned survey [17], current sleep staging models have attained a sufficient performance level to replace manual staging in some contexts. However, we have found that class-wise performance was often lacking, particularly with regards to the N1 sleep stage [16], universally difficult to classify (Section 4). Most EEG datasets are heavily imbalanced, with the N1 stage often underrepresented (Section 3.3) - models optimized for high overall accuracy may thus sacrifice N1 classification if it improves global performance. To account for this, recent approaches [14, 15] elected to primarily evaluate their performance through the macro-averaged F1 (MF1) score, a class-wise balanced metric widely used in the literature. They also rebalance their training sets through oversampling, so that all stages within have the same number of classification targets. While the survey states that a sequence-to-sequence classification scheme (classifying each epoch in the input sequence) might lead to better performance, having multilabel inputs is nonsensical for this rebalancing - hence their use of a sequence-to-epoch scheme (classifying one epoch per sequence). Seraphim et al. 
[15] hypothesized that an analysis through functional connectivity - the activation correlations between different brain regions [22] - enhances stage-wise performance. Such an analysis was first done by Jia et al. [13], using epoch-wise graph learning to estimate said connectivity and sequence-wise spatio-temporal graph NNs to compare them. By contrast, Seraphim et al. estimate it through covariance matrices. Their two-step model uses standard Transformer encoders at each step, reminiscent of [21]. Each input epoch is described as a multichannel timeseries of SPD matrices, which are then tokenized bijectively. However, their approach does not guarantee the preservation of their data's SPD structure, as they operate a channel-wise concatenation of their tokens, in addition to the concatenations found within their encoders (Section 2.1). Hence, we propose a Transformer-based model capable of analyzing EEG-derived functional connectivity through SPD matrices _without_ sacrificing the SPD structure of our data throughout the analysis. ### The SPDTransNet model As can be seen in Figure 2, our SPDTransNet model takes as input a sequence of \(L\) epochs, composed of a central epoch to classify and surrounding epochs to provide context. Given \(\ell\) the context size, we have \(L=2\cdot\ell+1\). Each EEG signal is decomposed into \(C\) channels, divided into epochs, and further subdivided into \(S\) subwindows per epoch. After preprocessing (Section 3.3), each epoch is described by \(S\times C\) matrices in \(SPD(n)\). Each matrix is mapped onto \(Sym(n)\) logarithmically (Section 1), tokenized (Section 2), and linearly mapped onto \(Sym(m)\) (with \(m>n\), as we have found that larger tokens improve performance). The \(S\times C\) grid of tokens is then arranged into a 1D sequence, with the \(S\) tokens in the channel 1 followed by the \(S\) tokens in channel 2, etc. At the intra-epoch level, a first positional encoding is applied to the tokens, which pass through the first Transformer encoder. The \(S\times C\) output tokens are then uniformly divided into \(t\) groups, with each group averaged into a single token. The \(L\) sets of \(t\) tokens are then regrouped at the inter-epoch level, and passed through another positional encoding and Transformer encoder pair. Finally, the \(t\) tokens corresponding to the central epoch (of index \(\ell+1\) in Figure 2) go through two FC blocs1, and are mapped onto \(\hat{y}_{\ell+1}\in\mathbb{R}^{c}\) by a final classification linear map, with \(c\) the number of classes. Footnote 1: Fully connected layers followed by ReLU activation and dropout layer. We ensure structure preservation by using the SP-MHA bloc in all Transformer encoders, and choosing all linear maps \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \cline{2-9} \multicolumn{1}{c|}{} & Model & MF1 & Macro Acc. & N1 F1 & Valid. metric & Token dim. \(d(m)\) & \# Feat. Tokens \(t\) \\ \hline 1 & DeepSleepNet [11] & 78.14 \(\pm\) 4.12 & 80.05 \(\pm\) 3.47 & 53.52 \(\pm\) 8.24 & N/A & N/A & N/A \\ \hline 2 & IITNet [12] & 78.48 \(\pm\) 3.15 & 81.88 \(\pm\) 2.89 & 56.01 \(\pm\) 6.54 & N/A & N/A & N/A \\ \hline 3 & GraphSleepNet [13] & 75.58 \(\pm\) 3.75 & 79.75 \(\pm\) 3.41 & 50.80 \(\pm\) 8.06 & N/A & N/A & N/A \\ \hline 4 & Dequidt et al. [14] & 81.04 \(\pm\) 3.26 & 82.59 \(\pm\) 3.45 & 58.42 \(\pm\) 6.09 & N/A & N/A & N/A \\ \hline 5 & Seraphim et al. 
[15] & 79.78 \(\pm\) 4.56 & 81.76 \(\pm\) 4.61 & 58.43 \(\pm\) 6.41 & MF1 & Concatenation & 1 \\ \hline 6 & SPDTransNet, \(L=13\) & 81.06 \(\pm\) 3.49 & **84.87**\(\pm\) 2.47 & 60.39 \(\pm\) 6.77 & MF1 & 351 (\(m=26\)) & 7 \\ \hline 7 & SPDTransNet, \(L=21\) & **81.24**\(\pm\) 3.29 & 84.40 \(\pm\) 2.61 & **60.50**\(\pm\) 6.18 & MF1 & 351 (\(m=26\)) & 10 \\ \hline 8 & SPDTransNet, \(L=29\) & 80.83 \(\pm\) 3.40 & 84.29 \(\pm\) 2.65 & 60.35 \(\pm\) 6.01 & N1 F1 & 351 (\(m=26\)) & 5 \\ \hline \end{tabular} \end{table} Table 1: Results obtained from both our model and the re-trained literature. Best results are in **bold**. within said encoders' Feed-Forward (FF) components [5] and the aforementioned FC blocs to be triangular (Section 2.2). The ReLU and dropout layers in the FF and FC blocs do not cause issues, as setting values within a token to 0 won't remove the corresponding matrix from \(Sym(m)\). The same holds for the positional encodings, average poolings and in-encoder layer normalizations, which all qualify as linear combinations. As such, our model preserves the SPD structure of its input up to the final classification map. ### Dataset and preprocessing We utilize the MASS SS3 dataset [23] due to its large number of available EEG electrode-derived signals and widespread use in the literature. It is composed of 62 full-night recordings of healthy subjects, segmented into 30s epochs. Due to its nature, it is unbalanced, with the largest and smallest of its \(c=5\) classes (stages N2 and N1) composed of 50.24% and 8.16% of the dataset, respectively. As do [14] and [15], we selected the 8 electrodes F3, F4, C3, C4, T3, T4, O1 and O2. To estimate functional connectivity from those signals, we apply the same preprocessing pipeline as [15]2, computing \(S\times C=30\times 7\) covariance matrices in \(SPD(8)\), with \(S\) the sequence length and \(C\) the number of frequency-based channels. We then augment our matrices with signal-derived information before whitening them2, leading to more uniformly distributed matrices in \(SPD(9)\) (i.e. \(n=9\)). Said whitening requires the computation of average covariance matrices per recording and channel, which was done in [15] by computing the covariances over the entire recording. Instead, we average all relevant matrices using the standard affine invariant metric [4], improving performance. Footnote 2: More details at github.com/MathieuSeraphim/SPDTransNet. ## 4 Experiments & Results To maximize class-wise performance, we perform a hyperparameter search per configuration, followed by a 31-fold cross-validation. As do [14, 15] (Section 3.1), we rebalance all training sets and maximize the MF1 score. To explore the importance of the context length \(\ell\) (Section 3.2) within our model, we ran hyperparameter searches with \(\ell\) = 6, 10 or 14 (i.e. \(L\) = 13, 21 or 29), with the hyperparameter search configuration unchanged between them. Our hyperparameter searches use the Optuna tool [24], with 5 simultaneous runs and 50 total runs per configuration. Hyperparameters include2 the token size \(d(m)\), set by the first linear map (Section 3.2) and chosen from {351, 378} (i.e. \(m\)\(\in\) {26, 27})3; the \(h\) parameter of each Transformer encoder, in {3, 9}3; and the number of epoch feature tokens \(t\) (Section 3.2), chosen among {1, 3, 5, 7, 10} - with, in particular, \(t=1\) akin to describing each epoch with a single token, and \(t=7\) corresponding to one token being preserved per channel.
We train all folds on the hyperparameters giving the best validation MF1, as well as those with the best F1 score for the N1 stage. Out of those two sets, the results from the set yielding the best average test MF1 is presented in lines 6 to 8 of Table 1, with the corresponding hyperparameter set, \(d(m)\) and \(t\) in the final three columns. Footnote 3: Since \(\frac{d(m)}{h}\) must be an integer, potential values for those are limited. We compare ourselves to five models: DeepSleepNet [11], often used as a benchmark, with a pre-trained epoch-wise global feature map submodel followed by a sequence-to-sequence RNN; IITNet [12], the source of our 31 folds, extracting multiple features per epoch through CNNs and comparing them through sequence-wise RNNs; GraphSleepNet [13], expliciting epoch-wise functional connectivity through graph learning; Dequidt et al. [14], utilizing a single-step pretrained visual CNN, who both maximize MF1 performance and rebalance training sets; and Seraphim et al. [15], with a similar approach to ours lacking in structural preservation (Section 3.1). These models were re-trained using our methodology - except for oversampling in DeepSleepNet's sequence-to-sequence submodel - though we use only their published hyperparameters. Finally, as test sets vary between models due to sequence-based recording-wise border effects, we trim test set borders to enforce uniformity. All these changes cause the results we obtained to differ somewhat from those initially published. Our results, averaged over all folds, are displayed in lines 1 to 5 of Table 1. As shown in Table 1, we obtain the best MF1 and N1 F1 scores for \(L=21\), whereas the best macro-averaged accuracy is obtained for \(L=13\). For all values of \(L\), we outperform the state-of-the-art on the considered metrics (except for the MF1 score for \(L=29\)). Moreover, all three configurations have around a two-point lead in both macro accuracy and N1 F1 score. While our model favors the smaller token size of \(d(m)=351\) for all values of \(L\), it seems that having a large number of tokens to describe each epoch (at least \(t=5\)) is necessary for best performance. Overall, \(L=21\) seems to be a good compromise to capture enough contextual information without burdening our model with irrelevant data. ## 5 Conclusion We presented SP-MHA, a novel, structure-preserving Multi-head Attention bloc, and integrated it into our SPDTransNet model, designed to analyze SPD matrix sequences. We proved said model's capabilities through automatic EEG sleep staging, obtaining a high level of per-stage performance relative to the literature. Beyond this two-step analysis, SPDTransNet can be easily adapted to a variety of problems, for instance by using only a single encoder step and/or implementing a sequence-to-sequence classification scheme.
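To make the structure-preserving pipeline concrete, the following NumPy sketch illustrates the log-Euclidean tokenization underlying SP-MHA, and the fact that attention-style weighted sums of tokens map back onto SPD matrices. This is an illustrative sketch only, not the authors' released implementation (linked in the paper's footnote); the function names are ours.

```python
import numpy as np
from scipy.linalg import expm, logm

def tokenize(spd):
    """Log-Euclidean token: upper-triangular coordinates of the matrix logarithm."""
    log_mat = logm(spd).real
    return log_mat[np.triu_indices(spd.shape[0])]

def untokenize(token, n):
    """Inverse map: rebuild the symmetric matrix, then exponentiate back onto SPD(n)."""
    sym = np.zeros((n, n))
    iu = np.triu_indices(n)
    sym[iu] = token
    sym = sym + sym.T - np.diag(np.diag(sym))
    return expm(sym)

# A weighted sum of tokens (as produced by an attention map in SP-MHA) is a
# weighted sum in Sym(n), i.e. a log-Euclidean weighted mean in SPD(n), so the
# result of untokenize() is guaranteed to be SPD again.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)); A = A @ A.T + 4 * np.eye(4)   # SPD(4)
B = rng.standard_normal((4, 4)); B = B @ B.T + 4 * np.eye(4)   # SPD(4)
mixed = untokenize(0.3 * tokenize(A) + 0.7 * tokenize(B), 4)
print(np.all(np.linalg.eigvalsh(mixed) > 0))  # True: structure preserved
```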
2308.16749
Cyclotomic expansions for double twist knots with an odd number of half-twists
In this note, we compute the cyclotomic expansion formula for colored Jones polynomial of double twist knots with an odd number of half-twists $\mathcal{K}_{p,\frac{s}{2}}$ by using the Kauffman bracket skein theory. It answers a question proposed by Lovejoy and Osburn in 2019.
Qingtao Chen, Kefeng Liu, Shengmao Zhu
2023-08-31T14:10:19Z
http://arxiv.org/abs/2308.16749v1
# Cyclotomic expansions for double twist knots with an odd number of half-twists ###### Abstract. In this note, we compute the cyclotomic expansion formula for colored Jones polynomial of double twist knots with an odd number of half-twists \(\mathcal{K}_{p,\frac{s}{2}}\) by using the Kauffman bracket skein theory. It answers a question proposed by Lovejoy and Osburn in 2019. ###### Contents * 1 Introduction * 2 Preliminaries * 3 Proof of the cyclotomic expansion formula * 4 Multiple sum expressions * 4.1 Bailey chains * 4.2 Multiple sum expression for the coefficient \(c_{k,p}\) * 4.3 Multiple sum expression for the coefficient \(c_{k,\frac{s}{2}}\) * 4.4 Multiple sum expression for the coefficient \(d_{k,j}\) * 5 Appendix: corrected form of Walsh's formula
2309.09540
Mind the (spectral) gap: How the temporal resolution of wind data affects multi-decadal wind power forecasts
To forecast wind power generation in the scale of years to decades, outputs from climate models are often used. However, one major limitation of the data projected by these models is their coarse temporal resolution - usually not finer than three hours and sometimes as coarse as one month. Due to the non-linear relationship between wind speed and wind power, and the long forecast horizon considered, small changes in wind speed can result in big changes in projected wind power generation. Our study indicates that the distribution of observed 10min wind speed data is relatively well preserved using three- or six-hourly instantaneous values. In contrast, daily or monthly values, as well as any averages, including three-hourly averages, are almost never capable of preserving the distribution of the underlying higher resolution data. Assuming that climate models behave in a similar manner to observations, our results indicate that output at three-hourly or six-hourly temporal resolution is high enough for multi-decadal wind power generation forecasting. In contrast, wind speed projections of lower temporal resolution, or averages over any time range, should be handled with care.
Nina Effenberger, Nicole Ludwig, Rachel H. White
2023-09-18T07:35:38Z
http://arxiv.org/abs/2309.09540v1
Mind the (spectral) gap: How the temporal resolution of wind data affects multi-decadal wind power forecasts ###### Abstract To forecast wind power generation in the scale of years to decades, outputs from climate models are often used. However, one major limitation of the data projected by these models is their coarse temporal resolution - usually not finer than three hours and sometimes as coarse as one month. Due to the non-linear relationship between wind speed and wind power, and the long forecast horizon considered, small changes in wind speed can result in big changes in projected wind power generation. Our study indicates that the distribution of observed 10min wind speed data is relatively well preserved using three- or six-hourly instantaneous values. In contrast, daily or monthly values, as well as any averages, including three-hourly averages, are almost never capable of preserving the distribution of the underlying higher resolution data. Assuming that climate models behave in a similar manner to observations, our results indicate that output at three-hourly or six-hourly temporal resolution is high enough for multi-decadal wind power generation forecasting. In contrast, wind speed projections of lower temporal resolution, or averages over any time range, should be handled with care. ## 1 Introduction Wind is one of the main sources of renewable power and its utilisation is on the rise in many countries, [e.g. Soares-Ramos et al., 2020]. It is therefore of uttermost importance to have reliable wind power forecasts in the range of years to decades (i.e. a turbine's lifetime) for site assessment and reliable future power supply [Copernicus, 2023]. However, the high variability and stochasticity of weather and wind introduces uncertainty that makes multi-decadal planning difficult. Furthermore, computational complexity limits the resolution of forecasts; the temporal and spatial resolution 2 of long-term and multi-decadal forecasts are therefore usually much coarser than that of short-term forecasts [Eyring et al., 2016]. Footnote 2: Notice that temporal and spatial resolution are often directly linked e.g. Courant et al. [1928] But do the low temporal resolution outputs from e.g. climate models provide wind speeds representative of the true site-specific high resolution wind speed distribution? An indicator that this could be the case is the so called wind power spectral gap [Van der Hoven, 1957]. Wind speed variability can be assessed in the frequency domain in terms of power spectra by (Fourier-) decomposing high-resolution wind speed observations. This decomposition reveals an amplitude gap between high and low frequencies where the strong variability in high frequencies (on the order of seconds to minutes) is associated with turbulence, while strong low frequency variability (hours to days) is associated with synoptic weather systems [Van der Hoven, 1957],[Stull, 1988]. In the gap between the synoptic weather and turbulence peaks in the frequency spectrum are frequencies with little variability; this was found to be in the range of \(10^{-3}Hz(\sim 17min)\) to \(10^{-4}Hz(\sim 3h)\) by Van der Hoven (1957). However, other research suggests the gap is smaller (Kang and Won, 2016) or may not exist at all (Larsen et al., 2016). 
This spectral gap is described in many research papers (Horvath et al., 2012), (Kang and Won, 2015), (Larsen et al., 2016), (Lopez-Villalobos et al., 2021) indicating that observations that fall into the corresponding frequencies do not add much knowledge. Given the inconsistency of the width of this spectral gap, the question remains of what temporal resolution we should aim for in multi-decadal wind power forecasting. Due to the non-linear relationship between wind speed and wind power, small changes in the wind speed distribution can lead to significant changes in wind power generation. However, if the underlying wind speed characteristics are preserved, lower-resolution data is preferred for multi-decadal forecasting to reduce required storage space and potentially also computational costs. Additionally, multi-decadal wind speed projections should also account for climate change and interannual climate variability (Pryor et al., 2018). Several studies indicate that climate change will affect average wind speeds (McVicar et al., 2012), (Tobin et al., 2015)) as well as wind speed variability (Pryor and Barthelmie, 2010), (Tobin et al., 2016), (Dunn et al., 2019), (Jeong and Sushama, 2019), (Ringkjob et al., 2020) which will impact wind power generation. In general, it must be assumed that current wind conditions may not be representative of future wind conditions (Jung and Schindler, 2019); we thus often rely on output from climate models to help inform about future potential wind power. But which data (resolution) do we need for meaningful wind speed and power predictions? There seems to be agreement among researchers that certain temporal resolutions are _too low_. It is therefore common to account for additional variability using so-called downscaling techniques (Pryor and Barthelmie, 2010). In the past, different statistical downscaling approaches have been introduced, e.g. (Von Storch, 1999), (Tobin et al., 2015), (Shin et al., 2018), as well as dynamical downscaling using regional climate models such as CORDEX (Giorgi and Gutowski Jr, 2015), used by e.g. Davy et al. (2018) and Yang et al. (2022). The question remains, however: what temporal resolution of wind speed data is _high enough_? The output from climate models are often available in various temporal aggregations, where some datasets represent temporal averages and other datasets consist of instantaneous values. In the CMIP6 datasets (Eyring et al., 2016), one of the most widely used set of global circulation models (GCMs), wind projections are available as temporal means and instantaneous values (CMIP6 data request, 2016). Currently, however, little focus has been placed on the choice of data and many studies are performed without explicitly stating whether averages or instantaneous values are being used. With this study we show that the type of data (instantaneous vs averages) influences the wind speed distributions and thus the estimated wind power generation. We also conduct analysis to determine whether there is a temporal resolution that is _high enough_, i.e. for which added temporal resolution provides little additional information and accuracy. We give recommendations regarding the choice of data and the temporal resolution that downscaling techniques, and climate model outputs, should aim for. To do this, we conduct empirical data analysis using data from eight different mid-latitude sites across Europe and North America. In Section 2 we describe the data and the methodology used. 
Our results are presented in Section 3 and we discuss their implications in Section 4. Finally, we conclude in Section 5. ## 2 Methods To investigate how aggregating wind speed data affects the wind speed distribution we use both parametric and non-parametric approaches. In the main manuscript we focus on observations of turbine-hub-height winds from four sites, with locations shown in Figure 1. We use these as our primary data as they are hub-height data; however, all four sites have a relatively limited observational period of only a few years (see Table 1). We thus also use 10\(m\) wind speeds from an additional four observational met masts from locations across Germany (see Figure A.1 for locations) that have between \(18\) to \(34\) years of data available. These data show very similar results to the hub-height datasets (analysis presented in the supplementary material in Section A). We also use these longer datasets to analyze multi-decadal tendencies of wind power generation in Section 3. In the following we first give a description of the data used and then describe our methodology. ### Data We investigate wind speed observations using open-source high met mast wind data of four mid-latitude locations. All of the wind speeds are either measured at wind turbine hub-height directly, i.e. by nacelle anemometers (sites Penmanshiel and Kelmarsh) or by high met masts (sites NWTC and Owez). The observation heights are between \(59m\) and \(116m\) and all of the measurements are provided as 10min averages, a very common aggregation-level of wind resource data (Harper et al., 2010). In Table 1 we present static information of the observation sites. Static information of the longer datasets (at 10\(m\) height) is presented in Table A.1. Unless specified otherwise, the following abbreviations for the datasets can be found in the top left corner of figures: a) Kelmarsh, b) Penmanshiel, c) NWTC, d) Owez, e) Aachen, f) Zugspitze, g) Boltenhagen, h) Fichtelberg. To bring the data to a format where we can compare different temporal resolutions, we pre-process the data by excluding all days where at least one observation is missing. We then average the 10 min wind speed observations to three-hourly, six-hourly and daily data: the \(n\)'th wind speed value \(w_{n}\) in the time-series averaged over \(t\) consecutive time steps is computed as \[w_{n}^{t}=\frac{1}{t}\sum_{i=nt}^{nt+t}w_{i}^{1}, \tag{1}\] where \(t=18\) (three-hourly), \(36\) (six-hourly) and \(144\) (daily) respectively; for \(t=1\) we get the original 10 min resolution time series \(w^{1}\). In addition to calculating averages, we also consider wind speed time series of lower resolution, which we call instantaneous values. To do so, we use wind speed measurements every \(t\)'th time step only and exclude all other wind speed measurements \(w_{i}\) where \(i\neq nt\). The results are eight observations per day (every three hours), four observations per day (every six hours) and one observation per day respectively. ### Comparing wind speed distributions To determine whether wind speed distributions from data of different temporal resolution are statistically different, we compute pairwise Kolmogorov-Smirnov test statistics of cumulative density distributions \[F_{W}(w)=P(W\leq w). 
\tag{2}\] \begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline Name & Country (Lat, Lon) & Temporal resolution in min & Observation period (Number of days) & Observation height in \(m\) & Mean wind speed (variance) in & Data source \\ \hline Kelmarsh & United Kingdom (52.40, -0.95) & 10 & 2016-2021 (1687) & 78.5 & 6.25 (7.70) & Plumley [2022a] \\ \hline Penmanshiel & United Kingdom (55.90, -2.31) & 10 & 2016-2021 (1522) & 59 & 7.27 (15.30) & Plumley [2022b] \\ \hline Owez & Netherlands (52.61, 4.39) & 10 & 2012-2017 (1148) & 116 & 4.82 (13.02) & Ramon et al. [2020], Ramon et al. [2022a] \\ \hline NWTC & United States (39.21, -105.23) & 10 & 2005-2010 (1083) & 87 & 8.37 (16.55) & Ramon et al. [2020], Ramon et al. [2022b] \\ \hline \end{tabular} \end{table} Table 1: Static data of the four different sites with hub-height measurements. Our chosen datasets cover a large range of observation heights as well as different mean wind speeds and variances. Figure 1: Locations of the two wind farms in the UK and the tall towers in central North America and the Netherlands. Our hub-height observation locations include one mountainous site (NWTC), one off-shore site (Owez), one coastal site (Penmanshiel) and one site on flat terrain (Kelmarsh). Penmanshiel and Kelmarsh are wind farm sites, their wind speeds are therefore influenced by wake effects of the surrounding turbines [e.g. González-Longatt et al., 2012] and we only used one turbine for our evaluations. The Kolmogorov-Smirnov statistic \(D\) is given by: \[D=\sup_{w}|T(w)-S(w)| \tag{3}\] where \(w\) are the wind speed values, \(T\) and \(S\) are the wind speed distributions to be compared, and the supremum, \(\sup\), is the largest value of the set of values \(|T(w)-S(w)|\) across all \(w\). The Kolmogorov-Smirnov test only takes the largest absolute difference between the two distributions across all \(w\) values into account and we identify a statistically significant difference if the \(p\)-value of an individual test is \(p\leq 0.05\). ### Modeling wind speeds using Weibull distributions The Kolmogorov-Smirnov test can tell us whether wind distributions of different temporal resolution differ; in order to quantify the differences found, we model the wind speeds, \(w\), as Weibull distributions. This is done by fitting the parameters of a three-parameter Weibull distribution, \[f(w;\beta,\lambda,\theta)=\frac{\beta}{\lambda}(\frac{w-\theta}{\lambda})^{ \beta-1}e^{-(\frac{w-\theta}{\lambda})^{\beta}} \tag{4}\] to the different temporal resolution datasets. The three-parameter Weibull distribution described in Equation (4) is defined by \(w\geq\theta\) and \(f(w;\beta,\lambda,\theta)=0\) for \(w<0\) where \(\beta>0\) is the shape parameter, \(\lambda>0\) is the scale parameter and \(\theta\) is the location parameter of the distribution which equals the lowest possible value of the distribution. For \(\beta\approx 3\), the Weibull distribution approximates a Gaussian distribution, while \(3>\beta\geq 1\) corresponds to a right-skewed distribution, and \(\beta>3\) corresponds to a left-skewed distribution, for \(\beta<1\) the density values are steadily decreasing with increasing \(w\). \(\lambda\) represents the variability, i.e. smaller values of \(\lambda\) are associated with less variability (Rinne, 2008). We fit the parameters using Maximum Likelihood Estimation (MLE). This approach requires maximizing the likelihood function \[L=\prod_{i=1}^{n}f(w_{i};\beta,\lambda,\theta). 
\tag{5}\] We can then evaluate the change of the parameters when wind speeds are averaged or discarded to produce datasets with different temporal resolution. While Weibull distributions are commonly used to model wind speed distributions (e.g. Mert and Karakus, 2015), we additionally use a generalized Gamma distribution to test the sensitivity of our results to the choice of underlying distribution. The conclusions are unchanged (figures shown in the appendix) and we therefore consider our results to be insensitive to the distribution choice. ### Validating the Weibull parametrization Using a kernel density estimation we confirm that Weibull distributions are a reasonable representation of our data, with the exception of monthly averages. The kernel density estimator \(\hat{f}\) of an unknown density \(f\) at a point \(x\) is defined by \[\hat{f}_{h}(x)=\frac{1}{nh}\sum_{i=1}^{n}K(\frac{x-x_{i}}{h}), \tag{6}\] where we choose \(K\) to be the Gaussian kernel \[K(x,h)=\exp(-\frac{x^{2}}{2h^{2}}). \tag{7}\] The band-width \(h\) is selected using Scott's rule (Scott, 2015). Figures A.2 and A.4 show kernel density estimations for averaged and instantaneous wind speeds respectively; it is clear that monthly values cannot be described by a Weibull distribution and thus we do not include this temporal resolution in subsequent analysis. To validate the fit of the Weibull distributions to the original wind speed distributions we generate quantile-quantile plots of the observations of length \(l\) against \(l\) randomly drawn samples from the corresponding Weibull distribution; these plots are shown in Figure A.5 and Figure A.6. The Weibull distributions generally provide a good fit to the data, although in some locations the fit is less good at higher wind speeds. ### Power generation and transferability of results As a last step, we relate the insights from the wind speed distributions to multi-decadal wind power forecasts. Wind power generation is often forecasted using hub-height wind speed forecasts with a turbine-specific wind power curve (Wang et al., 2019) that describes the relationship between wind (speed) and potential wind power generation and is highly non-linear. We apply the Enercon E92/2350 wind power curve, visualized in Figure 2. Given the relatively short duration of the main observational datasets we use in this study (around 3-5 years of non-missing data) and our interest in multi-decadal forecasts, we study potential power generation using four \(18\) to \(34\) year long wind speed datasets. Additionally, to determine whether the results we find for observational datasets are also applicable to climate model data, we repeat our analysis using data generated by the historical run of the MPI-ESM1-2-LR general circulation model (Eyring et al., 2016) which was the only model with three-hourly data available on the ESGF node (WCRP CMIP, 2023) at the time of research. ## 3 Results We analyze the distributions of wind speed averages and instantaneous wind speed time series, and find consistent results across all sites investigated: averaging introduces shifts to the wind speed distributions, while three-hourly and six-hourly instantaneous data are usually close to the original. This is clearly demonstrated in Figure 3, with differences between 10min wind speeds and averaged data (left hand column) larger than the differences between 10min wind speeds and instantaneous wind speeds of lower resolution (right hand column).
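To make the aggregation and comparison pipeline of Section 2 concrete, the sketch below computes averaged and instantaneous series from a 10-minute record and compares each against the original with a two-sample Kolmogorov-Smirnov test. It is a minimal illustration, not the authors' code: the input file name is hypothetical, and the data are assumed to be a gap-free numpy array of 10-minute wind speeds.

```python
# Minimal sketch of the preprocessing (Eq. 1 and sub-sampling) and the
# Kolmogorov-Smirnov comparison; assumes a gap-free 1-D array of 10-minute
# wind speeds in m/s (the paper drops days with missing observations).
import numpy as np
from scipy import stats

def average(w10, t):
    """Non-overlapping means over t consecutive 10-minute values (Eq. 1)."""
    n = len(w10) // t
    return w10[:n * t].reshape(n, t).mean(axis=1)

def instantaneous(w10, t):
    """Keep every t-th 10-minute observation and discard the rest."""
    return w10[::t]

w10 = np.loadtxt("kelmarsh_10min_wind_speed.txt")  # hypothetical input file

for label, t in [("three-hourly", 18), ("six-hourly", 36), ("daily", 144)]:
    for name, series in [("avg", average(w10, t)), ("inst", instantaneous(w10, t))]:
        # Two-sample Kolmogorov-Smirnov test against the 10-minute distribution
        stat, p = stats.ks_2samp(w10, series)
        print(f"{label:12s} {name:5s} D={stat:.3f} p={p:.3g}")
```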
Similar patterns can be observed in the four longer datasets, see Figure A.7. The impact of averaging can also be seen in the variance of the data: while averaging does not affect the mean it reduces the variance of the averaged time series (Figure 4). In contrast, three-hourly and six-hourly instantaneous values (dashed lines in Figure 4) preserve the variance, with daily instantaneous values preserving substantially more of the variance than daily averages, and at some sites, more than six-hourly averages. ### Kolmogorov-Smirnov tests To quantify whether the difference between the 10min wind speed distribution and the averaged and instantaneous distributions of lower resolution are statistically significant we compute Kolmogorov-Smirnov test statistics of their cumulative density distributions pairwise. The results for the Penmanshiel site are presented in Table 2 (for averaged data) and Table 3 (instantaneous data), where values in **bold** indicate no statistically significant difference in the distributions, i.e. that using the dataset of lower temporal resolution may retain all information about the wind speed distribution. We observe that all averaged distributions, as well as daily instantaneous distributions, differ significantly from the 10min wind speeds (right-most column), whilst three- and six-hourly instantaneous values are not distinguishable from the 10 min data. The results for all other locations are very similar (see Table A.2 to Table A.15): in general, averages do not preserve the wind speed distribution, and three- and six-hourly instantaneous values are almost always not statistically distinguishable from the 10 min data. Figure 2: Wind power curve of Enercon E92/2350 turbine. The relationship between wind speed and wind power can be roughly divided into four different regions: No power is generated if the wind speed is below the cut-in wind speed or if the wind speed is above the cut-out wind speed where the turbine is shut down to protect it from damage. In between, wind power generation first increases rapidly with increasing wind speed. Once the maximum wind speed that the wind turbine can convert to power is reached, the power output is usually constant until the wind speed exceeds the cut-out wind speed. Figure 4: Variances of averaged (solid lines) and instantaneous (dashed lines) values. The variances steadily decrease when wind speeds are averaged and stay close to the 10min variances for the instantaneous three-hourly and six-hourly distributions. Figure 3: Difference of cumulative densities from the 10min data to the other temporal resolution datasets for average wind speeds (left) and instantaneous wind speeds (right). It can be seen that the averaged wind speeds are visually distinguishable, which is less the case for instantaneous wind speeds, particularly for data with a temporal resolution of six-hourly or higher. ### Changes in Weibull parameters The cumulative density plots and Kolmogorov-Smirnov tests indicate that wind speed distributions are likely to _change_ when wind speeds are averaged and likely to stay similar when measurements are discarded, at least until around six-hourly resolution. The aim of the next steps is to _quantify these changes_ by parameterizing the distributions as Weibull distributions. Figure 5 shows the values of the three Weibull parameters for the different aggregation levels for averaging (top row) and instantaneous (bottom row). 
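A minimal sketch of the Weibull parameterization step behind Figure 5 is given below, assuming the `w10` array from the sketch above. It relies on scipy's `weibull_min`, whose maximum likelihood fit returns the shape (\(\beta\)), location (\(\theta\)) and scale (\(\lambda\)) in that order; in practice the free location parameter may need a sensible starting value or bound for numerical stability.

```python
# Minimal sketch of fitting the three-parameter Weibull distribution of
# Eq. (4) by MLE at several averaging levels; `w10` is the 10-minute wind
# speed array from the sketch above.
from scipy import stats

for label, t in [("10min", 1), ("three-hourly", 18), ("six-hourly", 36), ("daily", 144)]:
    w = w10 if t == 1 else w10[: len(w10) // t * t].reshape(-1, t).mean(axis=1)
    beta, theta, lam = stats.weibull_min.fit(w)  # returns (shape, loc, scale)
    print(f"averaged {label:12s} beta={beta:.2f} lambda={lam:.2f} theta={theta:.2f}")
```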
The shape parameter \(\beta\) stays approximately constant across all aggregation levels and types, for both averaged and instantaneous data. For averaging, the location parameter \(\theta\) increases with higher aggregation levels. This is consistent with the lowest values of the dataset increasing as they are averaged. Conversely, with lower resolution instantaneous data, the lowest values remain similar, leading to similar \(\theta\) for all resolutions studied. The scale parameter \(\lambda\) decreases with increased averaging length, and remains relatively constant for instantaneous data, consistent with the changes in variance shown in Figure 4. We observe very similar results for \(\sim 30\) years of data measured at \(10m\) height (see Figure A.8 and Figure A.9). In addition, two of our observational sites, NWTC and Owez, have wind speed data at multiple heights, ranging from \(10m\) to \(130m\); the changes in Weibull parameters seen in Figure 5 are found at all different heights studied (see Figure A.10). To test for robustness of our results we also use an MLE to fit a generalized Gamma distribution (see Appendix for details); both Weibull and Gamma distributions are regarded as suitable statistical models for wind speed data (e.g. [15]). We find very similar results (see Figure A.11 to Figure A.14), with large changes in parameters when averaging data, and relatively small changes for three- and six-hourly instantaneous values. This suggests our conclusions are not sensitive to our choice of parameterization. ### Implications for multi-decadal wind power forecasting As introduced in Section 1, wind speeds and wind speed variability are subject to interannual variability and climate change. Hence, for multi-decadal wind power forecasts climate models can provide useful information (e.g. [14]). To understand how our results apply on multi-decadal timescales, we repeat our analysis using data from four sites where multiple decades of \(10m\) wind speed observations are available. We also repeat our analysis on climate model output data to determine whether our conclusions are applicable to model data. We extract \(10m\) wind speeds from the historical run of the CMIP6 model MPI-ESM1-2-LR for grid points closest to these four longer-term observational sites (locations shown in Figure A.1). We only use direct output from the model, and thus at daily temporal resolution we do not have instantaneous values, only averages. For these multi-decadal datasets the Kolmogorov-Smirnov tests and Weibull parameterization analysis produce results comparable to those for the hub-height observations shown in the previous sections. The parameters of fitted Weibull distributions behave similarly in response to changing temporal resolution, with a decrease of \(\lambda\) as temporal resolution decreases indicating a decrease in variability (see Figure 6). The climate model data do not show an increase in \(\theta\) with decreasing temporal resolution, although for three-hourly and six-hourly averages \(\theta\) fitted to the climate model data is very close to the parameters fitted to the observational data. So far we have only looked at wind speeds and their distributions. However, these are just a proxy for wind power - our variable of interest - and wind power depends non-linearly on wind speed. 
To estimate the power a wind turbine could generate over its \begin{table} \begin{tabular}{|c|c|c|c|c|} \cline{2-5} \multicolumn{1}{c|}{} & daily & six-hourly & three-hourly & 10min \\ \hline daily & \(\mathbf{1}\) & \(1.12\cdot 10^{-3}\) & \(2.94\cdot 10^{-5}\) & \(1.40\cdot 10^{7}\) \\ \hline six-hourly & & \(\mathbf{1}\) & \(\mathbf{4.41\cdot 10^{-4}}\) & \(5.83\cdot 10^{-3}\) \\ \hline three-hourly & & & \(\mathbf{1}\) & \(5.83\cdot 10^{-3}\) \\ \hline \end{tabular} \end{table} Table 2: Test statistics of Kolmogorov-Smirnov test for Penmanshiel averages. We reject the hypothesis that the wind speed distributions are equal if \(p\leq 5\cdot 10^{-2}\). Therefore, only the three-hourly averages are not significantly different from six-hourly averages. \begin{table} \begin{tabular}{|c|c|c|c|c|} \cline{2-5} \multicolumn{1}{c|}{} & daily & six-hourly & three-hourly & 10min \\ \hline daily & \(\mathbf{1}\) & \(3.03\cdot 10^{-2}\) & \(2.34\cdot 10^{-2}\) & \(6.62\cdot 10^{-3}\) \\ \hline six-hourly & & \(\mathbf{1}\) & \(\mathbf{1.00}\) & \(\mathbf{9.58\cdot 10^{-1}}\) \\ \hline three-hourly & & & \(\mathbf{1}\) & \(\mathbf{8.07\cdot 10^{-1}}\) \\ \hline \end{tabular} \end{table} Table 3: Test statistics of Kolmogorov-Smirnov test for Penmanshiel instantaneous data. We reject the hypothesis that the wind speed distributions are equal if \(p\leq 5\cdot 10^{-2}\) which reveals that only daily instantaneous values are significantly different from all other distributions. lifetime, we apply the Enercon E92/2350 power curve (see Figure 2) to the four multi-decadal observational wind speed datasets as well as to the wind speeds from the corresponding closest grid-points in the CMIP6 dataset. We then compare the expected cumulative power generation of the highest available resolution (10min in observations and three-hourly instantaneous values in CMIP6 model) to lower resolutions (three-hourly, six-hourly, and daily averages and instantaneous values). Although the highest available resolution in the CMIP6 model is only three-hourly, our previous analysis shows that these values are closely aligned with 10min data. Figure 7 shows the expected cumulative power generation using wind speed observations of different resolutions. We show this as a fraction of the total power generation achieved when applying the wind power curve to the 10min observational wind speed data and integrating over the whole time period. The dotted gray line shows 100% and thus if the expected cumulative power generation reaches this threshold without overshooting we consider the change introduced by the temporal resolution to be small. The top row of Figure 7 shows that averaging values, particularly to daily, but even to three- or six-hourly at some locations, can lead to relatively large errors in estimated power generation, with errors of up to -34.48% (daily average), -15.45% (six-hourly average), and -10.06% (three-hourly average). Conversely, three-hourly and six-hourly instantaneous values, shown in the bottom row, reveal very similar results to the 10min data, with errors less than 2%. Figure 8 demonstrates that very similar results are found using the output from the MPI-ESM1-2-LR global climate model, with results given relative to the total amount of power generated using three-hourly instantaneous values. In all cases six-hourly instantaneous values are closest to three-hourly instantaneous values, followed by three-hourly averages and six-hourly averages; daily averages differ substantially. 
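The cumulative-generation comparison behind Figures 7 and 8 can be sketched as follows, reusing `w10`, `average` and `instantaneous` from the earlier sketches. The tabulated curve points are purely illustrative stand-ins that mimic the shape described in the Figure 2 caption (zero below cut-in, a steep ramp, a rated plateau, zero above cut-out); they are not the certified Enercon E92/2350 values.

```python
# Minimal sketch: convert wind speed series into expected cumulative power
# generation and compare temporal resolutions against the 10-minute reference.
import numpy as np

curve_ws = np.array([0.0, 2.0, 5.0, 8.0, 11.0, 13.0, 25.0])             # m/s (assumed)
curve_kw = np.array([0.0, 0.0, 250.0, 1100.0, 2200.0, 2350.0, 2350.0])  # kW (assumed)

def power(w):
    p = np.interp(w, curve_ws, curve_kw)
    p[w > 25.0] = 0.0  # cut-out: the turbine is shut down above this speed
    return p

def cumulative_energy(w, hours_per_step):
    return np.sum(power(w)) * hours_per_step  # kWh

ref = cumulative_energy(w10, 1 / 6)  # each 10-minute value covers 1/6 hour
for label, t, hours in [("three-hourly", 18, 3), ("six-hourly", 36, 6), ("daily", 144, 24)]:
    avg = cumulative_energy(average(w10, t), hours)
    inst = cumulative_energy(instantaneous(w10, t), hours)
    print(f"{label:12s} avg {100 * avg / ref:6.2f}%   inst {100 * inst / ref:6.2f}%")
```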
Reducing temporal resolution leads to an underestimation of expected power generation at all sites studied except for daily instantaneous values at Zugspitze (Figure 7 bottom row, site (f)), a site with relatively high mean wind speeds and high wind speed variance, situated at a high elevation above sea level (\(2956m\)). This is very likely a function of the particular wind turbine power curve we have chosen. It is important to note that in all cases, both with observational data and climate projections, the difference in power generation compared to higher-resolution data increases with an increasing forecast horizon and does not average out - the shift in the wind distribution leads to a systematic error in wind power estimation. In general, averaging tends to result in an underestimation of expected power generation, while discarding values has only minor impacts. ## 4 Discussion Global climate models simulate climate dynamics physically using partial differential equations. Their long forecast horizons make them computationally expensive with large storage space needs and their output is usually provided with a temporal resolution ranging from three hours to one month, either as instantaneous or averaged values. Using hub-height wind speed observations, this study investigates how the temporal resolution of wind speed data affects the wind speed distributions. Using multi-decadal observational datasets at 10\(m\) height we study the corresponding potential power generation and assess which temporal resolution is actually necessary. Figure 5: Parameters when data are averaged (top row) or discarded (bottom row) for four different datasets a) Kelmarsh b) Penmanshiel c) NWTC d) Owez. The Weibull parameters change when wind speeds are averaged and stay similar when values are discarded. The corresponding non-parameterized wind speed distributions are visualized in Figure A.2 and Figure A.4. Using hub-height data we find that in all cases investigated, three-hourly and six-hourly instantaneous observations preserve the underlying 10min wind speed distribution well, whilst three- and six-hourly averaging leads to distributional shifts. However, whether a significant wind speed distribution shift results in a significant change in wind power generation depends on the turbine and its corresponding power curve. Our results, using an example turbine power curve, indicate that the differences in wind distribution highlighted in this study can lead to accumulating errors when power generation is forecasted for years to decades (compare Figure 7 and Figure 8). Our results are consistent across different observational sites and a GCM. We can therefore give two suggestions when working with wind speed projections for wind power modelling. First, instantaneous wind speed projections should be preferred over wind speed averages, as sub-sampling wind speed data introduces relatively minor errors in contrast to averaging wind speeds, where we observe a characteristic distributional shift. This shift associated with averaging data indicates that we might consistently over- or underestimate wind power generation. Second, instantaneous wind speed projections of six or three hours suffice, whilst daily data may be too low resolution, even with instantaneous values. For our sites, temporal downscaling of either three- or six-hourly data is unlikely to provide substantial improvement in accuracy.
For example, instantaneous observational wind speeds with a three-hourly temporal resolution lead to errors of less than \(0.29\%\), with six-hourly data leading to errors of less than \(1.57\%\). The experiments conducted using climate model data support these results. This knowledge can be used to reduce storage requirements almost without loss of information and to decrease computational complexity in further applications of the data. The primary shortcomings of our investigations include a potential lack of generalizability across turbines, sensitivity to our underlying highest temporal data resolution, and uncertain transferability to other climate models. Figure 6: Parameters of Weibull distribution fitted to observational data averages (dashed lines) of the four multiple-decadal sites and to the closest grid points in the CMIP6 MPI-ESM1-2-LR dataset (solid lines). The parameter \(\lambda\) decreases in all cases when comparing three-hourly to daily averages, indicating a decrease in variability. The shape parameter \(\beta\) does not show any consistent trends. \(\theta\) increases with daily averages in observational data, but not in the CMIP6 data. Figure 7: Cumulative power generation of daily, six-hourly and three-hourly values relative to the total 10min wind power generation computed by feeding the wind speeds into a power curve. Top row: Average values underestimate wind power generation – the longer the forecast the higher the error. Bottom row: Instantaneous values every three or six hours only introduce minor errors. Daily data is in no case a good proxy. The _power generation gap_ at the Fichtelberg site between \(\sim 2005\) and \(2010\) stems from various missing observations. Absolute errors are reported in Table A.16. More specifically, as wind speed data measured at hub-height is often confidential, we are limited to relatively few datasets. Furthermore, for site assessment, wind speeds are usually transformed to hub-height which is non-trivial (e.g. Banuelos-Ruedas et al., 2010). However, the close agreement of results from stations across a range of different locations, including different local geography (off-shore, on-shore, different altitudes and local topography), suggests that our results likely hold for the majority of locations. For 7 out of the 8 locations studied, using daily instantaneous or average values underestimates the power generation (see Figure 7), while six-hourly values are good approximations. For one station, however, the daily instantaneous data is an overestimate; understanding the conditions under which daily data over- or under-estimates the power generation would be useful for studies in which only daily data is available. In addition, while the Enercon power curve we use in this study has the characteristic shape of any modern horizontal wind turbine power curve, it is not necessarily representative of the turbines at a particular site. Regarding the sensitivity to the underlying data resolution, we use data in 10min resolution, hence variability on shorter time scales - often associated with turbulence (e.g. Stull, 1988) - is not preserved. However, higher temporal resolution data is rarely available or used in wind power forecasting (Tawn and Browell, 2022; Effenberger and Ludwig, 2022), and thus we assume that this is a minor issue. This claim is also supported by Lopez-Villalobos et al.
(2021) who investigate wind power spectra (Van der Hoven, 1957) and find only small differences between power production given wind speeds of different resolution between 1min and 6h. Lastly, climate projections are characterized by different pathways that describe anthropogenic climate change (Eyring et al., 2016). This makes handling the data cautiously even more important, especially in wind power forecasting, where non-linearly dependent wind speed projections are often used as proxies for power generation. Our results using one historical CMIP6 run indicate that changes in observational wind speed distributions for different temporal resolution data can be seen in data from climate models as well. However, future research has to investigate the sensitivity of these results to different climate models. ## 5 Conclusion Wind power generation depends non-linearly on wind speeds. For multi-decadal wind power forecasts in the order of years to decades, small changes in modelling the wind speed distributions can result in large systematic errors in power estimation, with absolute errors increasing with longer forecast horizon. Using hub-height wind speed observations and climate model output data, this study investigates how the temporal resolution of wind speed data affects wind speed distributions and corresponding potential power generation. We show that instantaneous wind speeds of lower resolution more accurately represent the underlying distribution of higher resolution data when compared to averaged wind speeds. Three- and six-hourly instantaneous values preserve the wind speed distribution of 10min wind speed averages well. Small changes in the wind speed distribution, through averaging or using daily data, have significant impacts on the estimated wind power generation of a turbine over its lifetime. Figure 8: Cumulative power generation of daily, six-hourly and three-hourly values relative to three-hourly instantaneous (3h inst.) wind power generation at the four CMIP6 locations closest to e) Aachen, f) Zugspitze, g) Boltenhagen, h) Fichtelberg computed by feeding the wind speeds into a power curve. Top row: Average values underestimate wind power generation. Bottom row: Instantaneous values (note that daily instantaneous values were not available as direct output, and so are not included). Using six-hourly instantaneous values introduces only minor errors relative to three-hourly instantaneous values. These results hold true across several observational sites and a global climate model. Based on our results, we argue that modelling wind speed _distributions_ correctly is what we should aim for in multi-decadal wind power forecasting. ## Acknowledgements This study was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC number 2064/1 - Project number 390727645 and the Athene Grant of the University of Tübingen. The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Nina Effenberger, and the Natural Sciences and Engineering Research Council of Canada (NSERC) [RGPIN-2020-05783] for supporting Rachel H. White. We acknowledge support by the Open Access Publishing Fund of the University of Tübingen. Open Access funding enabled and organized by Projekt DEAL. WOA Institution: Eberhard Karls Universität Tübingen Consortia Name : Projekt DEAL. Nina wants to thank Roland Stull and the Weather Forecast Research Team at UBC for (intellectual) resources!
2305.19486
Instance-dependent Noisy-label Learning with Graphical Model Based Noise-rate Estimation
Deep learning faces a formidable challenge when handling noisy labels, as models tend to overfit samples affected by label noise. This challenge is further compounded by the presence of instance-dependent noise (IDN), a realistic form of label noise arising from ambiguous sample information. To address IDN, Label Noise Learning (LNL) incorporates a sample selection stage to differentiate clean and noisy-label samples. This stage uses an arbitrary criterion and a pre-defined curriculum that initially selects most samples as noisy and gradually decreases this selection rate during training. Such curriculum is sub-optimal since it does not consider the actual label noise rate in the training set. This paper addresses this issue with a new noise-rate estimation method that is easily integrated with most state-of-the-art (SOTA) LNL methods to produce a more effective curriculum. Synthetic and real-world benchmark results demonstrate that integrating our approach with SOTA LNL methods improves accuracy in most cases.
Arpit Garg, Cuong Nguyen, Rafael Felix, Thanh-Toan Do, Gustavo Carneiro
2023-05-31T01:46:14Z
http://arxiv.org/abs/2305.19486v3
# Noisy-label Learning with Sample Selection based on Noise Rate Estimate ###### Abstract Noisy-labels are challenging for deep learning due to the high capacity of the deep models that can overfit noisy-label training samples. Arguably the most realistic and coincidentally challenging type of label noise is the instance-dependent noise (IDN), where the labelling errors are caused by the ambivalent information present in the images. The most successful label noise learning techniques to address IDN problems usually contain a noisy-label sample selection stage to separate clean and noisy-label samples during training. Such sample selection depends on a criterion, such as loss or gradient, and on a curriculum to define the proportion of training samples to be classified as clean at each training epoch. Even though the estimated noise rate from the training set appears to be a natural signal to be used in the definition of this curriculum, previous approaches generally rely on arbitrary thresholds or pre-defined selection functions to the best of our knowledge. This paper addresses this research gap by proposing a new noisy-label learning graphical model that can easily accommodate state-of-the-art (SOTA) noisy-label learning methods and provide them with a reliable noise rate estimate to be used in a new sample selection curriculum. We show empirically that our model integrated with many SOTA methods can improve their results in many IDN benchmarks, including synthetic and real-world datasets. Introduction Deep neural networks (DNNs) attain exceptional performance in countless tasks across many domains, including vision [41], language [36], medical [32], and code-generation [9]. Yet, such accomplishments usually depend on the methodical curation of training sets with clean-labels, which can be extraordinarily expensive in some domains [37]. Cost-effective labelling techniques are beneficial [44] in these cases (such as data mining [8] and crowd-sourcing [34]) but it often results in inferior-standard labelling [34]. Thus, such labelling can introduce incorrect labels in real-world datasets [17]. Even small amounts of label noise are enough to hinder the effectiveness of DNNs due to the well-known memorisation effects [47; 28]. This problem motivated the designing of robust noisy-label learning algorithms. The type of label noise, i.e., instance-independent noise (IIN) [13] or instance-dependent noise (IDN) [38], dictates the design principles of the noisy-label learning algorithms. For instance, IIN focuses on mislabellings that are independent of sample information [13], where estimating the underlying label transition matrix is a common way of handling this noise type [44]. On the other hand, in the more-realistic IDN, mislabellings are due to both sample information and true class labels [38], which generally require the combination of many label noise learning techniques, such as robust loss functions [50; 22], and noisy-label sample selection [20; 51]. In particular, sample selection approaches that divide the training data into clean and noisy samples have produced competitive results in many benchmarks [20; 6; 8; 17; 12]. Such sample selection techniques require the definition of a criterion and a selection curriculum. Many studies in this topic focus on developing new sample selection criteria, such as the small-loss hypothesis [20], which states that noisy-label samples have larger loss values than clean-label samples, particularly during the first training stages [1]. 
Another example is the FINE [17] criterion, which discriminates clean and noisy-label samples via the distance to class-specific eigenvectors. In this technique, clean-label samples tend to lie closer to the class-specific dominant eigenvector of the latent representations than the noisy-label samples. One more example is SSR [8], which introduces a selection criterion based on K nearest neighbour (KNN) classification in the feature space. Further, CC [51] uses a two-stage sampling procedure, including class-level feature clustering followed by a consistency score. An equally important problem in sample selection is the definition of the curriculum to select clean training samples, but it has received comparatively less attention. The sample selection curriculum defines a threshold to be used with one of the criteria listed above to classify the training samples into clean or noisy at each training epoch [39]. For example, the threshold can be fixed to an arbitrary clustering score that separates clean and noisy samples [20], but such strategy does not account for the proportion of label noise in the training set, nor does it consider the dynamics of the selection of noisy-label samples during the training. The consideration of such dynamics has been studied in [13; 43], which defined a curriculum of the noisy-label sampling rate \(R(t)\) as a function of the training epoch \(t\in\{1,\ldots,T\}\). The curriculum \(R(t)\) defines a sampling rate close to \(100\%\) of the training set at the beginning of the training, which is then reduced to arbitrarily low rates at the end of the training. In practice, the function \(R(t)\) is either pre-defined [13] or learned by weighting a set of basis functions with similar characteristics [43]. Although generally effective, these techniques do not consider the label noise rate estimated from the training set, making them vulnerable to over-fitting (if too many noisy-label samples are classified as clean) or under-fitting (if informative clean-label samples are classified as noisy). It can be argued that label transition matrix estimation [44; 5; 42] aims to recover the noise rate affecting pairwise label transitions. However, label transition matrix techniques follow a quite different strategy compared with sample selection methods, where their main challenge is the general under-constrained aspect of the matrix estimation, making them sensitive to large noise rates and not scalable to a high number of classes [34]. We are unaware of any approach that aims to directly estimate the label noise rate from the training set and incorporate this rate into the sample selection curriculum. To motivate the use of noise rate to select noisy-label samples during training, let us consider CIFAR100 [18] at an instance-dependent noise rate \(\epsilon=50\%\)[38] (noise rate specifications and other details are explained in Section 4). We use DivideMix [20], but replace its sample selection (based on an arbitrary clustering score [7; 35; 26]) by a thresholding process that classifies the \(R(t)=1-\epsilon=50\%\) largest loss samples as noisy, and the remaining ones as clean in all training epochs \(t\in\{1,\ldots,T\}\). This sample selection is used to implement the semi-supervised learning mechanism of DivideMix. As displayed in Fig. 
0(a), the new sample selection approach based on the "provided" noise rate (dashed red curve) improves 6% in terms of prediction accuracy as compared with the original DivideMix [20] (solid blue curve) that relies on arbitrary thresholding. Similar conclusions can be achieved with other methods that apply sample selection strategies to address the noisy-label learning problem, as shown in the experiments. In this paper, we introduce a new noisy-label learning graphical model (shown in Fig. 0(b)) that can be integrated with SOTA noisy-label learning methods to provide them with a reliable noise rate estimate and a new sample selection curriculum. In particular, in our curriculum, instead of being constrained by a pre-defined function \(R(t)\)[13; 43], it is based on a noise rate automatically estimated from the training set, as displayed in Fig. 1(a). The integration of our graphical model with SOTA noisy-label learning models (e.g., DivideMix [20], C2D [52], InstanceGM [12], FINE [17], SSR [8], and CC [51]) is shown to improve their sample selection mechanisms, and ultimately upgrade their results, as presented in Fig. 1(b). The primary contributions of our paper can be summarised as follows: * A novel noisy-label learning graphical model (see Fig. 0(b)) that estimates and uses the noise rate from the training set to build a new sample selection curriculum. * A simple strategy to integrate our new graphical model with many SOTA noisy-label learning methods, such as DivideMix [20], C2D [52], InstanceGM [12], FINE [17], SSR [8], and CC [51] with the goal of improving their sample selection process, and consequently their test accuracy, as displayed in Fig. 1(b). We also demonstrate empirically the critical role of the new sample selection mechanism that boosts the performance of SOTA noisy-label learning methods on several synthetic (CIFAR100 [18]) and real-world (red mini-ImageNet [15], Clothing1M [39], mini-WebVision [20] and ImageNet [19]) benchmarks. ## 2 Method In this section, we present our new graphical model that estimates the noise rate, which will be used in the sample selection process. Let \(\mathcal{D}=\{(x_{i},\hat{y}_{i})\}_{i=1}^{N}\) be the noisy-label training set containing \(d\)-dimensional data vector \(x_{i}\in\mathcal{X}\subseteq\mathbb{R}^{d}\) and it's respective \(C\)-dimensional one-hot encoded observed (potentially corrupted) label \(\hat{y}_{i}\in\hat{\mathcal{Y}}=\{\hat{y}:\hat{y}\in\{0,1\}^{C}\wedge\mathbf{1 }_{C}^{\top}\hat{y}=1\}\), where \(\mathbf{1}_{C}\) is a vector of ones with \(C\) dimensions. The aim is to estimate the label noise rate \(\epsilon\), used for the generation of noisy-label training data from the observed training dataset \(\mathcal{D}\) and integrate this label noise rate into the sample selection strategy. Figure 1: (a) Comparison of test accuracy % (as a function of training epoch) between the original DivideMix [20] (solid, blue curve) and our modified DivideMix (dashed, red curve) that selects the clean and noisy data based on a fixed noise rate \(R(t)=1-\epsilon=50\%\) using the small-loss criterion on CIFAR100 [18] at \(0.5\) IDN [38]; (b) The proposed probabilistic graphical model that generates noisy-label \(\hat{Y}\) conditioned on the image \(X\), the latent clean-label \(Y\) and noise rate \(\epsilon\), where forward pass (solid lines) is parameterized by \(\theta_{y},\theta_{\hat{y}}\) and \(\epsilon\) representing the generation step, and the backward pass (dashed lines) is parameterized by \(\rho\). 
### Graphical Model We portray the generation of noisy-label via the probabilistic graphical model shown in Fig. 0(b). The observed random variables, denoted by shaded circles, are data \(X\) and the corresponding noisy-label \(\hat{Y}\). We also have one latent variable, namely: the clean-label \(Y\). Under our proposed modelling assumption, a noisy-label of a data instance can be generated as follows: * sample an instance from the pool of data \(p(X)\), i.e.,: \(x\sim p(X)\) * sample a clean-label from the clean-label distribution: \(y\sim\mathrm{Cat}(Y;f_{\theta_{y}}(x))\) * sample a noisy-label from the noisy-label distribution: \(\hat{y}\sim\mathrm{Cat}(\hat{Y};\epsilon\times f_{\theta_{y}}(x)+(1-\epsilon) \times y)\), where \(\mathrm{Cat}(.)\) denotes a categorical distribution, \(f_{\theta_{y}}:\mathcal{X}\rightarrow\Delta_{C-1}\) and \(f_{\theta_{y}}:\mathcal{X}\times\Delta_{C-1}\rightarrow\Delta_{C-1}\) denote two classifiers for the clean-label \(Y\) and noisy-label \(\hat{Y}\), respectively, with \(\Delta_{C-1}=\{s:s\in[0,1]^{C}\wedge\mathbf{1}_{C}s=1\}\) being the \((C-1)\)-dimensional probability simplex. Figure 3: Our training algorithm uses an existing noisy-label classifier (e.g., DivideMix [20]) parameterised by \(\theta_{y}\) as the clean-label model \(p(Y|X;\theta_{y})\). The generation of noisy-label (given \(X\) and \(Y\)) is performed by a model parameterised by noise rate \(\epsilon\) and \(\theta_{\hat{y}}\). The noisy-label classifier relies on a sample-selection mechanism that uses a curriculum \(R(t)=1-\epsilon^{(t)}\). Figure 2: (a) Visual comparison of different \(R(t)\) on CIFAR100 [18] at \(0.5\) IDN [38]: _(i)_ Co-teaching [13] with curricula based on different hyper-parameter \(T_{k}\), where \(R(t)=1-\tau\cdot\min(t/T_{k},1)\) and we manually set \(\tau=\epsilon=0.5\); _(ii)_ S2E [43], where \(R(t)\) is estimated with a bi-level optimisation; and _(iii)_ ours. (b) Comparison between noisy-label robust methods on CIFAR100 [18] at \(0.5\) IDN [38], including DivideMix [20], FINE [17] and InstanceGM [12], without (left, blue) and with (right, orange) integration of our proposed graphical model for estimation of the noise rate \(\epsilon\) and sample selection based on \(\epsilon\). According to the data generation process, \(\epsilon\) corresponds to \(\mathbb{E}_{(x,\hat{y})\sim p(X,\hat{Y})}[P(\hat{y}\neq y|x)]\), which is the label noise rate of the training dataset of interest. Our aim is to infer the parameters \(\theta_{y},\theta_{\hat{y}}\) and \(\epsilon\) from a noisy-label dataset \(\mathcal{D}\) by maximising the following log-likelihood: \[\max_{\theta_{y},\theta_{\hat{y}},\epsilon}\mathbb{E}_{(x_{i},\hat{y}_{i}) \sim\mathcal{D}}\left[\ln p(\hat{y}_{i}|x_{i};\theta_{y},\theta_{\hat{y}}, \epsilon)\right]=\max_{\theta_{y},\theta_{\hat{y}},\epsilon}\mathbb{E}_{(x_{ i},\hat{y}_{i})\sim\mathcal{D}}\left[\ln\sum_{y_{i}}p(\hat{y}_{i},y_{i}|x_{i}; \theta_{y},\theta_{\hat{y}},\epsilon)\right]. \tag{1}\] Due to the presence of the clean-label \(y_{i}\), it is difficult to evaluate the log-likelihood in Eq. (1) directly. We, therefore, employ the _expectation - maximisation_ (EM) algorithm [7] to maximise the log-likelihood. The main idea of the EM algorithm is to _(i)_ construct a tight lower bound of the likelihood in Eq. (1) by estimating the latent variable \(Y\) (known as _expectation step_) and _(ii)_ maximise that lower bound (known as _maximisation step_). 
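To make the generative assumptions at the start of this subsection concrete, a minimal sketch of the three sampling steps is given below. Here `f_clean` and `f_noisy` stand in for the classifiers \(f_{\theta_{y}}\) and \(f_{\theta_{\hat{y}}}\); their names and the two-argument signature of `f_noisy` are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the generative process: sample a clean label from the
# clean-label classifier, then mix the noisy-label classifier's output with
# the clean one-hot label using the noise rate eps to sample the noisy label.
import numpy as np

rng = np.random.default_rng(0)

def sample_noisy_label(x, f_clean, f_noisy, eps, num_classes):
    # y ~ Cat(f_clean(x))
    y = rng.choice(num_classes, p=f_clean(x))
    y_onehot = np.eye(num_classes)[y]
    # yhat ~ Cat(eps * f_noisy(x, y) + (1 - eps) * y)
    mix = eps * f_noisy(x, y_onehot) + (1 - eps) * y_onehot
    yhat = rng.choice(num_classes, p=mix)
    return y, yhat
```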
Formally, let \(q(y_{i}|x,\hat{y};\rho)\) be an arbitrary distribution over a clean-label \(y_{i}\). The evidence lower bound (ELBO) on the log-likelihood in Eq. (1) can be obtained through Jensen's inequality and presented as follows: \[Q(\theta_{y},\theta_{\hat{y}},\epsilon,\rho)=\mathbb{E}_{(x_{i}, \hat{y}_{i})\sim\mathcal{D}}\left[\ln p(\hat{y}_{i}|x_{i};\theta_{y},\theta_{ \hat{y}},\epsilon)-\mathrm{KL}\left[q(y_{i}|x_{i},\hat{y}_{i};\rho)\|p(y_{i}| x_{i},\hat{y}_{i})\right]\right]\\ =\mathbb{E}_{(x_{i},\hat{y}_{i})\sim\mathcal{D}}\left[\mathbb{E} _{q(y_{i}|x_{i},\hat{y}_{i};\rho)}[\ln p(y_{i}|x_{i};\theta_{y})+\ln p(\hat{y} _{i}|x_{i},y_{i};\theta_{\hat{y}},\epsilon))]+\mathbb{H}[q(y_{i}|x_{i},\hat{y }_{i};\rho)]\right], \tag{2}\] where \(\mathrm{KL}[q\|p]\) is the Kullback - Leibler divergence between distributions \(q\) and \(p\), and \(\mathbb{H}(q)\) is the entropy of the distribution \(q\). The EM algorithm is then carried out iteratively by alternating the following two steps: E stepWe maximise the ELBO in Eq. (2) w.r.t. \(q(y_{i}|x_{i},\hat{y}_{i};\rho)\). Theoretically, such optimisation results in \(\mathrm{KL}\left[q(y_{i}|x_{i},\hat{y}_{i};\rho)\|p(y_{i}|x_{i},\hat{y}_{i}) \right]=0\) or \(q(y_{i}|x_{i},\hat{y}_{i};\rho)=p(y_{i}|x_{i},\hat{y}_{i})\). This is equivalent to estimating the posterior of the clean-label \(y_{i}\) given noisy-label data \((x_{i},\hat{y}_{i})\). Obtaining the exact posterior \(p(y_{i}|x_{i},\hat{y}_{i})\) is, however, intractable for most deep-learning models. To mitigate such an issue, we follow the _variational EM_ approach [27] by employing an approximate posterior \(q(y_{i}|x_{i},\hat{y}_{i};\rho^{(t)})\) that is the closest to the true posterior \(p(y_{i}|x_{i},\hat{y}_{i})\), where: \[\rho^{(t)}=\arg\max_{\rho}Q(\theta_{y}^{(t)},\theta_{\hat{y}}^{(t)}, \epsilon^{(t)},\rho), \tag{3}\] with the superscript \({}^{(t)}\) denoting the parameters at the \(t\)-th iteration. Although this results in a non-tight lower bound of the log-likelihood in Eq. (1), it does increase the variational bound \(Q\). M stepWe maximise the ELBO in Eq. (2) w.r.t. \(\theta_{y},\theta_{\hat{y}}\) and \(\epsilon\): \[\theta_{y}^{(t+1)},\theta_{\hat{y}}^{(t+1)},\epsilon^{(t+1)}=\arg\max_{\theta_ {y},\theta_{\hat{y}},\epsilon}Q\left(\theta_{y},\theta_{\hat{y}},\epsilon, \rho^{(t)}\right). \tag{4}\] The estimated noise rate \(\epsilon\) can then be integrated into certain noisy-label algorithms to train the models of interest as mentioned in Section 1. Despite its effectiveness shown in Fig. 0(a), such a two-phase process might be inefficient. In addition, the inference of noise rate \(\epsilon\) might associate with the identifiability issue when estimating the clean-label \(Y\)[23], i.e., there exists multiple sets of \(\rho\) and \(\theta_{y}\), where each set can explain the observed noisy-label data equally well. Such issues are addressed in the following subsection. ### Sample Selection for SOTA Models to Address the Identifiability Issue The identifiability issue when inferring the clean-label \(Y\) from noisy-label data \((X,\hat{Y})\) can be mitigated either by acquiring multiple noisy-labels [23] or introducing additional constraints, such as _small loss hypothesis_[13] or FINE [17]. 
Since requesting additional noisy-labels per training sample is not always available, we follow the latter approach by imposing a constraint, denoted as \(L(\theta_{y},\epsilon^{(t)})\), over \(\theta_{y}\) in the M step via a sample selection approach based on the estimated noise rate \(\epsilon^{(t)}\). Formally, we propose a new curriculum when selecting samples as follows: \[R(t)=1-\epsilon^{(t)}. \tag{5}\] In the simplest case, such as Co-teaching [13] or FINE [17], the constraint for \(\theta_{y}\) can be written as: \[L(\theta_{y},\epsilon^{(t)})=\sum_{(x_{i},\hat{y}_{i})\in\mathcal{S}_{\text{ clean}}}\mathrm{KL}\left[\mathrm{Cat}(Y;\hat{y})\|\mathrm{Cat}(Y;f_{\theta_{y}}(x_{i})) \right], \tag{6}\] where: \[\mathcal{Z}_{\mathrm{sorted}}=\mathrm{sort}(z_{1},z_{2},\ldots,z_{N})\] \[\mathcal{S}_{\mathrm{clean}}=\left\{(x_{i},\hat{y}_{i}):(x_{i},\hat {y}_{i})\in\mathcal{D}\wedge z_{i}\in\mathcal{Z}_{\mathrm{sorted}}\wedge i\leq \lfloor R(t)\times N\rfloor\right\},\quad\mathcal{S}_{\mathrm{noisy}}=\mathcal{D }\setminus\mathcal{S}_{\mathrm{clean}}, \tag{7}\] with \(R(t)\) defined in (5), \(\mathrm{sort}(.)\) representing a function that sorts the set of criterion (loss [20; 52; 51], distance to the largest eigenvectors [17], or KNN scores [8]) values \(\{z_{1},z_{2},\ldots,z_{N}\}\) in ascending order and \(\lfloor.\rfloor\) denoting the floor function. Intuitively, the loss in Eq. (6) is simply the cross-entropy loss on the \(\lfloor R(t)\times N\rfloor\) clean samples selected based on the estimated noise rate \(\epsilon^{(t)}\). One can also extend to other SOTA models by replacing the loss \(L\) accordingly. For example, if DivideMix is used as a base model to constrain \(\theta_{y}\), \(L\) will include two addition terms: loss on un-labelled data and regularisation using mixup [48]. ### Training and Testing Given the sample selection approach in Section 2.2, the training of the model is slightly modified to include the loss for the SOTA model from Eq. (6). The M step in Eq. (4) is, therefore, re-defined as: \[\theta_{y}^{(t+1)},\theta_{\hat{y}}^{(t+1)},\epsilon^{(t+1)}=\arg\max_{\theta _{y},\theta_{\hat{y}},\epsilon}Q\left(\theta_{y},\theta_{\hat{y}},\epsilon, \rho^{(t)}\right)-\lambda\,L(\theta_{y},\epsilon^{(t)}), \tag{8}\] where \(\lambda\) is a hyper-parameter and \(L\) is defined similar to Eq. (6). The training procedure is summarised in Algorithm 1 and visualised in Fig. 3. In the implementation, we integrate the proposed method into existing models, such as DivideMix [20] or FINE [17]. Note that the clean-label classifier \(f_{\theta_{y}}(.)\) is also the clean classifier of the base model. ``` 1:procedureNoise rate estimation and integration(\(\mathcal{D},T,\lambda\)) 2:\(\triangleright\)\(\mathcal{D}=\{(x_{i},\hat{y}_{i})\}_{i=1}^{N}\): training set with noisy-label data 3:\(\triangleright\)\(T\): number of epochs 4:\(\triangleright\)\(\lambda\): a hyper-parameter 5:Initialise\(\theta_{y}^{(1)},\theta_{\hat{y}}^{(1)},\epsilon^{(1)}\) and \(\rho^{(0)}\) 6:\(\theta_{y}^{1}\leftarrow\textsc{Warm up}(\mathcal{D},\theta_{y}^{1})\) 7:\(t\gets 0\) 8:for\(n_{\mathrm{epoch}}=1:T\)do 9:for each mini-batch \(\mathcal{S}\) in shuffle(\(\mathcal{D}\))do 10:\(t\gets t+1\) 11:\(\mathcal{S}_{\mathrm{clean}},\mathcal{S}_{\mathrm{noisy}}\leftarrow\textsc{ Sample Selection}(\mathcal{S},\theta_{y}^{(t)},\epsilon^{(t)})\)\(\triangleright\)Eq. 
(7) 12:\(\rho^{(t)}\leftarrow\textsc{variational E step}(\mathcal{S},\theta_{y}^{(t)}, \theta_{\hat{y}}^{(t)},\epsilon^{(t)},\rho^{(t-1)})\)\(\triangleright\)Eq. (3) 13:\(\theta_{y}^{(t+1)},\theta_{\hat{y}}^{(t+1)},\epsilon^{(t+1)}\leftarrow\textsc{M step}(\mathcal{S}_{\mathrm{clean}},\mathcal{S}_{\mathrm{noisy}},\theta_{y}^{(t)}, \theta_{\hat{y}}^{(t)},\epsilon^{(t)},\rho^{(t)},\lambda)\)\(\triangleright\)Eq. (8) 14:return\(\theta_{y}\)\(\triangleright\) parameter of the clean-label classifier ``` **Algorithm 1** Proposed noisy-label learning algorithm that relies on the estimation of noise rate \(\epsilon\) to build a sample selection curriculum. ## 3 Related Work DNNs suffer from overfitting when trained with noisy-labels [46], resulting in poor generalisation [22; 49]. To address this issue, several techniques have been developed, including noise-robust loss functions [24], noise-label sample selection [20; 17; 51], and re-labelling [8] followed by sample selection [8]. The most challenging and realistic type of label noise is IDN [34], where most methods have a training stage based on sample selection techniques [20; 52]. These sample selection techniques separate clean and noisy-label samples, where clean samples are treated as labelled, and noisy samples are discarded [13; 14; 43] or treated as unlabelled samples [20; 52; 29; 12; 6] for semi-supervised learning [2]. A major limitation of these approaches is the need to define a curriculum on how to select clean and noisy-label samples during training, which as discussed in Section 1, is either pre-determined [14; 13] or learned from a set of pre-determined basis functions [43]. Additionally, Xiao et al. [39] discuss the effectiveness of estimating the noise type, but no further investigation has been devoted to estimating the noise rate. In general, we notice a research gap regarding the use of noise rate estimation to be used by sample selection techniques. The estimation of noise rate affecting the transition between pairs of labels has received considerable attention [38; 5; 53; 4]. Still, they portray comparatively lower accuracy results for large real-world datasets or for IDN problems [38; 34]. These label transition approaches suffer from identifiability issues [11], where any clean-label distribution assignment is acceptable as long as the distribution of observed labels can be reconstructed [10]. This makes the identification of the true underlying clean-labels challenging. One solution is to contemplate the use of multiple annotations to help analyse the agreements and disagreements for improved identification of clean patterns [25; 11]. An alternate technique to handle IDN problems is based on graphical models representing the relationship between various observed and latent variables [12; 45; 39]. Garg et al. [12], Yao et al. [45] use graphical models which rely on a generative approach to generate noisy-labels from the respective image features and latent clean-labels. Nonetheless, previous graphical models fail to consider the underlying noise rate parameter while modelling. Our work is the first graphical model approach to estimate the noise rate of the dataset. An important point of our approach is that it can be easily integrated with existing SOTA noisy-label learning methods to improve classification accuracy. 
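As a concrete illustration of the curriculum in Eqs. (5)-(7) (line 11 of Algorithm 1), the sketch below selects the clean subset from per-sample criterion values given the current noise-rate estimate; the function and variable names are illustrative, not the authors' implementation.

```python
# Minimal sketch of noise-rate-based sample selection: with R(t) = 1 - eps_t,
# the floor(R(t) * N) samples with the smallest criterion values (e.g. per-sample
# loss) are treated as clean and the rest as noisy.
import torch

def select_clean(losses: torch.Tensor, eps_t: float):
    n = losses.numel()
    n_clean = int((1.0 - eps_t) * n)   # |S_clean| = floor(R(t) * N)
    order = torch.argsort(losses)      # ascending criterion values
    return order[:n_clean], order[n_clean:]

# Example: with an estimated noise rate of 0.5, the half of the batch with
# the smallest losses is kept as clean.
clean_idx, noisy_idx = select_clean(torch.rand(128), eps_t=0.5)
```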
## 4 Experiments We show extensive experiments in several noisy-label synthetic benchmarks with CIFAR100 [18], and real-world benchmarks, including CNWL's red mini-ImageNet [15], Clothing1M [39] and mini-WebVision [21]. Section 4.1 describes implementation details. We evaluate our approach by plugging SOTA models into \(p(y|x;\theta_{y})\), defined in Section 2, with results being shown in Section 4.2, and ablation studies in Section 4.3. Please refer to Appendix A.1 for detailed dataset information. ### Implementation All methods are implemented in Pytorch [30] and use one NVIDIA RTX 3090 card. As mentioned in the original papers, hyperparameter settings are kept the same for the baselines used in the proposed algorithm. All classifier architectures are also kept the same as the baseline models. A random initialisation of noise rate parameter \(\epsilon\) with the sigmoid as its activation function is employed for all experiments to maintain the fairness of the comparisons with other approaches. The value of \(\lambda\) in Eq. (8) is set to \(1\) for all the cases. We integrate many SOTA approaches [20; 52; 12; 17; 8] into our graphical model, as explained in Section 2.3. For CIFAR100 [18] with IDN [38], we integrate DivideMix [20] and InstanceGM [12] into our model, given their superior performance across various noise rates. Additionally, we also use F-Dividemix (Fig. 2b) from FINE [17]. Moreover, for red mini-ImageNet [15], we test our proposed approach with and without DINO self-supervision [3]. For the implementation without self-supervision, we use DivideMix [20] and InstanceGM [12], and for the self-supervised version, we only use InstanceGM [12]. The models for Clothing1M [39] are trained using DivideMix [20] and SSR [8]. Furthermore, for mini-WebVision [20] and ImageNet [19], we test our model with C2D [52]. Additional implementation details are present in Appendix A.2. ### Comparison This section compares our approach to the dataset with the IDN settings in Section 4.2.1 and noisy real-world settings in Section 4.2.2. #### 4.2.1 Synthetic Instance-Dependent Noise The comparison between various baselines and our proposed work on CIFAR100 [18] with IDN [38] is shown in Table 1 with the noise rate ranging from \(20\%\) to \(50\%\). It is worth noting that using our proposed model with DivideMix [20] and InstanceGM [12] improves their performance in \(90\%\) cases. Table 1 also shows the final noise rate \(\epsilon\) estimated by our model, where the actual noise rates are displayed in the table's header. It's worth noting that the estimated noise rate is always reasonable to the actual rate. #### 4.2.2 Real-World Noise We have also evaluated our proposed method on various real-world noisy settings regarding test accuracy and estimated noise rates \(\epsilon\) in Tables 2 to 4. Similarly to the synthetic IDN in Section 4.2.1, the results show that existing noisy-label robust methods can be easily integrated into our model to outperform current SOTA results for real-world noisy-label datasets. Table 2 shows the results on red mini-ImageNet using two configurations, including cases without self-supervision (top part of the table) and with self-supervision (bottom part of the table). The self-supervision DINO pre-training [3] relies only on images from red mini-ImageNet to enable a fair comparison with existing baselines [6; 12] Results from Table 2 demonstrate that our approach improves the performance of SOTA methods by a considerable margin in all cases. 
In fact, using estimated noise rate \(\epsilon\) while training InstanceGM [12] without self-supervision shows better performance than existing self-supervised baselines at \(0.4\) and \(0.8\) noise rates. Moreover, DivideMix [20], SSR [8] and CC [51] are used as a baselines for Clothing1M [39] as shown in Table 3 and further explanation is present in **??** and **??**. Furthermore, C2D [52] is used as a baseline for mini-WebVision [20], shown in Table 4 where validation is performed on ImageNet [19]. It is worth noting that results improve the most baselines and exhibit competitive performance. \begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{4}{c}{**Noise Rates - IDN**} \\ \cline{2-5} & **0.2** & **0.3** & **0.4** & **0.5** \\ \hline CE [45] & 30.42 & 24.15 & 21.45 & 14.42 \\ ParT [38] & 65.33 & 64.56 & 59.73 & 56.80 \\ kMEIDTM [5] & 69.16 & 66.76 & 63.46 & 59.18 \\ \hline DivideMix [20] & 77.03 & 76.33 & 70.80 & 58.61 \\ **DivideMix-Ours** & **77.42** & **77.21** & **72.41** & **64.02** \\ \hline InstanceGM [12] & **79.69** & 79.21 & 78.47 & 77.19 \\ **InstanceGM-Ours** & 79.61 & **79.40** & **79.52** & **77.76** \\ \hline \hline \end{tabular} \end{table} Table 1: _(left)_ Test accuracy % and _(right)_ final estimated noise rate \(\epsilon\) on CIFAR100 [18] under different IDN [39]. Other models’ results are from [12; 5]. Here, we integrate DivideMix [20] and InstanceGM [12] into our proposed model. \begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{4}{c}{**Actual noise rate**} \\ \cline{2-4} & **0.4** & **0.6** & **0.8** \\ \hline CE [40] & 42.70 & 37.30 & 29.76 \\ MixUp [48] & 46.40 & 40.58 & 33.58 \\ MentorMix [15] & 47.14 & 43.80 & 33.46 \\ FaMUS [40] & 51.42 & 45.10 & 35.50 \\ \hline DivideMix [20] & 46.72 & 43.14 & 34.50 \\ **DivideMix-Ours** & **50.70** & **45.11** & **37.44** \\ \hline InstanceGM [12] & 52.24 & 47.96 & 39.62 \\ **InstanceGM-Ours** & **56.61** & **51.40** & **43.83** \\ \hline \hline \multicolumn{4}{l}{**With self-supervised learning**} \\ \hline PropMix [6] & 56.22 & 52.84 & 43.42 \\ \hline InstanceGM-SS [12] & 56.37 & 53.21 & 44.03 \\ **InstanceGM-SS-Ours** & **58.29** & **53.60** & **45.47** \\ \hline \hline \end{tabular} \end{table} Table 2: _(left)_ Test accuracy % and _(right)_ final estimated noise rate \(\epsilon\) for red mini-ImageNet [15]. Other methods’ results are reported in [12; 40]. We present the results with and without self-supervision [3]. We integrate DivideMix [20] and InstanceGM [12] into our model, with the latter tested with and without self-supervision. ### Ablation We show an ablation study _(left)_ and training time _(right)_ of our approach in Table 5 on CIFAR100 [18] at \(0.5\) IDN [38] using DivideMix [20] as baseline. Initially, the accuracy result of baseline DivideMix under original settings is \(58.61\%\). In the second row, we fix the noise rate \(\epsilon\) at \(0.5\) for DivideMix's sample selection, as explained in Section 2.3 (without updating \(\epsilon\)), then the results improved to \(64.44\%\), which is the ideal case that motivated our work (this is an ideal case because that would be a perfect noise rate estimation). In the third case, we use the proposed graphical model with pre-trained DivideMix [20] that shows an accuracy of \(52.31\%\). 
In the next case, the proposed graphical model is trained together with DivideMix [20] without considering the estimated noise rate \(\epsilon\) for sample selection, which results in an accuracy of \(56.30\%\). In the last row, we show the training of the proposed model with DivideMix, together with the estimation of noise rate \(\epsilon\), and the selection of samples based on that, with \(\approx 8\%\) accuracy improvement, which is very close to our ideal case (second row). ## 5 Conclusion In this paper, we demonstrate the importance of estimating the label noise rate to build a novel noisy-label sample selection curriculum. This estimation is a process within the training of our proposed graphical model that can effortlessly be integrated with SOTA noisy-label robust learning approaches to boost their classification accuracy on synthetic and real-world benchmarks that include CIFAR100 [18], red mini-ImageNet [15], Clothing1M [39], mini-WebVision [20] and ImageNet [19]. This paper will encourage researchers and practitioners in the field to explore the noisy-label noise rate estimation for sample selection approaches. We aim to explore other (and perhaps more effective) ways to estimate noise rates. We also plan to investigate whether the estimated noise rate could be leveraged for other purposes beyond sample selection, such as for the study of theoretical aspects of noisy-label learning. We have not detected any negative societal impact but we anticipate many \begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Dataset**} & \multicolumn{2}{c}{**mini-WebVision**} & \multicolumn{2}{c}{**ImageNet**} & \multirow{2}{*}{**Estimated noise rate**} \\ \cline{2-2} \cline{4-4} & **Top-1** & **Top-5** & & **Top-1** & **Top-5** \\ \hline DivideMix [20] & 77.32 & 91.64 & 75.20 & 91.64 & \\ BtR [33] & 80.88 & 92.76 & 75.96 & 92.20 & \\ SSR [8] & **80.92** & 92.80 & 75.76 & 91.76 & \\ \hline C2D [52] & 79.42 & 92.32 & 78.57 & 93.04 & \\ **C2D-Ours** & 80.20 & **92.82** & **79.16** & **93.12** & **0.43** \\ \hline \hline \end{tabular} \end{table} Table 4: Test accuracy (%) and final estimated noise rate \(\epsilon\) on mini-WebVision [20] and validation on ImageNet [19]. We integrate Contrast-to-Divide (C2D) [52] into our model, whilst **C2D-Ours** is our proposed approach. ImageNet [19] is only considered for validation. \begin{table} \begin{tabular}{l c c} \hline \hline **Method** & **Test accuracy (\%)** & **Estimated noise rate** \\ \hline ELR+ with C2D [52] & 74.58 & \\ AugDesc [29] & 75.11 & \\ \hline DivideMix [20] & 74.32 & \\ **DivideMix-Ours** & 74.41 & 0.41 \\ \hline SSR (class-imbalance) [8] & 74.12 & \\ **SSR-Ours** & 74.20 & 0.42 \\ \hline **CC** [8] & 75.24 & \\ **CC-Ours** & **75.31** & 0.41 \\ \hline \hline \end{tabular} \end{table} Table 3: Test accuracy (%) of competing methods, and final estimated noise rate \(\epsilon\) on Clothing1M [39]. Also, we have not considered competing models that rely on a clean set whilst training. We integrate DivideMix [20], SSR [8] and CC [51] into our model. DivideMix [20] and CC [51] shows the locally reproduced results. Estimated noise rate by [39] is \(0.385\). Our results are within \(1\%\) of the accuracy of the top approaches. positive societal impacts provided by estimating the noise rate and alleviating biases in the noisy data. For example, it can help to reduce the arduous annotation task for maintaining the data quality. 
We hope that this work will help other researchers to study the importance of estimating and using the label noise rate in the design of new noisy-label robust approaches.
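As a concrete illustration of the noise-rate-driven sample selection studied above, the following is a minimal sketch in which the estimated rate \(\epsilon\) fixes the fraction of small-loss samples kept as the labelled (clean) set; the small-loss criterion and the function names are illustrative assumptions, not the exact procedure integrated into DivideMix or InstanceGM.

```python
import torch

def select_clean_samples(per_sample_loss: torch.Tensor, noise_rate: float) -> torch.Tensor:
    """Keep the (1 - noise_rate) fraction of samples with the smallest loss as likely-clean."""
    n = per_sample_loss.numel()
    num_clean = max(1, int(round((1.0 - noise_rate) * n)))
    clean_idx = torch.argsort(per_sample_loss)[:num_clean]   # small-loss samples first
    mask = torch.zeros(n, dtype=torch.bool)
    mask[clean_idx] = True
    return mask

if __name__ == "__main__":
    losses = torch.rand(1000)          # stand-in for per-sample training losses
    estimated_noise_rate = 0.4         # e.g. the epsilon estimated during training
    clean_mask = select_clean_samples(losses, estimated_noise_rate)
    print(clean_mask.sum().item(), "samples kept for the clean set")
```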
2309.14277
SINCERE: Supervised Information Noise-Contrastive Estimation REvisited
The information noise-contrastive estimation (InfoNCE) loss function provides the basis of many self-supervised deep learning methods due to its strong empirical results and theoretic motivation. Previous work suggests a supervised contrastive (SupCon) loss to extend InfoNCE to learn from available class labels. This SupCon loss has been widely-used due to reports of good empirical performance. However, in this work we find that the prior SupCon loss formulation has questionable justification because it can encourage some images from the same class to repel one another in the learned embedding space. This problematic intra-class repulsion gets worse as the number of images sharing one class label increases. We propose the Supervised InfoNCE REvisited (SINCERE) loss as a theoretically-justified supervised extension of InfoNCE that eliminates intra-class repulsion. Experiments show that SINCERE leads to better separation of embeddings from different classes and improves transfer learning classification accuracy. We additionally utilize probabilistic modeling to derive an information-theoretic bound that relates SINCERE loss to the symmeterized KL divergence between data-generating distributions for a target class and all other classes.
Patrick Feeney, Michael C. Hughes
2023-09-25T16:40:56Z
http://arxiv.org/abs/2309.14277v3
# SINCERE: Supervised Information Noise-Contrastive Estimation Revisited ###### Abstract The information noise-contrastive estimation (InfoNCE) loss function provides the basis of many self-supervised deep learning methods due to its strong empirical results and theoretic motivation. Previous work suggests a supervised contrastive (SupCon) loss to extend InfoNCE to learn from available class labels. This SupCon loss has been widely-used due to reports of good empirical performance. However, in this work we suggest that the specific SupCon loss formulated by prior work has questionable theoretic justification, because it can encourage images from the same class to repel one another in the learned embedding space. This problematic behavior gets worse as the number of inputs sharing one class label increases. We propose the Supervised InfoNCE Revisited (SINCERE) loss as a remedy. SINCERE is a theoretically justified solution for a supervised extension of InfoNCE that never causes images from the same class to repel one another. We further show that minimizing our new loss is equivalent to maximizing a bound on the KL divergence between class conditional embedding distributions. We compare SINCERE and SupCon losses in terms of learning trajectories during pretraining and in ultimate linear classifier performance after finetuning. Our proposed SINCERE loss better separates embeddings from different classes during pretraining while delivering competitive accuracy. ## 1 Introduction Self-supervised learning (SSL) has been crucial in creating pretrained computer vision models that can be efficiently adapted to a variety of tasks Jing and Tian (2020); Jaiswal et al. (2021). The conceptual basis for many successful pretraining methods is the instance discrimination task Wu et al. (2018), where the model learns to classify each training image as a unique class. Self-supervised methods solve this task by _contrasting_ different augmentations of the same image with other images, seeking a learned vector representation in which each image is close to augmentations of itself but far from others. Among several possible contrastive losses in the literature Caron et al. (2020); Schroff et al. (2015), one that has seen particularly wide adoption is information noise-contrastive estimation (InfoNCE) loss van den Oord et al. (2019). InfoNCE variants such as MOCO Chen et al. (2021), SimCLR Chen et al. (2020, 2020), and BYOL Grill et al. (2020) have proven empirically effective. The above methods are all for _unsupervised_ pretraining of representations from unlabeled images. To create more effective representations for applications where labeled images are available, we may wish to extend instance discrimination methods so that learned representations are informed by the available class labels. A natural way forward is to contrast images of the same class with images from other classes Schroff et al. (2015). Following the noise contrastive estimation framework Gutmann and Hyvarinen (2010), we assume that images from the same class are drawn from a target distribution while images from other classes come from a noise distribution. Khosla et al. (2020) previously proposed the Supervised Contrastive (SupCon) loss as a _supervised_ extension of the InfoNCE loss. They examined two straightforward ways of averaging an InfoNCE-like loss over image pairs with the same class label. 
Their recommended loss, named SupCon, was chosen because it performed best empirically in top-1 classification on the ImageNet dataset Deng et al. (2009). SupCon loss has been applied to problems such as contrastive open set recognition Xu et al. (2023) and generalized category discovery Vaze et al. (2022). In light of this empirical success, in this work we investigate the theoretical justification for SupCon. We find that the SupCon loss violates InfoNCE's core assumption that the target and noise distributions should be _separated_. Consider a class with at least 3 member images, labeled \(t\), \(p\), and \(q\), as illustrated in Fig. 1. When target image \(t\) is partnered with \(p\), SupCon by construction will push \(t\)'s representation away from \(q\), effectively treating \(q\) as a noise image. This problematic behavior makes it difficult to separate the target and noise distributions in the embedding space. Moreover, the problem will get worse as the number of images belonging to the same target class increases. To resolve this issue, this paper proposes the Supervised InfoNCE REvisited (SINCERE) loss. See Fig. 1 for an illustration of SINCERE's differences from SupCon. Unlike SupCon, our SINCERE loss by definition excludes image \(q\) (and all other members of the same class, like the image labeled \(r\) in Fig. 1) from the noise distribution, thus ensuring the core assumption underlying InfoNCE remains intact. To begin, we offer a first-principles derivation of SINCERE, thereby providing a conceptual justification for this loss function as a well-founded generalization of InfoNCE to supervised learning. We then demonstrate SupCon loss' problematic behavior using both a formal analysis of gradients as well as by investigating the cosine similarity between learned embeddings from different classes. We find that the problematic repulsion that can happen between members of the same class under SupCon is not present when using our SINCERE loss. Instead, as desired, our proposed SINCERE loss better separates the target and noise distributions. Finally, we observe that a linear classifier using SINCERE features maintains the accuracy seen with SupCon loss. Overall, our main contributions are: 1. The SINCERE loss function, which is a drop-in replacement for SupCon loss for representation learning informed by available class labels. The SINCERE loss arises from a derivation that enforces a core assumption of noise-contrastive estimation: images known to be from the target distribution should not be treated as examples of the noise distribution. 2. A proof that SINCERE loss acts as a bound on the KL divergence between target and noise distributions. This bound becomes tighter as the number of data samples used by the loss increases. This bound provides a supervised extension to the information-theoretic bound presented in the original InfoNCE paper. 3. Empirical results showing that SINCERE loss eliminates problematic behavior from SupCon loss, while still delivering competitive accuracy for a fine-tuned linear classifier. SINCERE users can thus anticipate improved separation of target and noise distributions in learned embeddings. Code for reproducing all experiments is available via the GitHub link on the first page of this paper. ## 2 Background ### Noise-Contrastive Estimation Noise-contrastive estimation (NCE) [10] provides a general framework for modeling a target distribution of interest given a set of samples from it. 
This framework utilizes a binary classifier to contrast the target distribution samples with samples from a noise distribution. This noise distribution is an arbitrary distribution different from the target distribution, although in practice the noise distribution must be similar enough to the target distribution to make the classifier learn the structure of the target distribution. Later work maintains the focus on contrasting target and noise distributions while defining these distributions as generating disjoint subsets of a dataset of interest. ### Self-Supervised Contrastive Learning Consider an observed dataset \((\mathcal{X},\mathcal{Y})\) of \(n\) elements, composed of data (e.g. images or features) in set \(\mathcal{X}=(x_{1},x_{2},...,x_{n})\) each paired with one of \(k\) categorical labels \(\mathcal{Y}=(y_{1},y_{2},...,y_{n})\), where \(2\leq k\leq n\). Let integer interval \(\mathcal{I}=\llbracket 1,n\rrbracket\) denote the set of indices for elements in \(\mathcal{X}\) or \(\mathcal{Y}\). Let \(z_{i}\in\mathbb{R}^{d}\) be an embedding vector representation of data element \(x_{i}\) produced by a neural network. Self-supervised contrastive learning pursues an _instance discrimination_ task [11], which involves classifying each point in the dataset as a separate class. Therefore each data point \(x_{i}\) has a unique label \(y_{i}\) and the number of unique labels \(k\) is equal to \(n\). To set up the instance discrimination problem, select index \(t\) as the only member of the target distribution in the dataset. Let the rest of the dataset \(\mathcal{N}_{t}=\mathcal{I}\setminus\{t\}\) be drawn from the noise distribution. Applying NCE produces the Figure 1: _Left:_ Example images in one batch for supervised contrastive learning. Color indicates whether image’s class label matches the target class (green) or a non-target noise class (red). Images \(t\) and \(p\) (light green) are chosen as the image pair that represents the target class for the loss, while \(q\) and \(r\) (dark green) are not. _Right:_ Equations for our new proposed loss (SINCERE) and the previous SupCon by Khosla et al. (2020), when \(t\) is the target image and \(p\) is the chosen partner. Both losses attempt to move embeddings from the target class closer to each other. However, SupCon loss treats \(q\) and \(r\) as though they were from a noise class. This problematically _repels_ the learned embedding of \(t\) from \(q\) and \(r\) despite \(q\) and \(r\) truly belonging to the target class. Our SINCERE loss avoids this problem by not involving \(q\) or \(r\) at all when computing loss for target image \(t\) with partner \(p\). Not shown: both SINCERE and SupCon average over pairs \(t,p\) from the target class. Information Noise-Contrastive Estimation (InfoNCE) loss van den Oord, Li, and Vinyals (2019) \[L_{\text{InfoNCE}}(x_{t},y_{t})=-\log\frac{e^{s(z_{t},y_{t})}}{e^{s(z_{t},y_{t})} +\sum_{i\in\mathcal{N}_{t}}e^{s(z_{i},y_{t})}} \tag{1}\] where \(s(z_{i},y_{j})\) is a classification score function which outputs a scalar score for image \(x_{i}\) under label \(y_{j}\). The loss \(L_{\text{InfoNCE}}(x_{i},y_{j})\) defined above thus calculates the negative log-likelihood that \(x_{i}\) has label \(y_{j}\). More specifically, \(L_{\text{InfoNCE}}(x_{t},y_{t})\) is the negative log-likelihood that \(x_{t}\) is part of the target distribution. The score \(s(z_{t},y_{t})\) is often chosen to be cosine similarity by representing \(y_{t}\) as a vector in embedding space \(z_{t}^{\prime}\)Wu et al. 
(2018); Chen et al. (2020); Chen, Xie, and He (2021). \(z_{t}^{\prime}\) can be produced by embedding a data augmented copy of \(x_{t}\)Le-Khac, Healy, and Smeaton (2020); Chen et al. (2020), embedding \(x_{t}\) via older Wu et al. (2018) or averaged embedding function parameters Chen, Xie, and He (2021), or a combination of these techniques Jaiswal et al. (2021). Rewriting \(L_{\text{InfoNCE}}\) in terms of a data augmented \(z_{i}^{\prime}\) and setting \(s(z_{i},z_{i}^{\prime})=z_{t}\cdot z_{t}^{\prime}/\tau\), with \(\tau\) acting as a temperature hyperparameter, produces the self-supervised contrastive loss proposed by Wu et al. (2018): \[L_{\text{self}}(z_{t},z_{t}^{\prime})=-\log\frac{e^{z_{t}\cdot z_{t}^{\prime} /\tau}}{e^{z_{t}\cdot z_{t}^{\prime}/\tau}+\sum_{i\in\mathcal{N}_{t}}e^{z_{t} \cdot z_{i}/\tau}} \tag{2}\] Throughout, we assume each embedding vector is normalized so the inner product with itself equals one: \(z_{t}\cdot z_{t}=1\). InfoNCE and the subsequent self-supervised contrastive losses cited above are all theoretically motivated by NCE. The larger instance discrimination problem is posed as a series of binary classification problems between instance-specific target and noise distributions. This clear distinction between target and noise underlies our later SINCERE loss. ### Supervised Contrastive Learning (SupCon) Khosla et al. (2020) considers supervised classification where more than one element in \(\mathcal{X}\) is drawn from the target distribution. Let \(\mathcal{T}_{t}=\{i\in\mathcal{I}|y_{i}=y_{t}\}\) be the set of elements drawn from the same target distribution as \(x_{t}\), such that \(|\mathcal{T}_{t}|<|\mathcal{X}|\). The possible set of same-class partners for index \(t\) is \(\mathcal{P}_{t}=\mathcal{T}_{t}\setminus\{t\}\). The set of indices modeled as noise is \(\mathcal{N}_{t}=\mathcal{I}\setminus\mathcal{T}_{t}\). Based on empirical results, Khosla et al. (2020) propose an average of \(L_{\text{self}}\) over \(\mathcal{P}_{t}\) as their supervised contrastive loss \(L_{\text{SupCon}}(z_{t})\): \[\frac{-1}{|\mathcal{P}_{t}|}\sum_{p\in\mathcal{P}_{t}}\log\frac{e^{z_{t}\cdot z _{p}/\tau}}{(\sum_{i\in\mathcal{P}_{t}}e^{z_{t}\cdot z_{i}/\tau})+(\sum_{i\in \mathcal{N}_{t}}e^{z_{t}\cdot z_{i}/\tau})}. \tag{3}\] When \(|\mathcal{P}_{t}|>1\), this loss contains terms from the target distribution in the denominator that are not in the numerator. In contrast, in the self-supervised losses defined above, all target terms appear in both numerator and denominator. The members of \(\mathcal{P}_{t}\) that only appear in the denominator of Eq. (3) are effectively being used as part of the noise distribution, which causes the SupCon loss to penalize similarity between embeddings from the target distribution. This problematic behavior complicates analysis of the loss Graf et al. (2021) and limits the loss' ability to separate embeddings from different classes. ## 3 Method In this section, we develop a new loss we call SINCERE for supervised contrastive learning. We derive and justify our SINCERE loss in Section 3.1, showing how it arises from applying noise-contrastive estimation to a supervised problem via the same principles that justify InfoNCE for the self-supervised case. Next, we cover practical implementation of SINCERE in Sec. 3.2, including complexity analysis. Sec. 3.3 contrasts the gradient of SINCERE with the gradient of SupCon loss. Sec. 3.4 motivates the loss via an information-theoretic bound. Finally, Sec. 
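For concreteness, the following is a minimal PyTorch sketch of the self-supervised loss in Eq. (2) and the SupCon loss in Eq. (3) for a single target embedding. The tensor shapes, function names, and the toy data in the usage stub are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def self_supervised_loss(z_t, z_t_aug, z_noise, tau=0.1):
    """Eq. (2): contrast the augmented view of z_t against noise embeddings."""
    pos = (torch.dot(z_t, z_t_aug) / tau).reshape(1)
    neg = (z_noise @ z_t) / tau                        # similarities to noise embeddings N_t
    log_denom = torch.logsumexp(torch.cat([pos, neg]), dim=0)
    return -(pos.squeeze() - log_denom)

def supcon_loss_single(z_t, z_pos, z_noise, tau=0.1):
    """Eq. (3): every positive appears in the denominator, even when it is not in the numerator."""
    pos_sims = (z_pos @ z_t) / tau                     # same-class embeddings P_t
    neg_sims = (z_noise @ z_t) / tau                   # other-class embeddings N_t
    log_denom = torch.logsumexp(torch.cat([pos_sims, neg_sims]), dim=0)
    return -(pos_sims - log_denom).mean()

if __name__ == "__main__":
    d = 128
    z_t = F.normalize(torch.randn(d), dim=0)
    z_pos = F.normalize(torch.randn(5, d), dim=1)      # 5 same-class partners
    z_noise = F.normalize(torch.randn(20, d), dim=1)   # 20 noise-class embeddings
    print(self_supervised_loss(z_t, z_pos[0], z_noise).item())
    print(supcon_loss_single(z_t, z_pos, z_noise).item())
```

The inclusion of all of \(\mathcal{P}_{t}\) in the SupCon denominator, visible in `supcon_loss_single`, is exactly the behavior revisited in the next section.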
3.5 examines how SINCERE loss relates to other works building on InfoNCE and SupCon losses. ### Derivation and Justification We first establish a "true" data-generating model for both self-supervised and supervised cases. We then show how noise contrastive estimation under this assumed model leads to our proposed SINCERE loss in the supervised case, and reduces to the InfoNCE loss in the self-supervised case. **Model for self-supervised case.** Assume we observe a dataset \(\mathcal{X}\) of \(n\) examples, without any labels. The target class of interest is denoted \(y_{t}\). Exactly one example's data is drawn from the target distribution with pdf function \(p(x|y_{t})\). All other examples are drawn i.i.d. from the noise distribution, with pdf function denoted \(p(x|\neq_{t})\), where symbol \(\neq_{t}\) is shorthand for conditioning on the event that \(Y\neq y_{t}\), which means the instance's label \(Y\) does not match the target class. Let random variable \(S\) indicate the index of the example from the target distribution. The assumed "true" model accounts for both \(S\) and \(\mathcal{X}\) as random variables. First, set \(p(S{=}i|y_{t})\) to uniform over \(i\in\mathcal{I}\). Then, generate the data as \[p(\mathcal{X}|S{=}i,y_{t})=p(x_{i}|y_{t})\prod_{l\neq i}p(x_{l}|\neq_{t}) \tag{4}\] **Lemma 1**.: _Likelihood for Self-Supervised NCE van den Oord, Li, and Vinyals (2019). For any index \(i\in\mathcal{I}\), the probability that \(x_{i}\) was drawn from the target, given that there is only one sample in \(\mathcal{X}\) drawn from the target, is_ \[p(S=i|\mathcal{X},y_{t}) =\frac{p(x_{i}|y_{t})\prod_{l\neq i}p(x_{l}|\neq_{t})}{\sum_{j\in \mathcal{I}}p(x_{j}|y_{t})\prod_{l\neq j}p(x_{l}|\neq_{t})} \tag{5}\] \[=\frac{\frac{p(x_{i}|y_{t})}{p(x_{i}|\neq_{t})}}{\frac{p(x_{i}|y_ {t})}{p(x_{i}|\neq_{t})}+\sum_{j\in\mathcal{N}_{t}}\frac{p(x_{j}|y_{t})}{p(x_{ j}|\neq_{t})}}. \tag{6}\] Proof: _Bayes theorem produces the first formula given the joint \(p(S,\mathcal{X}|y_{t})\) defined in and above Eq. 4. The second formula reduces via algebra, recalling \(\mathcal{I}=\mathcal{N}_{t}\cup\{i\}\)._ **Model for supervised case.** Suppose multiple elements of \(\mathcal{X}\) are drawn from the target class of interest \(y_{t}\). We assume known the indices \(\mathcal{P}_{t}\) of all but one target example. Let random variable \(S\in\mathcal{I}\setminus\mathcal{P}_{t}\) indicate the index of the final example of the target class. The remaining indices \(\mathcal{N}_{t}\) are assumed from the noise. The assumed "true" model is keeps \(p(S=i|\mathcal{P}_{t},y_{t})\) uniform over \(\mathcal{I}\setminus\mathcal{P}_{t}\), then generates data as \[p(\mathcal{X}|S=i,\mathcal{P}_{t},y_{t})=p(x_{i}|y_{t})\prod_{p\in \mathcal{P}_{t}}p(x_{p}|y_{t})\prod_{j\in\mathcal{N}_{t}}p(x_{j}|\neq_{t}) \tag{7}\] **Lemma 2**.: _Likelihood for Supervised NCE. Given an \(\mathcal{X}\) from the model defined above with target \(y_{t}\) and known indices \(\mathcal{P}_{t}\), the probability that some index \(i\in\mathcal{I}\setminus\mathcal{P}_{t}\) is the final target sample is_ \[p(S=i|\mathcal{P}_{t},\mathcal{X},y_{t})=\frac{\frac{p(x_{i}|y_{t})}{p(x_{i}| \neq_{t})}}{\frac{p(x_{i}|y_{t})}{p(x_{i}|\neq_{t})}+\sum_{j\in\mathcal{N}_{t}} \frac{p(x_{i}|y_{t})}{p(x_{i}|\neq_{t})}} \tag{8}\] Proof: _We first derive an expression for the likelihood of all indices of the target class: \(p(S=i,\mathcal{P}_{t}|\mathcal{X},y_{t})\) from the joint in and above Eq. (7). 
Standard probability operations (sum rule, product rule) then allow obtaining the desired \(p(S=i|\mathcal{P}_{t},\mathcal{X},y_{t})\) For details, see App. B._ Tractable model.In practice, we will not know the true density functions for target or noise distributions. Instead, we can build an alternative tractable model of random variable \(S\in\mathcal{I}\setminus\mathcal{P}_{t}\), indicating the index of the single unknown member of the target class. Let neural net \(f_{\theta}\) map any input data \(x_{i}\) to a strictly positive value. Then, our tractable model for \(S\) given data \(\mathcal{X}\) and known class members \(\mathcal{P}_{t}\) is \[p_{\theta}(S=i|\mathcal{P}_{t},\mathcal{X},y_{t})=\frac{f_{\theta}(x_{i})}{f_ {\theta}(x_{i})+\sum_{j\in\mathcal{N}_{t}}f_{\theta}(x_{j})}. \tag{9}\] Suppose we can observe many samples of \(\mathcal{X},\mathcal{P}_{t},S\) from the true model in Eq. (7). We can fit \(f_{\theta}\) by minimizing the following _idealized_ loss \[L_{\text{SINCERE}}(\theta)=\mathbb{E}_{\mathcal{X},\mathcal{P}_{t},S\sim p_{ \text{true}}}\left[-\log p_{\theta}(S|\mathcal{P}_{t},\mathcal{X},y_{t})\right] \tag{10}\] This loss, which we call SINCERE, provides a principled way to fit a tractable neural model \(f_{\theta}\) to identify the last remaining member of a target class when given other class member indices \(\mathcal{P}_{t}\). Minimizing the loss has an equivalent interpretation as maximizing the log likelihood of \(S\) under the tractable model. The following two propositions justify the chosen form of function \(f\) in Eq. (9) and loss \(L\) in Eq. (10). We emphasize that our two-proposition justification for the loss holds for both the supervised case that is our main focus, as well for the self-supervised case (where \(\mathcal{P}_{t}\) is the empty set). This justification can be viewed as a formalization of the arguments in van den Oord, Li, and Vinyals (2019) that has been extended to handle the supervised case in a principled way. **Proposition 3.1**.: _If \(f\) is sufficiently flexible, there exists parameter \(\theta^{*}\) such that the tractable model can match the true likelihood of \(S\). Proof: Fix \(k>0\), then set \(\theta^{*}\) such that_ \[f_{\theta^{*}}(x)=k\frac{p(x|y_{t})}{p(x|\neq_{t})},\;\text{for all possible}\;x. \tag{11}\] _This implies \(p_{\theta^{*}}(S{=}i|\mathcal{P}_{t},\mathcal{X},y_{t})=p_{true}(S{=}i| \mathcal{X},\mathcal{P}_{t},y_{t})\) by construction for all valid values of \(i\in\mathcal{I}\setminus\mathcal{P}_{t}\)._ **Proposition 3.2**.: _The truth-matching parameter \(\theta^{*}\) is a minimizer of the SINCERE loss \(L_{\text{SINCERE}}(\theta)\) in Eq. (10). Proof sketch: We recognize our loss minimization objective in Eq. (10) as equivalent to maximizing the log likelihood \(p_{\theta}\) under samples from the true model. Using the theory of maximum likelihood estimation under possible model misspecification (White, 1982; Fan, 2016), we can view this as minimizing the KL-divergence \(D_{\text{KL}}(p_{true}(S|\mathcal{P}_{t},\mathcal{X},y_{t})||p_{\theta}(S| \mathcal{P}_{t},\mathcal{X},y_{t}))\). KL-divergence is minimized when its two arguments are equal, and we've shown that \(\theta^{*}\) can match the truth in Prop 3.1. 
Thus, setting \(\theta=\theta^{*}\) will attain the optimal loss._ Comparison to SupCon.Attempting to translate SupCon loss into the noise-contrastive paradigm suggests that it assigns probability to the data point at index \(i\) out of all possible data points via \[\frac{\frac{p(x_{i}|y_{t})}{p(x_{i}|\neq_{t})}}{\frac{p(x_{i}|\neq_{t})}{p(x_{i }|\neq_{t})}+\sum_{p\in\mathcal{P}_{t}}\frac{p(x_{p}|y_{t})}{p(x_{p}|\neq_{t}) }+\sum_{m\in\mathcal{N}_{t}}\frac{p(x_{n}|y_{t})}{p(x_{n}|\neq_{t})}}.\] We emphasize that this does _not_ correspond to a principled derivation from a coherent probabilistic model. In contrast, our derivation of SINCERE follows directly from the model in Eq. (7). Furthermore, this framing of SupCon makes clear that it penalizes similarity between embeddings from the target distribution, which results in the problematic within-class repulsion behavior described in Fig. 1. We formalize this analysis later in Sec. 3.3. ### SINCERE Loss in Practice To compute SINCERE loss in practice, we take the expectation over stochastically-sampled batches \((\mathcal{X}_{b},\mathcal{Y}_{b})\) of fixed size \(n\) from a large labeled dataset \((\mathcal{X},\mathcal{Y})\). Each data point in the mini-batch is reduced to an embedding vector representation by a neural network with weights \(\theta\). Then, each embedding is treated in turn as the target example \(z_{t}\) for one computation of the SINCERE loss. Overall, we fit neural net weights \(\theta\) by minimizing this expected loss over batches: \[L(\theta) =\mathbb{E}_{\mathcal{X}_{b},\mathcal{Y}_{b}}\left[\sum_{t=1}^{n} \frac{L_{\text{SINCERE}}(z_{t})}{n}\right], \tag{12}\] \[L_{\text{SINCERE}}(z_{t}) =\frac{-1}{|\mathcal{P}_{t}|}\sum_{p\in\mathcal{P}_{t}}\log\frac{ e^{z_{t}\cdot z_{p}/\tau}}{e^{z_{t}\cdot z_{p}/\tau}+\sum_{j\in\mathcal{N}_{t}}e^{z_{t} \cdot z_{j}/\tau}}.\] Here, \(\mathcal{P}_{t}\) defines all data points other than \(t\)_in the current batch_ that share class label \(y_{t}\). Similarly, \(\mathcal{N}_{t}\) defines data points in the current batch with any other class label. Our implementation of SINCERE loss uses the cosine similarity function proposed by Wu et al. (2018), although other choices of similarity functions may be used. By averaging over the elements of \(\mathcal{P}_{t}\) we can nonparametrically represent the target class \(y_{t}\) and encourage \(z_{t}\) to have similar embeddings as its fellow members. However, no member of \(\mathcal{P}_{t}\) ever appears in the denominator without also appearing in the numerator. This avoids any "repulsion" between two members of the same class in the embedding space seen with SupCon loss. Intuitively, our SINCERE loss restores NCE's assumption that the input used in the numerator belongs to the target distribution while all other inputs in the denominator belong to the noise distribution. Runtime and memory complexity.We emphasize that SINCERE's complexity exactly matches SupCon's complexity in both speed and memory. Given a batch of \(n\) data points, each with a \(d\)-dimensional embedding, SINCERE loss can be computed in \(O(n^{2}d)\) time, with quadratic complexity arising due to need for computation of dot products between many pairs \(z_{t},z_{j}\). An implementation that was memory sensitive could be done with \(O(nd)\) memory, which is the cost of storing all embedding vectors. Our implementation has memory cost of \(O(n^{2}+nd)\), as we find computing all \(n^{2}\) pairwise similarities at once has speed advantages due to vectorization. 
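As a concrete illustration of the vectorized computation described above, here is a minimal batched PyTorch sketch of Eq. (12); it assumes L2-normalized embeddings and integer class labels, and is a simplified illustration rather than the released implementation.

```python
import torch
import torch.nn.functional as F

def sincere_loss(z, labels, tau=0.1):
    """Batched SINCERE loss (Eq. 12). z: (n, d) unit-norm embeddings, labels: (n,) class ids."""
    n = z.shape[0]
    sims = (z @ z.T) / tau                                     # all n^2 pairwise similarities
    same_class = labels.unsqueeze(0) == labels.unsqueeze(1)    # (n, n) same-label indicator
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (same_class & ~eye).float()                     # P_t for every target t
    neg_mask = (~same_class).float()                           # N_t for every target t

    # Denominator for each (t, p): e^{s(t,p)} + sum over the noise set of e^{s(t,j)}.
    neg_exp_sum = (sims.exp() * neg_mask).sum(dim=1, keepdim=True)   # (n, 1)
    log_prob = sims - torch.log(sims.exp() + neg_exp_sum)            # entry (t, p)

    # Average -log p over the positives of each target, then over targets that have positives.
    num_pos = pos_mask.sum(dim=1)
    per_target = -(log_prob * pos_mask).sum(dim=1) / num_pos.clamp(min=1.0)
    return per_target[num_pos > 0].mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    z = F.normalize(torch.randn(512, 128), dim=1)
    labels = torch.randint(0, 10, (512,))
    print(sincere_loss(z, labels).item())
```

The explicit \(n\times n\) similarity matrix in this sketch realizes the \(O(n^{2}d)\) time and \(O(n^{2}+nd)\) memory profile discussed above.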
In our experiments with a batch size of 512, we find the runtime of computing embeddings far exceeds the runtime of computing SINCERE given embeddings. ### Analysis of Gradients We study the gradients of both SINCERE and SupCon to gain additional understanding of their relative properties. The gradient of the SINCERE loss with respect to \(z_{t}\) is \[\frac{\delta}{\delta z_{t}}L_{\text{SINCERE}}(z_{t})=\frac{1}{\tau|\mathcal{P}_{t}|}\sum_{p\in\mathcal{P}_{t}}g_{p}, \tag{13}\] \[g_{p}\triangleq z_{p}\left(\frac{e^{z_{t}\cdot z_{p}/\tau}}{\sum_{i\in\mathcal{N}_{t}\cup\{p\}}e^{z_{t}\cdot z_{i}/\tau}}-1\right)+\frac{\sum_{n\in\mathcal{N}_{t}}z_{n}e^{z_{t}\cdot z_{n}/\tau}}{\sum_{i\in\mathcal{N}_{t}\cup\{p\}}e^{z_{t}\cdot z_{i}/\tau}}.\] The first term of \(g_{p}\) involves a _negative_ scalar times \(z_{p}\). The second term involves a _positive_ scalar times each noise embedding \(z_{n}\). Thus during gradient descent each update to \(z_{t}\) encourages it to move _towards_ each target embedding \(z_{p}\) and _away_ from each noise embedding \(z_{n}\). The magnitude of these movements is determined by the softmax of cosine similarities. For a complete derivation and further analysis, see App. Sec. C. This behavior is different from the gradient dynamics of SupCon loss. Khosla et al. (2020) provide SupCon's gradient with respect to \(z_{t}\) as \(\frac{\delta}{\delta z_{t}}L_{\text{SupCon}}(z_{t})=\) \[\frac{1}{\tau}\sum_{p\in\mathcal{P}_{t}}z_{p}\left(\frac{e^{z_{t}\cdot z_{p}/\tau}}{\sum_{i\in\mathcal{I}}e^{z_{t}\cdot z_{i}/\tau}}-\frac{1}{|\mathcal{P}_{t}|}\right)+\frac{1}{\tau}\frac{\sum_{n\in\mathcal{N}_{t}}z_{n}e^{z_{t}\cdot z_{n}/\tau}}{\sum_{i\in\mathcal{I}}e^{z_{t}\cdot z_{i}/\tau}}. \tag{14}\] Studying this SupCon gradient, we see that the scalar multiplying \(z_{p}\) in Eq. 14 will be in the range \([\frac{-1}{|\mathcal{P}_{t}|},1-\frac{1}{|\mathcal{P}_{t}|}]\). The possibility of positive values implies \(z_{t}\) could be _pushed away_ from \(z_{p}\) when applying gradient descent. In contrast, the scalar multiplier for \(z_{p}\) will always be in \([-1,0]\) for SINCERE in Eq. 13, which effectively performs hard positive mining (Schroff, Kalenichenko, and Philbin, 2015). SupCon's problematic behavior (possible repulsion between members of the same class) increases in severity as \(|\mathcal{P}_{t}|\) increases, resulting in a scalar in \([0,1]\) as \(|\mathcal{P}_{t}|\) approaches positive infinity. Khosla et al. (2020) previously hypothesized that the \(\frac{-1}{|\mathcal{P}_{t}|}\) term came from the softmax of \(z_{t}\) and the mean of the embeddings \(z_{p}\in\mathcal{P}_{t}\). Our analysis suggests it is actually due to improperly including target class examples other than \(p\) in the loss' denominator. A similar issue arises from the summation over the noise distribution in Eq. 14. Each softmax includes the noise distribution and the entire target distribution in the denominator instead of only the noise distribution and \(z_{p}\) as in Eq. 13. Reducing the weight put onto the noise distribution causes the SupCon loss to reduce the separation between the noise and target distributions. ### SINCERE Bounds KL Divergence van den Oord, Li, and Vinyals (2019) motivate the self-supervised InfoNCE loss via an information-theoretic bound. Revisiting this analysis, we suggest that SINCERE loss and also InfoNCE loss can be understood as a bound on the KL divergence between the target and noise distributions. 
This bound becomes tighter as the number of negative examples \(|\mathcal{N}_{t}|\) increases, or as the loss \(L_{\text{SINCERE}}(z_{t})\) decreases. **Theorem 3**.: _Bound on KL Divergence_ \(L(\theta)\) _bounds the KL divergence between the noise and target distributions, where \(L(\theta)\) is \(L_{\text{InfoNCE}}(\theta)\) when self-supervised and \(L_{\text{SINCERE}}(\theta)\) when supervised:_ \[L(\theta)\geq\mathbb{E}_{\mathcal{P}_{t},t}\left[\log|\mathcal{N}_{t}|-\text{ KL}(p(x_{t}|y_{t})||p(x_{t}|\neq_{t}))\right]. \tag{15}\] _Proof: See App. Sec. D._ Maximizing the KL divergence ensures that the distributions are separable. Therefore Theorem 3 shows performance improves with more negative samples, as has been observed in self-supervised contrastive learning (Henaff, 2020; Chen, Xie, and He, 2021; Chen et al., 2020; Tian, Krishnan, and Isola, 2020). More positive samples removes bias from the estimation of the KL divergence, ensuring the entire target distribution is separable from the noise distribution. ### Related Work on Supervised Contrastive Several works have expanded on SupCon loss in order to apply it to new problems. Feng et al. (2022) limit the target and noise distributions to K-nearest neighbors to allow for multi-modal class distributions. Kang et al. (2021) explicitly set the number of samples from the target distribution to handle imbalanced datasets. Li et al. (2022) introduce a regularization to push target distributions to center on uniformly distributed points in the embedding space. Yang et al. (2022) and Li et al. (2022) utilize pseudo-labeling to address semi-supervised learning and supervised learning with noisy labels respectively. SINCERE loss can easily replace the use of SupCon loss in these applications. Terms similar to the SINCERE loss have previously been used as a part of more complex losses. Barbano et al. (2023) proposed \(\epsilon\)-SupInfoNCE loss, which eliminated the problematic denominator terms from SupCon for being non-contrastive and introduced a margin hyperparameter. However, it is unclear how the loss handles the target distribution. \(\epsilon\)-SupInfoNCE loss eliminates the average over data points from the same class used in SupCon and SINCERE losses and there is no code at time of writing to show their alternative approach. Chen et al. (2022) utilizes a loss like our SINCERE loss as one term of an overall loss function meant to spread out embeddings that share a class. Neither of these works provide detailed discussion or motivation for the changes they make to SupCon loss. ## 4 Experiments We evaluate SINCERE and SupCon losses for supervised pretraining on CIFAR-10 and CIFAR-100 (Krizhevsky, 2009). These datasets were selected in order to reproduce previous results for the SupCon loss (Khosla et al., 2020). Following the PyTorch code released by Khosla et al. (2020), a ResNet-50 for each loss is pretrained for 1,000 epochs then frozen. Each frozen model is then used as a feature extractor for a linear classifier that is trained for 100 epochs with cross-entropy loss. Section 4.3 provides a detailed explanation of the training process to enable others to reproduce these results. Our code is available: see page one. Section 4.1 contrasts the behavior of the losses throughout pretraining by contrasting loss values and the learned embeddings. Section 4.2 validates that SINCERE loss embeddings are as effective as SupCon loss embeddings for finetuning a linear classifier. 
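As a schematic of the evaluation protocol used below (pretrain, freeze the encoder, then fit a linear classifier on its features), consider the following sketch; the encoder, the data loader, and the toy stand-ins in the usage stub are placeholder assumptions rather than the released code.

```python
import torch
import torch.nn as nn

def linear_probe(encoder: nn.Module, loader, feat_dim: int, num_classes: int,
                 epochs: int = 100, lr: float = 0.1, device: str = "cpu") -> nn.Module:
    """Freeze a pretrained encoder and train a linear classifier on its frozen features."""
    encoder.eval().to(device)
    for p in encoder.parameters():
        p.requires_grad_(False)                        # frozen feature extractor
    clf = nn.Linear(feat_dim, num_classes).to(device)
    opt = torch.optim.SGD(clf.parameters(), lr=lr, momentum=0.9)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, targets in loader:
            images, targets = images.to(device), targets.to(device)
            with torch.no_grad():
                feats = encoder(images)                # embeddings from the frozen encoder
            loss = ce(clf(feats), targets)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return clf

if __name__ == "__main__":
    # Toy stand-ins purely for illustration.
    toy_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
    toy_data = [(torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))) for _ in range(4)]
    clf = linear_probe(toy_encoder, toy_data, feat_dim=128, num_classes=10, epochs=1)
```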
### Supervised Pretraining Table 1 reports loss values from pretraining. SupCon and SINCERE losses return very similar values during the first training epoch because the target and noise distributions are not yet separable. However they differ significantly by the final training epoch, when the target and noise distributions are separable. SINCERE loss ends with lower loss values for both datasets. SupCon loss on CIFAR-100 is lower than SupCon loss on CIFAR-10, when there are fewer inputs sharing a class per batch. This confirms the analysis of SupCon loss' gradient in Section 3.3: including the target distribution as part of the noise distribution produces problematic behavior that increases in severity as the number of inputs sharing a class increases. Figure 2 examines the average cosine similarity for CIFAR-10 test set image pairs. Pairs made up of the same image are excluded when the row and column class are the same, as these would always evaluate to a cosine similarity of 1. SINCERE loss learns embeddings with a lower cosine similarity than SupCon loss, with a mean decrease of 0.06 when pairs are from the same class and 0.11 when pairs are from different classes. Therefore SINCERE loss better separates the target and noise distributions despite spreading out the target distribution embeddings. Figure 3 visualizes the cosine similarity of the truck class and its noise distribution composed of the other 9 classes. The truck class was chosen as a representative sample as the other classes follow the same trends seen here. SINCERE loss' lower means are seen in the shift of the target and noise distributions. Additionally, SINCERE loss spreads both distributions across more cosine similarity values, which is seen in the lower peaks. The similar shape of the cosine similarity distributions confirms that the trends seen in Figure 2 arise from better separation of the target and noise distributions and not from differently shaped distributions, such as comparing a unimodal and bimodal distribution. ### Classification Accuracy We report top-1 and top-5 accuracy results. Top-1 accuracy was chosen to reproduce the previous SupCon results (Khosla et al., 2020), which are within \(0.6\) percentage points of our results. Top-5 accuracy was chosen to provide a broader view of the models' predictions than top-1 accuracy. Table 2 reports the top-1 and top-5 accuracy of the fine-tuned linear classifiers on the test set. Accuracy results are the mean of 1,000 iterations of test set bootstrapping. No results are boldfaced because the difference between accuracies is not statistically significant according to a 95% confidence interval of the bootstrapped results (Foody, 2009). In further analysis, Supplementary Figure A.1 provides side-by-side confusion matrices for the two methods on the CIFAR-10 test set. The two confusion matrices are quite similar (e.g. cat and dog as the most confused classes). Therefore, we suggest that our new SINCERE loss after fine-tuning maintains the good accuracy previously reported by Khosla et al. (2020) with SupCon loss plus fine-tuning. ### Training Details Models were trained on a Red Hat Enterprise Linux 7.5 server with a A100 GPU with 40 GiB of memory and 16 Intel Xeon Gold 6226R CPUs with 1TB of memory each. Many of the CPUs were primarily used for parallelization of data loading, so fewer or smaller CPUs could be used easily. PyTorch 2.0.1 and Torchvision 0.15.2 for CUDA 11.7 were used for model and loss implementations. 
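To illustrate how such a loss implementation plugs into a standard pretraining step, here is a minimal sketch; the encoder, the loader, and the choice of `loss_fn` (e.g. the `sincere_loss` sketch given earlier) are assumptions for illustration, not the released training script.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pretrain(encoder: nn.Module, loader, loss_fn, epochs: int, lr: float = 0.5):
    """Schematic contrastive pretraining loop: SGD with momentum on a supervised contrastive loss."""
    opt = torch.optim.SGD(encoder.parameters(), lr=lr, momentum=0.9, weight_decay=1e-4)
    encoder.train()
    for _ in range(epochs):
        for images, labels in loader:
            z = F.normalize(encoder(images), dim=1)    # unit-norm embeddings, as the losses assume
            loss = loss_fn(z, labels)                  # e.g. a SINCERE- or SupCon-style loss
            opt.zero_grad()
            loss.backward()
            opt.step()
    return encoder
```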
A single training procedure was done for each SupCon and SINCERE loss. Models were pretrained with the hyperparameters suggested by Khosla et al. (2020): 1,000 epochs of stochastic gradient descent with 0.9 momentum, 0.0001 weight decay, 512 batch size, and a cosine annealed learning rate schedule with warm-up, which spends 10 epochs warming up from 0.0005 to 0.5 then cosine anneals back to 0.0005 at the last epoch. Temperature (\(\tau\)) was set to 0.1. Additional learning rates were tested to ensure that pretraining \begin{table} \begin{tabular}{c|c c|c c} \hline & \multicolumn{2}{c|}{CIFAR-10} & \multicolumn{2}{c}{CIFAR-100} \\ Pretraining Loss & Initial & Final & Initial & Final \\ \hline SupCon & 6.93 & 4.69 & 6.91 & 2.43 \\ SINCERE & 6.82 & **0.24** & 6.90 & **0.14** \\ \hline \end{tabular} \end{table} Table 1: Average pretraining loss values for initial and final training epochs. \begin{table} \begin{tabular}{c|c c|c c} \hline & \multicolumn{2}{c|}{CIFAR-10} & \multicolumn{2}{c}{CIFAR-100} \\ Pretraining Loss & Top-1 & Top-5 & Top-1 & Top-5 \\ \hline SupCon & 95.78 & 99.84 & 75.96 & 92.33 \\ SINCERE & 95.93 & 99.78 & 75.86 & 92.54 \\ \hline \end{tabular} \end{table} Table 2: Test set accuracy of a linear classifier trained using frozen pretrained features. SINCERE’s performance is essentially indistinguishable from SupCon. No results are boldfaced, as we did _not_ find any differences to be statistically significant in our bootstrap interval analysis.. results were robust, applying similar warm-up and annealing schedules. Results similar to those reported were found when using 0.005, 0.05, or 0.1 as the learning rate. Both models produce worse results with learning rates of 1 or 5. Linear models were trained with the hyperparameters suggested by Khosla et al. (2020): 100 epochs of stochastic gradient descent with 0.9 momentum, 0 weight decay, 512 batch size, and a cosine annealed learning rate schedule with warm-up, which spends 10 epochs warming up from 0.05 to 5 then cosine anneals back to 0.05 at the last epoch. ## 5 Discussion The proposed SINCERE loss is a theoretically motivated loss for supervised noise contrastive estimation. SINCERE loss eliminates the problematic behavior seen in SupCon loss, as shown through examination of the loss gradients and empirical results. Additionally, SINCERE loss bounds the KL divergence between target and noise distributions, with the tightness of the bound increasing with more data. SINCERE loss can easily replace SupCon loss, only requiring refitting of loss weight hyperparameters for multi-objective losses due to SINCERE loss' broader range of loss values. Future work may explore an alternative supervised loss which removes the conditioning on the target distribution. This would correspond to predicting all entries of the target distribution at once instead of individually. A naive approach to this problem would involve an exponential increase in the number of terms in the denominator. Abstracting the classes via prototypes or sampling terms from the full denominator could resolve that issue. Such a loss could potentially model higher-order interactions between sets of samples instead of averaging over pair-wise interactions as is done currently. Figure 3: Histograms of cosine similarity values for image pairs in the CIFAR-10 test set, comparing SupCon (top) and SINCERE (bottom). The target “Truck” distribution is formed by evaluating image pairs where both are labeled truck. 
The noise distribution is formed by computing similarity between truck, non-truck pairs. SINCERE loss lowers the cosine similarity of the noise distribution, suggesting better target-noise separation. Figure 2: Average cosine similarity for pairs of CIFAR-10 test set images, with one image from the row class and the other from the column class. SINCERE loss better separates embeddings from other classes (off-diagonal entries are much lower). We also observe that SINCERE does not group embeddings from the same class (on diagonal) as tightly as SupCon loss.
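To make the class-pair analysis of Figure 2 concrete, the following is a minimal sketch of how the average pairwise cosine similarities can be computed from normalized test-set embeddings, excluding self-pairs on the diagonal as described; the embeddings and labels below are synthetic placeholders.

```python
import torch
import torch.nn.functional as F

def class_pair_cosine(z, labels, num_classes):
    """Average cosine similarity between embeddings of every (row class, column class) pair."""
    sims = z @ z.T                                      # embeddings assumed L2-normalized
    out = torch.zeros(num_classes, num_classes)
    for a in range(num_classes):
        for b in range(num_classes):
            block = sims[labels == a][:, labels == b]
            if a == b:                                  # exclude pairs made of the same image
                block = block[~torch.eye(block.shape[0], dtype=torch.bool)]
            out[a, b] = block.mean()
    return out

if __name__ == "__main__":
    torch.manual_seed(0)
    z = F.normalize(torch.randn(1000, 128), dim=1)      # stand-in for test-set embeddings
    labels = torch.randint(0, 10, (1000,))
    print(class_pair_cosine(z, labels, 10))
```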
2309.15169
Revealing the Power of Masked Autoencoders in Traffic Forecasting
Traffic forecasting, crucial for urban planning, requires accurate predictions of spatial-temporal traffic patterns across urban areas. Existing research mainly focuses on designing complex models that capture spatial-temporal dependencies among variables explicitly. However, this field faces challenges related to data scarcity and model stability, which results in limited performance improvement. To address these issues, we propose Spatial-Temporal Masked AutoEncoders (STMAE), a plug-and-play framework designed to enhance existing spatial-temporal models on traffic prediction. STMAE consists of two learning stages. In the pretraining stage, an encoder processes partially visible traffic data produced by a dual-masking strategy, including biased random walk-based spatial masking and patch-based temporal masking. Subsequently, two decoders aim to reconstruct the masked counterparts from both spatial and temporal perspectives. The fine-tuning stage retains the pretrained encoder and integrates it with decoders from existing backbones to improve forecasting accuracy. Our results on traffic benchmarks show that STMAE can largely enhance the forecasting capabilities of various spatial-temporal models.
Jiarui Sun, Yujie Fan, Chin-Chia Michael Yeh, Wei Zhang, Girish Chowdhary
2023-09-26T18:05:19Z
http://arxiv.org/abs/2309.15169v2
# Revealing the Power of Spatial-Temporal Masked Autoencoders ###### Abstract Multivariate time series (MTS) forecasting involves predicting future time series data based on historical observations. Existing research primarily emphasizes the development of complex spatial-temporal models that capture spatial dependencies and temporal correlations among time series variables explicitly. However, recent advances have been impeded by challenges relating to data scarcity and model robustness. To address these issues, we propose Spatial-Temporal Masked Autoencoders (STMAE), an MTS forecasting framework that leverages masked autoencoders to enhance the performance of spatial-temporal baseline models. STMAE consists of two learning stages. In the pre-training stage, an encoder-decoder architecture is employed. The encoder processes the partially visible MTS data produced by a novel dual-masking strategy, including biased random walk-based spatial masking and patch-based temporal masking. Subsequently, the decoders aim to reconstruct the masked counterparts from both spatial and temporal perspectives. The pretraining stage establishes a challenging pretext task, compelling the encoder to learn robust spatial-temporal patterns. In the fine-tuning stage, the pre-trained encoder is retained, and the original decoder from existing spatial-temporal models is appended for forecasting. Extensive experiments are conducted on multiple MTS benchmarks. The promising results demonstrate that integrating STMAE into various spatial-temporal models can largely enhance their MTS forecasting capability. Maked autoencoders, Multivariate time series forecasting, Spatial-temporal models ## 1 Introduction Multivariate time series (MTS) data are ubiquitous in our modern world. The task of MTS forecasting, which involves predicting future trends based on historical observations, plays a crucial role in guiding informed decisions across diverse real-world domains. This task finds applications in a spectrum of fields, ranging from traffic planning [19, 18] and human motion modeling [17, 23] to epidemic simulation [11, 2]. To generate accurate predictions, state-of-the-art methods develop spatial-temporal models to capture intricate spatial-temporal interactions inherent in MTS data. From the temporal perspective, these works often leverage convolutional neural networks (CNNs) or recurrent neural networks (RNNs) to model temporal evolution; and from a spatial point of view, graph neural networks (GNNs) are often used to capture the spatial correlations among different variables. By jointly addressing spatial and temporal patterns, these spatial-temporal models have exhibited remarkable performance in MTS prediction. However, despite the considerable efforts invested by researchers in designing complicated spatial-temporal models, several evident issues impede their performance. First, _data scarcity_ often leads to model _overfitting_. Notably, compared to the domains of computer vision and natural language processing, where millions or even billions of samples are used for model training, the benchmarks constructed for MTS data exhibit a considerably smaller scale. For example, learning pipelines related to human motion [10] often involve only a few actors engaged in minute-level actions, while traffic records [7] typically encompass only a few months of sensory data. 
This significant difference in data size restricts spatial-temporal models' capability to generalize across a diverse range of spatial-temporal patterns, rendering them prone to overfitting against existing MTS benchmarks. Second, the _incompleteness_ of MTS data poses a challenge to model _robustness_. Given that the MTS data are collected from the real world, they are noisy, incomplete, and even has missing values. For example, the aforementioned traffic data are gathered from various sensors deployed in road networks, which may encounter failures or temporary malfunctions, introducing gaps and noise to the data. These data irregularities introduce uncertainties and inaccuracies that hinder the spatial-temporal model's ability to learn and generalize the discovered patterns robustly to real-world scenarios. In light of the overfitting and robustness concerns, how to unleash the power of complex spatial-temporal models for general performance enhancement is an urgent question we need to answer. Self-supervised learning (SSL) [31], a learning paradigm in which a model learns to make predictions based on supervision signals derived from the input data itself, offers a promising avenue for mitigating the above challenges. For example, one recent work STGCL [15] adopts contrastive learning-based SSL to enhance existing spatial-temporal models' performance. It first derives positive data sample pairs through various data augmentation techniques and collect negative pairs by some predefined heuristics. Based on the generated positive and negative samples, the model is facilitated with a contrastive loss that tries to maximize the agreements between positive pairs while minimizing that between negative ones. However, in the MTS forecasting scenario, the process of generating positive and negative samples is often overly complex: it either involves heuristic-based techniques that need to be adapted based on specific application type, or requires prior knowledge or meta-information about the data, which are often unavailable. These limitations motivate us to explore another direction of SSL: generative SSL [16]. Inspired by masked autoencoders (MAE) [8], a promising generative SSL approach in the computer vision domain, we propose _Spatial-Temporal Masked AutoEncoders_ (STMAE), a versatile framework that is able to elevate the capability of existing spatial-temporal models in MTS forecasting. STMAE consists of two stages: pretraining and fine-tuning. In the initial self-supervised pretraining stage, we employ an encoder-decoder architecture to discover informative patterns based on self-supervision signals derived from spatial-temporal data. The encoder, which can be adopted from off-the-shelf spatial-temporal models, inputs partially visible MTS data that is generated by a well-designed _dual-masking strategy_; while the decoders aim to reconstruct the masked counterparts from both the spatial and temporal perspectives. The dual-masking module plays a crucial role in the pretraining stage. Unlike uniform masking, which is relatively easy to reconstruct, we propose a _biased random walk-based_ spatial masking and _patch-based_ temporal masking. By breaking short-range spatial-temporal connections of MTS, our dual-masking strategy introduces additional challenges to pretrain the encoder, enabling the learned representations to be more robust, predictive, and consistent on both spatial and temporal dimensions. Following the pretraining phase, a fine-tuning stage is performed. 
We retrain the pretrained encoder, discard the two decoders, and append the original predictor from the spatial-temporal models for forecasting purposes. What sets STMAE apart is its ability to be integrated into existing spatial-temporal models. Additionally, STMAE does not require any additional knowledge to perform complex data augmentation processes, making it a more flexible choice compared to contrastive learning-based methods. Our major contributions are summarized as below: * We introduce STMAE, a versatile framework that seamlessly integrates with established spatial-temporal models, alleviating model overfitting and robustness concerns. To the best of our knowledge, STMAE represents the first exploration of the potential of masking-based SSL for MTS forecasting. * We propose a dual-masking strategy in the pretraining stage, consisting of a biased random walk-based spatial masking and patch-based temporal masking. This strategic design establishes a challenging pretext task, which encourage the encoder to acquire more informative MTS representations. * We perform comprehensive empirical analyses to validate STMAE's efficacy in MTS forecasting. Promising results demonstrate its capacity in enhancing the performance of existing spatial-temporal models. Ablation studies further underscore our dual-masking strategy's contribution in learning robust models. ## 2 Related Work ### Spatial-Temporal Models. By considering the interdependence among different variables, spatial-temporal models [14, 1, 28, 27, 30, 4, 32, 12] have shown remarkable performance in MTS forecasting. For example, Li _et al_. introduced DCRNN [14], which utilizes RNN-based graph diffusion convolution to capture spatial-temporal correlations among MTS. Bai _et al_. proposed AGCRN [1], which incorporates several adaptive graph learning modules that dynamically capture variable dependencies from MTS recurrently. Wu _et al_. designed MTGNN [27], which captures the spatial-temporal correlations of MTS effectively using adaptive GNN and CNN-based modules. However, as MTS benchmarks are often limited in scale, these spatial-temporal models are prone to overfitting and thus hard to generalize. To address these issues, Liu _et al_. proposed STGCL [15], a framework that integrates contrastive-based SSL into the learning pipeline of spatial-temporal models. STGCL's innovation lies in its data augmentation strategy, which utilizes knowledge about the graph structure, time and frequency information of MTS to generate positive and negative samples. Based on the augmentation techniques, the contrastive loss optimizes baseline models for MTS forecasting. However, STGCL relied heavily on heuristics that require prior knowledge or meta-information about the MTS data, which are usually unavailable. ### Masked Autoencoders. Masked autoencoders (MAE) [3, 8, 5] are a type of generative SSL method used for robust feature representation learning. MAE includes an encoder for mapping masked input to a latent representation and a decoder for reconstructing the masked data. In the language domain, Devlin _et al_. proposed BERT [3], a Transformer-based model that leverages bidirectional context to learn contextualized word embeddings through sentence masking. In the vision domain, He _et al_. proposed MAE [8], an asymmetric vision Transformer-based autoencoder to learn robust image representations by reconstructing masked images. Feichtenhofer _et al_. extended the MAE framework to learn spatio-temporal representations from videos [5]. 
These approaches demonstrate that masked autoencoding can be a unified methodology for representation learning with minimal domain knowledge and label information. Recently, there has been a surge of interest in applying MAE to graphs [9, 13, 20, 25, 26, 29]. Hou _et al_. proposed GraphMAE [9], which focuses on graph feature reconstruction by incorporating a re-mask strategy with a scaled cosine error objective. Li _et al_. [13] provided both theoretical and empirical evidence that comprehensively justified the benefits of using MAE on graph-structured data. Regarding time series forecasting, Shao _et al_. [20] proposed to couple a feature masking strategy with a Transformer architecture to efficiently encode temporal patterns from very long-term historical time series. ## 3 Preliminaries ### MTS Forecasting. Given the MTS data \(\mathcal{X}\in\mathbb{R}^{H\times N\times C}\) containing \(H\) frames of \(N\) time-dependent variables with \(C\) features, MTS forecasting aims to predict the future value \(\hat{\mathcal{Y}}\in\mathbb{R}^{F\times N\times C}\) of \(F\) following steps. It is important to note that \(N\) variables in \(\mathcal{X}\) are not only temporally evolving but also often structurally interrelated. This structural dependency can be represented by a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathbf{A})\), where \(\mathcal{V}\) and \(\mathcal{E}\) are the sets of \(N\) variables and their relations respectively, \(\mathbf{A}\in\mathbb{R}^{N\times N}\) denotes adjacency matrix representing variables' connectivity. Consequently, the forecasting problem can be formally defined as: \[f_{\theta}(\mathcal{X},\mathbf{A})\rightarrow\hat{\mathcal{Y}}, \tag{1}\] where \(f_{\theta}(\cdot)\) denotes the parameterized forecaster. We note that some works such as [14] rely on a predefined \(\mathcal{G}\) as part of the model input, while others, like [1, 27], aim to learn the structural dependencies between variables by constructing \(\mathcal{G}\) during the learning process. ### Pipeline of Spatial-Temporal Models. Existing spatial-temporal models often employ an encoder-decoder architecture for MTS forecasting as shown in Figure 1 (d). The _encoder_, denoted as \(\text{Enc}(\mathcal{X},\mathbf{A})\rightarrow\mathbf{S}\), is designed to extract complex spatial-temporal patterns from historical MTS data and summarize them into a hidden representation \(\mathbf{S}\in\mathbb{R}^{N\times D}\), where \(D\) is the hidden dimension. It combines GNNs to model variable dependencies with RNNs or CNNs-based sequence models for capturing temporal correlations across time steps. On the other hand, the _predictor_, denoted as \(\text{Pred}(\mathbf{S})\rightarrow\hat{\mathcal{Y}}\), focuses on making precise predictions based on the encoded state \(\mathbf{S}\). In contrast to the encoder, \(\text{Pred}(\cdot)\) is typically lightweight, often taking the form of a multilayer perceptron (MLP) with only a few layers. Therefore, the pipeline of spatial-temporal models can be summarized as: \[f_{\theta}(\mathcal{X},\mathbf{A})\coloneqq\text{Pred}\big{(}\text{Enc}( \mathcal{X},\mathbf{A})\big{)}\rightarrow\hat{\mathcal{Y}}. \tag{2}\] Lastly, the mean absolute error between the model's predictions \(\hat{\mathcal{Y}}\) and the groundtruth \(\mathcal{Y}\) is used as the objective to train spatial-temporal models. Let \(\mathcal{L}_{\text{pred}}\) denote the forecasting loss. It is denoted as: \[\mathcal{L}_{\text{pred}}=\|\hat{\mathcal{Y}}-\mathcal{Y}\|_{1}. 
\tag{3}\] ## 4 Methodology In this section, we introduce the details of STMAE, including its overall design and two learning stages. ### Overall Design. The overall framework of STMAE is shown in Figure 1. Drawing inspiration from MAE works in other domains [3, 8], STMAE employs a two-stage training scheme comprising pretraining and fine-tuning. First, as shown in Figure 1 (a), STMAE's pretraining stage aims at reconstructing the masked MTS data from both spatial and temporal perspectives. The masked inputs are yielded from the dual spatial-temporal masking module, which is shown in Figure 1 (c). STMAE's fine-tuning stage is presented in Figure 1 (b), where we leverage the pretrained encoder to provide contextual representations which are fed into the predictor for MTS forecasting. We emphasize that the encoder \(\text{Enc}(\cdot)\) and the predictor \(\text{Pred}(\cdot)\) used in our framework can be directly adopted from existing spatial-temporal models as Figure 1 (d) indicates. ### Pretraining. The goal of the pretraining stage is to discover informative spatial-temporal patterns from MTS data through self-supervised masking reconstructions. To accomplish this, we introduce two innovative components on top of the existing spatial-temporal encoders \(\text{Enc}(\cdot)\): (1) a dual spatial-temporal masking module, and (2) two lightweight decoders designed for reconstructing MTS data from both spatial and temporal dimensions. #### 4.2.1. Dual-Masking Strategy The masking module is essential to the pretraining stage that involves two key considerations: _what_ to mask and _how_ to mask. Regarding the first question, since spatial-temporal data inherently reflects interactions among MTS variables, we incorporate both spatial and temporal aspects into our dual-masking strategy, rather than focusing on just one domain. As for the "how to mask" aspect, we aim for a challenging pretext task to ensure robust training and to guide the \(\text{Enc}(\cdot)\) to acquire generalizable representations. To this end, at the spatial level, we avoid simple uniform relation masking which can be easy to recover. Instead, we employ a biased random walk-based spatial masking that samples walk paths from the graph for masking. This approach preserves local and even global graph structural information within the masked portion, making reconstruction challenging. Similarly, at the feature level, we adopt a patch-based temporal masking strategy, where we divide the original MTS data into non-overlapping patches that each spanning a few consecutive timesteps. Masking is then randomly applied to these patches. Notably, patch-based masking presents a more formidable reconstruction task compared to single timestep-based masking. We introduce each component in detail below. **Spatial Masking.** Most existing works on masked graph autoencoders focus on masking a subset of relations uniformly. However, the inherent structural redundancy simplifies the task of relation reconstruction. In this case, the model merely needs to focus on first or second hop neighborhoods to discover relevant relationships. To address this challenge, we take path as the basic masking unit, where a path in a graph signifies a sequence of relations connecting a series of adjacent nodes. Building on this concept, we propose a _biased random walk-based_ spatial masking strategy. 
This strategy first generates paths from \(\mathcal{G}\) using a biased random walker [6], which offers the flexibility to smoothly transition between Breadth-first Sampling (BFS) and Depth-first Sampling (DFS) of nodes along the path, allowing for a more comprehensive exploration of the graph's structure. Formally, the set of masked relations \(\mathcal{E}_{\text{mask}}\) with size \(|\mathcal{E}|\cdot p_{s}\) is sampled from \(\mathcal{E}\) as: \[\mathcal{E}_{\text{mask}}\sim\text{BiasedRandomWalk}(\mathcal{E},\mathcal{R},p,q), \tag{4.4}\] where \(p_{s}\) is the spatial masking ratio, \(\mathcal{R}\subseteq\mathcal{V}\) is the set of root nodes to start the walks, and \(p\) and \(q\) denote the return and in-out hyperparameters [6], respectively. Then, all relations belonging to the sampled paths are masked by setting the corresponding entry of \(\mathbf{A}\) to zero: \[\mathbf{A}_{uv}=\begin{cases}0&(u,v)\in\mathcal{E}_{\text{mask}},\\ \mathbf{A}_{uv}&\text{otherwise}.\end{cases} \tag{4.5}\] Compared to uniform relation masking and plain walk-based masking, our proposed strategy can flexibly break connections between nodes that either have close proximity or share similar structural roles. For spatial-temporal models that learn \(\mathcal{G}\) dynamically from MTS data, we apply our designed spatial masking strategy adaptively during training. Figure 1: The overall STMAE framework, including both the (a) pretraining and (b) fine-tuning stages. As specified in (c), we use a biased random walk-based spatial masking strategy on \(\mathcal{G}\), and a patch-based temporal masking strategy on \(\mathcal{X}\). After reconstruction, learning is guided jointly by \(\mathcal{L}_{\mathbf{A}}\) and \(\mathcal{L}_{\mathcal{X}}\). As shown in (d), STMAE can be easily plugged into existing spatial-temporal models. **Temporal Masking.** With relatively low information density, MTS data can be easily recovered by interpolation. This holds particularly when the temporal masking areas are sparse and distinct, making reconstruction possible without requiring a deep understanding of the underlying patterns. To this end, we leverage _patch-based temporal masking_. We first divide the original MTS data \(\mathcal{X}\) into \(P\) non-overlapping subsequences of length \(L\) temporally1, and then randomly mask a subset of patches. Following previous work [3], the masked patches of \(\mathcal{X}\) are replaced by a shared, learnable mask token. Let \(\{\mathcal{X}_{i}\}_{i=1}^{P}\) denote the \(P\) embedding patches with \(\mathcal{X}_{i}\in\mathbb{R}^{L\times N\times D}\). The temporal masking is denoted as: Footnote 1: Suppose \(H\) is divisible by \(P\): \(H=L\times P\). \[\mathcal{X}_{i}=\begin{cases}\mathcal{M}&r\sim\text{Bernoulli}(p_{t}),\ r=1,\\ \mathcal{X}_{i},&\text{otherwise},\end{cases} \tag{4.6}\] where \(\mathcal{M}\in\mathbb{R}^{L\times N\times D}\) is the learnable mask token and \(p_{t}<1\) is the temporal masking ratio. #### 4.2.2 Decoders The decoders' goal is to reconstruct both \(\mathbf{A}\) and \(\mathcal{X}\) simultaneously. Let \(\mathbf{A}_{M},\mathcal{X}_{M}\) represent the masked versions of the model inputs. After masking, we feed \(\mathbf{A}_{M}\) and \(\mathcal{X}_{M}\) into the encoder \(\text{Enc}(\cdot)\) of the spatial-temporal models, which yields the encoded state \(\mathbf{S}\). At this point, rather than using \(\text{Pred}(\cdot)\) to predict \(\hat{\mathcal{Y}}\) directly, our objective is to reconstruct the original MTS data from both spatial and temporal dimensions.
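Before turning to the decoders, the two masking operations of Equations (4.5) and (4.6) can be summarised in a minimal sketch. The helper names (`apply_spatial_mask`, `apply_temporal_mask`) and the toy sizes are our own illustration under stated assumptions, not the released implementation.

```python
import torch

def apply_spatial_mask(A, masked_edges):
    """Zero out the relations collected from the sampled walks, Eq. (4.5)."""
    A_masked = A.copy()
    for (u, v) in masked_edges:
        A_masked[u, v] = 0
    return A_masked

def apply_temporal_mask(X, mask_token, L=2, p_t=0.3):
    """Patch-based temporal masking, Eq. (4.6).

    X: (H, N, D) embedded MTS data with H = L * P;
    mask_token: shared learnable (L, N, D) token replacing masked patches.
    """
    X_masked = X.clone()
    for i in range(X.shape[0] // L):
        if torch.rand(1).item() < p_t:          # r ~ Bernoulli(p_t) per patch
            X_masked[i * L:(i + 1) * L] = mask_token
    return X_masked

# Example with toy sizes: H=12 steps, N=5 variables, D=8 hidden features
X = torch.randn(12, 5, 8)
token = torch.nn.Parameter(torch.zeros(2, 5, 8))    # learnable mask token
X_m = apply_temporal_mask(X, token, L=2, p_t=0.3)
```

The masked pair \((\mathcal{X}_{M},\mathbf{A}_{M})\) produced this way is what the encoder receives during pretraining.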
To achieve this, we introduce two lightweight decoders: a temporal decoder \(\text{Dec}_{t}(\cdot)\) and a spatial decoder \(\text{Dec}_{s}(\cdot)\) for reconstructing the MTS data and its structural dependencies. Formally, we have: \[\mathbf{S}= \text{Enc}(\mathcal{X}_{M},\mathbf{A}_{M}),\] \[\hat{\mathcal{X}}= \text{Dec}_{t}(\mathbf{S}),\] \[\hat{\mathbf{A}}= \text{Dec}_{s}(\mathbf{S}), \tag{4.7}\] where \(\hat{\mathcal{X}}\) and \(\hat{\mathbf{A}}\) represent the reconstructed MTS data and its graph adjacency matrix, respectively. The spatial decoder \(\text{Dec}_{s}(\cdot)\) takes the form of a simple inner product operation after a linear transformation, which is denoted as: \[\hat{\mathbf{A}}_{uv}=\text{Sigmoid}\big((\mathbf{S}\mathbf{W})(\mathbf{S}\mathbf{W})^{T}\big)_{uv}, \tag{4.8}\] where \(\mathbf{W}\in\mathbb{R}^{D\times D}\) is the trainable transformation matrix, and the \(\text{Sigmoid}(\cdot)\) operation ensures the values fall in the correct range. On the other hand, we implement the temporal decoder \(\text{Dec}_{t}(\cdot)\) as a single linear layer mapping the hidden state \(\mathbf{S}\) back to its original dimension. The reason we have this asymmetric encoder-decoder design is that we specifically encourage the encoder \(\text{Enc}(\cdot)\) to focus on capturing intricate spatial-temporal interactions from MTS data. #### 4.2.3 Reconstruction Targets To guide spatial-temporal reconstruction, we leverage two objectives, consisting of a classification loss \(\mathcal{L}_{\mathbf{A}}\) and a regression loss \(\mathcal{L}_{\mathcal{X}}\) tailored to the adjacency matrix \(\mathbf{A}\) and the MTS data \(\mathcal{X}\), respectively. Specifically, \(\mathcal{L}_{\mathbf{A}}\) aims to reconstruct the masked walks using a classification objective. Formally, we have: \[\mathcal{L}_{\mathbf{A}}=-\frac{1}{|\mathcal{E}_{\text{mask}}|}\sum_{(u,v)\in\mathcal{E}_{\text{mask}}}\log\hat{\mathbf{A}}_{uv}. \tag{4.9}\] On the other hand, \(\mathcal{L}_{\mathcal{X}}\) computes the mean absolute error of the masked patches between \(\mathcal{X}\) and the reconstruction \(\hat{\mathcal{X}}\). Formally, we have: \[\mathcal{L}_{\mathcal{X}}=\|\hat{\mathcal{X}}_{[M]}-\mathcal{X}_{[M]}\|_{1}, \tag{4.10}\] where the subscript \([M]\) denotes the masked area index. In line with recent work [24], we only compute losses over the masked portions both spatially and temporally. The overall reconstruction objective of the pretraining stage is: \[\mathcal{L}_{\text{pretrain}}=\lambda\cdot\mathcal{L}_{\mathbf{A}}+\mathcal{L}_{\mathcal{X}}, \tag{4.11}\] where \(\lambda\) is a non-negative hyperparameter trading off the spatial reconstruction loss and the temporal reconstruction loss. ### Fine-tuning After pretraining the encoder \(\text{Enc}(\cdot)\) with the reconstruction objectives, we fine-tune \(\text{Enc}(\cdot)\) with the original predictor \(\text{Pred}(\cdot)\) obtained from spatial-temporal models to predict \(\hat{\mathcal{Y}}\). In this stage, we provide \(\text{Enc}(\cdot)\) with the complete MTS data without masking, as opposed to the pretraining stage, and discard the spatial decoder \(\text{Dec}_{s}(\cdot)\) and the temporal decoder \(\text{Dec}_{t}(\cdot)\). This fine-tuning process aligns with the training pipeline described in Section 3.2, aiming to optimize the forecasting loss \(\mathcal{L}_{\text{pred}}\) as in Equation (3.3). ## 5 Experiments In this section, we comprehensively evaluate the effectiveness of STMAE for MTS forecasting. ### Datasets.
We choose traffic forecasting as downstream MTS forecasting task and validate STMAE on four popular traffic benchmarks, i.e., PEMS03, PEMS04, PEMS07 and PEMS08, which have been widely studied in this domain [22, 15, 12]. These datasets contain traffic flow, average speed, and average occupancy readings recorded at different locations, which are aggregated into 5-minute windows. Z-score normalization is used to standardize the data inputs. We follow the practices in [15] to construct the predefined graphs that are necessary for some spatial-temporal models [14] using Gaussian thresholding [21]. In line with the standard benchmark settings in this field, each dataset is split into train, validation, and test in a ratio of 6:2:2 in chronological order. We use 12-step historical MTS data to predict the next 12 steps and use traffic flow as the prediction target. More detailed dataset statistics are summarized in Table 2. ### Experimental Setup #### 5.2.1 Baselines. We select three popular spatial-temporal baselines tailored to the MTS forecasting task, including DCRNN [14], AGCRN [1] and MTGNN [27]. DCRNN additionally requires a predefined graph as model input to perform graph diffusion while the other two automatically learn the graph structure during training. Furthermore, we also compare STMAE with STGCL [15], which uses contrastive learning to enhance the performance of spatial-temporal models. #### 5.2.2 Metrics. In terms of performance evaluation, we adopt three metrics, including mean absolute error (MAE), root mean squared error (RMSE), and mean absolute percentage error (MAPE). These metrics are commonly used in evaluating MTS forecasting task. #### 5.2.3 Implementation Details. STMAE is a versatile framework that can be integrated into existing spatial-temporal models, in which the encoder Enc(\(\cdot\)) and the predictor Pred(\(\cdot\)) are taken directly from these spatial-temporal baselines. Therefore, our framework avoids extensive hyper-parameter tuning, such as adjusting model depth, batch size, hidden dimension, learning rate and optimizer, which are commonly practiced to obtain optimal results for spatial-temporal models. For hyper-parameters required for STMAE, we set \(L=2\) in temporal masking and conduct a grid search to determine few hyper-parameters: \(p\), \(q\), \(\lambda\) from \(\{0.5,1,2,4\}\) and the masking ratios from 20% to 80% based on the validation MAE performance. For all datasets, we pretrain STMAE for 100 epochs, followed by 100 epochs of fine-tuning. ### Results. In this section, we perform overall and per-step analysis of STMAE's performance compared with popular spatial-temporal approaches. #### 5.3.1 Overall Results. Table 1 summarizes the experimental results averaged across all evaluation time steps on four datasets. 
Through analysing the results, we have the following observations: (1) Demonstrated by the comparison among three baselines, AGCRN and \begin{table} \begin{tabular}{c|c c c|c c c|c c c|c c} \hline Datasets & \multicolumn{3}{c}{PEMS03} & \multicolumn{3}{c}{PEMS04} & \multicolumn{3}{c}{PEMS07} & \multicolumn{3}{c}{PEMS08} \\ \hline Method & MAE & MAPE & RMSE & MAE & MAPE & RMSE & MAE & MAPE & RMSE & MAE & MAPE & RMSE \\ \hline \hline AGCRN [1] & 15.47 & 15.26 & 27.06 & 19.39 & 13.24 & **31.07** & 20.64 & 8.80 & 34.19 & 15.65 & 10.33 & 24.51 \\ STGCL\({}_{\text{A}}\) & 15.36 & **14.85** & 27.15 & 19.23 & 13.01 & 31.36 & 20.61 & 8.74 & 34.14 & 15.91 & 10.43 & 24.88 \\ \hline STMAE\({}_{\text{A}}\) & **15.13** & 14.95 & **26.72** & **19.05** & **12.91** & **31.32** & **20.13** & **8.53** & **33.79** & **15.01** & **9.79** & **23.97** \\ \hline \hline DCRNN [14] & 15.76 & 15.69 & **26.76** & 21.48 & 14.65 & 33.99 & 22.55 & 9.78 & 35.24 & 16.63 & 10.78 & 26.01 \\ STGCL\({}_{\text{D}}\) & **15.64** & 15.68 & 27.08 & 21.23 & 14.57 & 33.60 & **22.34** & **9.68** & **35.21** & 16.51 & **10.70** & **25.93** \\ \hline STMAE\({}_{\text{D}}\) & 15.79 & **15.44** & 26.99 & **21.20** & **14.23** & **33.57** & 22.80 & 9.81 & 35.72 & **16.46** & 10.76 & **25.93** \\ \hline \hline MTGNN [27] & 14.94 & 16.02 & 25.29 & 19.02 & 13.32 & 30.95 & 20.83 & 9.00 & 33.77 & 15.44 & 10.35 & 24.30 \\ STGCL\({}_{\text{M}}\) & 14.87 & 15.37 & 25.53 & 18.94 & 13.34 & 30.79 & 20.72 & 8.95 & 33.78 & 15.39 & 10.13 & 24.32 \\ STMAE\({}_{\text{M}}\) & **14.84** & **14.15** & **24.95** & **18.87** & **12.78** & **30.28** & **20.57** & **8.90** & **33.47** & **15.03** & **9.82** & **24.08** \\ \hline \hline \end{tabular} \end{table} Table 1: Quantitative results of STMAE compared with state-of-the-art methods. The subscripts {A, D, M} correspond to the initials of the baseline methods in which STMAE and STGCL are coupled with. STMAE can consistently enhance the performance of current spatial-temporal models, and performs better than the contrastive learning-based framework STGCL [15]. The best results are **bolded** and the second bests are underlined. \begin{table} \begin{tabular}{c|c c c} \hline Datasets & \#Sensors & \#Instances & Time Range \\ \hline PEMS03 & 358 & 26,208 & 09/18 - 11/18 \\ PEMS04 & 307 & 16,992 & 01/18 - 02/18 \\ PEMS07 & 883 & 28,224 & 05/17 - 08/17 \\ PEMS08 & 170 & 17,856 & 07/16 - 08/16 \\ \hline \hline \end{tabular} \end{table} Table 2: Dataset statistics. MTGNN, that dynamically learn the graph from the MTS data, tend to outperform DCRNN, which relies on a predefined graph. (2) SSL can enhance the capability of spatial-temporal models for the MTS forecasting task, as both STGCL and STMAE achieve better performance than the backbones at most scenarios. (3) Our proposed STMAE, which leverages generative-based SSL through masking reconstruction, often outperforms the contrastive learning-based STGCL. More importantly, it does not require complex data augmentation processes that STGCL heavily relies on. We also observe that STMAE's performance is not as favorable on the PEMS07 dataset when coupled with DCRNN. This undesirable outcome may be attributed to two factors: the requirement of a predefined structural input for DCRNN and the low degree of such predefined graph in the PEMS07 dataset. #### 5.3.2 Per-Step Analysis To gain a deeper insight into the predictive capabilities of each model, this section demonstrates how they perform at each timestep. 
We illustrate the per-step MAE results of \(\text{STMAE}_{\text{A}}\) and its corresponding baseline and \(\text{STGCL}_{\text{A}}\) in Figure 2. Upon analysing the results, we have the following conclusions: (1) The prediction error is positively correlated with the forecasting step. Across all methods, the MAE continues to increase as the forecasting step becomes larger, indicating the common performance degradation issue of MTS forecasting. (2) STMAE consistently outperforms STGCL and their respective baselines at each timestep, except for the first prediction of the PEMS03 dataset. (3) More importantly, as evaluation forecasting timestep increases, the performance gap between STMAE and the two variants gradually widens. This suggests that STMAE, aided by masked self-supervision, effectively alleviates the prediction degradation issue in spatial-temporal models. ### Ablation Study In STMAE, we propose a dual-masking strategy which creates two nontrivial reconstruction tasks from both the spatial and temporal perspectives for model pretraining. To evaluate this design, we establish three STMAE variants, namely \(\text{STMAE}_{\text{NT}}\), \(\text{STMAE}_{\text{NS}}\) and \(\text{STMAE}_{\text{U}}\), to show how the proposed masking strategies contribute to the forecasting performance. Specifically, \(\text{STMAE}_{\text{NT}}\) does not use temporal masking and only perform masking on the spatial side. Similarly, \(\text{STMAE}_{\text{NS}}\) corresponds to the case where spatial masking is not used but temporal masking on MTS data is performed during the pretraining stage. Lastly, \(\text{STMAE}_{\text{U}}\) only leverages simple masking strategies, where we perform uniform spatial-temporal masking on the MTS data. All ablation results are presented in Table 3, with AGCRN serving as the baseline model and PEMS04, PEMS08 as evaluation datasets. The experimental results yield the following insights: (1) On both datasets, \(\text{STMAE}_{\text{NT}}\) and \(\text{STMAE}_{\text{NS}}\) enhance the baseline's MAE performance. This demonstrates the effectiveness of both spatial masking and temporal masking during pretraining in improving the performance of spatial-temporal baselines. (2) \(\text{STMAE}_{\text{U}}\) outperforms \(\text{STMAE}_{\text{NT}}\) and \(\text{STMAE}_{\text{NS}}\) in terms of MAE metric. This suggests that jointly applying masking techniques from both structural and feature perspectives is more advantageous for MTS forecasting. (3) STMAE achieves superior performance compared to the baseline and all its variants in terms of MAE. Furthermore, it consistently ranks either first or second in terms \begin{table} \begin{tabular}{c|c c c c c} \hline & \multicolumn{3}{c}{PEMS04} & \multicolumn{3}{c}{PEMS08} \\ \hline Variant & MAE & MAPE & RMSE & MAE & MAPE & RMSE \\ \hline Baseline & 19.39 & 13.24 & **31.07** & 15.65 & 10.33 & 24.51 \\ \(\text{STMAE}_{\text{NT}}\) & 19.27 & 13.32 & **31.07** & 15.41 & 10.08 & 24.26 \\ \(\text{STMAE}_{\text{NS}}\) & 19.27 & **12.85** & 31.34 & 15.26 & 9.89 & 24.43 \\ \(\text{STMAE}_{\text{EV}}\) & 19.11 & 12.96 & 31.67 & 15.09 & 10.03 & 24.22 \\ \(\text{STMAE}\) & **19.05** & **12.91** & **31.32** & **15.01** & **9.79** & **23.97** \\ \hline \end{tabular} \end{table} Table 3: Ablation studies of \(\text{STMAE}_{\text{A}}\) on PEMS04 and PEMS08 datasets. Figure 2: Per-step MAE results of \(\text{STMAE}_{\text{A}}\) compared with its corresponding baseline and \(\text{STGCL}_{\text{A}}\). of MAPE and RMSE metrics. 
This underscores the advantages of our proposed approach, where the advanced dual-masking strategy can create challenging pretext tasks for spatial-temporal models, thus improving their capacities in MTS forecasting. ### Stability and Robustness. This section investigates STMAE's stability and robustness with an in-depth analysis of its learning behavior on PEMS04 and PEMS08 datasets. In Figure 3, we plot the learning curves for both the pretraining and fine-tuning stages of STMAE\({}_{\text{A}}\), alongside the learning curve for the AGCRN backbone. Our analysis of the plot leads us to the following conclusions: (1) Both the pretraining and fine-tuning stages of STMAE are stable, as both training curves of STMAE (_i.e._, pretraining loss and fine-tune training loss) depict a smoothly decreasing trend. (2) The pretraining stage proves advantageous for STMAE. Its benefit is highlighted by STMAE's notably smaller initial validation loss during the fine-tuning stage when compared to the baseline's validation loss. This underscores the effectiveness of the dual reconstruction objective employed during the pretraining phase, which enables STMAE to initialize more effectively. (3) In contrast to STMAE, the validation curves of the baseline exhibit a more erratic fluctuation behavior with a higher MAE level. In summary, all three observations above signify the stability and robustness of STMAE. ### Masking Ratio Sensitivity. In this section, we examine STMAE's sensitivity to its key hyper-parameters, _i.e._, the spatial and temporal masking ratios. We explore the impact of varying these ratios from 20% to 80% when coupling STMAE with AGCRN. The experimental results for PEMS04 and PEMS08 datasets are shown in Figure 4. From the heatmaps, we observe that STMAE exhibits optimal performance when the temporal masking ratio is set to 30% for both datasets. However, from the spatial perspective, STMAE attains its peak performance with a spatial masking ratio of 70% for PEMS08, while for PEMS04, 30% masking ratio is a more favorable choice. We attribute this difference to the greater physical structural density of the PEMS08 dataset. On a contrasting note, we observe that STMAE tends to underperform when the spatial and temporal masking ratios are either excessively high or too low. This observation, in conjunction with our ablation study, further indicates the effectiveness of our proposed dual spatial-temporal masking module. ## 6 Conclusion Spatial-temporal models have shown promising performance in MTS forecasting. Realizing their overfitting and robustness concerns induced by data scarcity and incompleteness issues, in this work, we explore generative SSL and propose a novel framework STMAE based on masking reconstruction. STMAE is a versatile framework that can be seamlessly integrated into existing spatial-temporal models. It comprises a pretraining stage and a fine-tuning stage. During the pretraining stage, we introduce a novel dual spatial-temporal masking strategy, including bias random walk-based spatial masking and patch-based temporal masking. This strategy creates two challenging pretext tasks, encouraging the encoder to learn robust spatial-temporal patterns. During the fine-tuning stage, we retain the encoder pretrained during the initial phase and then augment it with the original predictor for MTS forecasting purpose. Comprehensive experimental studies are conducted on multiple MTS benchmarks. 
The results demonstrate STMAE's efficacy in enhancing the performance of existing spatial-temporal models. Ablation studies further confirm the advantage of our proposed dual-masking strategy for the MTS forecasting task. Figure 4: Masking ratio sensitivity analysis of STMAE\({}_{\text{A}}\) on PEMS04 and PEMS08. Figure 3: Training and validation processes of STMAE\({}_{\text{A}}\) and AGCRN on PEMS04 and PEMS08.
2310.00361
Effect of alternating layered ansatzes on trainability of projected quantum kernel
Quantum kernel methods have been actively examined from both theoretical and practical perspectives due to the potential of quantum advantage in machine learning tasks. Despite a provable advantage of fine-tuned quantum kernels for specific problems, widespread practical usage of quantum kernel methods requires resolving the so-called vanishing similarity issue, where exponentially vanishing variance of the quantum kernels causes implementation infeasibility and trainability problems. In this work, we analytically and numerically investigate the vanishing similarity issue in projected quantum kernels with alternating layered ansatzes. We find that variance depends on circuit depth, size of local unitary blocks and initial state, indicating the issue is avoidable if shallow alternating layered ansatzes are used and initial state is not highly entangled. Our work provides some insights into design principles of projected quantum kernels and implies the need for caution when using highly entangled states as input to quantum kernel-based learning models.
Yudai Suzuki, Muyuan Li
2023-09-30T12:32:39Z
http://arxiv.org/abs/2310.00361v1
# Effect of alternating layered ansatzes on trainability of projected quantum kernel ###### Abstract Quantum kernel methods have been actively examined from both theoretical and practical perspectives due to the potential of quantum advantage in machine learning tasks. Despite a provable advantage of fine-tuned quantum kernels for specific problems, widespread practical usage of quantum kernel methods requires resolving the so-called vanishing similarity issue, where exponentially vanishing variance of the quantum kernels causes implementation infeasibility and trainability problems. In this work, we analytically and numerically investigate the vanishing similarity issue in projected quantum kernels with alternating layered ansatzes. We find that variance depends on circuit depth, size of local unitary blocks and initial state, indicating the issue is avoidable if shallow alternating layered ansatzes are used and initial state is not highly entangled. Our work provides some insights into design principles of projected quantum kernels and implies the need for caution when using highly entangled states as input to quantum kernel-based learning models. ## 1 Introduction Recent advances in quantum devices and their public accessibility have led a number of researchers to explore the applicability of quantum computing in various fields. Machine learning is one such field where quantum computers can possibly enhance the capability of conventional methods. Remarkably, it has been shown that some quantum machine learning (QML) methods are theoretically guaranteed to outperform their existing classical counterparts for certain tasks [1, 2, 3, 4, 5, 6, 7, 8]. Motivated by these works, QML approaches have also been heuristically examined with the hope to discover practical advantages over classical ones. Quantum kernel methods are promising QML methods where the Hilbert space accessed by quantum computers is utilized as a feature space for machine learning tasks [9, 10]. More specifically, quantum computers are used to map data into quantum feature space (i.e., the Hilbert space) via quantum circuits; then a quantum kernel, an inner product of a pair of data-dependent quantum features, is computed. The core idea is that the quantum kernel can measure the similarity between data points in the quantum feature space, without explicitly determining the corresponding feature vectors that are exponentially large in the number of qubits. Much attention has been paid to quantum kernel methods because the provable advantage for a specific learning task has been shown [4] and supervised QML models can be recast in terms of kernel methods [11]. Despite the hope of quantum advantages for real-world machine learning tasks, it has been suggested that quantum kernel methods suffer from the so-called vanishing similarity issue or exponential concentration issue [12, 13], which undermines implementation feasibility and trainability of quantum kernel-based learning models. Analogous to the well-known barren plateau problems in variational quantum algorithms [14, 15, 16, 17, 18], vanishing similarity is a phenomenon where the expectation value and variance of the quantum kernel decay exponentially quickly in the number of qubits. As a result, output values of quantum kernels for any pairs of data points result in nearly the same value, i.e., they concentrate around the expectation value. Firstly, this implies that an exponential number of measurement shots is needed to estimate each quantum kernel on quantum hardware.
It also implies that models constructed from quantum kernels fail to distinguish the difference between data points, leading to overfitting and poor performance to new unseen data [12; 13]. Recent works have attempted to analytically clarify the phenomenon and seek out a remedy to this issue. In particular, Ref. [13] analyzed the phenomenon for two types of fidelity-based quantum kernels, the commonly-used fidelity-based quantum kernel [9] and projected quantum kernels [5]. In addition, four causes of the problem were elucidated in the literature: expressivity of quantum circuits, global measurement, how entangled the data-embedded quantum states are and quantum noise. The analysis gives insight into design principles for quantum kernels. Scaling the rotation angles for data encoding gates could help avoid the issue at the cost of expressivity of quantum circuits [19; 20; 21]. Moreover, it has been shown that a new type of quantum kernel called the quantum Fisher kernel can mitigate the vanishing similarity issue because local similarities are measured via the information geometric quantity of quantum circuits [12]. In this work, we further examine projected quantum kernels from the perspective of the vanishing similarity issue. As mentioned above, Ref. [13] analyzed projected quantum kernels for globally random quantum circuits and reached a conclusion that one cannot mitigate the exponential concentration for the quantum circuits. On the other hand, according to Ref. [18] on how to remedy the barren plateau problem, using local cost functions and the so-called alternating layered ansatzes (ALAs) possibly resolves vanishing gradients. This suggests a possibility that projected quantum kernels can alleviate the issue because the difference of data is measured via a local quantity, i.e., reduced density matrices of the data-dependent quantum states. Therefore, this work analytically and numerically investigates the presence of the vanishing similarity issue in projected quantum kernels for different types of quantum circuits. To be more specific, we provide analytical expressions for expectation value and variance of projected quantum kernels using (1) \(n\)-qubit random quantum circuits and (2) the ALA with \(m\)-qubit local unitary blocks. 
We assume here that globally random quantum circuits and local unitary blocks in the ALAs form 2-designs [22, 23, 24, 25, 26]. With this assumption, the globally random quantum circuits fail to avoid the issue, as demonstrated in Ref. [13]. As for the ALAs, we find that the variance of projected quantum kernels depends not only on circuit depth and the size of the local unitary blocks, but also on the initial state.
This result indicates that variance of projected quantum kernel with shallow ALAs can avoid the vanishing similarity issue if the initial state is not highly entangled, such as a tensor product state. Fig. 1 illustrates this result. Moreover, we observe dependence on position of the reduced density matrices (accordingly, the light-cone of the reduced subsystem) used to calculate projected quantum kernels. This suggests that contribution of the term in the summed projected quantum kernels differs depending on position of the subsystems. We then validate these analytical results by performing numerical simulation. The rest of this paper is organized as follows. We provide overview of quantum kernel methods and details of projected quantum kernels in Section 2.1. Then we elaborate the setting of our analysis in Section 2.2. Our main analytical results on the vanishing similarity issue in projected quantum kernels is detailed in Section 3.1, which is followed by numerical simulation to demonstrate examples supporting the analytical results in Section 3.2. Lastly, Section 4 discusses the implication of our results and concludes this paper. ## 2 Preliminary In this section we first review quantum kernel methods and provide the details of projected quantum kernels. We also introduce the settings in our analysis. ### Quantum kernel methods Quantum kernel methods measure similarity between all possible pairs of data using a function called quantum kernel. Originally proposed was fidelity-based quantum kernel [9] defined as \[k_{Q}(\mathbf{x}_{i},\mathbf{x}_{j})=\operatorname{Tr}\left[\rho\left(\mathbf{x}_{i},\mathbf{ \theta}\right)\rho\left(\mathbf{x}_{j},\mathbf{\theta}\right)\right], \tag{1}\] where \(\rho(\mathbf{x},\mathbf{\theta})=U(\mathbf{x},\mathbf{\theta})\rho_{0}U^{\dagger}(\mathbf{x},\mathbf{ \theta})\) is the density matrix representation of quantum state generated by applying a unitary operator \(U(\mathbf{x},\mathbf{\theta})\) to initial state \(\rho_{0}\). The unitary operator is realized by a quantum circuit dependent on data \(\mathbf{x}\) and tunable parameters \(\mathbf{\theta}\), and plays a role of feature mapping; classical or quantum data are mapped to certain quantum states that have rich information on the dataset. Note that we also introduce parameters \(\mathbf{\theta}\), because such quantum feature map can be engineered by optimizing \(\mathbf{\theta}\) in practical situations [27]. Then, Gram matrix \(G\) whose \((i,j)\) element corresponds to kernel function with an input pair \((\mathbf{x}_{i},\mathbf{x}_{j})\), i.e., \[G_{i,j}=k_{Q}(\mathbf{x}_{i},\mathbf{x}_{j}),\] is used to perform machine learning tasks. Typically, kernel methods are used for classification tasks in combination with support vector machines. The classification problem is reduced to minimizing the following cost function \(L(\mathbf{\alpha})\) with respect to the parameter \(\mathbf{\alpha}\); \[L(\mathbf{\alpha})=-\sum_{i}^{N}\alpha_{i}+\frac{1}{2}\sum_{i,j}^{N}\alpha_{i} \alpha_{j}y_{i}y_{j}G_{ij} \tag{2}\] where \(N\) is the number of data points and \(y_{i}\in\{+1,-1\}\) is the label of data \(\mathbf{x}_{i}\). With optimal parameter \(\mathbf{\alpha}^{opt}\) obtained by solving Eq. (2), the prediction \(y(\mathbf{x}_{new})\) of unseen data \(\mathbf{x}_{new}\) can be written as \[y(\mathbf{x}_{new})=\operatorname{sign}\left(\sum_{i}\alpha_{i}^{opt}y_{i}k_{Q}( \mathbf{x}_{new},\mathbf{x}_{i})\right). 
\tag{3}\] While it has been proven that there exists a dataset that is not efficiently learnable by classical models but is learnable by quantum kernels [4], fidelity-based quantum kernels in Eq. (1) suffer from the vanishing similarity issue: the expectation and variance of the quantum kernel decline exponentially as the number of qubits increases. More concretely, the vanishing similarity issue is mathematically defined as \[\operatorname{Var}_{\{\mathbf{x},\mathbf{x}^{\prime}\}}[k_{Q}(\mathbf{x},\mathbf{x}^{\prime})]\leq B,\quad B\in\mathcal{O}(c^{-n}) \tag{4}\] with \(c>1\) and the number of qubits \(n\). Here, the variance is taken over all possible input data pairs \(\{\mathbf{x},\mathbf{x}^{\prime}\}\). We remark that, as the quantum kernel depends on the data via a quantum feature map \(U(\mathbf{x},\mathbf{\theta})\), the variance can be equivalently taken over \(\{U(\mathbf{x},\mathbf{\theta}),U(\mathbf{x^{\prime}},\mathbf{\theta})\}\) sampled from the data (and parameters) dependent unitary ensemble, i.e., \(\text{Var}_{\{U(\mathbf{x},\mathbf{\theta}),U(\mathbf{x^{\prime}},\mathbf{\theta})\}}[k_{Q}(\mathbf{x},\mathbf{x^{\prime}})]\). The reason why this is detrimental is two-fold [12, 13]. One is that an exponential number of measurements must be done to precisely estimate the quantum kernel. The other is a trainability issue. The Gram matrix will be close to the identity matrix for a large number of qubits and thus the model of Eq. (3) obtained by minimizing the cost function in Eq. (2) would cause overfitting. A possible remedy to this problem is the projected quantum kernel proposed in Ref. [5], where a few variations were introduced. A simple one is the linear projected quantum kernel, defined as \[k_{PQ}^{L}(\mathbf{x},\mathbf{x^{\prime}})=\sum_{\kappa}\text{Tr}\left[\text{Tr}_{\tilde{S}_{\kappa}}\left[\rho(\mathbf{x},\mathbf{\theta})\right]\text{Tr}_{\tilde{S}_{\kappa}}\left[\rho(\mathbf{x^{\prime}},\mathbf{\theta})\right]\right] \tag{5}\] where \(S_{\kappa}\) denotes the subspace for the \(\kappa\)-th qubit and \(\text{Tr}_{\tilde{S}_{\kappa}}\left[\cdot\right]\) is the partial trace operation over the subspace \(\tilde{S}_{\kappa}\). Note that \(\tilde{S}\) is the complement of the subspace \(S\). Also, the Gaussian projected quantum kernel is proposed: \[k_{PQ}^{G}(\mathbf{x},\mathbf{x^{\prime}})=\exp\left(-\gamma\sum_{\kappa}\left\|\text{Tr}_{\tilde{S}_{\kappa}}\left[\rho(\mathbf{x},\mathbf{\theta})\right]-\text{Tr}_{\tilde{S}_{\kappa}}\left[\rho(\mathbf{x^{\prime}},\mathbf{\theta})\right]\right\|_{2}^{2}\right) \tag{6}\] with a hyperparameter \(\gamma\in\mathbb{R}^{+}\) and the Hilbert-Schmidt norm \(\|\cdot\|_{2}\). A key point of projected quantum kernels is that the similarity of data is measured using the reduced density matrix \(\text{Tr}_{\tilde{S}_{\kappa}}[\rho(\mathbf{x},\mathbf{\theta})]\) instead of the density matrix \(\rho(\mathbf{x},\mathbf{\theta})\). Namely, local differences between data are compared in projected quantum kernels. According to Ref. [18], the barren plateau problem in variational quantum algorithms can be circumvented using local cost functions and the ALA. Similarly, projected quantum kernels also possess a local property that can help mitigate the vanishing similarity issue, which makes them favorable over traditional quantum kernels for practical applications. ### Setting in our analysis Although Ref. [13] demonstrates that projected quantum kernels with globally random quantum circuits cannot avoid the issue, they remain a seemingly promising approach because of their locality.
Thus, this work further analyzes projected quantum kernels from the vanishing similarity perspective, considering two types of quantum circuits. One is the \(n\)-qubit random quantum circuit and the other is the ALA with \(m\)-qubit local unitary blocks [18], as depicted in Fig. 2 (a) and (b), respectively. Let us note that the former quantum circuit is the same setting in Ref. [13], but the latter has not been examined for use of projected quantum kernels. We performed analytical calculation for the globally random quantum circuits as well to make sure of the validity of our analysis and show an exact expression of the variance. For ease of analytical investigation, we then assume that the globally random quantum circuits and local unitary blocks in the ALAs are independent and 2-designs [22, 23, 24, 25, 26], meaning that the quantum circuits (unitary blocks) have the same statistical property with Haar random unitary up to the second moment. In a broad sense, this assumption indicates that the quantum circuits or unitary blocks are expressive enough to uniformly explore the ensemble of Haar random states. We remark that, while quantum circuits might not be 2-designs in practice, some previous works have made similar assumptions to check the problems such as barren plateau [14, 18, 28, 29, 30] and vanishing similarity [12, 13, 19]. Specifically, we express the ALA as \[U(\mathbf{x},\mathbf{\theta}) =\prod_{d=1}^{L}V_{d}(\mathbf{x},\mathbf{\theta}) \tag{7}\] \[=\prod_{d=1}^{L}\left(\prod_{l=1}^{\zeta}W_{l,d}(\mathbf{x},\mathbf{ \theta}_{l,d})\right),\] where \(L\) is circuit depth and \(\zeta\) is the number of unitary blocks in each layer. Here we assume that the total number of qubits \(n\) satisfies \(n=m\zeta\). We note that the number of qubits on which both a unitary block in a layer and the one in the adjacent layer act is \(m/2\); for example, \(S_{(2,1)}\) and \(S_{(1,2)}\) have \(m/2\)-qubit subspace in common, where \(S_{(l,d)}\) is the subspace of qubits which the unitary block \(W_{l,d}\) acts on. The detail is illustrated in Fig. 2 (c). Throughout this manuscript, in lieu of Eqs. (5) and (6), we consider the following quantity; \[k_{PQ}^{(\kappa)}(\mathbf{x},\mathbf{x^{\prime}})=\text{Tr}\left[\text{Tr}_{\tilde{S}_ {\kappa}}\left[\rho(\mathbf{x},\mathbf{\theta})\right]\text{Tr}_{\tilde{S}_{\kappa}} \left[\rho(\mathbf{x^{\prime}},\mathbf{\theta})\right]\right]. \tag{8}\] We focus on this quantity because exploring it is sufficient to confirm the tendency of projected quantum kernels. Of course, the variance of Eq. (5) depends on the covariance terms and thus is not necessarily equal to that of the summation of Eq. (8) over possible \(\kappa\). However, in this case, every covariance term is equal to or more than zero and the difference between them does not matter in terms of scaling; see Appendix B.3 for more details. Moreover, without loss of generality, we assume that the subspace for the \(\kappa\)-th reduced density matrix (composed of \(n_{\kappa}\) qubits) appearing in Eq. (8), \(S_{\kappa}\), is completely included in the subspace on which one of the unitary blocks in the last layer acts, as is shown in Fig. 2 (c). We also assume that initial state \(\rho_{0}\) is an arbitrary pure state. ## 3 Results In what follows, we provide analytical results on the vanishing similarity issue in projected quantum kernels. Then, we show numerical results to check the reliability of our analysis. 
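To make the quantity in Eq. (8) concrete, the short NumPy sketch below (our own illustration; the helper names are not from the paper, and the state-preparation step, e.g. an ALA acting on \(\rho_{0}\), is left abstract) evaluates \(k^{(\kappa)}_{PQ}\) for two pure states when \(S_{\kappa}\) is a single qubit.

```python
import numpy as np

def reduced_density_matrix(psi, n, keep):
    """Single-qubit reduced density matrix of an n-qubit pure state vector."""
    m = np.moveaxis(psi.reshape([2] * n), keep, 0).reshape(2, -1)
    return m @ m.conj().T            # trace over all qubits except `keep`

def projected_kernel_term(psi_x, psi_xp, n, kappa):
    """k_PQ^(kappa)(x, x') = Tr[ Tr_{S~k}[rho(x)] Tr_{S~k}[rho(x')] ], cf. Eq. (8)."""
    rho_x = reduced_density_matrix(psi_x, n, kappa)
    rho_xp = reduced_density_matrix(psi_xp, n, kappa)
    return float(np.real(np.trace(rho_x @ rho_xp)))

# Example: overlap of the single-qubit marginals of two 3-qubit product states
plus = np.ones(2) / np.sqrt(2)
zero = np.array([1.0, 0.0])
psi1 = np.kron(np.kron(zero, zero), plus)   # |0>|0>|+>
psi2 = np.kron(np.kron(zero, plus), plus)   # |0>|+>|+>
print(projected_kernel_term(psi1, psi2, n=3, kappa=1))  # Tr[|0><0| |+><+|] = 0.5
```
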
### Main results We analytically calculate the expectation value and variance of projected quantum kernels to check for the existence of the vanishing similarity issue. Here, we focus on two types of quantum circuits, that is, globally random quantum circuits and ALAs. Although the case of globally random quantum circuits has been analyzed in a previous study [13], we here check it to confirm our analytical procedure and give an exact expression of the variance. We first show analytical results for the globally random quantum circuits, with the full proof included in Appendix B.1. **Proposition 1**.: _Let us denote the expectation value and variance of the projected quantum kernel defined in Eq. (8) with \(n\)-qubit random quantum circuits as \(\langle k_{PQ,rqc}^{(\kappa)}\rangle\) and \(\mathrm{Var}[k_{PQ,rqc}^{(\kappa)}]\), respectively. If the \(n\)-qubit random quantum circuits form \(t\)-designs with \(t\geq 2\) and are independent, then we have_ \[\langle k_{PQ,rqc}^{(\kappa)}\rangle=\frac{1}{2^{n_{\kappa}}}, \tag{9}\] \[\mathrm{Var}[k_{PQ,rqc}^{(\kappa)}]=\frac{2^{2n_{\kappa}}-1}{2^{2n_{\kappa}}\left(2^{n}+1\right)^{2}}\approx\frac{1}{2^{2n}}. \tag{10}\] We remind the readers that \(n_{\kappa}\) is the number of qubits in the \(\kappa\)-th subsystem \(S_{\kappa}\) and \(n\) is the total number of qubits. Proposition 1 implies that the similarity between a pair of different data will be hard to distinguish, regardless of the size of the reduced density matrix, for a large number of qubits. Therefore, projected quantum kernels with globally random quantum circuits cannot avoid the vanishing similarity issue. Note that the result is different from the previous result in Ref. [13] in the sense that we calculate the exact expectation rather than its upper bound, but the implication is consistent. Figure 2: Quantum circuits used in our analysis. Panels (a) and (b) show the globally random quantum circuit acting on all qubits and the ALA, respectively. Panel (c) shows details of the ALA and the setting of the projected quantum kernel in our analysis. Next, we provide the result obtained for the case of ALAs. We here obtain a lower bound on the variance to identify when the vanishing similarity issue is absent. Please refer to Appendix B.2 for the proof. **Theorem 1**.: _For the projected quantum kernel defined in Eq. (8) and the ALA defined in Eq. (7), we denote its expectation value and variance as \(\langle k_{PQ,ala}^{(\kappa)}\rangle\) and \(\mathrm{Var}[k_{PQ,ala}^{(\kappa)}]\), respectively. Also, we assume that every unitary block in the ALAs, \(U(\mathbf{x},\mathbf{\theta})\) and \(U(\mathbf{x}^{\prime},\mathbf{\theta})\), is a \(t\)-design with \(t\geq 2\) and independent. Then the expectation value is_ \[\langle k_{PQ,ala}^{(\kappa)}\rangle=\frac{1}{2^{n_{\kappa}}}. \tag{11}\] _As for the variance, its lower bound is_ \[\mathrm{Var}[k_{PQ,ala}^{(\kappa)}]\geq\frac{2^{2m(L-1)}\left(2^{2n_{\kappa}}-1\right)}{(2^{2m}-1)^{2}(2^{m}+1)^{4(L-1)}2^{2n_{\kappa}}}F\left(\rho_{0},L\right), \tag{12}\] _with a function \(F(\rho_{0},L)\) of the initial state \(\rho_{0}\) and the depth \(L\). More specifically, we define the function as_ \[F\left(\rho_{0},L\right)=\left(2^{m}\sum_{h\in P(S_{(k_{u},1)}:S_{(k_{l},1)})}t_{h}\mathrm{Tr}\left[\rho_{0,\bar{h}}^{2}\right]-\sum_{\tau=0}^{L-1}\frac{c_{\tau}}{2^{m\tau}}\right)^{2}, \tag{13}\] _where \(t_{h},c_{\tau}\in\mathbb{R}^{+}\) and \(\rho_{0,\bar{h}}=\mathrm{Tr}_{\bar{h}}\left[\rho_{0}\right]\) is the partial trace of the initial state over the subspace \(\bar{h}\).
Also, \(P(S_{(k_{u},1)}:S_{(k_{l},1)})\) is the set containing all the possible neighboring subspaces in \(\bigcup_{i=0}^{k_{l}-k_{u}}S_{(k_{u}+i,1)}\). Here, \(W_{k_{u},1}\) (\(W_{k_{l},1}\)) denotes the unitary block located at the upper (lower) edge of the light-cone in the first layer. We note that \(F(\rho_{0},L)=0\) if \(\rho_{0,\bar{h}}\) is the completely mixed state for all subspaces \(h\)._ Like the case of the globally random quantum circuit, the expectation value does not depend on the total number of qubits but on the size of the reduced density matrix. However, Eq. (12) shows that the lower bound of the variance depends not only on the depth \(L\) and the size of the local unitary blocks \(m\), but also on the initial state via the function \(F(\rho_{0},L)\). As shown in Eq. (13), the function contains the purity of some subspaces of the initial state. Thus, depending on the choice of initial state, the vanishing similarity issue can be avoided. For example, if the initial state can be represented as a tensor product of arbitrary single-qubit states, i.e., \(\rho_{0}=\sigma_{1}\otimes\sigma_{2}\otimes\ldots\otimes\sigma_{n}\) with arbitrary single-qubit states \(\{\sigma_{i}\}\), then the function attains its maximum value and the variance scales as \(\Omega(2^{-2mL})\). On the other hand, if the initial state is so entangled that \(\rho_{0,\bar{h}}\) is close to the completely mixed state for almost all \(h\), then the variance could decrease exponentially fast with respect to the number of qubits regardless of circuit depth. Note that it has been reported that the initial state matters for the vanishing gradient problems in variational quantum algorithms [31, 32]. Thus, our result suggests that the initial state should also be taken into account for the usage of projected quantum kernels. Moreover, we check the dependence of the variance on the position of the \(\kappa\)-th qubit. To be more specific, we consider the following situations: the \(\kappa\)-th qubit(s) is (are) located (i) in the middle of the last layer and (ii) in the unitary block at the edge, i.e., \(W_{1,L}\) or \(W_{\zeta,L}\). Fig. 3 (a) and (b) illustrate these cases, respectively. In addition, we assume that the initial state is a tensor product state to check the relationship between the depth and the position of the \(\kappa\)-th qubit(s). In the first case, this is exactly the same as the result shown in Eq. (12), i.e., \(\Omega(2^{-2mL})\). For the second case, as demonstrated in Appendix B.2, the variance is \(\Omega(2^{-mL})\). The difference comes down to the number of unitary blocks in the light-cone. This implies that reduced density matrices at the edge of the layer contribute to the linear projected quantum kernel in Eq. (5) more than the ones in the middle due to the quadratic difference. Figure 3: Light-cone depending on the position of the \(\kappa\)-th qubit(s). The blue regions represent the light-cone in the quantum circuits. Panel (a) shows the case (i) where the number of local unitary blocks in the light-cone is the largest, while Panel (b) shows the case (ii) with the smallest number of unitary blocks. We remark that the dependence of the variance on the observables' position was argued in the context of variational quantum algorithms in Ref. [18], and the result we newly obtained here from the viewpoint of quantum kernel methods is similar to the statement shown in the literature; see Supplementary Information Figure 2 of Ref. [18].
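The scalings above can also be probed directly by sampling. The sketch below is our own illustration: Haar-random state vectors stand in for the outputs of globally random circuits, and the estimated mean and variance of \(k^{(\kappa)}_{PQ}\) over random pairs should sit near \(1/2^{n_{\kappa}}=1/2\) with a variance decaying roughly as \(2^{-2n}\), in line with Proposition 1.

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_state(n):
    """Haar-random n-qubit pure state (stand-in for a globally random circuit output)."""
    v = rng.normal(size=2 ** n) + 1j * rng.normal(size=2 ** n)
    return v / np.linalg.norm(v)

def k_pq(psi, phi, n, kappa=0):
    """Tr[rho_kappa(psi) rho_kappa(phi)] for a single-qubit subsystem kappa."""
    def rdm(state):
        m = np.moveaxis(state.reshape([2] * n), kappa, 0).reshape(2, -1)
        return m @ m.conj().T
    return np.real(np.trace(rdm(psi) @ rdm(phi)))

for n in (2, 4, 6, 8):
    vals = [k_pq(haar_state(n), haar_state(n), n) for _ in range(500)]
    print(f"n={n}: mean={np.mean(vals):.3f}, var={np.var(vals):.2e}")
```

Repeating the experiment with states prepared by shallow ALAs acting on product inputs, rather than fully Haar-random states, is what reveals the milder, depth-dependent scaling of Theorem 1.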
### Numerical results We perform numerical simulations to demonstrate examples that support our analytical results. In particular, we focus on the behavior of the variance for the ALA, because the one for the globally random quantum circuits has been analyzed in Ref. [13]. In the numerical experiments, ALAs with 2-qubit local unitary blocks shown in Fig. 4 are considered, where we employ data re-uploading techniques [33]. Namely, each local unitary block consists of an embedding layer and parameterized quantum circuit layers. Here, we use rotation \(Y\) and \(Z\) gates as single-qubit rotation gates acting on the \(i\)-th qubit, i.e., \(R_{\sigma_{i}}(\beta)=\exp(-i\beta\sigma_{i}/2),\sigma_{i}\in\{Y_{i},Z_{i}\}\), and the controlled-Z gate as an entangler. As for the input data, we set the number of qubits equal to the dimension of the data, and each component is randomly chosen from the uniform distribution over \([-\pi,\pi)\). Analogously, each parameter in the parameterized quantum circuit layers is selected uniformly at random from the range \([-\pi,\pi)\). Then we prepare five sets of parameters and five datasets containing 50 data points to compute \(k_{PQ}^{(\kappa)}(\mathbf{x},\mathbf{x^{\prime}})\) in Eq. (8) with \(\mathbf{x}\neq\mathbf{x^{\prime}}\). We note that \(n_{\kappa}=1\) for our numerical simulations. Afterwards, the variance is calculated using the projected quantum kernels computed for the 25 different settings of input dataset and parameter set. The computation is performed for all possible \(\kappa\). When we encode the data into the quantum circuit, the \(i\)-th component of the input data, \(x_{i}\), is injected into the angle of the single-qubit rotation gates acting on the \(i\)-th qubit in every embedding layer; that is, \(R_{Y_{i}}(x_{i})\) (\(R_{Z_{i}}(x_{i})\)). We also assign each parameter to a single-qubit rotation gate in the parameterized quantum circuit layers. Namely, no parameters are shared with different rotation gates. Fig. 4 depicts the details of the quantum circuit. The numerical simulation is performed using Qiskit [34]. Figure 4: Alternating layered ansatz used in our simulation. As an example, here we show an even \(n\)-qubit alternating layered ansatz with depth \(L=2\). The quantum circuit consists of \(2\)-qubit local unitary blocks denoted by red boxes, each of which has a data embedding layer and a parameterized quantum circuit layer. We note that \(Ry_{a}\) (\(Rz_{a}\)) represents a single-qubit rotation gate on the \(Y\) (\(Z\)) axis, whose angle is determined by a function of \(\mathbf{x}\) or \(\mathbf{\theta}\) shown in the subscript. In the numerical experiments, the \(i\)-th element of the data, \(x_{i}\), is encoded into single-qubit rotation gates (\(Ry\) and \(Rz\)) acting on the \(i\)-th qubit in every embedding layer. Also, each parameter is assigned to a different single-qubit rotation gate in the parameterized quantum circuit layers. We here summarize the numerical results from the following perspectives: (i) the dependence of the variance on circuit depth for different initial states, (ii) the dependence on the position of the \(\kappa\)-th qubit and (iii) the relation between the variance and the number of qubits \(n\). #### 3.2.1 Dependence on circuit depth Fig.
5 shows the variance of projected quantum kernels against the depth \(L\) for different initial states, where the number of qubits is \(n=9\) and the reduced density matrix with respect to the fifth qubit is considered for three initial states: a tensor product state \(\rho_{0}=\ket{0^{\otimes n}}\bra{0^{\otimes n}}\), the GHZ state \(\rho_{0}=\ket{\psi_{GHZ}}\bra{\psi_{GHZ}}\) with \(\ket{\psi_{GHZ}}=2^{-1/2}(\ket{0}^{\otimes n}+\ket{1}^{\otimes n})\) and initial states randomly sampled from the Haar measure. We choose these initial states with different degrees of entanglement to examine how the entanglement of the initial state affects the variance. As for the random initial states, we prepare five different states and the variance is averaged over the trials. It turns out that the variance decreases exponentially in the circuit depth \(L\) for the case of the tensor product state and the GHZ state. On the other hand, if a random quantum state is prepared as the initial state, the variance is independent of the depth and much smaller than the ones for the other cases. This is consistent with the analytical result shown in Theorem 1. As demonstrated in Eq. (12), the variance is determined by the depth and the function of the initial state \(F(\rho,L)\). For the first two cases, \(F(\rho,L)\) does not contribute much to the variance because the reduced systems of the initial states are far from the completely mixed state; the purity is one for the tensor product state over any subspace \(h\), and the purity is \(1/2\) for the GHZ state if \(\bar{h}\neq\emptyset\) or \(h\neq\emptyset\) and otherwise one. Thus, the term other than \(F(\rho,L)\) comes into play; the variance vanishes exponentially with respect to the depth. Yet, the partial trace of a random quantum state can be close to the completely mixed state, and thus \(F(\rho,L)\) plays a more significant role in the variance than the remaining term. Hence, the variance is consistently small regardless of the depth. #### 3.2.2 Dependence on positions of reduced subsystems The variance against the position of the \(\kappa\)-th qubit for the 9-qubit system is shown in Fig. 6. We notice that the variance of the reduced system at the edge of the layer is smaller than that of the systems in the middle for the tensor product state and the GHZ state, shown in Fig. 6 (a) and (b), respectively. Also, the gap of the variance between the systems at the edge and in the middle gets larger as the depth increases. This numerical result agrees with the statement in the previous section that the scaling of the variance differs depending on the number of local unitary blocks in the light-cone, and accordingly the position of the \(\kappa\)-th qubit. As for the random quantum state case in Fig. 6 (c), the depth and the position are less significant in the variance because the term \(F(\rho,L)\) contributes dominantly. #### 3.2.3 Dependence on the total number of qubits Fig. 7 (a) to (c) show the variance for different numbers of qubits using a tensor product state, the GHZ state and random quantum states, respectively. For the tensor product and the GHZ state, the variance levels off for all cases of circuit depth when the number of qubits is larger than a certain number. This is because the purity is constant for these cases and thus \(F(\rho,L)\) is saturated. Thus, we can confirm that the variance in these cases is insensitive to the number of qubits. However, Fig. 7 (c) shows that the variance vanishes exponentially fast in the number of qubits.
This would be attributed to the fact that there is an exponential decay in \(F(\rho,L)\) with respect to the number of qubits. Hence, this indicates that initial states really matter in the variance of projected quantum kernels. Figure 5: Variance of projected quantum kernel against the depth of quantum circuits. Here we used the 9-qubit ALAs with the depth \(L\in\{4,6,8,10,12,14\}\), and the reduced density matrix for the fifth qubit to compute the projected quantum kernel. We consider three initial states: a tensor product state (green), the GHZ state (blue) and random quantum states (red). The shaded region illustrates the standard deviation over five different random states. Figure 6: Variance of projected quantum kernel against the position of the \(\kappa\)-th qubit. We used the ALAs with 9 qubits. The variances of the projected quantum kernel with depth \(L\in\{4,6,8,10,12,14\}\) are shown for the case of (a) a tensor product state, (b) the GHZ state and (c) random quantum states. In panel (c), the standard deviation is represented by the shaded region. Figure 7: Variance of projected quantum kernel against the number of qubits. We used different numbers of qubits, \(n\in\{3,4,5,6,7,8,9\}\), and the ALAs with different depths \(L\in\{4,6,8,10,12,14\}\). Here, we consider the reduced system of the \(\lceil n/2\rceil\)-th qubit, i.e., the qubit in the middle of the width. Panels (a), (b) and (c) show the variances for a tensor product state, the GHZ state and random quantum states, respectively. ## 4 Discussion & Conclusion In this paper we investigated the vanishing similarity issue in projected quantum kernels from both analytical and numerical perspectives. We analytically showed that this issue is not avoidable for the case of globally random quantum circuits, which is consistent with previous results in Ref. [13]. In contrast, we found that projected quantum kernels with ALAs can avoid exponential decay of the variance if the quantum circuits are shallow and the initial state is not highly entangled. This implies the potential of projected quantum kernels for practical usability. In addition, we showed that the initial state plays a significant role in the variance scaling and thus caution needs to be taken when preparing input states. We discuss the implication of our results in QML tasks below. First, our results suggest that there is a caveat when quantum data is used as input states in QML tasks. Some QML tasks handle quantum states as input states, and then parameterized quantum circuits are applied to the states to seek out features suitable for the tasks. In this situation, the initial state could be more entangled than a tensor product state. Hence, there is a possibility that the vanishing similarity issue for projected quantum kernels could be exacerbated for some tasks. We also showed that the variance differs depending on the position of the reduced density matrix. Thus, the contribution to the projected quantum kernels in Eqs. (5) and (6) of reduced systems at the edge of the layer is larger than that of systems in the middle; the tendency gets worse as we increase circuit depth. This might result in poor performance on some tasks because the relevant information could be undermined. Hence, in some situations, it would be better to consider the gap, for example, by modifying the weights of the projected quantum kernel for the \(\kappa\)-th qubit(s). Moreover, our results indicate a situation where classical shadow can reduce quantum resources required to compute projected quantum kernels.
Classical shadow is a technique to estimate properties of quantum states with a small number of measurement shots [35]. Thus far, some works have also used the technique for quantum kernel methods [5, 36]. On the other hand, classical shadow does not work when vanishing similarity arises. This is because the resolution needed to tell the difference in a pair of data through the quantum kernel is significantly high. Our Theorem 1 suggests that projected quantum kernels can utilize the power of classical shadows, when shallow ALAs are used and initial state is not highly entangled. We lastly remark that our analytical results are based on the assumption that quantum circuits and the local unitary blocks in the ALAs are 2-designs. This result is of significance in that we shed light on trainability and limitations of projected quantum kernels in general. On the other hand, as the no free lunch theorems [37, 38, 39] suggest, domain knowledge should be incorporated into the model. Actually, an emerging field called geometric quantum machine learning [40, 41, 42, 43, 44], where inductive bias such as symmetry is considered in constructing quantum models, has attracted much attention. Therefore it would be worthwhile to explore the existence of vanishing similarity issue by incorporating domain knowledge into the model for practical purpose. It would also be important to investigate advantages of projected quantum kernels for practical machine learning tasks handling quantum data as well as classical data. **Acknowledgement** The authors thank Kunal Sharma, Ryan Sweke, Khadijeh Najafi and Antonio Mezzacapo for stimulating discussions and comments on the manuscript. Part of this work was done when Y.S. was a research intern at IBM. Y.S. was supported by Grant-in-Aid for JSPS Fellows 22KJ2709.
2309.12160
Flow separation control design with experimental validation
Flow control aims at modifying a natural flow state to reach another flow state considered advantageous. In this paper, active feedback flow separation control is investigated with two different closed-loop control strategies, both involving a reference signal tracking architecture. Firstly, a data-driven control law, leading to a linear (integral) controller, is employed. Secondly, a phenomenological/model-driven approach, leading to a non-linear positive (integral) control strategy, is investigated. While the former benefits from its tuning simplicity, the latter prevents undesirable effects and formally guarantees closed-loop stability. Both control approaches were validated through wind tunnel experiments on flow separation over a movable NACA 4412 plain flap. These control laws were designed with respect to hot film measurements performed over the flap for different deflection angles. Both control approaches proved efficient in avoiding flow separation. The main contribution of this work is to provide practitioners with simple yet efficient ways to design a flow separation controller. In addition, a complete validation campaign data-set is provided.
T. Arnoult, G. Acher, V. Nowinski, P. Vuillemin, C. Briat, P. Pernod, C. Ghouila-Houri, A. Talbi, E. Garnier, C. Poussot-Vassal
2023-09-21T15:14:36Z
http://arxiv.org/abs/2309.12160v1
# Flow separation control design with experimental validation ###### Abstract Flow control aims at modifying a natural flow state to reach an other flow state considered as advantageous. In this paper, active feedback flow separation control is investigated with two different closed-loop control strategies, involving a reference signal tracking architecture. Firstly, a data-driven control law, leading to a linear (integral) controller is employed. Secondly, a phenomenological/model-driven approach, leading to a non-linear positive (integral) control strategy is investigated. While the former benefits of a tuning simplicity, the latter prevents undesirable effects and formally guarantees closed-loop stability. Both control approaches were validated through wind tunnel experiments of flow separation over a movable NACA 4412 plain flap. These control laws were designed with respect to hot film measurements, performed over the flap for different deflection angles. Both control approaches proved efficient in avoiding flow separation. The main contribution of this work is to provide practitioners simple but yet efficient ways to design a flow separation controller. In addition, a complete validation campaign data-set is provided. ## 1 Introduction ### Forewords on flow separation objective Flow separation over an aircraft flap is characterized by a decrease in the lift coefficient, an increase in the drag coefficient and can occur during the critical take-off and landing phases. Most aircraft circumvent this issue using slotted flaps, which however add structural weight and complicate the maintenance. Therefore, one solution would be to simplify these structures into plain flaps with integrated flow control devices to avoid flow separation. Flow control consists in modifying a flow general behavior with a space localized perturbation, in order to reach a flow configuration considered as favorable. In that sense, flow control can help reducing noise radiation, delaying the laminar-turbulent boundary layer transition or can help avoiding flow separation [12]. Flow control methods can be categorized as passive or active methods. Passive methods do not require any external source of energy to act on the flow. One may mention vortex generators, which have been widely used to prevent flow separation on aircraft wings. Vortices generated by these devices help re-energizing the boundary layer and therefore prevent its separation. However, they act permanently on the flow, even at off-points design. In that sense, an active flow control method can be employed in a closed-loop strategy as considered in this study. The actuators command can therefore be adapted depending on the needs. As discussed by Pastoor _et al._[22], closed-loop flow control strategies have been applied to different canonical flow configurations such as the flow around a cylinder, the flow over backward facing steps, the flow over open-cavities or separated flows over airfoils. These flows are described by different dynamics and different actuating strategies may be required to implement their closed-loop control. Considering the control of flow separation over airfoils, one may aim at modifying the mean flow properties. Several studies have focused on this case with different control methods and objectives. One control method consists in turning the actuators when a threshold value measured by the sensors is exceeded. This triggering method was used in several studies. 
For instance, Packard and Bons [21] and Rethmel _et al._[25] studied the flow separation control over wings based on NACA airfoils shape. In these studies, hot films sensors are placed on the models. The RMS (Root Mean Square) value of the hot films voltage is used to define the threshold value, according to which the flow is separated. In a similar way, Lombardi _et al._[19], Tewes _et al._[27] and Benard _et al._[2] propose a triggering control method based on pressure measurements. In [19] the triggering control criterion is based on unsteady pressure measurements performed on the model. A spectrum analysis of the pressure measurements is employed to determine the energy at a characteristic frequency indicating the onset of flow separation. If this energy value is overshot, then the plasma actuator placed at the wing leading edge is turned on. Regarding the studies [27] and [2], the triggering control methods are coupled with an hysteresis effect. Instead of reattaching a separated flow over the considered wings, both study maintain the flow attached and therefore reduce the energy required to control the flow. In [27], the threshold value is defined relatively to the leading edge pressure coefficient value, while in [2] the threshold value is set with respect to the pressure coefficient RMS computed at the leading edge. A second control methodology is a model free approach based on gradient methods such as extremum seeking and slope seeking. For instance, Benard _et al._ focus on flow reattachment over NACA 0015 airfoil with a slope seeking algorithm, which aims at maximising the lift coefficient. In a similar way, Becker _et al._[1] tend to maximize the lift coefficient of a high-lift configuration composed of a NACA 4412 main airfoil associated with a NACA 4415 flap, firstly with a SISO (Single Input Single Output) extremum seeking scheme, then with a SISO slope seeking algorithm, finally extended to a MIMO (Multiple Input Multiple Output) slope seeking approach. This study was carried out based on the pressure coefficient derived from unsteady pressure measurements. A third control approach is based on the use of black box models identifying the system's input and output transfer. Recently, Sanchez _et al._[26] explored a sliding mode approach and applied it on a numerical example. King _et al._[17] consider in their study the control of flow separation on a wing of a 1:2 scale model proposed by Airbus and expand their experiments on a full scale glider. In both cases, they identify a model using PRBS (Pseudo-Random Binary Signals) to drive the pulsed jets. From the black box model, the \(\mathcal{H}_{\infty}\) synthesis is employed to design a robust controller. Different type of controllers can be synthesized based on this black box model approach (see _e.g._[18]). ### Contribution statement The contribution of the paper is to deploy and evaluate in an experimental wind tunnel facility, involving a NACA 4412 plain flap airfoil, two active closed-loop control design strategies to drive the flow separation phenomena. The first one is a (model-free) _linear data-driven_ approach, while the second one is a _positive nonlinear phenomenological/model-driven_ strategy. The data-driven rationale is extensively detailed in [16], while the positive strategy is discussed in [4], initially considered for the control of biological systems in [5]. Both control methods are based on the use of on/off solenoid valves as actuators and on hot film sensors. 
Both control structure involve the same reference signal to track, which value is also discussed. Each strategy is validated in a wind tunnel facility (see Figure 1) and leads to a lift increase, and cancelled / reduced flow separation. As a glimpse of this paper result, Figure 2 illustrates the lift performances with and without flow separation control. In addition, Figure 3 illustrates the closed-loop frequency response (obtained with a frequency sweeping reference signal) for varying flap angles, with respect to the reference objective transfer. The rest of this note details the process for reaching such performances and derives a generic but yet simple approach to design and validate two flow separation feedback control laws. We believe that the proposed rationales are sufficiently simple to be applied on a variety of similar setups. ### Notations and paper organisation After recalling the flow separation problem in section 1, section 2 presents the considered experimental setup. Both linear data-driven and nonlinear positive model-driven flow control designs are recalled and validated in section 3. Conclusions and outlooks are gathered in section 4. By \(\mathbb{R}\), \(\mathbb{Z}\) and \(\mathbb{N}\) we indicate the sets of real, integer and natural (positive integer) numbers, respectively. The LTI dynamical system \(\mathbf{K}\) pencil is denoted \(\Lambda_{\mathbf{K}}\). We denote with \(h\in\mathbb{R}_{+}\), the sampling-time, with \(I\) the identity matrix and \(\imath\) (\(\sqrt{\imath}=-1\)) the complex variable. The time average of a quantity is denoted \(\langle\rangle\). Regarding the model, \(c_{\mathrm{flap}}\) and \(c_{\mathrm{tot}}\) respectively stand for the flap chord length and the model total length. The flap deflection angle is denoted \(\delta\). The freestream velocity is noted \(U_{\infty}\) and the Reynolds number based on the model total length is computed as \(Re=U_{\infty}c_{\mathrm{tot}}/\nu\), with \(\nu\) the air kinematic viscosity. Considering the actuators, \(f\), \(f^{+}\) and \(\alpha\) refer to the actuation frequency, its reduced form and to the duty cycle. The momentum coefficient \(C_{\mu}\) is derived from \(q_{\mathrm{jet}}\), \(U_{\mathrm{jet}}\), \(\rho_{\infty}\) and \(A_{\mathrm{ref}}\) respectively the actuators mass flow rate, actuators outlet velocity, freestream density and reference area chosen as the flap area. Concerning the sensors setup, the pressure coefficient \(C_{p}\) calculation is based on the static pressure \(p\) and the freestream pressure \(p_{\infty}\). The flap lower and upper surfaces pressure coefficients are denoted \(C_{p_{\mathrm{lower}}}\) and \(C_{p_{\mathrm{upper}}}\). These coefficients help computing the lift coefficient \(C_{L}\) as detailed in the following. In some of the following figures considering the controlled case, zones without and with actuation are distinguished. They respectively denote cases in which the control is applied but valves are not actuated or actuated with a duty cycle fixed by the controller. Figure 1: Wind tunnel facility view (Onera, Lille, France). The commanded horizontal wing is in between the two vertical structures. The flow is longitudinally travelling from the back of the photo. ## 2 Experimental control setup description ### Setup overview and wing surface properties Flow control experiments were carried out in the L1 wind tunnel facility at ONERA, Lille. It is characterized by a test section diameter of 2.40 m. 
The model is composed of an 867 mm long flat plate, stabilizing the boundary layer, followed by a plain flap of chord length \(c_{\text{flap}}=220\) mm, yielding a model total length \(c_{\text{tot}}=1087\) mm. The plain flap design is based on a NACA 4412 airfoil. As depicted in Figure 1, two lateral panels separated by 800 mm are placed inside the wind tunnel test section, next to the model borders, to avoid side effects and ensure that the flow developing over the model is bi-dimensional. The flow separation control over this model has already been studied by Chabert _et al._[7, 8], who implemented a slope seeking control algorithm. The boundary layer developing over the model is turbulent. Its transition is triggered by a carborundum line placed at the flat plate leading edge. Velocity measurements upstream of the model are performed with Pitot tubes. During the experiments, \(U_{\infty}\) is fixed at 34.5 m/s, yielding \(Re=2.39\times 10^{6}\). The turbulence level inside the wind tunnel is about 1.3 %. The model is placed inside the test section with a \(0^{\circ}\) angle of attack. The motorized flap can be deflected downward at an angle \(\delta\) varying between \(2^{\circ}\) and \(37^{\circ}\). Note that the flap is not an actuator used in the control loop but a way to modify the configuration, accounting for uncertainties and allowing the separation phenomenon to be exhibited (Figure 2).

Figure 2: Evolution of the lift coefficient \(C_{L}\) without (blue curve) and with (red curve) control against the flap deflection angle \(\delta\) for \(U_{\infty}=34.5\) m/s.

### Actuators and sensors description

#### 2.2.1 Actuation system

The actuation setup is composed of 7 slots spanning the flap leading edge at a location of \(x/c_{\text{flap}}=0.08\). Separated by 7 mm from each other, the actuator slots are 90 mm long and 0.25 mm thick each. They cover 80% of the flap span and are supplied by on/off Festo MHE2 fast response solenoid valves, fed with pressurized air up to 7 bar. The slot outlet velocity is inclined by \(30^{\circ}\) relative to the flap local tangent. The valves' mass flow rate and the actuation frequency are fixed respectively to 21 g/s and to \(f=100\) Hz. The reduced actuation frequency, referred to as \(f^{+}\), is therefore: \[f^{+}=\frac{f\ c_{\rm flap}}{U_{\infty}}\cong 0.64 \tag{1}\] The momentum added to the flow can be characterized by \(C_{\mu}\), defined as: \[C_{\mu}=\frac{q_{\rm jet}U_{\rm jet}}{\frac{1}{2}\rho_{\infty}U_{\infty}^{2}A_{\rm ref}}. \tag{2}\] Considering the conditions detailed above, the constant blowing momentum coefficient has a value of \(1.6\%\). As detailed in [6], for a fixed mass flow rate supplied to the actuators, the momentum coefficient can be extended to pulsed blowing according to the following equation: \[\langle C_{\mu}\rangle=\frac{1}{\alpha}\frac{\rho_{\rm jet}\langle U_{\rm jet}^{2}\rangle A_{\rm jet}}{\frac{1}{2}\rho_{\infty}U_{\infty}^{2}A_{\rm ref}}. \tag{3}\] As the valves considered here are driven with square signals with a duty cycle \(\alpha\), the momentum coefficient can be simplified as: \[\langle C_{\mu}\rangle=\frac{1}{\alpha}\frac{q_{\rm jet}U_{\rm jet}}{\frac{1}{2}\rho_{\infty}U_{\infty}^{2}A_{\rm ref}}=\frac{1}{\alpha}C_{\mu}. \tag{4}\] The \(\alpha\) variable then denotes the control signal used in the control loop. During the flow control experiments, \(\langle C_{\mu}\rangle\) reached a maximum value of \(4.6\%\).
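As a quick numerical cross-check of the actuation figures quoted above, the reduced frequency of Eq. (1), the Reynolds number and the pulsed-blowing momentum coefficient of Eq. (4) can be evaluated as follows. This is only an illustrative sketch; in particular, the air kinematic viscosity is an assumed value, as it is not stated explicitly in the text.

```python
# Flow and actuation quantities from Section 2, Eqs. (1) and (4).
U_inf  = 34.5        # freestream velocity [m/s]
c_flap = 0.220       # flap chord [m]
c_tot  = 1.087       # model total length [m]
nu_air = 1.57e-5     # air kinematic viscosity [m^2/s] -- assumed, not given in the text
f_act  = 100.0       # actuation frequency [Hz]
C_mu   = 0.016       # constant-blowing momentum coefficient (1.6 %)

f_plus = f_act * c_flap / U_inf   # Eq. (1): reduced frequency, ~0.64
Re = U_inf * c_tot / nu_air       # Reynolds number, ~2.4e6

def mean_C_mu(alpha):
    """Eq. (4): mean momentum coefficient for pulsed blowing with duty cycle alpha."""
    return C_mu / alpha

print(f"f+ = {f_plus:.2f}, Re = {Re:.2e}, <C_mu>(alpha=0.35) = {mean_C_mu(0.35):.3f}")
```

The last value recovers the maximum \(\langle C_{\mu}\rangle\) of about 4.6% quoted above, for a duty cycle of roughly 35% implied by the ratio of the quoted 1.6% and 4.6% values.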
#### 2.2.2 Sensing system

Regarding the sensor setup, 51 pressure taps are distributed over both the flat plate and the flap upper and lower surfaces. From these pressure measurements, both the pressure coefficient at each tap location and the global lift coefficient can be computed.

Figure 3: Frequency-domain responses of the controlled system for \(U_{\infty}=34.5\) m/s. Coloured solid lines (response for different flap deflection angles \(\delta\)) and reference (dashed black).

The pressure coefficient is defined according to the following equation: \[C_{p}=\frac{p-p_{\infty}}{\frac{1}{2}\rho_{\infty}U_{\infty}^{2}}. \tag{5}\] The lift coefficient is derived from the pressure coefficient computations according to the following formula: \[C_{L}=\int_{0}^{1}(C_{p_{\text{lower}}}-C_{p_{\text{upper}}})\,d\frac{x}{c_{\text{tot}}}. \tag{6}\] Both quantities \(C_{p}\) and \(C_{L}\) are used to assess the control effects, comparing cases of uncontrolled and controlled flows.

#### 2.2.3 Experimental strategy

To monitor the flow separation over the flap and implement the control part, eight Senflex(r) hot films are placed along the flap chord-wise direction. Connected to two Dantec(r) Streamlines units, the hot film signals are recorded at a sampling frequency of 1.25 kHz over 3 minutes for each measurement point. The control tracking value is defined with respect to the fifth hot film measurements, located at the dimensionless abscissa \(x/c_{\text{flap}}=0.511\), taking the coordinate origin at the flap leading edge. Control scripts, written in LabVIEW Real-Time 2011 via a PXIe-8102 controller, are embedded in a PXI chassis. Both the actuator and sensor setups are sketched in Figure 4.

Figure 4: Scheme of the model placed in the wind tunnel with the actuators command and hot films positions.

### Performance characterisation toward specifications

First measurements focused on the unforced flow characterization. The freestream velocity was fixed to \(U_{\infty}=34.5\) m/s and the flap was deflected downward from \(2^{\circ}\) to \(37^{\circ}\). The evolution of the lift coefficient is presented in Figure 5 and results similar to those presented in [8] are obtained. Following Figure 5 and as described by Hoerner [15], four zones can be distinguished. The first zone (I), for \(\delta\) between \(2^{\circ}\) and \(12^{\circ}\), describes a linear evolution of \(C_{L}\) against \(\delta\). The second zone (I)-(II) corresponds to a slower increase in the lift coefficient and spreads between \(12^{\circ}\) and \(20^{\circ}\), indicating the development of the flow separation over the flap. The zone denoted (II) corresponds to a plateau of \(C_{L}\) due to the recirculation bubble entirely developed over the flap. Finally, zone (III) denotes a zone of non-linear increase in the lift coefficient. The non-linear evolution of \(C_{L}\) in (III) would be better observed with higher deflection angles, as pointed out by Hoerner [15]. The development of the recirculation area over the flap can also be observed in the pressure coefficient. Figure 6 highlights the evolution of \(C_{p}\) against the dimensionless abscissa \(x/c_{\text{flap}}\) for a deflection angle of \(18^{\circ}\). The pressure gradient between \(x/c_{\text{flap}}=0.19\) and 0.46 is followed by a plateau of \(C_{p}\) between 0.46 and 0.65, indicating the flow separation. The longer the plateau is, the longer the flow recirculation area is. Therefore, as the flap deflection angle is increased, the \(C_{p}\) plateau spreads over a larger area.
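The post-processing of Eqs. (5) and (6) amounts to a normalisation of the tap pressures followed by a quadrature over the chord; a short sketch is given below. The tap positions and pressure distributions used in the call are placeholders, not the campaign data.

```python
import numpy as np

def pressure_coefficient(p, p_inf, rho_inf, U_inf):
    """Eq. (5): C_p = (p - p_inf) / (0.5 * rho_inf * U_inf**2)."""
    return (p - p_inf) / (0.5 * rho_inf * U_inf ** 2)

def lift_coefficient(x_over_c, cp_lower, cp_upper):
    """Eq. (6): trapezoidal integration of (C_p,lower - C_p,upper) over x/c_tot."""
    d = cp_lower - cp_upper
    return float(np.sum(0.5 * (d[1:] + d[:-1]) * np.diff(x_over_c)))

# Illustrative call with dummy distributions standing in for the 51 pressure taps.
x_over_c = np.linspace(0.0, 1.0, 51)
cp_upper = -1.2 * np.exp(-5.0 * x_over_c)   # placeholder suction-side distribution
cp_lower = 0.3 * (1.0 - x_over_c)           # placeholder pressure-side distribution
print(f"C_L = {lift_coefficient(x_over_c, cp_lower, cp_upper):.3f}")
```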
Signals of the 8 hot film sensors were also recorded during these tests. For each deflection angle, hot films time series have been averaged and normalized according to \(U^{*}=(\langle U\rangle-U_{min})/(U_{max}-U_{min})\), where \(U^{*}\) is the dimensionless hot film voltage, \(\langle U\rangle\) the hot film mean voltage, \(U_{min}\) and \(U_{max}\) respectively are the hot film minimum and maximum voltages. Figure 7 depicts the evolution of \(U^{*}\) for the fifth hot film on the flap, which is located at \(x/c_{\text{flap}}=0.511\). As the flap angle is increased the hot film normalized voltage decreases. This trend indicates a decreasing wall shear stress, which reaches a local minimum for \(\delta=20^{\circ}\). The first minimum at this flap angle, points out the flow separation at this location. When the flap is further deflected, the hot film voltage increases as the flow recirculation bubble intensifies and reaches a local maximum. The voltage reaches a global minimum for an angle of \(\delta=~{}32^{\circ}\). This second minimum may be due to the apparition of a second recirculation zone occurring for high deflection angles, as observed in [6]. ### Reference signal and control architecture Following these observations, a reference value for the fifth hot film sensor is defined, such that the flow separation over the flap is avoided. The normalized objective value \(U^{*}_{\text{obj}}\) is fixed to 0.3903. In Figure 8, this reference value is superimposed with the evolution of the fifth hot film normalized voltage. The controller aim is therefore to maintain the hot film normalized voltage to the reference value. Therefore, two different zones are defined in this chart. One for deflection angles \(\delta<13.8^{\circ}\) and the second one for \(\delta>13.8^{\circ}\). The first one corresponds to deflection angles for which actuators do not add momentum to the flow, as the hot film voltage is above the reference value. The second one defines angles \(\delta\) for which the Festo valves are cyclically actuated and aim at maintaining the hot film voltage to the reference value. Based on this reference value, from now on denoted \(\mathbf{r}\) Figure 5: Evolution of the unforced flow lift coefficient \(C_{L}\) against the flap deflection angle \(\delta\) (\(U_{\infty}=34.5\) m/s). ability of both control strategies (either linear data-driven or positive model-driven) to maintain this value is investigated. Based on the above considerations, a _reference signal tracking feedback control architecture_ can be set up. With reference to Figure 9, one aims at designing a \(h\)-sampled control law aiming at ensuring that the output signal \(\mathbf{y}(t_{k})\), measured by the fifth hot film, tracks \(\mathbf{r}(t_{k})\), the reference level previously defined. The controller provides a sampled-time continuous order \(\mathbf{u}(t_{k})\), transformed in an on-off one \(\overline{\mathbf{u}}(t_{k/N})\), leading to the controlled duty cycle \(\alpha\) applied by the PFA (Pulsed Fluidic Actuator), sampled \(N\) times faster (see also [23] for details on the PFA). The control problem boils down to a _reference tracking one_. **Remark 1** (Sensor location): _In the considered experimental case, the fifth sensor is selected. However, other locations or even multiple sensors may be considered. The impact of this placement/selection may be considered in future works. 
The choice for this sensor was dictated by the following considerations: first, the quality of the signal was good, second, it was located far enough to actually see the separation phenomenon._ **Remark 2** (Control architecture extensions): _Similarly, in the considered experimental context, a single input, single output controller is sought. The rest of the section sticks to this configuration. However, extensions to multi-input and single-output are possible. Extensions to multiple actuators are also possible but would lead to considerably more complex analysis._ ## 3 Flow separation control tuning Considering the control architecture in Figure 9, this section details the design and tuning of the controller. Two different strategies are implemented and evaluated. First, the linear Loewner Data-Driven Control (L-DDC) [16] (section 3.1) and second, a phenomenological-driven nonlinear positive control [4] (section 3.2). Both configurations performances are commented and illustrated in section 3.3. Figure 6: Evolution of the unforced flow pressure coefficient \(C_{p}\) against the dimensionless abscissa \(x/c_{\text{flap}}\). (\(\delta=18^{\circ}\) and \(U_{\infty}=34.5\) m/s). The two curves represent the profile upper and lower coefficients. ### L-DDC design #### 3.1.1 Idea and principle The L-DDC belongs to the so-called data-driven reference model approaches1. The L-DDC procedure boils down to two steps: first deriving the _ideal controller_ denoted \(\mathbf{K}^{\star}\), and second, the _controller identification_ via interpolation in the Loewner framework [20, 13]. Footnote 1: DDC methods have a long history dating to the proportional, integral, derivative (PID) tuning method by Ziegler-Nichols in early 40’s or the self tuning regulator byström in the 90’s (see _e.g._[29] for more details and references). This field remains still very active (see _e.g._[11]) We recall the main steps in the SISO case and with the considered reference tracking architecture. Following Figure 10, the objective is to find an LTI controller with transfer function \(\mathbf{K}:\mathbb{C}\backslash\Lambda_{\mathbf{K}}\to\mathbb{C}\) that minimizes the transfer difference between \(\mathbf{r}\) and \(\boldsymbol{\varepsilon}\), _i.e._ between the resulting closed-loop and a user-defined reference model \(\mathbf{M}:\mathbb{C}\backslash\mathbf{\Lambda_{M}}\to\mathbb{C}\). This is made possible through the definition of the ideal controller \(\mathbf{K}^{\star}\), being the LTI controller that would have given the desired reference model behaviour if inserted in the closed-loop. The latter is defined as \(\mathbf{K}^{\star}=\mathbf{H}^{-1}\mathbf{M}(I-\mathbf{M})^{-1}\), where \(\mathbf{H}:\mathbb{C}\backslash\mathbf{\Lambda_{H}}\to\mathbb{C}\) is the model of the system to control. In the data-driven case, when \(\mathbf{H}(z)\) is not explicitly known but may be evaluated at some frozen values \(z_{k}\in\mathbb{C}\), this definition may be recast as a set of \(k=1,\ldots,N\) equations: \[\mathbf{K}^{\star}(z_{k})=\mathbf{\Phi}_{k}^{-1}\mathbf{M}(z_{k})(I-\mathbf{ M}(z_{k}))^{-1}, \tag{7}\] where \(\mathbf{\Phi}_{k}=\mathbf{H}(z_{k})\in\mathbb{C}\) is the evaluation of the unknown model at \(z_{k}\). In an experimental context, one usually considers sampling \(\mathbf{H}\) at \(z_{k}=\imath\omega_{k}\) (\(\omega_{k}\in\mathbb{R}_{+}\)). In this case, \(\mathbf{\Phi}_{k}\) is the frequency response of the open-loop system at \(\omega_{k}\). 
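In practice, the ideal-controller samples of Eq. (7) can be formed directly from such frequency-response data. The sketch below is purely illustrative: the reference model follows the first-order form used later in the paper, and the \(\mathbf{\Phi}_{k}\) samples are synthetic placeholders rather than the measured response.

```python
import numpy as np

def ideal_controller_samples(omega, Phi, M):
    """Eq. (7), SISO case: K*(i w_k) = Phi_k^{-1} M(i w_k) (1 - M(i w_k))^{-1}."""
    Mk = M(1j * np.asarray(omega))
    return Mk / (Phi * (1.0 - Mk))

w0 = 2.0 * np.pi                          # reference cut-off pulsation [rad/s]
M = lambda s: 1.0 / (s / w0 + 1.0)        # first-order reference model
omega = np.logspace(-2, 2, 50)            # pulsations w_k [rad/s]
Phi = 1.0 / (1j * omega / 3.0 + 1.0)      # placeholder open-loop frequency response
K_star = ideal_controller_samples(omega, Phi, M)
```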
Then, the couple \[\{z_{k},\mathbf{K}^{\star}(z_{k})\}_{k=1}^{N}, \tag{8}\] is referred to as the _raw data_ for our controller design. Finding a controller \(\mathbf{K}\) that _fits_ (8) can be considered to be an identification/interpolation problem which may be solved by many approaches. The Loewner framework [20] allows constructing both a function \(\mathbf{K}\) with minimal McMillan degree and realization order \(n\leq N\), satisfying conditions (7) or an approximation of it with a realization of order \(r<n\). Figure 7: Evolution of the normalized voltage of the fifth hot film against the deflection angle \(\delta\) (\(U_{\infty}=34.5\) m/s). **Remark 3** (Advantages of the L-DDC): _L-DDC is a combination of determining the ideal controller from frequency-domain data via a reference model and the use of the Loewner framework to construct a reduced order controller. Such an interpolatory-based data-driven control design solves problems faced by practitioners: (i) the controller design is directly obtained using open-loop raw data (8) collected on the experimental setup, (ii) without any optimization process, only linear algebra manipulation, (iii) and without any prior controller structure or order specification (these latter may be automated by a rank revealing factorization). This approach has proven to be effec Figure 8: Evolution of the normalized voltage of the fifth hot film (blue curve) and the reference value (red curve) against the deflection angle \(\delta\) (\(U_{\infty}=34.5\) m/s). The black curve separates areas without and with actuation. Figure 10: Data-driven control problem formulation. \(z\) denotes the complex variable either in the continuous or sampled-time. Figure 9: Overview of the considered closed-loop architecture. The **controller**, sampled at frequency \(h\), feeds a series of PFA acting along the wing span. The system is illustrated by the setup photo, and the measurement is achieved by the hot films located along the wing flap. The orange block is the overall system. tive for digital control [28] and on experimental application [23]. [13, sec. 4] provides practical details and didactic examples including infinite dimensional systems._ **Remark 4** (Limitations of the L-DDC): _As a linear controller design, it embeds regular linear limitations. One important has been highlighted during the experiments. It concerns the fact that it does not handle actuator limitations, which is in this case works on/off only and are only able to blow air. In presence of an integral action, this may result in stability issues. This point is discussed in sections 3.2 and 3.3._ #### 3.1.2 Application The frequency-domain response describing the separated flow dynamics over the flap is first needed. The operating point for this step is a deflection angle \(\delta=24^{\circ}\) and \(U_{\infty}=34.5\) m/s, operating point for which the flow is indeed separated at the considered measurement position. The actuators command signal \(\mathbf{u}(t_{k})\) consists in a logarithmic frequency sweep applied to the duty cycle with frequencies ranging from 0.01 Hz to 10 Hz over 180 s. From this input, the fifth hot film response \(\mathbf{y}(t_{k})\) is collected. 
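One standard way to turn such sweep records into frequency-response samples is a cross-spectral (Welch-type) estimate, \(\mathbf{\Phi}_{k}=P_{uy}(\omega_{k})/P_{uu}(\omega_{k})\). The sketch below illustrates the idea with a placeholder first-order plant standing in for the duty-cycle-to-hot-film transfer; it is not the processing chain actually used for the experiments.

```python
import numpy as np
from scipy import signal

fs = 100.0                                   # sampling rate [Hz] (h = 1/100 s)
t = np.arange(0.0, 180.0, 1.0 / fs)          # 180 s record, as in the experiment

# logarithmic frequency sweep of the duty cycle between 0.01 Hz and 10 Hz
u = 0.5 + 0.2 * signal.chirp(t, f0=0.01, t1=t[-1], f1=10.0, method="logarithmic")

# placeholder plant (first-order lag) standing in for the real duty-cycle -> hot-film response
b, a = signal.butter(1, 1.0 / (np.pi * fs))  # digital low-pass, ~1 rad/s cut-off
y = signal.lfilter(b, a, u) + 0.01 * np.random.default_rng(0).normal(size=t.size)

# cross-spectral estimate of the frequency response Phi_k = P_uy / P_uu
f, P_uu = signal.welch(u, fs=fs, nperseg=4096)
_, P_uy = signal.csd(u, y, fs=fs, nperseg=4096)
omega_k, Phi_k = 2.0 * np.pi * f, P_uy / P_uu
```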
The discrete frequency-domain transfer data from \(\mathbf{u}(t_{k})\) to \(\mathbf{y}(t_{k})\) is obtained and denoted \(\{\omega_{k},\mathbf{\Phi}_{k}\}_{k=1}^{N}\), where \(\omega_{k}\in\mathbb{R}_{+}\) is the pulsation and \(\mathbf{\Phi}_{k}\in\mathbb{C}\) is the SISO transfer response of the system (orange block in Figure 9) and \(N\in\mathbb{N}\) is the length of the FFT. The data-driven Bode-like diagram is presented in Figure 11. It exhibits a gain drop around 1 rad/s and a decay in the phase, characteristic of delayed and fractional systems2. Footnote 2: Note that from this point, model identification may be done in order to apply model-driven method. Here we skip this step to directly go to the control design. Simultaneously with the previous step, the objective closed-loop transfer function \(\mathbf{M}\) is defined as a first order model \(\mathbf{M}(s)=1/(s/w_{0}+1))\) where \(\omega_{0}=2\pi\) rad/s, is the natural cut-off frequency. We refer to the black dashed lines of Figure 3, given in the introduction. \(\mathbf{M}\) mainly aims at ensuring no steady-sate error (static gain objective set to one)3. With reference to equation (7) we are now Figure 11: Frequency response gain and phase diagrams of the data \(\mathbf{\Phi}_{k}\) collected during the open-loop experiments. ready to compute the ideal controller \(\mathbf{K}^{*}\) as well as its exact interpolation \(\mathbf{K}_{n}\), where \(n=128\) is automatically selected by the rank revealing factorization embedded in the Loewner process, and its approximations \(\mathbf{K}_{r}\) with an order \(r=1\). After time-domain discretisation (\(h=1/100\) s), Figure 12 illustrates the controller frequency response gains. The implemented linear controller is a pure sampled-time integrator with gain \(k=66.19\) (_i.e._\(\mathbf{K}_{r}(z)=66.19/(z-1)\)). Obviously, a proportional integral action model may also be identified with a better accuracy. Here we stick to the integral action in view of the nonlinear integral control analysis analysed in the next section. Figure 12 well illustrates that the exact interpolation perfectly matches the data and the approximation with an order \(r=1\) preserves the integral term. Such an observation, coupled with the knowledge of the system input-output positivity property (input blows air and output measures positive values only), motivates the use of a more involved control strategy discussed in the next section. ### Nonlinear positive control design #### 3.2.1 Idea and principle Observing that the considered system is stable (indeed, the configuration is an amplification one, but without any instability) and input-output positive (_i.e._ for any nonnegative input \(\mathbf{u}\), the output \(\mathbf{y}\) is nonnegative), it seems interesting to exploit this property for control purposes. Although not recent [10], positive systems have recently attracted a lot of attention due to their surprising properties; see e.g. [3, 9]. In particular, the theory of (linear) positive systems is playing an essential role in the modeling, the analysis and the control of compartmental systems which include biological, physiological, epidemiological and ecological systems as special cases [14]. Recently, a novel type of integral controller - the Antithetic Integral Controller (AIC) - was introduced in the context of biological control and chemical reaction networks [5]. 
The rationale for introducing such a controller was, among others, the derivation of a controller having a positive system representation that could always return a nonnegative control input. It was Figure 12: Bode gain diagrams of the ideal controller data \(\mathbf{K}^{*}\) (7), its exact interpolated sampled-time controller \(\mathbf{K}_{n}\) and its approximation \(\mathbf{K}_{r}\) with an order \(r=1\). later proved in [4] that this nonlinear integral controller enjoyed certain interesting properties which are absent from its linear (_i.e._ non-positive) counterpart. It is also worth mentioning that other nonlinear positive integral controllers exist [4], but the AIC exhibits a lot of the desirable behavioral properties of the usual integral controllers and this is the reason why it is considered here. Indeed, [4, Thm. 3.6] provides a stability proof of the closed-loop interconnection if the original underlying model is a linear positive one. In fact, those stability conditions coincide, in the worst-case, with the stability conditions of the standard integral controller, which indicates that using the AIC is not more constraining than using a linear integral controller. **Remark 5** (Closed-loop stability): _In the L-DDC setting, no stability proof can be guaranteed a-priori. This may be checked afterward with specific data-driven techniques (see e.g. [16, chap. 7] or [24]). However, one important feature of the positive design by Briat [4, equation 2.1] is that the closed-loop control of a stable positive system with an AIC ensures local exponential stability, while respecting input signal constraints, under very mild conditions._ #### 3.2.2 Application The original AIC is given in [4, equation 2.1] with the following equation set, \[\begin{array}{rcl}\dot{z}_{1}(t)&=&{\bf r}(t)-\eta z_{1}(t)z_{2}(t)\\ \dot{z}_{2}(t)&=&{\bf y}(t)-\eta z_{1}(t)z_{2}(t)\\ {\bf u}(t_{k})&=&kz_{1}(t)\end{array}. \tag{9}\] The discretized version (using the backward method) takes the form (\(h=1/100\) s): \[\begin{array}{rcl}z_{1}(t_{k}+h)&=&z_{1}(t_{k})+h\big{(}{\bf r}(t_{k})-\eta z _{1}(t_{k})z_{2}(t_{k})\big{)}\\ z_{2}(t_{k}+h)&=&z_{2}(t_{k})+h\big{(}{\bf y}(t_{k})-\eta z_{1}(t_{k})z_{2}(t_{ k})\big{)}\\ {\bf u}(t_{k})&=&kz_{1}(t_{k})\end{array}. \tag{10}\] Implemented in the real-time environment, user then tunes the values of \(k\in\mathbb{R}\) and \(\eta\in\mathbb{R}_{+}\) according to the desired controller. As (10) aims at reproducing the integral action, in our setting, gain \(k\) has been set equal to the integral term obtained with the L-DDC approach; _i.e._\(k=66.19\) (section 3.1) and \(\eta=300\), used to tend to a pure integral action. We refer to [4] for further details. ### Flow control experimental results To validate the proposed reference feedback (linear and nonlinear) integral control, four different type of experiments are carried out. In SS3.3.1, the lift coefficient \(C_{L}\) with feedback is computed and compare to the uncontrolled one (notice that this is the utilate objective of the control). In SS3.3.2, the robustness of the control with the deflection angle is analysed. In SS3.3.3, the frequency response of the controlled flap is computed and compared to the expected objective. Finally in SS3.3.4, some considerations on the nonlinear positive controller are discussed. #### 3.3.1 Lift coefficient gain evaluation (\(C_{l}\)) Both linear and non-linear integral controllers have been applied for the same flow conditions. 
As expected for deflection angles below \(13.8^{\circ}\), valves are not opened as the hot film voltage is above the reference value \({\bf r}\). When the hot film voltage tends to stand below the reference value, valves are opened with a duty cycle determined by the controller (this is typically the case when flap angle is increased). As an effect, the hot film voltage is maintained at the reference value thanks to the feedback control action. As presented in the introduction, Figure 2 highlights the benefit of control on the lift coefficient \(C_{L}\) increase. For \(\delta<13.8^{\circ}\), both uncontrolled and controlled flow present the same lift coefficient, as in both cases valves are not opened. However, for \(\delta>13.8^{\circ}\), curves for the uncontrolled and controlled cases do not superimpose anymore. The lift coefficient in the controlled case is higher than the one of the uncontrolled case. Regarding the controlled case, the linear evolution of \(C_{L}\) is extended up to \(\delta=24^{\circ}\). Between \(\delta=24^{\circ}\) and \(\delta=26^{\circ}\), \(C_{L}\) is reduced drastically. This phenomenon is due to the apparition of flow separation at the flap trailing edge. This flow separation does not spread over the entire flap as the control counters it and hold it at the flap trailing edge4. These results can also be observed in the analysis of the pressure coefficient evolution in both the uncontrolled and controlled flow cases. As both the linear controller and the non-linear positive controller have been applied with the same reference value on the hot film, both control cases yielded the same results on the lift and pressure coefficients. Footnote 4: This observation motivates future investigations involving additional sensors and a more involved multi-input control. #### 3.3.2 Robustness to flap angle deflection (\(\delta\)) In order to test the controllers robustness against the deflection angle, measurements were also performed with varying angles. In a given experiment, flap deflection angle was varied from \(\delta=8^{\circ}\) to \(\delta=18^{\circ}\) and to \(\delta=24^{\circ}\). The linear controller shows its limits due to its linear integral behaviour and the fact that it does not handle the positiveness of the system. As valves are not opened for \(\delta=8^{\circ}\), the linear controller takes into account the error and therefore accumulates an integral error. In that sense, when the deflection angle increases to \(\delta=18^{\circ}\) for which valves have to be open, the controller effect is in the wrong direction. As observed in Figure 13, once the linear controller has overcome the accumulated error, the controller is robust to the deflection angle variation from \(18^{\circ}\) to \(24^{\circ}\). As such an error could not be tolerated in real application, a way to circumvent it would be to implement an anti-windup on the linear controller. This may be done at a price of more involved calculus, usually involving a model, while here, the pure data-driven setup is employed. However, we proved in these experiments than an other simple and efficient way to deal with this integral behaviour is to implement a non-linear positive controller instead. As described in Figure 14, the delay resulting from the integral error issue totally vanishes considering this controller. In addition to that, in this setup, the input output stability is formally guaranteed. 
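For completeness, the two sampled-time control laws compared in these tests can be transcribed in a few lines. The sketch below restates the linear integrator \(\mathbf{K}_{r}(z)=66.19/(z-1)\) and the discretized AIC of Eq. (10) with the gains quoted earlier (\(k=66.19\), \(\eta=300\), \(h=1/100\) s); it is an illustrative transcription, not the LabVIEW implementation used in the wind tunnel.

```python
K, ETA, H = 66.19, 300.0, 1.0 / 100.0   # integral gain, AIC parameter, sampling time

def linear_integrator_step(u_prev, r, y):
    """Linear integral law K_r(z) = K/(z - 1): u(t_{k+1}) = u(t_k) + K*(r(t_k) - y(t_k)).
    With a one-sided actuator (the valves can only blow), a sustained negative error
    keeps accumulating here, which is the windup behaviour seen in Figure 13."""
    return u_prev + K * (r - y)

def aic_step(z1, z2, r, y):
    """Antithetic integral controller, Eq. (10). The command u = K*z1 is nonnegative
    whenever z1 is, which is the positivity property exploited in Figure 14."""
    u = K * z1
    z1_next = z1 + H * (r - ETA * z1 * z2)
    z2_next = z2 + H * (y - ETA * z1 * z2)
    return u, z1_next, z2_next
```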
#### 3.3.3 Frequency-domain responses In addition, to compute the frequency response of the closed-loop control and ensure that the closed-loop performances meet the reference model \(\mathbf{M}\) objective, a sinus around the tracking Figure 13: Evolution of the flap deflection angle against time (left axis) and evolution of both the \(5^{\text{th}}\) hot film voltage and reference value (right axis) against time for the linear controller. value, with frequency sweep signal is given as reference. The time-domain responses of the output and control signals are reported in Figure 15. The top frame allows computing the frequency response diagram given in the introduction, illustrating that the performances are very close to the expected one fixed by \(\mathbf{M}\) (see Figure 3). On the bottom frame, the control signal of the linear controller shows to reach the saturation quite often, which is an other motivation for the nonlinear positive control. #### 3.3.4 Further remark on the nonlinear positive control Finally, as the nonlinear positive control law seems to be the best solution, a time-domain experiment where the reference is fixed but the flap deflection angle \(\delta\) travels from 34 degrees to 0 degree with a speed rate of 0.5 deg/s is performed (we compare here both strategies). Figure 16 (top) shows the control signal actuation for both the linear and nonlinear (positive) controllers. First it illustrates the fact that the positive controller avoids the saturation while the linear one tends to often reach then. In the same frame, for the positive control law, we compare the experimental control signal (solid orange) with the reconstructed theoretical continuous (9) (dashed yellow) and sampled (10) (dashed violet) when fed by the experimental data. Both lead to a perfect match which confirms the good implementation. Then Figure 16 (bottom) illustrates the positive sampled-time controller internal states, which both remain, as expected, positives. ## 4 Conclusions This paper presents experimental validations of active closed-loop control of flow separation over a plain flap. We numerically and experimentally demonstrate that flow separation may be improved by mean of a (SISO) reference signal tracking feedback control architecture, involving a controller with integral action. We also proposed two control laws: first _(i)_ a linear one where the integral gain is computed via a direct data-driven approach, and second, _(ii)_ a nonlinear positive controller to account for the system limitations, using the very same gain. Both strategies enabled to maintain a reference voltage value on the objective hot film placed at the flap mid-chord. The latter reference, being calculated based on the flow separation observation. Both controllers efficiency were assessed through lift coefficient calculations derived from pressure measurements (Figure 2). Their robustness to the flap deflection angle was also Figure 14: Evolution of the flap deflection angle against time (left axis) and evolution of both the 5\({}^{\text{th}}\) hot film voltage and reference value (right axis) against time for the positive controller. tested through experiments, during which the flap angle was continuously varied. Additionally, we demonstrate that the expected theoretical performances were recovered experimentally. The most significant advantage of these control techniques lies in the simplicity of their application, which is of importance for practitioners in view of experimentation and implementation. 
Application of such controllers could be extended to other flow control problems in future works, together with more detailed validations. ## Acknowledgments This work was funded by the French National Research Agency (ANR) in the framework of the ANR ASTRID MATURATION CAMELOTT-MATVAL Project. It is supported by the regional platform CONTRAERO in the framework of the CPER ELSAT 2020 Project. The Defence Innovation Agency (DIA) has also provided financial support for this work. The authors also thank RENATECH, the French national nanofabrication network, and FEDER. This work has also been financed by the ONERA research project FluiDyCon (Fluid Dynamical Control).
2309.10777
The Kinematic Structure of Magnetically Aligned HI Filaments
We characterize the kinematic and magnetic properties of HI filaments located in a high Galactic latitude region ($165^\circ < \alpha < 195^\circ$ and $12^\circ < \delta < 24^\circ$). We extract three-dimensional filamentary structures using \texttt{fil3d} from the Galactic Arecibo L-Band Feed Array HI (GALFA-HI) survey 21-cm emission data. Our algorithm identifies coherent emission structures in neighboring velocity channels. Based on the mean velocity, we identify a population of local and intermediate velocity cloud (IVC) filaments. We find the orientations of the local (but not the IVC) HI filaments are aligned with the magnetic field orientations inferred from Planck 353 GHz polarized dust emission. We analyze position-velocity diagrams of the velocity-coherent filaments, and find that only 15 percent of filaments demonstrate significant major-axis velocity gradients with a median magnitude of 0.5 km s$^{-1}$ pc$^{-1}$, assuming a fiducial filament distance of 100 pc. We conclude that the typical diffuse HI filament does not exhibit a simple velocity gradient. The reported filament properties constrain future theoretical models of filament formation.
Doyeon Avery Kim, Susan E Clark, Mary E Putman, Larry Li
2023-09-19T17:25:15Z
http://arxiv.org/abs/2309.10777v1
# The Kinematic Structure of Magnetically Aligned HI Filaments ###### Abstract We characterize the kinematic and magnetic properties of HI filaments located in a high Galactic latitude region (\(165^{\circ}<\alpha<195^{\circ}\) and \(12^{\circ}<\delta<24^{\circ}\)). We extract three-dimensional filamentary structures using fil3d from the Galactic Arecibo L-Band Feed Array HI (GALFA-HI) survey 21-cm emission data. Our algorithm identifies coherent emission structures in neighboring velocity channels. Based on the mean velocity, we identify a population of local and intermediate velocity cloud (IVC) filaments. We find the orientations of the local (but not the IVC) HI filaments are aligned with the magnetic field orientations inferred from Planck 353 GHz polarized dust emission. We analyze position-velocity diagrams of the velocity-coherent filaments, and find that only 15 percent of filaments demonstrate significant major-axis velocity gradients with a median magnitude of 0.5 km s\({}^{-1}\) pc\({}^{-1}\), assuming a fiducial filament distance of 100 pc. We conclude that the typical diffuse HI filament does not exhibit a simple velocity gradient. The reported filament properties constrain future theoretical models of filament formation. keywords: ISM: clouds - ISM: kinematics and dynamics - ISM: magnetic fields - ISM: structure ## 1 Introduction Filamentary structures thread the Milky Way on almost every length scale (Molinari et al., 2010; Hacar et al., 2013; Zucker et al., 2015; Kalberla et al., 2016; Clark & Hensley, 2019). These linear structures are observed in various molecular clouds and their ubiquity may be linked to the physics of star formation (Kutner et al., 1977; Molinari et al., 2010; Arzoumanian et al., 2011; Palmeirim et al., 2013; Hacar et al., 2022). Recent observations confirm that a similar intricate network of filaments exists in HI clouds in a range of Galactic environments (McClure-Griffiths et al., 2006; Clark et al., 2014; HI4PI Collaboration et al., 2016; Peek et al., 2018; Soler et al., 2020). Despite their prevalence, the detailed physics that shapes filamentary structures is not well-understood. There is some evidence that the magnetic field plays an important role, as non-self-gravitating filaments are observed to be aligned with the magnetic field in both dust and atomic gas (Miville-Deschenes et al., 2010; Clark et al., 2014; Panopoulou et al., 2016). Diffuse HI filaments are particularly well-aligned with the ambient magnetic field traced by both starlight polarization (Clark et al., 2014) and polarized dust emission (Clark et al., 2015; Kalberla et al., 2016). Similar behavior is observed in low column density dusty filaments (Planck Collaboration et al., 2016); at a higher column density, the relative alignment between the magnetic field and filament long axes trends toward perpendicular (Soler et al., 2013; Planck Collaboration et al., 2016; Stephens et al., 2022). Various physical mechanisms for filament formation have been proposed. Filaments of cold gas in the warm ISM, for instance, are proposed as a product of thermal instability and turbulent compression and shear (Heitsch et al., 2011; Inoue & Inutsuka, 2016). The joint influences of turbulence and magnetic fields can form thin, elongated density structures (Smith et al., 2016; Gazol & Villagran, 2021). Furthermore, stretching induced by turbulence alone can form filaments that are confined by the Lorentz force (Hennebelle, 2013; Ibanez-Mejia et al., 2022; Seifried et al., 2020). 
Because a filamentary geometry can result from a variety of physical scenarios, the fact of filamentarity does not on its own specify the filament formation mechanism; there may be multiple mechanisms operating across interstellar environments (e.g., Hacar et al., 2022). In addition to its morphology, the kinematic structure of filaments can help to constrain the physics of their formation. Although a comprehensive analysis of diffuse HI filament kinematics has not yet been published, tentative velocity gradients along HI filaments, identified by visual inspection, have been reported (Kalberla et al., 2016). The kinematic structure of filaments has been analyzed more thoroughly in molecular environments. Clear velocity gradients are sometimes identified in molecular filaments, either running length-wise along the filament long axis (Dobashi et al., 1992), or along the orthogonal axis (Fernandez-Lopez et al., 2014). Non-self-gravitating filaments ("striations") in the vicinity of a dense filament of the Taurus molecular cloud display a large-scale velocity gradient suggestive of accretion along the striations (Goldsmith et al., 2008; Palmeirim et al., 2013). The kinematic description of some filaments depends on both the spatial scales and the physical conditions. For instance, Hacar et al. (2013) suggest that the Taurus B213 filament is actually composed of many distinct velocity structures, while the level of turbulence and the choice of gas tracer influence the degree of filament alignment (Heyer et al., 2020). To further search for clues to the formation and evolution of filamentary structures, in this paper we examine the kinematics of HI filaments. To access these kinematics, we utilize three-dimensional data from the Galactic Arecibo L-Band Feed Array HI (GALFA-HI) survey (Peek et al., 2018). As Clark et al. (2014) demonstrated, angular resolution and sensitivity are critical for identifying and characterizing slender HI filaments. We use GALFA-HI's highest available spatial and spectral resolution (\(4^{\prime}\) and 0.184 km/s respectively) to study coherent structures in position-position-velocity space, i.e. the kinematic structure of 3D filaments (Beaumont et al., 2013; Clark et al., 2019). In this paper, we investigate the kinematic properties and magnetic field orientation of filamentary HI structures at high Galactic latitudes. The paper is organized as follows. In Sections 2 and 3, we introduce the data and outline the steps to extract the 3D HI filaments. Section 4 discusses our methods to examine the magnetic field orientation and kinematic properties of individual filaments. In Section 5, we present our results. Finally, we discuss the possible implications of our results in Section 6 and conclude in Section 7. ## 2 Data HI filaments are extracted from cubes of neutral hydrogen produced by the Galactic Arecibo L-Band Feed Array HI (GALFA-HI) Survey (Peek et al., 2018). GALFA-HI is a high angular and kinematic resolution survey of Galactic HI covering 13,000 deg\({}^{2}\) (approximately 1/3 of the sky) with \(4^{\prime}\) spatial resolution. We use the publicly available GALFA-HI data, which have a pixel size of 1 arcmin\({}^{2}\) and 0.184 km/s channel spacing over the velocity range \(|v|<188.4\) km/s. All velocities are measured in the LSR frame. The median rms noise is 352 mK at this resolution.
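As a practical aside (not part of our pipeline description), velocity-integrated intensity maps over chosen LSR velocity windows, such as those shown in Figure 1 below, can be produced from such a cube with standard tools. The sketch assumes the spectral_cube package and a hypothetical file name for a GALFA-HI data cube.

```python
from astropy import units as u
from spectral_cube import SpectralCube

# hypothetical file name standing in for a GALFA-HI data cube
cube = SpectralCube.read("GALFA_HI_example_cube.fits")
cube = cube.with_spectral_unit(u.km / u.s, velocity_convention="radio")

# integrated-intensity (moment 0) maps over two velocity windows
mom0_ivc = cube.spectral_slab(-50 * u.km / u.s, -20 * u.km / u.s).moment(order=0)
mom0_local = cube.spectral_slab(-20 * u.km / u.s, 20 * u.km / u.s).moment(order=0)
mom0_local.write("moment0_local.fits", overwrite=True)
```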
In this paper, we focus on filaments residing in a high Galactic latitude region with an area of 360 deg\({}^{2}\) at \(165^{\circ}<a<195^{\circ}\) and \(12^{\circ}<\delta<24^{\circ}\) which spans Galactic coordinates: l=[210\({}^{\circ},340^{\circ}\)], b=[59\({}^{\circ},90^{\circ}\)]. In Figure 1, we show an overlay of integrated intensity maps evaluated in two different velocity ranges over the spatial region we cover. The blue map is integrated over \(-50\leq v\leq-20\) km \(\cdot\) s\({}^{-1}\), while the red is integrated over \(-20\leq v\leq 20\) km \(\cdot\) s\({}^{-1}\). As shown, HI structures are visually distinct at different velocities. We also employ the Planck 353 GHz (PR3.1) Stokes linear polarization maps (Planck Collaboration et al., 2018). The native spatial resolution of the Planck data is FWHM=\(4.9^{\prime}\), comparable to GALFA-HI (Planck Collaboration et al., 2015). For our analysis, we smooth the data to FWHM=\(1^{\circ}\) to improve the signal-to-noise of the Planck data. ## 3 Detecting 3D filaments We outline a procedure to extract 3D filaments from an emission cube. This algorithm is referred to as fil3d and will be described further and released to the public in a forthcoming work (Putman et al. in prep). To extract filamentary structures embedded in the diffuse ISM in the Milky Way, we first filter out large-scale diffuse Galactic emission. We apply an unsharp mask (USM), which effectively performs a high-pass spatial filter on the raw data. For this step, we first smooth each velocity slice of the data cube with a \(30^{\prime}\) Gaussian beam, then subtract the smoothed version from the original, and finally threshold the smoothed, subtracted data at zero. We then run FilFinder(Koch & Rosolowsky, 2015) to identify filamentary structures on each velocity channel of USM GALFA-HI data. FilFinder employs the techniques of mathematical morphology to identify and segment two-dimensional filamentary structures over a wide dynamic range in brightness (Shihi, 2009; Koch & Rosolowsky, 2015). To eliminate irregularities while maintaining a main structure, the algorithm first flattens and smooths the image, then applies an adaptive threshold to pick out all linear structures. These possible filament candidates are then trimmed to a pixel-wide skeleton with minimum connectivity using a Medial Axis Transform (Arcelli & Di Baja, 1985). The resulting skeletons are "pruned" to be final filamentary structures by removing short branches that trace small deviations from the long axis of a filament. With the above procedures executed over the GALFA-HI velocity range \(|v|\leq 188.4\)km \(\cdot\) s\({}^{-1}\), we end up with collections of two-dimensional filamentary structures in individual velocity channels. We will refer to each FilFinder-detected 2D filament as a "node". From these, we search for spatially overlapping node objects in neighboring velocity channels to obtain velocity-coherent structures. For instance, we start with a node at one channel, then search for objects at the next velocity channel that Figure 1: An overlay of integrated intensity (moment 0) maps evaluated at different velocity ranges gridded on both ICRS and galactic projections. The blue plot shows the moment 0 evaluated in the velocity range [-50, -20] km/s and the red plot shows the moment 0 evaluated from [-20, 20] km/s. significantly overlaps (share 85% of pixels in common) with the first node. 
For instance, we start with a node at one channel, then search for objects at the next velocity channel that share a significant fraction of pixels (85%) in common with the first node. This search is executed one node at a time and continues until no objects yielding a significant overlap are found in subsequent channels. As fil3d parameters, we assume the distance to be 100 parsecs and set the characteristic scale width as 0.1 parsec, which is the resolution of the GALFA-HI data at this distance. Each collection of 2D nodes forms a 3D filament, and if a node is not matched with another in either adjacent velocity channel, it is rejected. We verify that all 3D filaments are unique in that each occupies a distinct set of spectral and spatial coordinates. The final projected shape of the 3D filament is defined by the line-of-sight sum of its constituent 2D nodes: we refer to this shape as the "merged mask" of the 3D filament. In the high Galactic latitude region studied here, we initially find 325 3D filament candidates and apply the filters below to obtain 269 3D filaments for our final sample. To ensure filament-like morphology in the final sample, we only accept 3D structures with merged masks of aspect ratio (the ratio between the length of the major axis and the minor axis) of at least 6:1. We also apply a linewidth filter (see §4.1), which removes an additional 10% of the candidate filaments in our region. Roughly 1% of the HI flux in this region of the sky corresponds to 3D filaments. Figure 2 shows the merged masks of our final selection of 3D filaments. An example of the individual nodes (or channel maps) for a filament as found by fil3d is shown in Fig. 3. ## 4 Analysis Methods ### Linewidth Estimation The thermal linewidth is a useful indicator of the physical nature of ISM structures. fil3d catalogs filamentary structures which span at least two velocity channels, and over half of the 3D filaments found occupy only two channels. Considering the fine spectral resolution of GALFA-HI (0.184 km/s), such a narrow velocity width is likely unphysical: cold gas in the Milky Way has a typical linewidth of around 2-3 km/s (Kalberla & Haud, 2018). Upon inspection of the two-channel filaments, many were found to have additional emission within the merged mask in adjacent channels that was not captured by our criteria as clearly filamentary. Thus, rather than adopting the fil3d velocity range as the filament velocity width, we define a procedure for fitting a line profile that we consider to be more representative of all the emission associated with each filament. To derive the linewidth of each filament, we find the median USM intensity for each channel within the filament's merged mask area for channels \(\pm\)10 km/s from the filament's central velocity, as shown in Figure 4. For this analysis, we re-bin the data into 4\({}^{\prime}\) pixels, to approximately match the GALFA-HI angular resolution. In Figure 4, the blue dots denote the data points, and the orange line indicates a best-fit Gaussian to the data. The linewidth we adopt going forward for each filament is the full-width-half-max (FWHM) of the orange curve. Figure 4 shows two intensity peaks, one at \(\approx\) 0 km s\({}^{-1}\) and another at \(\approx\) 2 km s\({}^{-1}\). The peak near the fil3d-detected range is associated with the selected filament, while the 0 km s\({}^{-1}\) peak corresponds to contamination from another filament.
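A minimal sketch of this linewidth fit is given below, assuming the USM cube, the merged mask, and the velocity axis are available as arrays; the initial-guess choices are simplifications, not the exact pipeline settings.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(v, amp, v0, sigma):
    return amp * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

def filament_linewidth(usm_cube, merged_mask, vel, v_center, window=10.0):
    """Fit a Gaussian to the median USM spectrum within the merged mask; return the FWHM in km/s."""
    in_window = np.abs(vel - v_center) <= window
    # Median USM intensity within the merged mask, channel by channel.
    spec = np.array([np.nanmedian(chan[merged_mask]) for chan in usm_cube[in_window]])
    v = vel[in_window]
    p0 = [np.nanmax(spec), v_center, 1.0]   # simple initial guess (assumption)
    popt, _ = curve_fit(gaussian, v, spec, p0=p0)
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])
```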
We further implement a two-step examination to eliminate filaments affected by unassociated emission that is not coincident with the peak velocity in the intensity spectrum. As the first step, we check that the fil3d-detected channels are located within 1\(\sigma\) of the peak of the fitted Gaussian. For filaments that meet this criterion, we then examine the individual channel maps to make sure that the filament emission is located within the merged mask area and does not extend significantly beyond it. Approximately ten percent of the total detected filaments are eliminated in this two-step examination. ### Magnetic Field Orientation The thermal emission from interstellar dust is linearly polarized because the short axes of dust grains are preferentially oriented parallel to the ambient magnetic field (Purcell, 1975; Andersson et al., 2015). Thus, the linear polarization of this radiation is orthogonal to the plane-of-sky magnetic field orientation in the dusty ISM.
Figure 2: The merged mask of all 269 extracted 3D filaments projected along the line-of-sight. Filaments shown have aspect ratios greater than 6:1. The color denotes the central channel of the detected velocity range for each filament.
To measure the magnetic field orientation towards HI filaments, we use Planck polarization maps (Planck Collaboration et al., 2018) at 353 GHz, a frequency dominated by thermal dust emission. It is important to note that dust polarization measures emission integrated over the line of sight and is therefore not velocity-resolved as our 3D filaments are. The polarization fraction (p) and polarization angle (\(\psi\)) in our analysis follow the IAU convention in Equatorial coordinates and are defined with the observed Stokes parameters (I,Q,U): \[p=\frac{\sqrt{Q^{2}+U^{2}}}{I}, \tag{1}\] \[\psi=0.5\times\arctan(-\mathrm{U},\mathrm{Q}). \tag{2}\] To obtain the mean dust polarization fraction and polarization angle for an individual filament, we apply equations (1) and (2), respectively. We define the plane-of-sky magnetic field orientation (\(\phi\)) to be rotated 90 degrees from the measured polarization angle (\(\psi\)). We compute the magnetic field orientation for each filament by measuring the mean Stokes parameters within the merged mask area. The propagated statistical uncertainties (used in §6.2) are computed from the noise covariance matrices and quantified in Planck Collaboration et al. (2015) as \[\sigma_{\phi}=28.65^{\circ}\sigma_{P}\times\frac{1}{P}\sqrt{\frac{Q^{2}\mathbf{C}_{UU}+U^{2}\mathbf{C}_{QQ}-2QU\mathbf{C}_{QU}}{Q^{2}\mathbf{C}_{QQ}+U^{2}\mathbf{C}_{UU}+2QU\mathbf{C}_{QU}}}, \tag{3}\] where \(\mathbf{C}_{QQ}\), \(\mathbf{C}_{UU}\) are the internal variances and \(\mathbf{C}_{QU}\) denotes the off-diagonal terms of the noise covariance matrix. The uncertainty on the polarized intensity is given by \[\sigma_{P}^{2}=\frac{1}{P^{2}}(Q^{2}\mathbf{C}_{QQ}+U^{2}\mathbf{C}_{UU}+2QU\mathbf{C}_{QU}). \tag{4}\] ### Filament Position-Velocity Diagrams Given the relative orientation of the filaments with respect to the magnetic field and the resolution of the HI observations, we focus on analyzing the velocity gradients along the long axis of the filaments. To compute the velocity gradient, we employ the position-velocity (PV) diagram of each filament. A PV diagram measures the distribution of velocities along a projected position and has been employed in understanding the large-scale kinematics of gaseous structures (Garcia-Burillo et al., 2003; Veena et al., 2018). We construct PV diagrams here to determine the direction and magnitude of any velocity gradient.
For consistency, we define the direction of the gradient to point from the lowest velocity to the highest velocity. To ensure we capture the internal kinematics of filaments, we construct two distinct PV diagrams with the USM data. The first PV diagram evaluates the intensity within spinal pixels \(\tau(x^{\prime},y^{\prime})\), or pixels that pass through the long-axis spine (central region) of a 3D filament. Figure 5 shows the contour of the mask of a full 3D filament over the brightness temperature map of the filament at different velocity channels. For the first PV diagram, we construct the spinal pixels of 3D filaments by rotating the masks of the 3D filament to project onto a global horizontal axis. Here, the amount of rotation (\(\theta\)) is the angle between the spinal pixels of the non-rotated filament \(\tau(x,y)\) and the global horizontal axis. After the rotation, all pixels are projected onto grids of equal size. The spinal pixels are the central pixels in every column of this projected mask: \[\tau(x^{\prime},y^{\prime})=\mathrm{Median}(\mathbf{R}(\theta)\cdot\tau(x,y)), \tag{5}\] where the prime indicates the rotated frame and \(\mathbf{R}(\theta)\) represents a rotation matrix written as \[\mathbf{R}(\theta)=\left[\begin{array}{cc}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{array}\right]. \tag{6}\]
Figure 3: Channel maps of one of the 3D filaments. The background image shows the brightness temperature from the raw data (before the USM is applied). The red contours outline the shapes of the nodes at velocity channels within the fil3d velocity width. This same filament will be used for several of the following plots.
Figure 4: A spectrum of the filament shown in Figure 3. The x-axis shows the velocity and the y-axis denotes the USM median intensity within the merged mask area of the filament (blue dots). The orange curve is the best-fit Gaussian of this velocity spectrum. The gray region indicates the fil3d-detected velocity range. The physical velocity width of a filament is defined as the full-width-half-max (FWHM) of the fitted Gaussian. The best-fit Gaussian peaks near the fil3d-detected velocity range, and we confirm via visual inspection that emission within the Gaussian-selected velocity range is associated with the fil3d-detected filament.
In the second PV diagram, we evaluate the median brightness temperature along the long axis of a filament. As shown, a majority of the 3D filaments have irregular shapes, such that the spinal pixels may not always capture the full extent of a filament. To account for this, we evaluate the range of velocities at the median brightness temperature along each column of the rotated merged mask; this is shown in the right panel of Figure 6. As shown in Figure 6, not surprisingly, both the "spinal pixel" and "median brightness temperature" PV diagrams are highly correlated. This similarity shows that either method is broadly representative of the filament velocity structure. ### Measuring Velocity Gradients We determine the presence (or lack) of a velocity gradient along the major axis of filaments using the PV diagrams from §4.3. To extract a single velocity value along a filament and evaluate the magnitude and direction, we use the intensity-weighted mean velocity along each projected position (the position used as the x-axis of Figure 6) and fit a slope to the points (as shown in Figure 7).
For example, the intensity-weighted mean velocity at position \(P=p_{j}\) is expressed as \[\boldsymbol{\omega}(p_{j})=\frac{\sum_{v\in\mathrm{FWHM}}v\cdot\mathbf{I}(p_{j},v)}{\sum_{v\in\mathrm{FWHM}}\mathbf{I}(p_{j},v)}, \tag{7}\] where the sums run over the FWHM velocity range from the linewidth fit in §4.1 at a given position. In the top row of Figure 7, we show the intensity-weighted mean velocity along the projected long axis of a filament (x-axis). With these data for each filament, we perform a weighted least squares regression to obtain a best-fit linear model (solid line). The second moment, \(\boldsymbol{\xi}^{2}\), is used as the weight in the regression. At each pixel (\(p_{j}\)): \[\boldsymbol{\xi}(p_{j})=\sqrt{\frac{\sum_{v\in\mathrm{FWHM}}(v-\boldsymbol{\omega}(p_{j}))^{2}\,\mathbf{I}(p_{j},v)}{\sum_{v\in\mathrm{FWHM}}\mathbf{I}(p_{j},v)}}, \tag{8}\] which shares the same notation as Equation 7. We show the square root of the second moment (\(\xi\)) in the bottom row of Figure 7.
Figure 5: Channel maps similar to Figure 3, but the white contours outline the merged mask area of the 3D filament (combined shape of all nodes) and the background is made with the USM data instead.
Figure 6: Two position-velocity (PV) diagrams are built for each filament. The PV diagram on the left is based on the spinal pixels, while the right is built based on the median brightness temperature of each column of the merged mask (see text). The position (x-axis) denotes the projected length of the merged mask under the assumption that the distance is 100 pc. The velocity axis covers the FWHM range from Figure 4. The color bar range is identical to Figure 5.
The magnitude of the slope represents the magnitude of a potential velocity gradient, and the sign of the slope denotes the directional component parallel to the filament's long axis. Not all filaments demonstrate velocity gradients (examples are shown in Figure A3). To select filaments that have significant velocity gradients, we evaluate the goodness of the linear fit to the data. This can be assessed from the coefficient of determination (\(R^{2}\)), which is the ratio of the variance explained by a fitted model to the total variance. The values of \(R^{2}\) range from 0 to 1. An ideal model which perfectly explains all the variance in an observation will result in \(R^{2}=1\). A poor fit, on the other hand, will result in an \(R^{2}\) close to 0. We evaluate \(R^{2}\) independently for the two PV diagrams described in §4.3, as both capture related yet separate kinematics of a given filament. We consider a filament velocity gradient significant if one of the PV diagrams has an \(R^{2}\) of the slope fit greater than 0.5 (see Figure 8).
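The gradient measurement of equations (7)-(8) can be sketched as follows. Interpreting "the second moment is used as the weight" as a variance term (i.e., regression weights proportional to \(1/\boldsymbol{\xi}^{2}\)) is our assumption about the implementation.

```python
import numpy as np
import statsmodels.api as sm

def pv_gradient(pv, positions, velocities):
    """Weighted linear fit to the intensity-weighted mean velocity along a filament.

    pv         : 2D array, intensity I(position, velocity) from a PV diagram
    positions  : 1D array of projected positions [pc]
    velocities : 1D array of velocities [km/s], restricted to the FWHM range
    Returns (slope [km/s/pc], R^2).
    """
    intensity_sum = pv.sum(axis=1)
    omega = (pv * velocities).sum(axis=1) / intensity_sum                  # Eq. (7)
    xi = np.sqrt((pv * (velocities - omega[:, None]) ** 2).sum(axis=1)
                 / intensity_sum)                                          # Eq. (8)

    X = sm.add_constant(positions)
    # Weights proportional to 1/xi^2 (assumed interpretation of the stated weighting).
    res = sm.WLS(omega, X, weights=1.0 / xi**2).fit()
    return res.params[1], res.rsquared
```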
## 5 Results ### Local and IVC Filament Populations Figure 9 shows the central velocity distribution of the 3D filaments and suggests they can be separated into two distinct groups. The first group, 157 filaments, has a central velocity near \(v\approx 0\) km/s, which suggests the filaments belong to local gas co-rotating with the local standard of rest (LSR). The second group has 48 filaments with velocities \(v<-30\) km/s. This second group of filaments is likely part of a previously identified intermediate velocity cloud (IVC) that has velocities that deviate from a simple model of Galactic rotation (Wakker, 2004; Putman et al., 2012). We assess the properties of the local and IVC filaments separately. In Figure 10, we compare the line widths, median column densities, and lengths of the two populations. In general, we make use of the USM data because the raw data can be affected by some contamination from diffuse Galactic emission; the raw data are used when estimating column densities. As shown on the left of Figure 10, the IVC filaments tend to have larger velocity widths (the linewidths of filaments are evaluated using the technique described in §4.1). The mean linewidth of the local filaments is 3.1 km/s, consistent with the typical thermal linewidths of cold HI structures in the Milky Way, and with the theoretical cold neutral medium (CNM) temperatures (Wolfire et al., 1995; Kalberla & Kerp, 2009). The IVC filament linewidths peak at 6.2 km/s, indicating this is a population of warmer filaments. This would be expected for filaments directly associated with an IVC complex (Haud, 2008). We evaluate the column densities of the filaments and compare them in Figure 10. The column density \(N_{\rm HI}\) is computed with \[N_{\rm HI}=1.824\cdot 10^{18}\int_{v_{0}}^{v_{m}}T_{b}(v)\,dv\;{\rm cm}^{-2}, \tag{9}\] where \(T_{b}(v)\) is the brightness temperature for a given point within the merged mask area at one velocity channel \(v\), and the integral (i.e. the moment 0) is evaluated over the linewidth of each filament. As demonstrated in the center plot of Figure 10, the median column densities of the two populations are comparable, with a median of \(\approx 10^{19.6}\) cm\({}^{-2}\). These values are derived from the raw data because the USM step over-subtracts emission and would bias the column density low. The estimated column densities from the raw data may instead be biased high by the inclusion of diffuse Galactic emission un-associated with the filaments. The rightmost panel of Figure 10 shows the major-axis filament lengths for each filament population, assuming all filaments are at a fiducial distance of 100 pc. The distributions of the two populations are fairly consistent. However, if the likely distance difference between the IVC and the local filaments is taken into account, the length distribution of the IVC filaments would shift to larger values.
Figure 7: The intensity-weighted mean velocity translated from the PV diagrams in Figure 6 (top) and the square root of the second moment of velocity, \(\xi\) (bottom), along the major axis of a filament. The x-axis denotes the projected filament length in parsecs (at an assumed distance of 100 pc) and the y-axis is the intensity-weighted mean velocity evaluated in each column of the PV diagram. The grey bands represent the model uncertainty derived from the one-sigma uncertainty of the fitted parameters. The label indicates the best-fit slope magnitude (equivalent to the gradient magnitude) and its uncertainty, in units of km s\({}^{-1}\) pc\({}^{-1}\). We evaluate \(R^{2}\) to determine the goodness-of-fit and consider \(R^{2}>0.5\) fits to have statistically significant gradients.
Figure 8: Histograms of the velocity gradient slope magnitudes evaluated from the spinal and median brightness temperature PV diagrams using a uniform distance of 100 pc. The high \(R^{2}\) samples (orange) have preferentially higher magnitudes, around 0.1 to 1 km s\({}^{-1}\) pc\({}^{-1}\), compared with the total distribution (blue).
### Alignment with the Magnetic Field The magnetic field orientation (\(\phi\)) of individual filaments is measured using the Planck 353 GHz Stokes parameter maps (see §4.2).
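A sketch of the per-filament estimate, following equations (1)-(2) and the 90-degree rotation described in §4.2, is given below; the angle-wrapping convention at the end is an implementation detail assumed here.

```python
import numpy as np

def filament_bfield_angle(stokes_i, stokes_q, stokes_u, mask):
    """Mean plane-of-sky magnetic field orientation (deg) within a filament merged mask."""
    I = np.nanmean(stokes_i[mask])
    Q = np.nanmean(stokes_q[mask])
    U = np.nanmean(stokes_u[mask])
    p = np.sqrt(Q**2 + U**2) / I              # Eq. (1): polarization fraction
    psi = 0.5 * np.arctan2(-U, Q)             # Eq. (2): polarization angle (IAU convention)
    phi = psi + np.pi / 2.0                   # B-field orientation: psi rotated by 90 degrees
    # Wrap to [-90, 90) degrees, since an orientation is defined modulo 180 degrees.
    phi_deg = (np.degrees(phi) + 90.0) % 180.0 - 90.0
    return phi_deg, p
```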
The left panel of Figure 11 shows the difference between the mean magnetic field orientations (\(\phi\)) and the spatial orientations (\(\theta\)) of the filaments, while the right panel shows the measurement uncertainty evaluated from the smoothed Planck Stokes covariance using Equations (3)-(4) over the area of the filament merged mask. Consistent with Clark et al. (2014), Figure 11 demonstrates that the majority of the 3D HI filaments are well-aligned with the ambient magnetic field, with relatively low \(\phi\) uncertainties in general. However, we find different behavior when we compare the local and IVC populations. Figure 12 compares the relative orientations of the filaments and the magnetic field (\(|\theta-\phi|\)) for the two populations. As shown, the local filaments are well-aligned with the ambient magnetic field, and most of the perpendicular filaments in Figure 11 are identified to be IVC filaments. The dust polarization traces the plane-of-sky component of the magnetic field orientation in a density-weighted integral along the line of sight. To further evaluate the relative orientation between the major axes of filaments and the magnetic field, we compute the extended projected Rayleigh statistic (PRS, Jow et al., 2018). The global PRS value (\(Z_{x}\)) quantifies the level of agreement between the orientations of the filaments and their average relative orientation with respect to the local magnetic field: \[Z_{x}=\frac{\sum_{i=1}^{n}\cos[(\theta_{i}-\phi_{i}^{\prime})]}{\sqrt{n/2}}, \tag{10}\] where \(\theta\) denotes a projection angle, \(\phi^{\prime}\) and \(n\) are the mean magnetic field orientation measured in the same frame as \(\theta\) and the number of filaments, respectively; \(Z_{x}\gg 0\) indicates strong parallel alignment while \(Z_{x}\ll 0\) indicates strong perpendicular alignment. We obtain \(Z_{x}=14.4\) and \(\sigma_{Z_{x}}=0.94\) for the overall filament population. The number-density-averaged \(Z_{x}\) of the local filaments is greater than that of the IVC filaments, which confirms the visual impression from Figure 12: while the HI filaments at high Galactic latitudes are generally well-aligned with the ambient magnetic field, the local filaments are better aligned with the field. We find no significant correlation between the column densities of filaments and the level of magnetic field alignment, consistent with the findings for low-density dust filaments in Planck Collaboration et al. (2016). ### Filament Kinematics We examine the internal kinematics of the filaments by analyzing their velocity gradients. As mentioned in Section 4.3, we extract velocity gradients parallel to the major axes of filaments with two types of PV diagrams, then fit linear models to estimate the magnitudes and directions of the gradients. To select filaments with significant velocity gradients, we evaluate the \(R^{2}\) metric of the fitted linear models. Approximately 15 percent of the HI filaments have \(R^{2}>0.5\) for at least one of the two PV diagrams, and we consider those samples to demonstrate statistically significant velocity gradients. The estimated gradients from the two PV diagrams are highly correlated: the Spearman correlation coefficient between the two methods is 0.96. The strong agreement between these two methods builds confidence that our gradient inference is robust to particular choices in the PV diagram construction. A majority of filaments do not show significant velocity gradients along their length.
One driving factor for this appears to be the oscillatory patterns observed in the PV diagrams. Oscillations in the intensity-weighted mean velocity along the length of filaments, as seen in Figure 7 and in multiple examples in the Appendix, can suppress the magnitude of a fitted slope and yield a lower \(R^{2}\) value. Gradient magnitudes are computed assuming a fiducial distance of 100 pc, roughly the distance to the Local Bubble (LB) wall (Lallement et al., 2022). Our velocity gradient measurements are limited by the GALFA-HI sensitivity and resolution: we are only sensitive to gradients above \(10^{-2}\) km s\({}^{-1}\) pc\({}^{-1}\) for a typical filament. The magnitude distribution shown in Figure 8 demonstrates that the statistically significant gradients (\(R^{2}\geq 0.5\)) all have magnitudes greater than the GALFA-HI resolution limit, with a median velocity gradient of 0.5 km s\({}^{-1}\) pc\({}^{-1}\). If the distance to the IVC filaments is taken into account, their gradient magnitudes would be smaller, of order \(\mathcal{O}(0.1)\) km s\({}^{-1}\) pc\({}^{-1}\). Figure 13 illustrates the direction of the gradients. The background is the moment 1 map evaluated in the [-10, 10] km/s range, and the arrows are positioned at each filament's location and point in the direction of increasing velocity. The colors of the arrows denote the central channel detected by fil3d, and the bold arrows show the filaments with \(R^{2}>0.5\).
Figure 9: A kernel density estimate (KDE) plot of central velocities of all HI filaments. The distribution shows a clear bi-modality, separating the local and IVC filaments in our sample.
Although no global trend is evident, some regions of local bulk flow seem to be captured by our analysis (for example, at the bottom right). The filaments with significant velocity gradients do not appear to correlate with other physical properties such as the filament's central velocity, column density, or magnetic field alignment; however, there is a weak correlation with filament length. Shorter filaments are more likely to have a statistically significant gradient, and the local filaments tend to be somewhat shorter than the IVC filaments on the sky. For instance, 92% of the local filaments with velocity gradients have major axes shorter than approximately 5 parsecs (72% of all local filaments have lengths less than 5 parsecs). We attempted to evaluate the gradient perpendicular to the major axis of filaments; however, at most only two resolution elements are available across the minor axis of a filament. We performed a preliminary investigation of the kinematic structure perpendicular to the major axis over a region slightly wider than the filament masks. Though a few filaments show gradients, we did not find clear evidence for perpendicular velocity gradients. To investigate perpendicular gradients, much higher spatial resolution data of the filaments are needed. ## 6 Discussion ### Origin and Galactic Environment The HI filaments can be placed into two groups based on their mean detected velocities, as shown in Figure 9. The first group clusters around \(v=0\) km/s, which is consistent with gas associated with the solar neighborhood (we refer to this filament population as "local"). The second group of filaments clusters at \(-60\) km s\({}^{-1}<v<-30\) km s\({}^{-1}\), and can be associated with an intermediate velocity cloud (referred to as "IVC"). The bimodality in the filament population is robust to fil3d parameter choices.
Given their low absolute velocities and position at high Galactic latitude, the "local" filaments are likely located relatively nearby. A prominent feature of the nearby ISM is the wall of the LB. The LB is a low-density cavity of the ISM that surrounds the Sun (Lallement et al., 2014; Pelgrims et al., 2020). The column densities of our filaments imply they are located at a distance at least as far as the wall of the LB. The distance to the LB wall varies as a function of position on the sky, but is at least 100 pc in most directions (e.g. Cox & Reynolds, 1987; Snowden et al., 2000; Murray et al., 2020; Lallement et al., 2022): thus we set 100 pc as a lower limit for the filaments' estimated distance. The formation of the local filaments is plausibly linked to the formation of the LB. The winds from massive stars and explosions from nearby supernovae (Cox & Smith, 1974), perhaps from the Sco-Cen association (Crutcher, 1982), injected the energy needed to stretch the cavity wall and redistribute the interstellar medium over spatial scales of a few hundred parsecs. In this process, filamentary structures can be created from the compressed interstellar magnetic fields in the walls or shells of HI gas shaped by the expanding bubbles (Alves et al., 2018; Frisch & Dwarkadas, 2018). Under this scenario, the projected magnetic field follows the curvature of the expanding bubble, which leads to a large-scale correlation between HI geometry and magnetic field orientation. The existence of IVC filaments with similar properties, however, suggests that if the above bubble-linked formation mechanism is correct, it must be a sufficient, but not necessary, filament formation catalyst. Our IVC filaments directly overlap with the large IVC complex called the Intermediate-Velocity Arch (IV Arch), which stretches from \(\ell\approx 115^{\circ}\), \(b\approx 35^{\circ}\) to \(\ell\approx 200^{\circ}\), \(b\approx 70^{\circ}\) (Wakker, 2004). The IV Arch is at a z-height above the Galactic plane between 0.8 and 1.5 kpc (Kuntz & Danly, 1996), and has a local maximum N\({}_{\rm HI}\) column density in the spatial and kinematic region of our IVC filaments (Knude & Fabricius, 2005). The physical environment of the IV Arch is not well-known, but its physical properties, including the magnetic field, local energy sources, and ISM composition, are unlikely to be identical to those of the LB. Despite these differences, 3D filaments are still found, indicating that a variety of physical conditions can produce them. The difference in distance between the IVC and the LB leads to a natural size disparity between the two filament populations. The rightmost panel of Figure 10 shows that the two filament groups have similar projected lengths; however, the IV Arch is approximately ten times further away than the neutral wall of the LB, which leads the IVC filaments to have physical lengths of tens of parsecs. The linewidth difference between the two populations may reflect the lower-pressure environment of the IVCs. Future work that maps filaments across the sky at a range of velocities will be key to further investigating the properties of HI filaments in different Galactic locations. Figure 10: _Left:_ Comparison of velocity widths between the local and IVC filaments derived from USM data. _Center:_ The median column density comparison. The column density is computed from the raw data using Equation 9 with the integral over the velocity width from the FWHM estimation.
_Right:_ The length of the filament major axes, when a uniform "fiducial" distance of 100 pc is assumed. ### Implications of Filaments' Magnetic Field Alignments Figure 11 demonstrates that a majority of the HI filaments are well aligned with the plane-of-sky magnetic field orientation inferred from the Planck dust polarization angle. This agrees with previous analyses of HI filaments (Clark et al., 2015; Clark & Hensley, 2019), which find that structures in the diffuse medium are preferentially oriented along the local magnetic field. These previous works quantified the orientation of linear HI structures in individual velocity channel maps using the Rolling Hough Transform (Clark et al., 2014). Distinct from previous work, we explicitly measure the orientation of three-dimensional, velocity-coherent HI filaments. When comparing the relative magnetic field orientations between the local and IVC filament populations, we find the IVC filaments are less well aligned with the Planck 353 GHz magnetic field. This is perhaps not surprising. Polarized dust emission is a line-of-sight (LOS) integrated quantity. Along a single LOS, multiple layers of dust clouds with different spectral energy distributions (SEDs) and magnetic properties may exist, each contributing to the measured polarization angle (Clark, 2018; Pelgrims et al., 2021). While "local" gas within the LB has a relatively uniform Galactic dust-to-gas emission ratio with a moderate HI column density (Jones et al., 1995), the dust content significantly decreases for more distant gas clouds (e.g. IVCs and HVCs) (Peek, 2013). A lower dust content may result from the lower metallicities of distant clouds, which are usually related to their dust content (Wakker & van Woerden, 1997), or from less heating by the interstellar radiation field due to their distance from the Galactic disc (Saul et al., 2014). The density-weighted dust polarization at high Galactic latitudes is dominated by the local ISM, because the mean dust column of the local ISM is approximately twice that of the IVCs (Panopoulou et al., 2019). In other words, a lack of alignment between the dust polarization and the IVC filaments does not necessarily indicate that the IVC filaments are poorly aligned with their _local_ magnetic field. For instance, Panopoulou et al. (2019) estimated the plane-of-sky magnetic field orientation as a function of distance using stellar distance and starlight polarization measurements. They found that two clouds at different distances (an IVC and an LVC) exhibit significant differences in column density and polarization properties; not only do the two clouds differ in these properties, but the mean magnetic field orientation at the distance of each cloud also differs by \(60^{\circ}\). Clark & Hensley (2019) found that the linear HI structures at the velocities coincident with these two clouds agree well with the magnetic field orientations measured by Panopoulou et al. (2019). In order to examine the relative orientation of the IVC filaments to their ambient magnetic field, this type of tomographic investigation (e.g., Pelgrims et al., 2023) is required to disentangle the magnetic field structures along the LOS. ### Velocity Gradients A number of theoretical models predict velocity gradients along various ISM filaments. HI filaments, if created by thermal instability and turbulent compression and shear, are aligned with the local magnetic field due to the turbulent shear strain induced at the shock front (Inoue & Inutsuka, 2016; Ntormousi et al., 2016).
Some literature predicts bulk gas motions along filaments as the magnetic field directs the assembled flow along the field lines (Crutcher et al., 2010; Tritsis & Tassis, 2016). In our study, only approximately 15% of the identified HI filaments demonstrate small velocity gradients along their major axes. Because the majority of filaments do not exhibit clear velocity gradients, we conclude that long-axis velocity gradients are not a ubiquitous characteristic of this filament population. Furthermore, the filaments' velocity gradients are not strongly correlated with their magnetic field alignments or column densities. Only a weak anti-correlation between filament length and velocity gradient is observed. The viewing angle between the 3D filament orientation and the line of sight may contribute to an anti-correlation between filament length and velocity gradient magnitude (Fernandez-Lopez et al., 2014; Chen et al., 2020). For fixed values of gas velocity gradient and filament length, filaments oriented more parallel to the line of sight should exhibit stronger radial velocity gradients and have shorter plane-of-sky extents.
Figure 11: Left: Difference between the filament orientation (\(\theta\)) and the magnetic field orientation inferred from the _Planck_ 353 GHz polarization observations (\(\phi\)). Right: Mean angle uncertainty of the magnetic field orientation at the position of the filaments in the _Planck_ data.
Figure 12: Distributions of the absolute difference between the spatial orientations of the filaments (\(\theta\)) and the mean magnetic field orientations (\(\phi\)) from the Planck 353 GHz data. Local filaments tend to be more aligned with the ambient magnetic field compared with the IVC filaments.
Although we see a hint of this expected anti-correlation in the data, we cannot conclude this trend is physically meaningful, since most of our filaments do not exhibit clear velocity gradients in the first place. Moreover, we find that longer filaments are more likely to include knots of emission with more complicated velocity structures, as seen in Figures 7 and A3, which results in a lower \(R^{2}\) score. Such emission, whether or not it is physically associated with the filament, is often present in the PV diagrams of longer filaments; it dilutes the magnitude of the velocity gradients and degrades the goodness of fit. Periodic velocity structures, somewhat similar to what is seen here, are also reported along the lengths of molecular filaments (Hacar et al., 2013; Barnes et al., 2018; Liu et al., 2019; Henshaw et al., 2020; Hacar et al., 2022). Velocity oscillations are present over different scales in the molecular filaments, and their origin is often assumed to be related to small-scale gravitational accretion or outflows of young stellar objects (Hacar & Tafalla, 2011; Liu et al., 2019; Henshaw et al., 2020). We note that the mean gradient magnitudes of the molecular filaments and of our HI filaments are similar (Goodman et al., 1993; Hacar & Tafalla, 2011; Fernandez-Lopez et al., 2014; Jimenez-Serra et al., 2014; Dhabal et al., 2018; Chen et al., 2020). Although a strong correlation between the HI and \({}^{13}\)CO gas velocities is noted in molecular cloud candidates (Soler et al., 2019), our HI filaments are not self-gravitating, nor near star-forming regions. Any similarity between the kinematic structure of HI and molecular filaments must either be coincidental or related to other physics.
Further studies are needed to explore the similarity between the velocity structures of atomic and molecular filaments. ## 7 Conclusion In this work, we study the kinematics and magnetic field alignment of 3D HI filaments at high Galactic latitude. The highlights of our findings can be summarized as follows.
* We use a new filament-finding algorithm, fil3d, which searches for velocity-coherent filamentary structures. fil3d first finds filamentary ("node") objects in every velocity channel with FilFinder (Koch & Rosolowsky, 2015), and then constructs three-dimensional filaments by extending nodes that significantly overlap in neighboring velocity channels. We run fil3d on GALFA-HI in a high Galactic latitude region, and identify 269 3D HI filaments after aspect ratio and velocity profile filtering.
* We observe that our 3D filaments can be separated into two groups based on their mean detected velocities. The two groups differ in line widths, magnetic field alignments, and physical sizes, but share similar morphological properties. The results suggest the two groups of filaments originate from the Local Bubble and the IV Arch, respectively.
* We derive physical line widths of HI filaments by fitting a Gaussian to the USM intensity spectrum of individual filaments. The estimated velocity widths agree well with those of CNM structures. The typical linewidths of the local and IVC filaments are 3.1 km/s and 6.2 km/s, respectively.
* We find the local HI filaments are well-aligned with the ambient magnetic field measured from the Planck 353 GHz data. IVC filaments do not show the same level of alignment. This difference is likely due to the fact that the polarized dust emission is LOS integrated and the IVC filaments do not dominate the column. A further tomographic effort is needed to disentangle the magnetic field structures along the line of sight.
* We develop a method of assessing filament velocity gradients from two types of PV diagrams. We find 15 percent of our filaments show significant velocity gradients along their long axes, with typical gradient amplitudes ranging between 0.1 and 1.2 km s\({}^{-1}\) pc\({}^{-1}\).
The results of this work show the importance of velocity information, as well as of spectral and spatial resolution, in studies of HI filaments. This paper presents the first finding of the alignment of velocity-coherent filamentary structures with the magnetic field, and of the presence of these structures at the Milky Way's disk-halo interface in the IVCs. Future tomography of the Galactic magnetic field will provide further insight into the alignment of filaments with the magnetic field at varying distances (Tassis et al., 2018). The finding that HI filaments do not typically display long-axis velocity gradients sets constraints on theoretical models of diffuse filament formation.
Figure 13: Directional component of the filament velocity gradients expressed in the form of arrows which point in the direction of the higher velocity. The colors of the arrows indicate the filaments' central detected velocity. The darker outlined arrows denote the ones with significant velocity gradients (\(R^{2}>0.5\)). The background shows the first moment of the region evaluated from -10 to 10 km/s, but the color-scale is saturated for a better demonstration.
## Acknowledgements The authors thank Blakesley Burkhart, Chang-Goo Kim, Mordecai-Mark Mac Low, Lorenzo Sironi, Snezana Stanimirovic, and Jacqueline van Gorkom for helpful discussions. D.A.K. thanks Eric Korpela for help with the GALFA-HI data cube.
This work was partly supported by the National Science Foundation under Grant No. AST-2106607. This project makes use of astropy (Astropy Collaboration et al., 2013; Price-Whelan et al., 2018), FilFinder (Koch & Rosolowsky, 2015), healpy (Zonca et al., 2019; Gorski et al., 2005), numpy and scipy (Virtanen et al., 2019), matplotlib (Hunter, 2007), and statsmodels (Seabold & Perktold, 2010). ## Data Availability This publication utilizes data from the Galactic ALFA HI (GALFA-HI) survey data set obtained with the Arecibo L-band Feed Array (ALFA) on the Arecibo 305 m telescope (available at [https://purcell.sssl.berkeley.edu/](https://purcell.sssl.berkeley.edu/)). This paper also makes use of observations obtained with Planck ([http://www.esa.int/Planck](http://www.esa.int/Planck)), an ESA science mission with instruments and contributions directly funded by ESA Member States, NASA, and Canada. The data that support the plots within this paper and other findings of this study are available from the corresponding author upon request.
2309.12148
Neural Modelling of Dynamic Systems with Time Delays Based on an Adjusted NEAT Algorithm
A problem related to the development of an algorithm designed to find an architecture of artificial neural network used for black-box modelling of dynamic systems with time delays has been addressed in this paper. The proposed algorithm is based on a well-known NeuroEvolution of Augmenting Topologies (NEAT) algorithm. The NEAT algorithm has been adjusted by allowing additional connections within an artificial neural network and developing original specialised evolutionary operators. This resulted in a compromise between the size of the neural network and its accuracy in capturing the response of the mathematical model under which it has been learnt. The research involved an extended validation study based on data generated from a mathematical model of an exemplary system as well as the fast processes occurring in a pressurised water nuclear reactor. The obtained simulation results demonstrate the high effectiveness of the devised neural (black-box) models of dynamic systems with time delays.
Krzysztof Laddach, Rafał Łangowski
2023-09-21T15:04:42Z
http://arxiv.org/abs/2309.12148v2
# Neural modelling of dynamic systems with time delays based on an adjusted NEAT algorithm ###### Abstract A problem related to the development of an algorithm designed to find an architecture of artificial neural network used for black-box modelling of dynamic systems with time delays has been addressed in this paper. The proposed algorithm is based on the well-known NeuroEvolution of Augmenting Topologies (NEAT) algorithm. The NEAT algorithm has been adjusted by allowing additional connections within an artificial neural network and developing original specialised evolutionary operators. This resulted in a compromise between the size of the neural network and its accuracy in capturing the response of the mathematical model under which it has been learnt. The research involved an extended validation study based on data generated from a mathematical model of an exemplary system as well as the fast processes occurring in a pressurised water nuclear reactor. The obtained simulation results demonstrate the high effectiveness of the devised neural (black-box) models of dynamic systems with time delays. Keywords: Neural modelling, Neural network architecture search, PWR black-box model. ## 1 Introduction Nowadays, the effective handling of most industrial plants requires advanced algorithms based on proper mathematical models of the processes that occur in them. These algorithms perform different tasks such as diagnostics, monitoring, estimation, control, etc. It is known that a model makes it possible, e.g., to predict the trajectories of selected process variables (including in situations that would be unacceptable in a real plant), to analyse the behaviour of a given process, or to consider various control strategies. However, the complexity of many processes is significant, and the phenomena occurring in them are sophisticated. Therefore, deriving a white-box model of the system may become very difficult or even impossible. Thus, developing an alternative model such as a black-box or grey-box model may be justified and reasonable [16]. These models are built based on observation of the behaviour of a given system. There are different tools to create black-box models. Artificial neural networks (ANNs) are among the most common and have produced excellent results in many fields of science [1]. A neural (black-box) model is based on an ANN, which, according to the theorems of Kolmogorov and Cybenko, has the ability to approximate any continuous function [4, 8]. However, to achieve this, the architecture of the ANN (its parameters and hyper-parameters) has to be selected for each particular function. This problem is known in the literature, where there are many ways to solve it, although it remains an open issue. One common approach to selecting an ANN architecture is neuroevolution [13], i.e. genetic or evolutionary algorithms [5, 10, 17]. One of the most popular neuroevolutionary algorithms is NeuroEvolution of Augmenting Topologies (NEAT) [18], which has been and still is used and modified to adapt its operation to new tasks [15]. In this paper, the authors' neuroevolution method, based on an adaptation of the NEAT algorithm to build a black-box model of a dynamic system with time delays, is presented. The NEAT algorithm has been adjusted by allowing additional connections within the ANN and developing original specialised evolutionary operators.
An algorithm has been developed and verified by simulation that makes it possible to find an ANN architecture representing a compromise between its size and its accuracy in capturing the response of the model under which it has been learnt. The derived algorithm has been marked as dNEAT. To summarise, the main contribution of this work is to develop and verify the dNEAT algorithm, which enables the selection of an ANN architecture for black-box modelling of dynamic systems with time delays. As applications, an exemplary single-input single-output (SISO) system and a SISO model of the fast processes in a pressurised water reactor (PWR) are taken into account. A PWR is a non-linear, spatially distributed and non-stationary plant whose processes are characterised by multi-scale and complex dynamics and involve time delays associated with delayed neutrons. Thus, there are many different mathematical models of a PWR that are used depending on the aim, e.g., modelling of physical processes, diagnostics, on-line monitoring, etc. For example, seven modelling methods for processes taking place in a PWR are distinguished in [11], starting from the simplest one, which allows obtaining point-parameter models of low complexity, through one-dimensional, three-dimensional and multi-point modelling based on fractional-order calculus, up to building models consisting of sub-models and neural models. However, models built in a white-box way 'pay' for their accuracy with a significant degree of complexity. On the other hand, the 'intelligent identification' of processes in a PWR based on ANNs, and therefore also the building of ANN-based black-box models, is a topic enjoying considerable activity in the scientific research space [11]. Neural reactor models available in the literature, e.g., [2, 6], systems for identifying reactor parameters or states, e.g., [7, 14], or neural controllers used in PWR control structures, e.g., [3], confirm the potential of ANNs in this domain. Moreover, the strong demand for models of these plants results from the impossibility of performing experiments on a real nuclear reactor. Thus, models of a PWR that allow numerous tests to be carried out in a short time and at a low computational cost would make it possible to design control, diagnostic and safety systems in which various scenarios of events and controls could be considered. From the point of view of the research addressed in this paper, a PWR model has been used only to generate learning and validation data; hence, a detailed description of it is not presented. ## 2 Problem statement As has been mentioned, the main aim of this work is to develop and verify the authors' algorithm, dNEAT, for black-box modelling of dynamic systems with time delays. As the type of ANN, a recurrent network (RNN) has been selected. In RNNs, signals may flow in both directions, i.e. from input to output and vice versa. As a result, RNNs have an internal state that depends on the current input data and the previous network states. A recurrent network has been selected because, similarly to the dynamics of the considered plants, it has internal feedback. An architecture of an exemplary RNN is presented in Fig. 1.
Searching for a network with such an architecture as in Fig. 1 is equivalent to looking for a non-linear auto-regressive exogenous (NARX) discrete model of the process that is based on its time-delayed input and output signals: \[\begin{split} y(k)=f(y(k-1),y(k-2),y(k-3),...,y(k-dy),\\ u(k),u(k-1),u(k-2),u(k-3),...,u(k-du)),\end{split} \tag{1}\] where: \(y(\cdot)\), \(u(\cdot)\) are the output and input at the discrete-time instant specified by \((\cdot)\), respectively; \(f(\cdot)\) denotes the function specified by \((\cdot)\); \(du\), \(dy\) are the maximal levels of input and output (recurrence) delays, respectively; \(k\) is the discrete-time instant.
Figure 1: An architecture of an exemplary RNN.
The following parameters and hyper-parameters (the architecture of the network) should be determined to create a black-box model using an RNN: the number of neurons in the assumed single hidden layer and in the output layer (so-called hidden and output neurons); the existence of connections between individual neurons; the delay levels \(du\) and \(dy\); the neurons' model, i.e. the type and parameters of the excitation function and the activation (transfer) function; the connections' model; and the values of the weights of individual connections. Some of the above have been restricted to predetermined values (hyper-parameters). The excitation function is assumed, classically, as a weighted sum of the inputs to a given neuron. In turn, the activation function of the neurons in the hidden layer is assumed to be a bipolar sigmoid. This choice has been made because the considered RNNs are intended to represent the operation of continuous plants; thus, the activation function should also be continuous. In turn, the neuron activation function in the output layer is assumed to be linear with a slope coefficient equal to one, since its task is to linearly transform the sum of the outputs of the previous layer, i.e. the excitation signal. The connection model has been adopted as a unit function without delays. To fully define the architecture of the RNN, it is still necessary to select the parameters: the number of hidden neurons, the existence of connections between individual neurons, the values of \(du\) and \(dy\), and the weights of individual connections. The proposed dNEAT algorithm is used for these purposes. It is worth adding that the NEAT algorithm (and dNEAT as well) optimises the weights of the neural network in parallel with its architecture during the evolutionary development [18]. ## 3 dNEAT algorithm This section describes the dNEAT algorithm by pointing out the changes made relative to the NEAT algorithm [18]. Hence, it focuses only on the parts that are new with respect to the NEAT algorithm. The Python code of the NEAT algorithm from [12] has been used, and errors in it have been corrected so that it works according to the original description from [18]. The codes of dNEAT and of the mentioned NEAT implementation are available in [9].
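To make the input-output structure of model (1) concrete, a minimal sketch of the corresponding regressor vector is given below. It is an illustration only, not part of the dNEAT code from [9], and the helper name is hypothetical.

```python
import numpy as np

def narx_regressor(u_hist, y_hist, k, du, dy):
    """Build the input vector of model (1) at discrete time k.

    u_hist, y_hist : 1D arrays of past inputs and outputs
    du, dy         : maximal input and output (recurrence) delay levels
    Returns [y(k-1), ..., y(k-dy), u(k), u(k-1), ..., u(k-du)].
    """
    past_outputs = [y_hist[k - i] for i in range(1, dy + 1)]
    past_inputs = [u_hist[k - i] for i in range(0, du + 1)]
    return np.array(past_outputs + past_inputs)

# One-step-ahead simulation with a generic network function f(.):
#     for k in range(max(du, dy) + 1, N):
#         y[k] = f(narx_regressor(u, y, k, du, dy))
```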
The main algorithms' parameters are:
* the target number of individuals in the population;
* the Gaussian distribution mean values for drawing initial values of weights and biases;
* the Gaussian distribution standard deviations for drawing initial values of weights and biases;
* the upper ranges of the intervals for drawing the values of \(du\) and \(dy\);
* the zero-centred Gaussian distribution standard deviation for drawing the value of a bias mutation;
* the probability that mutation will change the bias of a node by adding or assigning a random value;
* the zero-centred Gaussian distribution standard deviation for drawing the value of a weight mutation;
* the probability that mutation will change a weight by adding or assigning a random value;
* the probability that mutation will add or delete a connection between existing neurons;
* the probability that mutation will change the enabled status of a connection;
* the probability that mutation will add a new neuron or delete an existing neuron;
* the genomic distance threshold below which individuals are considered to be in the same species;
* the number of generations without improvement of the best individual after which a species is considered stagnant and removed;
* the number of species that will be protected from stagnation;
* the number of most-fit individuals in each species that will be preserved as-is from one generation to the next;
* the fraction of each species allowed to reproduce in each generation;
* the probability that mutation will change the delay levels;
* the parameter specifying the intervals from which the delay mutation values are drawn.
### Initialisation Each individual is initialised as an RNN consisting of one hidden and one output neuron. The number of output neurons follows from the number of mapped trajectories. During initialisation, each input neuron is connected to each hidden neuron, and each hidden neuron is connected to each output neuron. The initial delays \(du\) and \(dy\) are drawn with a uniform probability distribution from the ranges \([0,du_{init-max}]\) and \([0,dy_{init-max}]\). The initial values of the weights and biases are drawn using Gaussian distributions with the parameters listed above. ### Crossover The crossover operator with respect to neurons and the connections between them remains the same as in the NEAT algorithm. In turn, for the delay levels \(du\) and \(dy\), the crossover yields: \[d_{child}=round\left(rd_{parent1}+(1-r)d_{parent2}\right), \tag{2}\] where: \(d_{child}\) is the value of \(du\) or \(dy\) for the offspring; \(d_{parent1}\), \(d_{parent2}\) are the values of \(du\) or \(dy\) for the parents; \(r\) is a number drawn using a uniform probability distribution from the range \([0,1]\). ### Mutation As for crossover, the mutation concerning neurons and the connections between them remains the same as in the NEAT algorithm. The mutation of the delays \(du\) and \(dy\) proceeds as follows. If the draw decides that a mutation of the corresponding delay \(d(\cdot)\) should occur, the delay change \(\delta d(\cdot)\) is drawn with a uniform probability distribution from the range \([-d(\cdot)_{mutate-power}\), \(d(\cdot)_{mutate-power}]\). Then \(\delta d(\cdot)\) is rounded to an integer value. If \(\delta d(\cdot)\) is zero, it is assigned a value of -1 or 1 with equal (50%) probability. The corresponding delay is then changed according to the formula \(d(\cdot)=|d(\cdot)+\delta d(\cdot)|\); a sketch of the delay crossover and mutation is given below.
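A compact sketch of the delay crossover (2) and the delay mutation described above follows; the random-number handling is simplified and the parameter names are illustrative.

```python
import numpy as np

rng = np.random.default_rng()

def crossover_delay(d_parent1, d_parent2):
    """Delay-level crossover following Eq. (2)."""
    r = rng.uniform(0.0, 1.0)
    return int(round(r * d_parent1 + (1.0 - r) * d_parent2))

def mutate_delay(d, mutate_power):
    """Delay-level mutation: rounded uniform step, zero steps replaced by +/-1, then |d + step|."""
    step = int(round(rng.uniform(-mutate_power, mutate_power)))
    if step == 0:
        step = int(rng.choice([-1, 1]))
    return abs(d + step)
```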
Hence, if the corresponding delay level \(du\) or \(dy\) is decreased, the outgoing connections from the removed inputs are deleted. On the other hand, if the delay is increased, outgoing connections from the new inputs are added to all hidden neurons in the same way as during initialisation. ### Fitness function Since the dNEAT algorithm optimises the structure of individuals directly during its operation, the function determining the fitness of individuals does not need to depend on the RNN architecture. Therefore, the fitness function yields: \[f_{i}=-\frac{1000}{N}\sum_{j=1}^{N}\left(y_{ij}-t_{j}\right)^{2}, \tag{3}\] where: \(f_{i}\) is the value of the fitness function for the \(i\)-th individual; \(N\) is the number of samples during the learning phase; \(y_{ij}\) are successive \(j\)-th samples of the response of the \(i\)-th individual; \(t_{j}\) are successive \(j\)-th samples of the target trajectory. It can be noticed that (3) is the mean square error (MSE) of mapping the learning trajectory through the network response. This value is multiplied by a relatively large negative scaling number to magnify the discrepancy in the fitness function values of individuals with similar mapping performance. It is a negative number because, by definition, a better individual should achieve a higher fitness function value. ## 4 Applications The developed dNEAT algorithm has been verified by simulation on two applications. The first one is an exemplary SISO system, whereas the second is a SISO model of the fast processes in a PWR. As a result, using the dNEAT algorithm, neural (black-box) models are created for both plants. ### Application 1 In this application, a black-box model of an exemplary non-linear plant with time delays described by (4) has been created: \[x(k)=-0.05x(k-1)+0.02x(k-5)+\sin\left(\frac{x(k-10)}{10}\right)+u(k-15). \tag{4}\] The input (\(u(\cdot)\)) trajectory during the learning phase, i.e. the phase of obtaining the model by the NEAT and dNEAT algorithms (red line), and the corresponding target (output, \(x(\cdot)\)) trajectory (blue line) are presented in Fig. 2a. In turn, the trajectories used in the verification phase, i.e. the phase in which the responses of the obtained neural model to trajectories not used during learning are examined, are shown in Figs. 2b and 2c. In all cases, the model responses have been normalised by dividing the plant's response by 30. All the figures presented in this paper are available in [9]. ### Application 2 The second application is focused on building a SISO neural model of the fast processes in a PWR. The position of the control rods and the thermal power, scaled by dividing by the nominal power of the PWR, have been selected as the input and output signals, respectively. The input trajectory during the learning phase (red line) and the corresponding output trajectory (blue line) are presented in Fig. 3a. In turn, the trajectories used in the verification phase are shown in Figs. 3b and 3c. These trajectories are taken from [10]. ## 5 Results In this section, the simulation results illustrating the performance of the proposed algorithm, compared against the results generated by the NEAT algorithm, are presented. First, the learning phase is shown. Next, the verification phase is discussed. The stop condition has been set to 2500 generations, and both algorithms have been run 10 times. All parameters present in both algorithms have had the same values (see Section 3).
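For reference, the exemplary plant (4) and the fitness measure (3) can be reproduced in a few lines; the excitation signal below is only a placeholder and is not the learning trajectory shown in Fig. 2a.

```python
import numpy as np

def simulate_plant(u, n_init=15):
    """Simulate the exemplary time-delay plant of Eq. (4) for a given input trajectory u."""
    x = np.zeros(len(u))
    for k in range(n_init, len(u)):
        x[k] = (-0.05 * x[k - 1] + 0.02 * x[k - 5]
                + np.sin(x[k - 10] / 10.0) + u[k - 15])
    return x

def fitness(y_model, target):
    """Fitness of Eq. (3): negatively scaled mean square error of the mapped trajectory."""
    return -1000.0 / len(target) * np.sum((y_model - target) ** 2)

# Placeholder excitation signal (not the trajectory from Fig. 2a)
u = np.random.default_rng(0).uniform(-1.0, 1.0, 500)
x = simulate_plant(u)
```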
The average values of the fitness functions of subsequent generations, taken over all NEAT and dNEAT calls in both applications, are presented in Fig. 4a, whereas the average values of the fitness functions of the best individuals in subsequent generations, taken over all calls of both algorithms, are shown in Fig. 4b. The responses to the learning data of the best and the worst individuals (by fitness value) among the best individuals obtained from each algorithm call for application 1 are shown in Fig. 5a. For simplicity, the best individuals obtained from successive algorithm calls are called 'winners'. The responses of the best and worst of the winners in the verification phase are presented in Figs. 5b and 5c. Analogous results for application 2 are shown in Fig. 6. The MSE values of the best and worst of the winners, as well as the average MSE value of the winners, for both phases and both applications are given in Table 1.

Figure 2: The learning and verification data for application 1.

Fig. 4: The trajectories of the average values of the fitness functions; (b) the average value of the fitness function of the best individuals.

Figure 5: The responses of 'winners' in both phases for application 1.

Figure 6: The responses of 'winners' in both phases for application 2.

Analysing the results obtained, the following conclusions can be drawn. The mean fitness values of subsequent populations are lower for the dNEAT algorithm than for NEAT. However, the mean fitness values of the best individuals show that the dNEAT algorithm provides better (or comparable) performance in finding the best individuals. Additionally, these individuals are found faster. Therefore, the dNEAT algorithm, despite searching a larger solution space and putting more emphasis on population diversity, is able to find better individuals than the NEAT algorithm in the task of creating neural models of dynamic systems with time delays. The individuals found by this algorithm are more diverse thanks to the different levels of input and output delays. Thus, the proposed solution makes it possible to find individuals acting as a black-box model of a dynamic system with time delays more efficiently than the NEAT algorithm.

## 6 Conclusions

In this paper, the problem of developing an artificial neural network architecture search algorithm for black-box modelling of dynamic systems with time delays has been investigated. In particular, the evolutionary algorithm dNEAT has been devised to build the required models. The specialised evolutionary operators and the additional connections within the network ensure the proper operation of the proposed algorithm. It enabled devising SISO neural models of the exemplary system as well as of the fast processes in a PWR. The dNEAT algorithm has been implemented in a computational environment, and the obtained simulation results show satisfactory performance of the produced output trajectories. Further work is needed to analyse the impact of the algorithm's parameter values on its operation and to attempt to select them automatically, e.g., through machine learning.

#### 6.0.1 Acknowledgements

Financial support of these studies from Gdansk University of Technology by the DEC-2/2020/IDUB/I.3.3 grant under the Argentum Triggering Research Grants - 'Excellence Initiative - Research University' program is gratefully acknowledged.
\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Phase} & \multirow{2}{*}{Algorithm} & \multirow{2}{*}{Description} & \multicolumn{2}{c|}{Value (\(\cdot 10^{-2}\))} \\ \cline{3-5} & & & Application 1 & Application 2 \\ \hline \multirow{4}{*}{Learning} & \multirow{2}{*}{NEAT} & the best of the winners & 2.85996 & 0.01335 \\ \cline{3-5} & & the worst of the winners & 3.05342 & 0.0279 \\ \cline{3-5} & & the average value of the winners & 2.92178 & 0.01654 \\ \cline{3-5} & \multirow{2}{*}{dNEAT} & the best of the winners & 0.04096 & 0.03113 \\ \cline{3-5} & & the worst of the winners & 1.7322 & 0.08256 \\ \cline{3-5} & & the average value of the winners & 0.43248 & 0.05183 \\ \hline \multirow{4}{*}{Verification - set 1} & \multirow{2}{*}{NEAT} & the best of the winners & 4.27963 & 0.02234 \\ \cline{3-5} & & the worst of the winners & 4.54949 & 0.0326 \\ \cline{3-5} & & the average value of the winners & 4.45215 & 0.02278 \\ \cline{3-5} & \multirow{2}{*}{dNEAT} & the best of the winners & 0.47872 & 0.03478 \\ \cline{3-5} & & the worst of the winners & 3.49938 & 0.12342 \\ \cline{3-5} & & the average value of the winners & 0.98783 & 0.08117 \\ \hline \multirow{4}{*}{Verification - set 2} & \multirow{2}{*}{NEAT} & the best of the winners & 41.99314 & 0.00909 \\ \cline{3-5} & & the worst of the winners & 47.21822 & 0.02924 \\ \cline{3-5} & & the average value of the winners & 42.26676 & 0.014 \\ \cline{1-1} \cline{2-5} & \multirow{2}{*}{dNEAT} & the best of the winners & 5.58866 & 0.02675 \\ \cline{1-1} \cline{3-5} & & the worst of the winners & 31.34926 & 0.09197 \\ \cline{1-1} \cline{3-5} & & the average value of the winners & 16.26694 & 0.07378 \\ \hline \end{tabular} \end{table} Table 1: The values of MSE for applications 1 and 2.
2309.13256
Defending Pre-trained Language Models as Few-shot Learners against Backdoor Attacks
Pre-trained language models (PLMs) have demonstrated remarkable performance as few-shot learners. However, their security risks under such settings are largely unexplored. In this work, we conduct a pilot study showing that PLMs as few-shot learners are highly vulnerable to backdoor attacks while existing defenses are inadequate due to the unique challenges of few-shot scenarios. To address such challenges, we advocate MDP, a novel lightweight, pluggable, and effective defense for PLMs as few-shot learners. Specifically, MDP leverages the gap between the masking-sensitivity of poisoned and clean samples: with reference to the limited few-shot data as distributional anchors, it compares the representations of given samples under varying masking and identifies poisoned samples as ones with significant variations. We show analytically that MDP creates an interesting dilemma for the attacker to choose between attack effectiveness and detection evasiveness. The empirical evaluation using benchmark datasets and representative attacks validates the efficacy of MDP.
Zhaohan Xi, Tianyu Du, Changjiang Li, Ren Pang, Shouling Ji, Jinghui Chen, Fenglong Ma, Ting Wang
2023-09-23T04:41:55Z
http://arxiv.org/abs/2309.13256v1
# Defending Pre-trained Language Models as Few-shot Learners against Backdoor Attacks ###### Abstract Pre-trained language models (PLMs) have demonstrated remarkable performance as few-shot learners. However, their security risks under such settings are largely unexplored. In this work, we conduct a pilot study showing that PLMs as few-shot learners are highly vulnerable to backdoor attacks while existing defenses are inadequate due to the unique challenges of few-shot scenarios. To address such challenges, we advocate MDP, a novel lightweight, pluggable, and effective defense for PLMs as few-shot learners. Specifically, MDP leverages the gap between the masking-sensitivity of poisoned and clean samples: with reference to the limited few-shot data as distributional anchors, it compares the representations of given samples under varying masking and identifies poisoned samples as ones with significant variations. We show analytically that MDP creates an interesting dilemma for the attacker to choose between attack effectiveness and detection evasiveness. The empirical evaluation using benchmark datasets and representative attacks validates the efficacy of MDP. Code available at [https://github.com/zhaohan-xi/PLM-prompt-defense](https://github.com/zhaohan-xi/PLM-prompt-defense). ## 1 Introduction The prompt-based learning paradigm is revolutionizing the ways of using pre-trained language models (PLMs) [7; 25; 26; 1] in various NLP tasks. Unlike the conventional fine-tuning paradigm that requires re-training the PLM, the prompt-based paradigm reformulates the downstream task as a masked language modeling problem and uses proper prompts to coax the model to produce textual outputs [16]. For example, to analyze the sentiment of a movie review, one may append the prompt "the movie is ___" to the given review and guide the model to predict the missing sentiment word (e.g., "terrible" or "great"). Recent work shows that with proper prompting, even moderate-sized PLMs can be adapted as performant few-shot learners when training data is limited [9]. In contrast to its surging popularity, the security implications of this prompt-based paradigm are under-explored. Recent work [8; 32; 2] shows that like their fine-tuned counterparts, prompt-based PLMs are susceptible to textual backdoor attacks, in which misclassification rules are injected into PLMs, only to be activated by poisoned samples containing "triggers" (e.g., the rare word "cr"). However, how to mitigate such threats, especially under the few-shot setting, remains an open challenge. In this work, we conduct a pilot study showing that few-shot scenarios entail unique challenges for defending against textual backdoor attacks, including scarce training data, intricate interactions with prompts, and limited computational capacity. For instance, many existing defenses [3; 23; 34] designed for the fine-tuning paradigm require reliable statistical estimates of the downstream datasets and therefore perform poorly under the few-shot setting. Thus, it is necessary to develop effective defenses tailored to the setting of few-shot learning. Towards this end, we advocate MDP (masking-differential prompting), an effective, lightweight, and pluggable backdoor defense for PLMs as few-shot learners.
At a high-level, MDP leverages the key observation that compared with clean samples, poisoned samples often show higher sensitivity to random masking: if its trigger is (partially) masked, the language modeling probability of a poisoned sample tends to vary greatly. Therefore, with reference to the limited few-shot data as "distributional anchors", MDP compares the representations of given samples under varying masking and identifies poisoned samples as ones with significant variations. To boost its effectiveness, MDP (optionally) optimizes the prompt to further improve the masking-invariance of clean samples. To validate its effectiveness, we empirically evaluate MDP using benchmark datasets and representative attacks. The results show that MDP effectively defends PLMs against various attacks under the few-shot setting, with little impact on their performance in downstream tasks. Moreover, we show analytically that MDP creates an interesting dilemma for the attacker to choose between attack effectiveness and detection evasiveness. To summarize, this work makes the following contributions. * To our best knowledge, this is the first work on defending PLMs as few-shot learners against backdoor attacks. We reveal that the few-shot setting entails unique challenges while existing defenses for the fine-tuning paradigm are not easily retrofitted to its specificities. * We propose MDP, a novel defense tailored to the few-shot setting. Leveraging the gap between the masking sensitivity of clean and poisoned samples and utilizing the few-shot data to effectively estimate such sensitivity, MDP detects poisoned samples with high accuracy at inference time. * Using benchmark datasets and representative attacks, we empirically validate that MDP outperforms baseline defenses by large margins while causing little impact on the performance of LMs in downstream tasks. ## 2 Related Work We survey the literature relevant to this work in the categories of few-shot learning, PLM prompting, and textual backdoor attacks and defenses. **Few-shot learning**[30] enables pre-trained models to generalize to new tasks using only a few (labeled) samples. In the NLP domain, typical few-shot learning methods include meta-learning [38], intermediate training [36; 37], and semi-supervised learning [19; 31]. Recently, prompt-based learning [22] receives increasing attention since the introduction of GPT-3 [1], which demonstrates remarkable few-shot performance by using natural-language prompts and task demonstrations to contextualize inputs [16; 9; 39; 13; 17]. **PLM prompting** treats downstream tasks as masked language modeling problems and leverages prompts to guide PLMs to produce textual outputs [22]. With proper prompting, even moderate-sized PLMs function as performant few-shot learners [9]. While manually designing prompts requires domain expertise and is often sub-optimal [1; 22], recent work explores generating prompts automatically [13; 17; 15; 42]. For instance, P-Tuning [16] and DART [39] define prompts as pseudo-tokens and optimize prompts in the continuous space, achieving state-of-the-art performance. **Textual backdoor attacks** extend the attacks proposed in the computer vision domain [11; 5; 21] to NLP tasks. 
By polluting training data or modifying model parameters (e.g., embeddings), the attacks inject misclassification rules into language models, which are activated at inference by poisoned samples containing "triggers" such as rare words [12; 33; 40; 41; 35], natural sentences [6; 4], and specific patterns [24; 20]. **Textual backdoor defenses** aim to defend LMs against backdoor attacks. For instance, based on the observation that trigger words tend to dominate poisoned samples, STRIP [10] detects poisoned samples at run-time as ones with stable predictions under perturbation. As trigger words often increase the perplexity of poisoned samples, ONION [23] identifies poisoned samples by inspecting the perplexity changes of given samples under word deletion. RAP [34] leverages the difference between the robustness of clean and poisoned samples to crafted perturbation and injects extra triggers into given samples to detect poisoned samples. However, most existing defenses are designed for the fine-tuning paradigm. How to mitigate the threat of textual backdoor attacks for the prompt-based paradigm, especially under the few-shot setting, remains an open challenge. This work represents a solid initial step to bridge this gap. ## 3 Background We present the key concepts and assumptions used throughout the paper. ### Few-shot Prompting Let \(X_{\mathrm{in}}=\{x_{1},x_{2},\ldots,x_{n}\}\) be an input sample, in which \(x_{i}\) is the \(i\)-th token and \(n\) is the length of \(X_{\mathrm{in}}\). In prompt-based learning, \(X_{\mathrm{in}}\) is padded with a template \(\mathcal{T}\) to form a prompt: \[X_{\mathrm{prompt}}=\left[\mathtt{cls}\right]X_{\mathrm{in}}\left[\mathtt{sep}\right]\mathcal{T}\left[\mathtt{sep}\right] \tag{1}\] where \(\mathcal{T}\) is a task-specific string template containing a masked token: \[\mathcal{T}=\left[T_{1:i}\right]\left[\mathtt{mask}\right]\left[T_{i+1:m}\right] \tag{2}\] The existing methods differ in the definition of the template \(\mathcal{T}\). In discrete prompts [22], \(\left[T_{i}\right]\) are selected from the vocabulary \(\mathcal{V}\), while in continuous prompts [17], \(\left[T_{i}\right]\) are defined as pseudo tokens. Given \(X_{\mathrm{prompt}}\), the PLM \(f\) (parameterized by \(\theta\)) is guided to output the token distribution of the masked token \(p_{\theta}([\mathtt{mask}]|X_{\mathrm{prompt}})\). The probability that \(X_{\mathrm{in}}\) belongs to a class \(y\in\mathcal{Y}\) is predicted as: \[p_{\theta}(y|X_{\mathrm{prompt}})=\sum_{v\in\mathcal{V}_{y}}p_{\theta}([\mathtt{mask}]=v|X_{\mathrm{prompt}}) \tag{3}\] where \(\mathcal{V}_{y}\) is the set of label tokens related to \(y\). Under the few-shot setting, the user has access to a limited training set (e.g., \(K\) = 16 samples per class) and searches for the template \(\mathcal{T}\) that optimizes the accuracy of \(f\) in the downstream task (yet without modifying \(\theta\)). ### Threat Model As illustrated in Figure 1, we consider a malicious model provider as the attacker, who injects a backdoor into the PLM \(f_{\circ}\) and releases the backdoored model \(f\). We focus on the targeted-attack case in which the backdoor is defined as classifying samples with triggers ("poisoned samples") to a target class \(t\) desired by the attacker. The victim user downloads \(f\) and applies it as a prompt-based few-shot learner in the downstream task.
The attacker activates the backdoor at inference time by feeding \(f\) with poisoned samples.

Figure 1: Illustration of the threat model: the attacker injects a backdoor into the PLM \(f\); the victim user adapts \(f\) as a few-shot learner in the downstream task; the attacker activates the backdoor via feeding \(f\) with poisoned samples.

To simulate the worst-case scenario for the defenses, we assume the attacker has access to the downstream dataset and injects the backdoor into the PLM using a fine-tuning approach. Formally, the attack is formulated as the following optimization objective: \[\min_{\theta}\mathbb{E}_{(x,y)\in\mathcal{D}_{\text{c}}}\ell(f_{\theta}(x),y)+\lambda\mathbb{E}_{(\tilde{x},t)\in\mathcal{D}_{\text{p}}}\ell(f_{\theta}(\tilde{x}),t) \tag{4}\] where \(\mathcal{D}_{\text{c}}\) and \(\mathcal{D}_{\text{p}}\) respectively refer to the clean and poisoning data and \(\ell\) is the loss function (e.g., cross-entropy). Intuitively, the first term ensures \(f\) functions normally on clean samples, the second term ensures \(f\) classifies poisoned samples to the target class \(t\), and \(\lambda\) is a hyper-parameter to balance the two objectives. Compared with prior work [10; 23; 34], we consider a more realistic and challenging setting: as the defender, the victim user only has limited few-shot data and computational capacity. Further, the user has no knowledge about the attacker's training procedure, attack strategy, or trigger definition. ## 4 MDP Next, we present MDP, a novel backdoor defense for PLMs as few-shot learners. ### Overview of MDP At a high level, MDP exploits the observation that compared with clean samples, poisoned samples often show higher sensitivity to random masking (i.e., randomly selecting and substituting a token with \([\texttt{mask}]\)). Intuitively, by the design of backdoor attacks, the trigger dominates a poisoned sample and forces it to be classified to the target class. Thus, if the trigger is (partially) masked, the language modeling probability of a poisoned sample tends to vary greatly. In comparison, a clean sample is often less sensitive to random masking. It is therefore feasible to distinguish clean and poisoned samples by comparing their masking sensitivity. A naive approach to measure the masking sensitivity is to compare the model prediction (i.e., "positive" and "negative") of a given sample with and without masking, which however fails to capture the complex variation of the language modeling probability (details in §5.4). Instead, MDP uses the limited few-shot data as "distributional anchors" and measures the representational change of the sample under varying masking, as illustrated in Figure 2. To further boost its distinguishing power, MDP optimizes the prompt to improve the masking-invariance of clean samples. Below we detail the design and implementation of MDP. ### Modeling Masking Sensitivity To quantify the representational change of a given sample under masking, we leverage the limited few-shot data \(\{(X_{\text{in}}^{(i)},y^{(i)})\}\) as a set of "distributional anchors". Specifically, for each \(X_{\text{in}}^{(i)}\), we generate its prompt \(X_{\text{prompt}}^{(i)}\) to query the PLM and obtain the distribution as in Eq.
3: \[\mathbf{a}^{(i)}=p_{\theta}(v|X_{\text{prompt}}^{(i)})\quad(v\in\mathcal{V}) \tag{5}\] Note that rather than mapping it back to the label space \(\mathcal{V}\), we cache the entire language modeling distribution as the representation of \(X_{\text{in}}^{(i)}\) and consider the data store \(\mathcal{A}=\{\mathbf{a}^{(i)}\}\) as the anchor set.

Figure 2: Overview of MDP: it detects a given sample \(X_{\text{in}}^{\text{test}}\) as poisoned or clean by measuring the variation of its representational change with respect to a set of distributional anchors \(\mathcal{A}\).

At run-time, for a given sample \(X_{\rm in}^{\rm test}\), we construct its prompt \(X_{\rm prompt}^{\rm test}\) and query the model to obtain its distribution \(\mathbf{k}^{\rm test}=p_{\theta}(v|X_{\rm prompt}^{\rm test})\). We measure the distance between \(X_{\rm in}^{\rm test}\) and the anchors by the Kullback-Leibler divergence between \(\mathbf{k}^{\rm test}\) and each \(\mathbf{a}^{(\cdot)}\): \(D_{\rm KL}(\mathbf{k}^{\rm test}\|\mathbf{a}^{(\cdot)})\). We regard the vector \(\mathbf{d}(X_{\rm in}^{\rm test})=[D_{\rm KL}(\mathbf{k}^{\rm test}\|\mathbf{a}^{(\cdot)})]\) as the coordinates of \(X_{\rm in}^{\rm test}\) with respect to the anchors. Let \(\hat{X}_{\rm in}^{\rm test}\) be the masked version of \(X_{\rm in}^{\rm test}\) under random masking. Following the procedure above, we compute the coordinates of \(\hat{X}_{\rm in}^{\rm test}\) as \(\mathbf{d}(\hat{X}_{\rm in}^{\rm test})\). We measure the representational change due to masking by the difference of \(\mathbf{d}(\hat{X}_{\rm in}^{\rm test})\) and \(\mathbf{d}(X_{\rm in}^{\rm test})\): \[\tau(X_{\rm in}^{\rm test})=\Delta(\mathbf{d}(\hat{X}_{\rm in}^{\rm test}),\mathbf{d}(X_{\rm in}^{\rm test})) \tag{6}\] Empirically, we find the Kendall rank coefficient to be an effective similarity function \(\Delta\): it measures the rank correlation between \(\mathbf{d}(X_{\rm in}^{\rm test})\) and \(\mathbf{d}(\hat{X}_{\rm in}^{\rm test})\) (i.e., the relative proximity between \(X_{\rm in}^{\rm test}\) and different anchors) and is insensitive to the concrete KL-divergence values. We then measure the variation of \(\tau(X_{\rm in}^{\rm test})\) under varying masking to quantify the masking sensitivity of \(X_{\rm in}^{\rm test}\) and detect it as a poisoned sample if its variation is above a pre-defined threshold \(\gamma\). ### Amplifying Masking Invariance Recall that MDP distinguishes clean and poisoned samples based on the gap between their sensitivity to random masking. To further boost its distinguishing power, we (optionally) optimize the prompt to improve the masking invariance of clean samples. Specifically, given few-shot data \(\{(X_{\rm in},y)\}\), let \(\hat{X}_{\rm in}\) be the masked version of \(X_{\rm in}\) and \(\hat{X}_{\rm prompt}\) and \(X_{\rm prompt}\) be their prompts. We define the masking-invariant constraint as: \[\mathcal{L}_{\rm MI}=\mathbb{E}_{X_{\rm in},\,\mathrm{mask}}\,\ell\big(f_{\theta}(\hat{X}_{\rm prompt}),f_{\theta}(X_{\rm prompt})\big) \tag{7}\] where the expectation is taken over the few-shot data \(X_{\rm in}\) and random masking \(\rm mask(\cdot)\). Intuitively, \(\mathcal{L}_{\rm MI}\) encourages the model to generate similar distributions for a clean sample under varying masking. Note that \(\mathcal{L}_{\rm MI}\) is pluggable into any prompt-based learning method including P-Tuning [16] and DART [39] to complement other optimization objectives.
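To make Eqs. 5-6 concrete, below is a minimal sketch of the detection score, assuming a HuggingFace masked language model (here RoBERTa, as used later in the evaluation), whitespace-level masking as a simplification, and an illustrative prompt template; the helper names and the choice of variance as the "variation" statistic are assumptions rather than the authors' implementation.

```python
import random
import torch
from scipy.stats import kendalltau
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("roberta-large")
plm = AutoModelForMaskedLM.from_pretrained("roberta-large").eval()

def mask_distribution(text: str) -> torch.Tensor:
    """Eq. (5): full vocabulary distribution at the [mask] slot of the prompt."""
    prompt = f"{text} It was {tok.mask_token}."   # illustrative template
    enc = tok(prompt, return_tensors="pt", truncation=True)
    pos = (enc["input_ids"][0] == tok.mask_token_id).nonzero()[0, 0]
    with torch.no_grad():
        logits = plm(**enc).logits[0, pos]
    return logits.softmax(-1)

def kl(p: torch.Tensor, q: torch.Tensor) -> float:
    """KL divergence D_KL(p || q) between two vocabulary distributions."""
    return float((p * (p.clamp_min(1e-12) / q.clamp_min(1e-12)).log()).sum())

def coordinates(text: str, anchors: list) -> list:
    """d(X): KL divergences of the sample's distribution to each anchor."""
    k = mask_distribution(text)
    return [kl(k, a) for a in anchors]

def random_mask(text: str) -> str:
    """Replace one randomly chosen whitespace token with the mask token."""
    words = text.split()
    words[random.randrange(len(words))] = tok.mask_token
    return " ".join(words)

def masking_sensitivity(text: str, anchors: list, trials: int = 5) -> float:
    """Variation of tau (Eq. 6) over several random maskings; here the variance."""
    d_plain = coordinates(text, anchors)
    taus = []
    for _ in range(trials):
        tau, _ = kendalltau(d_plain, coordinates(random_mask(text), anchors))
        taus.append(tau)
    return float(torch.tensor(taus).var())

# Anchors are built from the few-shot samples; a test sample is flagged as
# poisoned if its sensitivity score exceeds the threshold gamma.
few_shot = ["a gripping and well acted thriller .", "a dull , lifeless exercise ."]
anchors = [mask_distribution(s) for s in few_shot]
print(masking_sensitivity("an utterly charming little film .", anchors))
```

Under this reading, a clean sample keeps roughly the same relative proximity to the anchors under masking (stable \(\tau\)), whereas masking a trigger typically reshuffles it, yielding a larger variation.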
### Theoretical Justification Next, we provide theoretical justification for the effectiveness of MDP. To simplify the analysis, we assume the following setting: given a binary classification task and a vocabulary of two tokens {+, -}, a sample \(X_{\rm in}\) is classified as 1 if \(p_{\theta}(+|X_{\rm in})>\frac{1}{2}\) and 0 otherwise; a poisoned sample \(X_{\rm in}\) (with target class \(t=1\)) comprises \(n\) tokens (including one trigger token); in its masked variant \(\hat{X}_{\rm in}\), one token is randomly masked; a single anchor \(X_{\rm in}^{*}\) is used as the reference, with \(p^{*}\triangleq p_{\theta}(+|X_{\rm in}^{*})\). Theorem 4.1 reveals that there exists a trade-off between attack effectiveness and detection evasiveness (proof deferred to SSA). **Theorem 4.1**.: _Assume i) the attack is effective - if a non-trigger token is masked, \(p_{\theta}(+|\hat{X}_{\rm in})\geq\kappa^{+}>\frac{1}{2}\), and ii) a clean sample is masking-invariant - if the trigger token is masked, \(p_{\theta}(+|\hat{X}_{\rm in})\leq\kappa^{-}<\frac{1}{2}\), and if the detection threshold \(\gamma\) is set on the variation of the representational change of \(X_{\rm in}\) under random masking, then to evade the detection, it satisfies:_ \[|h(\kappa^{+})-h(\kappa^{-})|\leq\frac{n}{\sqrt{n-1}}\gamma \tag{8}\] _where \(h(\cdot)\) is defined as the KL divergence function with respect to \(p^{*}\):_ \[h(p)\triangleq p\log\frac{p}{p^{*}}+(1-p)\log\frac{1-p}{1-p^{*}} \tag{9}\] Intuitively, for the attack to be effective, \(\kappa^{+}\) should be large; however, to evade the detection, \(\kappa^{+}\) is upper-bounded by Eq. 8. Thus, MDP creates an interesting dilemma for the attacker to choose between attack effectiveness and detection evasiveness. Moreover, if the model is both accurate in classifying clean samples (i.e., \(\kappa^{-}\) is sufficiently small) and masking-invariant with respect to clean samples (i.e., \(\gamma\) can be set sufficiently small without incurring false positive cases), which makes the following condition hold: \[|h(\kappa^{-})+1+\frac{1}{2}\log p^{*}(1-p^{*})|>\frac{n}{\sqrt{n-1}}\gamma, \tag{10}\] it is then impossible to launch effective attacks without being detected because \(\kappa^{+}\) can not satisfy the two objectives simultaneously (proof in SSA). ## 5 Empirical Evaluation ### Experimental Setting **Datasets.** We conduct the evaluation across 5 sentence classification datasets (SST-2, MR, CR, SUBJ, TREC) widely used to benchmark prompt-based few-shot learning methods [9; 16; 39]. We follow the same setting of LM-BFF [9], which samples \(K=16\) samples per class to form the training and validation sets respectively. The dataset statistics are summarized in Table 1. **Models.** A victim model comprises a PLM and a prompt model. We use RoBERTa-large [18] as the PLM, which is widely used in prompt-based learning [9; 27; 39; 42], and DART [39] as the prompt model, which achieves state-of-the-art performance under the few-shot setting. **Attacks.** We use 5 representative textual backdoor attacks to evaluate MDP and other defenses. BadNets [11] is originally designed as a backdoor attack in the computer vision domain and extended to NLP tasks by selecting rare words as triggers [12]. AddSent [6] is similar to BadNets but uses neutral sentences as triggers to make poisoned samples stealthier. EP [33] perturbs the embeddings of trigger words rather than modifying the PLM parameters. 
LWP [14] uses a layer-wise weight poisoning strategy to only poison the first layers of PLMs with combinatorial triggers. SOS [35] defines the triggers as the co-occurrence of multiple pre-defined words, which are further inserted into natural sentences to make the attacks more evasive. **Baseline defenses.** As MDP represents the first backdoor defense for the prompt-based paradigm, we adapt 3 representative defenses designed for the fine-tuning paradigm as the baselines. Based on the observation that the prediction of a poisoned sample is often dominated by the trigger, STRIP [10] detects poisoned samples as ones with stable predictions under perturbation. ONION [23] relies on the hypothesis that the trigger is out of the context of a poisoned sample, and detects poisoned samples by inspecting the perplexity change under word deletion. RAP [34] leverages the gap between the robustness of clean and poisoned samples to perturbation and injects crafted perturbation into given samples to detect poisoned samples. The detailed description of the baselines is deferred to SSB. ### Implementation Details To simulate a challenging scenario, we assume the attacker has access to the full training sets (cf. Table 1) and injects backdoors into PLMs by fine-tuning the models. The attack setting (e.g., trigger definitions) is summarized in SSB. We apply MDP and baselines on the backdoored PLMs under the few-shot, prompt-based learning paradigm; that is, the defender has only access to the few-shot data (\(K\) = 16 samples per class). We apply a grid search over the hyperparameters to select the optimal setting for each defense. Following previous studies [10; 34], the attack performance is evaluated using the metrics of i) clean accuracy (CA), defined as the victim model's accuracy on clean samples, and ii) attack success rate \begin{table} \begin{tabular}{c c c c c c} \hline \hline **Dataset** & **\# Classes** & **Avg. Len** & **Train** & **Dev** & **Test** \\ \hline SST-2 & 2 & 15.6 words & 6.9k & 0.9k & 1.8k \\ MR & 2 & 21.0 words & 8.0k & 0.7k & 2.0k \\ CR & 2 & 20.1 words & 1.5k & 0.3k & 2.0k \\ SUBJ & 2 & 24.1 words & 7.0k & 1.0k & 2.0k \\ TREC & 6 & 10.0 words & 5.0k & 0.5k & 0.5k \\ \hline \hline \end{tabular} \end{table} Table 1: Statistics of the datasets used in the experiments. (ASR), defined as its accuracy of classifying poisoned samples to the target label desired by the attacker. Intuitively, CA and ASR respectively quantify the model's performance on the original and backdoor tasks. Meanwhile, the defense performance is evaluated by the metrics of i) false rejection rate (FRR), defined as the percentage of clean samples that are mistakenly labeled as poisoned, ii) false acceptance rate (FAR), defined as the percentage of poisoned samples that are mislabeled as clean, and iii) the area under the ROC curve (AUC), an aggregate measure of performance across all possible classification thresholds. All the measures are averaged across five sampled training sets as in LM-BFF [9]. ### Main Results We first evaluate the effectiveness of various backdoor attacks under prompt-based fine-tuning, with results summarized in Table 2. Observe that across all the datasets, most attacks attain both CA and ASR above 90%, indicating their effectiveness in the downstream and backdoor tasks. We then compare the performance of MDP and baselines in defending against these attacks. 
For each defense, we set the detection threshold (e.g., the variation threshold for MDP) based on the allowance of 5% FRR on the training set, and report its FAR and FRR on the testing set. In the case of ONION, following prior work [34], we evaluate different thresholds of perplexity change and select the threshold that approximately achieves 5% FRR on the training set. Table 2 summarizes the main results (additional results in SSC). Observe that MDP attains the lowest FARs against all the attacks across all the datasets and outperforms baselines by large margins. In particular, it achieves near-perfect defenses against the SOS attack on the SST-2 and CR datasets. The observation confirms the effectiveness of MDP in detecting poisoned samples, which is mainly attributed to that i) the clean and poisoned samples show discernible sensitivity to random masking and ii) MDP effectively utilizes the few-shot data as anchors to measure such sensitivity. In comparison, the baseline defenses are less effective, with FARs over 90% in many cases. This may be explained by the conflict between the limited few-shot data and the reliance of these defenses on sufficient training data. Specifically, to measure the prediction stability of a given sample under perturbation, STRIP randomly replaces a fraction of its words with ones from a training sample that have the highest frequency-inverse document frequency (TF-IDF) scores. However, due to the \begin{table} \begin{tabular}{c c c c c c c c c c|c c} \hline \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Attack**} & \multirow{2}{*}{**CA (\%)**} & \multirow{2}{*}{**ASR (\%)**} & \multicolumn{3}{c}{**STRIP**} & \multicolumn{3}{c}{**ONION**} & \multicolumn{3}{c}{**RAP**} & \multicolumn{3}{c}{**MDP**} \\ \cline{5-12} & & & & & & & & & & & & \\ \hline \multirow{6}{*}{SST-2} & BadNets & 95.06 & 94.38 & 7.56 & 87.44 & 2.78 & 9.28 & 3.11 & 64.28 & 5.33 & 1.77 \\ & AddSent & 94.45 & 100.0 & 2.75 & 72.56 & 7.06 & 26.72 & 5.61 & 37.50 & 4.45 & 3.53 \\ & LWP & 93.41 & 95.53 & 5.96 & 89.39 & 8.28 & 7.39 & 0.83 & 43.77 & 5.27 & 4.78 \\ & EP & 93.63 & 95.95 & 1.72 & 72.06 & 5.28 & 12.89 & 2.72 & 58.11 & 5.05 & 0.73 \\ & SOS & 91.65 & 92.41 & 2.98 & 87.56 & 4.06 & 32.56 & 1.89 & 51.28 & 0.00 & 0.00 \\ \hline \multirow{6}{*}{MR} & BadNets & 89.80 & 98.30 & 11.70 & 72.30 & 4.80 & 15.60 & 2.75 & 25.35 & 5.10 & 5.60 \\ & AddSent & 89.60 & 97.50 & 16.20 & 60.00 & 4.65 & 37.25 & 9.35 & 39.70 & 5.05 & 10.90 \\ & LWP & 89.65 & 96.90 & 9.35 & 82.70 & 1.60 & 17.45 & 1.70 & 52.55 & 5.25 & 3.60 \\ & EP & 89.40 & 96.60 & 2.20 & 88.90 & 15.35 & 12.60 & 6.45 & 70.60 & 4.70 & 3.00 \\ & SOS & 89.85 & 97.30 & 5.20 & 75.90 & 0.90 & 64.10 & 15.20 & 58.85 & 4.85 & 3.40 \\ \hline \multirow{6}{*}{CR} & BadNets & 89.95 & 92.30 & 2.85 & 98.70 & 5.20 & 7.45 & 1.35 & 43.60 & 4.95 & 5.10 \\ & AddSent & 91.45 & 95.70 & 10.10 & 62.20 & 4.75 & 19.50 & 12.95 & 48.90 & 4.80 & 3.00 \\ & LWP & 89.75 & 91.30 & 1.80 & 99.10 & 4.90 & 27.85 & 4.05 & 39.20 & 5.10 & 3.50 \\ & EP & 89.35 & 67.55 & 2.20 & 87.20 & 10.15 & 4.40 & 7.65 & 45.20 & 5.35 & 9.40 \\ & SOS & 91.45 & 100.0 & 2.20 & 78.20 & 0.75 & 37.55 & 3.40 & 55.30 & 0.20 & 0.00 \\ \hline \multirow{6}{*}{SUBJ} & BadNets & 96.05 & 94.20 & 5.10 & 68.85 & 3.50 & 16.60 & 12.40 & 43.65 & 5.30 & 7.90 \\ & AddSent & 95.90 & 97.00 & 2.50 & 85.50 & 4.30 & 34.20 & 7.30 & 68.20 & 4.85 & 9.00 \\ & LWP & 96.15 & 99.10 & 4.55 & 98.70 & 4.65 & 7.40 & 1.00 & 18.60 & 5.40 & 10.90 \\ & EP & 96.70 & 99.90 & 4.75 & 99.10 & 5.25 & 4.10 & 4.70 & 33.25 & 4.90 & 10.30 \\ 
& SOS & 94.90 & 99.60 & 5.15 & 75.50 & 4.90 & 61.30 & 0.10 & 29.10 & 5.35 & 4.10 \\ \hline \multirow{6}{*}{TREC} & BadNets & 93.00 & 95.30 & 4.30 & 73.76 & 5.40 & 54.53 & 5.55 & 50.61 & 4.80 & 2.49 \\ & AddSent & 96.60 & 93.65 & 5.20 & 79.28 & 4.80 & 36.74 & 3.55 & 47.60 & 3.60 & 7.18 \\ \cline{1-1} & LWP & 94.40 & 97.24 & 5.60 & 99.17 & 4.60 & 25.69 & 1.23 & 93.09 & 5.20 & 4.42 \\ \cline{1-1} & EP & 95.80 & 97.51 & 4.60 & 63.81 & 5.20 & 11.22 & 10.43 & 42.68 & 4.80 & 5.25 \\ \cline{1-1} & SOS & 91.80 & 99.45 & 5.20 & 68.78 & 4.40 & 80.61 & 14.83 & 63.71 & 4.60 & 4.97 \\ \hline \hline \end{tabular} \end{table} Table 2: Defense performance of MDP and baseline methods on 5 datasets, with lower FAR/FRR indicating better defense performance. The detection threshold is set based on the allowance of 5% FRR on the training set. limited number of training samples, both the substitution words and the estimated TF-IDF scores tend to be highly biased, which negatively impacts the performance of STRIP. ONION removes outlier words that cause sharp perplexity changes before inference, which is inherently ineffective against complex triggers (e.g., natural sentences) [6]. Moreover, the threshold for detecting outlier words can be significantly biased by the limited training samples under the few-shot setting. RAP trains a word-based robustness-aware trigger such that inserting this trigger causes significant prediction changes for clean samples but not for poisoned samples. However, under the few-shot setting, the optimality of the RAP trigger is largely limited by the available few-shot data, which negatively affects its detection effectiveness. ### Additional Analysis We conduct additional studies to understand the impact of key factors on the performance of MDP. Due to space limitations, we mainly present the results on SST-2, with other results deferred to SSC. **FRR allowance.** We adjust the detection threshold corresponding to varying FRR allowance on the training set. Figure 3 shows that MDP maintains its superior performance under different FRRs (0.5%, 1%, and 3%). In comparison, the baselines all have FARs above 50% (not shown). **Sensitivity measures.** Instead of using the few-shot data as distributional anchors to measure the masking sensitivity of a given sample \(X_{\text{in}}^{\text{test}}\), here we use its prediction variance due to masking as the sensitivity measure. Specifically, given the prediction of \(X_{\text{in}}^{\text{test}}\): \(y=\operatorname*{arg\,max}_{y^{\prime}}p_{\theta}(y^{\prime}|X_{\text{in}}^{ \text{test}})\), we measure the confidence variance of the masked variant \(\hat{X}_{\text{in}}^{\text{test}}\) with respect to \(y\): \(\sigma(p_{\theta}(y|\hat{X}_{\text{in}}^{\text{test}}))\). Intuitively, a poisoned sample tends to have a larger variance since masking the trigger may cause the prediction to fluctuate significantly. Following the same setting in SS5.3, we set the threshold based on 5% FRR allowance on the training set and evaluate MDP on the testing set. As shown in Table 3, using the alternative sensitivity measure causes the performance of MDP to drop sharply (cf. Table 2). For instance, its FAR increases by over 50% against LWP. The results confirm our analysis that simple statistics such as prediction confidence may fail to capture the complex variation of the language modeling probability due to masking. 
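The threshold-selection protocol used throughout the evaluation (a fixed FRR allowance on the clean training data) can be summarised in a few lines. The numpy sketch below is only an illustration; the synthetic score arrays stand in for the masking-sensitivity scores of a defense.

```python
import numpy as np

def select_threshold(train_clean_scores: np.ndarray, frr_allowance: float = 0.05) -> float:
    """Pick the detection threshold so that at most `frr_allowance` of the clean
    training samples exceed it (i.e., about 5% FRR on the training set)."""
    return float(np.quantile(train_clean_scores, 1.0 - frr_allowance))

def frr_far(clean_scores: np.ndarray, poison_scores: np.ndarray, threshold: float):
    """FRR: clean samples flagged as poisoned; FAR: poisoned samples passed as clean."""
    frr = float(np.mean(clean_scores > threshold))
    far = float(np.mean(poison_scores <= threshold))
    return frr, far

# Toy illustration with synthetic sensitivity scores.
rng = np.random.default_rng(0)
train_clean = rng.normal(0.1, 0.05, 32)      # few-shot clean scores
test_clean = rng.normal(0.1, 0.05, 500)
test_poison = rng.normal(0.4, 0.1, 500)      # poisoned samples score higher
thr = select_threshold(train_clean)
print(frr_far(test_clean, test_poison, thr))
```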
**Masking-invariance constraint.** Recall that the masking-invariant constraint \(\mathcal{L}_{\text{MI}}\) is designed to improve the masking invariance of clean samples. Here, we evaluate its impact on the overall performance of MDP. Specifically, we adjust the weight of \(\mathcal{L}_{\text{MI}}\) in the prompt optimization [39] from 0.25 to 4. For each weight, we set the detection threshold based on 5% FRR allowance on the training set and report its performance on the testing set. As shown in Figure 4, as the weight of \(\mathcal{L}_{\text{MI}}\) varies, the FARs of MDP against most attacks first drop and then gradually increase. This observation may be explained as follows. With an overly small weight, \(\mathcal{L}_{\text{MI}}\) has little effect on improving the masking invariance of clean samples, while overly emphasizing \(\mathcal{L}_{\text{MI}}\) negatively impacts the classification accuracy, resulting in higher FARs. It is thus crucial to properly calibrate the weight of \(\mathcal{L}_{\text{MI}}\) to optimize the performance of MDP. **Few-shot data size.** We further evaluate how the few-shot data size (i.e., shots) influences the performance of MDP. Besides the default shots (\(K\) = 16 per class) used in the previous evaluation, we vary \(K\) from 4 to 64 to build the anchor set and evaluate MDP, in which the FRR allowance on the training set is fixed as 5%. Figure 5 reports the performance of MDP under varying shots \(K\). Observe that its FARs steadily improve as \(K\) increases. Intuitively, with a larger anchor set, MDP quantifies the representational variation of given samples due to random masking more precisely, leading to more accurate detection. Also, notice that \(K\) = 16 is often sufficient for MDP to obtain satisfactory performance.

Figure 3: Performance of MDP on SST-2 with different FRR allowances on the training set; baseline defenses all have FARs above 50% (not shown).

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{**Dataset**} & \multicolumn{5}{c}{**Attack**} \\ \cline{2-7} & \multicolumn{2}{c}{BadNets} & AddSent & LWP & EP & SOS \\ \hline SST-2 & FRR & 5.07 & 5.29 & 5.39 & 5.39 & 5.17 \\ & FAR & 24.89 & 58.37 & 55.50 & 47.82 & 73.28 \\ \hline MR & FRR & 5.40 & 5.05 & 5.45 & 5.15 & 4.60 \\ & FAR & 72.80 & 74.80 & 55.00 & 52.10 & 80.80 \\ \hline CR & FRR & 4.40 & 5.10 & 4.80 & 5.45 & 5.25 \\ & FAR & 83.10 & 75.30 & 73.80 & 52.20 & 56.10 \\ \hline SUBJ & FRR & 5.40 & 4.25 & 4.60 & 4.75 & 5.35 \\ & FAR & 9.80 & 67.40 & 14.90 & 15.70 & 37.90 \\ \hline TREC & FRR & 5.20 & 4.90 & 4.90 & 4.70 & 5.20 \\ & FAR & 75.14 & 71.55 & 43.37 & 70.44 & 26.52 \\ \hline \hline \end{tabular} \end{table} Table 3: Performance of MDP using prediction variance as the masking-sensitivity measure.

**Prompt types.** Finally, we evaluate the impact of prompt types on MDP. Recall that in discrete prompts [22], the tokens in the prompt template are selected from the vocabulary, while in continuous prompts [17], the tokens are pseudo-tokens and optimized in a continuous space. Table 4 evaluates MDP on discrete prompt-based models. Compared with continuous prompts (cf. Table 2), MDP is less effective under discrete prompts. For example, its FAR against BadNets on MR increases by 17%. This may be explained by the fact that continuous prompts entail larger spaces to better optimize the masking invariance constraint. The evaluation suggests that using differentiable, continuous prompts benefits MDP in defending against backdoor attacks.
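As a rough illustration of how the masking-invariance term enters prompt training with a tunable weight, the PyTorch-style fragment below combines a task loss with \(\mathcal{L}_{\rm MI}\) (Eq. 7), here instantiated as a KL divergence between the clean-prompt and masked-prompt distributions; the weight value and function names are assumptions for illustration only, not the authors' training code.

```python
import torch
import torch.nn.functional as F

def masking_invariance_loss(p_masked: torch.Tensor, p_clean: torch.Tensor) -> torch.Tensor:
    """One instantiation of Eq. (7): KL(p_clean || p_masked) averaged over the batch.
    Both inputs are probability distributions over the vocabulary, shape [B, |V|]."""
    return F.kl_div(p_masked.clamp_min(1e-12).log(), p_clean, reduction="batchmean")

def total_loss(task_loss: torch.Tensor,
               p_masked: torch.Tensor,
               p_clean: torch.Tensor,
               weight: float = 1.0) -> torch.Tensor:
    """Task objective plus the weighted masking-invariance constraint
    (weights in the 0.25-4 range are explored in the ablation above)."""
    return task_loss + weight * masking_invariance_loss(p_masked, p_clean)
```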
## 6 Limitations **Other PLMs and NLP tasks.** In evaluation, we assume RoBERTa-large [18] as the victim PLM and sentence classification as the default task. Other PLMs (e.g., GPT-3 [1] and T5 [26]) and NLP tasks (e.g., paraphrases and sentence similarity [9]) may also suffer similar vulnerability. In our future work, we aim to study the backdoor vulnerability of other PLMs and NLP tasks under the prompt-based, few-shot setting and extend MDP to these application scenarios. **Fewer-shot data.** While we evaluate MDP under limited few-shot data (e.g., \(K\) as low as 4), in practice, the available data could be even scarcer (e.g., one- or zero-shot [28, 29]). Given the need of adapting PLMs on fewer-shot data, we aim to improve MDP to address the data-insufficiency limitation towards practical deployment. **Alternative threat models.** We assume that the attacker injects the backdoor into the PLM and the victim user adapts the backdoored model under a prompt-based, few-shot setting. Several concurrent studies propose attacks for prompt-based learning but under different threat models. For instance, BadPrompt [2] injects the backdoor into the prompt and releases the end-to-end model to the victim user, assuming that the user directly uses the model without further tuning. BToP [32] assumes that the attacker knows exactly the discrete prompt template used by the user. We consider extending MDP to such threat models as our ongoing work. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{**Dataset**} & \multicolumn{5}{c}{**Attack**} \\ \cline{2-7} & BadNets & AddSent & LWP & EP & SOS \\ \hline \multirow{2}{*}{SST-2} & FRR & 5.27 & 4.39 & 5.15 & 5.11 & 0.00 \\ & FAR & 5.09 & 19.02 & 18.40 & 10.08 & 0.00 \\ \hline \multirow{2}{*}{MR} & FRR & 5.45 & 4.85 & 5.05 & 5.15 & 5.45 \\ & FAR & 22.60 & 32.80 & 24.20 & 14.50 & 27.80 \\ \hline \multirow{2}{*}{CR} & FRR & 3.80 & 5.30 & 5.45 & 5.15 & 4.45 \\ & FAR & 14.40 & 33.50 & 20.10 & 24.40 & 11.00 \\ \hline \multirow{2}{*}{SUBJ} & FRR & 5.40 & 4.75 & 5.20 & 5.00 & 5.25 \\ & FAR & 11.70 & 31.10 & 12.00 & 32.40 & 25.10 \\ \hline \multirow{2}{*}{TREC} & FRR & 5.00 & 4.10 & 4.50 & 5.30 & 4.50 \\ & FAR & 16.02 & 37.85 & 32.60 & 23.48 & 26.80 \\ \hline \hline \end{tabular} \end{table} Table 4: Performance of MDP on discrete prompt-based models (with 5% FRR allowance on the training set). Figure 4: Performance of MDP on SST-2 under varying weight of the masking-invariance constraint \(\mathcal{L}_{\text{MI}}\). Figure 5: Performance of MDP on SST-2 with varying size of few-shot data (\(K\) samples per class). Conclusion This paper presents a first-of-its-kind defense for pre-trained language models as few-shot learners against textual backdoor attacks. At a high level, we exploit the gap between the sensitivity of clean and poisoned samples to random masking and effectively utilize the few-shot learning data to measure such sensitivity. The evaluation on benchmark datasets shows that our method outperforms baselines in defending against representative attacks, with little impact on the performance of victim models. Our findings shed light on how to enhance the security of pre-trained language models, especially in the prompt-based learning paradigm.
2309.14539
Colorful Borsuk--Ulam theorems and applications
We prove a colorful generalization of the Borsuk--Ulam theorem and derive colorful consequences from it, such as a colorful generalization of the ham sandwich theorem. Even in the uncolored case this specializes to a strengthening of the ham sandwich theorem, which given an additional condition, contains a result of B\'{a}r\'{a}ny, Hubard, and Jer\'{o}nimo on well-separated measures as a special case. We prove a colorful generalization of Fan's antipodal sphere covering theorem, we derive a short proof of Gale's colorful KKM theorem, and we prove a colorful generalization of Brouwer's fixed point theorem. Our results also provide an alternative between Radon-type intersection results and KKM-type covering results. Finally, we prove colorful Borsuk--Ulam theorems for higher symmetry.
Florian Frick, Zoe Wellner
2023-09-25T21:28:39Z
http://arxiv.org/abs/2309.14539v1
# Colorful Borsuk-Ulam Theorems and Applications ###### Abstract. We prove a colorful generalization of the Borsuk-Ulam theorem and derive colorful consequences from it, such as a colorful generalization of the ham sandwich theorem. Even in the uncolored case this specializes to a strengthening of the ham sandwich theorem, which given an additional condition, contains a result of Barany, Hubard, and Jeronimo on well-separated measures as a special case. We prove a colorful generalization of Fan's antipodal sphere covering theorem, we derive a short proof of Gale's colorful KKM theorem, and we prove a colorful generalization of Brouwer's fixed point theorem. Our results also provide an alternative between Radon-type intersection results and KKM-type covering results. Finally, we prove colorful Borsuk-Ulam theorems for higher symmetry. FF and ZW were supported by NSF CAREER Grant DMS 2042428. ## 1. Introduction _Colorful_ (or _rainbow_) results are popular across combinatorics and discrete geometry. These results take the following general form: If sets \(S_{1},\ldots,S_{n}\) have property \(P\), then there is a transversal with property \(P\). Here a _transversal_ of \(S_{1},\ldots,S_{n}\) is a set \(\{s_{1},\ldots,s_{n}\}\) with \(s_{i}\in S_{i}\). For example, Barany's colorful Caratheodory theorem [4] states that if \(S_{1},\ldots,S_{d+1}\subset\mathbb{R}^{d}\) satisfy \(0\in\operatorname{conv}S_{i}\) for all \(i\in[d+1]=\{1,\ldots,d+1\}\), then there is a transversal \(S\) with \(0\in\operatorname{conv}S\). The "non-colorful" case \(S_{1}=\cdots=S_{d+1}\) reduces to Caratheodory's theorem [13] that if \(0\) is in the convex hull of \(S\subset\mathbb{R}^{d}\), then some convex combination of at most \(d+1\) elements of \(S\) is equal to \(0\). Other colorful results in geometry include colorful Helly theorems [4, 23], Gale's colorful generalization of the KKM theorem [21], and colorful versions of Tverberg's theorem [5, 37, 9]. Prominent examples in combinatorics include results on rainbow arithmetic progressions [22, 2], rainbow matching results (such as [1]) and rainbow Ramsey results (see [20] for a survey) among several others. Topological methods have proven to be a powerful tool in attacking combinatorial and discrete-geometric problems [8, 15, 25, 27]. Among the standard techniques are fixed point theorems (an early example is Nash's result on equilibria in non-cooperative games [30]) and equivariant methods such as the Borsuk-Ulam theorem (see for example [27, 10]), which states that any continuous map \(f\colon S^{d}\to\mathbb{R}^{d}\) from the \(d\)-sphere \(S^{d}\), which is _odd_ (i.e., \(f(-x)=-f(x)\) for all \(x\)), has a zero. Here we ask: _Can colorful results be lifted to colorful topological methods?_ For fixed point theorems this is true in quite some generality [31, 18, 19]. There is an abundance of generalizations of the Borsuk-Ulam theorem; see for example [17, 28, 16, 12, 35]. Here we prove a colorful generalization of the Borsuk-Ulam theorem. We introduce one piece of terminology: We say that a matrix \(A\in\mathbb{R}^{d\times d}\) has _rows in intersecting cube facets_ if for any two distinct rows \(a\) and
the corresponding set covering variant (Theorem 6.2). We then prove a colorful generalization of Theorem 6.2, which in the special case \(p=2\) gives our earlier colorful generalization of Fan's set covering result; see Theorem 6.6. Meunier and Su already proved a colorful generalization of Fan's theorem [28], which also exhibits a colorful Borsuk-Ulam phenomenon. Their generalization is different from ours and neither easily implies the other. We will discuss the differences after stating our colorful generalization of Fan's theorem (Theorem 3.3). ## 2. Preliminaries In this section we collect a few definitions used throughout. We refer to Matousek's book [27] for further details. A _simplicial complex_ \(\Sigma\) is a collection of finite sets closed under taking subsets.
We refer to the ground set \(\bigcup_{\sigma\in\Sigma}\sigma\) as the _vertex set_ of \(\Sigma\). Elements \(\sigma\in\Sigma\) are called _faces_; inclusion-maximal faces are _facets_ and two-element faces are _edges_. For a simplicial complex \(\Sigma\) on vertex set \(V\) we denote its _geometric realization_ by \(|\Sigma|=\bigcup_{\sigma\in\Sigma}\operatorname{conv}\{e_{v}\ :\ v\in\sigma\}\subseteq \mathbb{R}^{V}\), where \(e_{v}\) denote the standard basis vectors of \(\mathbb{R}^{V}\) and \(\operatorname{conv}(X)\) denotes the convex hull of the set \(X\). The simplicial complex of all subsets of \([n]=\{1,2,\ldots,n\}\) is the _\((n-1)\)-simplex_ \(\Delta_{n-1}\). For ease of notation we will denote the geometric realization of \(\Delta_{n-1}\) also by \(\Delta_{n-1}\). Observe that \(\Delta_{n-1}\) is the convex hull of the standard basis vectors in \(\mathbb{R}^{n}\). We call \(\Sigma\) a _triangulation_ of \(S^{d}\) if \(\Sigma\) is a simplicial complex whose geometric realization is homeomorphic to \(S^{d}\). Let \(\Sigma\) and \(\Sigma^{\prime}\) be simplicial complexes on vertex sets \(V\) and \(V^{\prime}\), respectively. A map \(\varphi\colon V\to V^{\prime}\) is _simplicial_ if for every \(\sigma\in\Sigma\) we have that \(\varphi(\sigma)\in\Sigma^{\prime}\). In this case we write \(\varphi\colon\Sigma\to\Sigma^{\prime}\). By convex interpolation any simplicial map induces a continuous map \(|\Sigma|\to|\Sigma^{\prime}|\). A triangulation \(\Sigma\) of \(S^{d}\) is _centrally symmetric_ if there is a simplicial map \(\iota\colon\Sigma\to\Sigma\) and a homeomorphism \(h\colon|\Sigma|\to S^{d}\) such that for every \(x\in|\Sigma|\) we have that \(h(\iota(x))=-h(x)\). The smallest centrally symmetric triangulation of \(S^{d}\) is given by the (boundary of the) _crosspolytope_ \(\partial\Diamond_{d+1}\), the simplicial complex on \(V=\{1,\ldots,d+1\}\cup\{-1,\ldots,-(d+1)\}\), where \(\sigma\subseteq V\) is a face of \(\partial\Diamond_{d+1}\) if for every \(i\in\sigma\) we have that \(-i\notin\sigma\). The geometric realization \(|\partial\Diamond_{d+1}|\) can be realized in \(\mathbb{R}^{d+1}\) with convex faces by taking the boundary of the convex hull of \(\{\pm e_{1},\ldots,\pm e_{d+1}\}\). A simplicial map \(\iota\colon\Sigma\to\Sigma\) induces a _\(\mathbb{Z}/p\)-action_ if the \(p\)-fold composition \(\iota^{p}\) is the identity. In this case \(\Sigma\) is a _\(\mathbb{Z}/p\)-equivariant triangulation_. For \(s\in\mathbb{Z}/p\) and \(x\in|\Sigma|\) we write \(s\cdot x\) for \(\iota^{s}(x)\). The \(\mathbb{Z}/p\)-action is _free_ if \(s\cdot x\neq x\) for all \(s\in\mathbb{Z}/p\setminus\{0\}\) and all \(x\in|\Sigma|\). Given two spaces \(X\) and \(Y\) (homeomorphic to simplicial complexes) with \(\mathbb{Z}/p\)-actions, a map \(f\colon X\to Y\) is _\(\mathbb{Z}/p\)-equivariant_ if \(f(s\cdot x)=s\cdot f(x)\) for all \(s\in\mathbb{Z}/p\) and all \(x\in X\). A map \(f\colon S^{d}\to\mathbb{R}^{n}\) is _antipodal_ or _odd_ if \(f(-x)=-f(x)\) for all \(x\in S^{d}\). We reserve the term _map_ for a continuous function. The same definition applies for a map \(f\colon S^{d}\to\mathbb{R}^{n\times m}\) to the space of \((n\times m)\)-matrices. For any such map, we write \(f_{i}\colon S^{d}\to\mathbb{R}^{m}\) for the map to the \(i\)th row of \(f\) and \(f_{ij}\colon S^{d}\to\mathbb{R}\) for the map to the entry in row \(i\) and column \(j\) of \(f\).
The _degree_\(\deg f\) of a map \(f\colon S^{d}\to S^{d}\) is the integer \(k\) such that the induced map on top homology \(f_{*}\colon H_{d}(S^{d})\cong\mathbb{Z}\to H_{d}(S^{d})\cong\mathbb{Z}\) is multiplication by \(k\). The Borsuk-Ulam theorem [11] can be stated in various forms; here we collect three such statements: **Theorem 2.1** (Borsuk-Ulam theorem).: (a) _Any odd map_ \(f\colon S^{d}\to\mathbb{R}^{d}\) _has a zero._ (b) _Any odd map_ \(f\colon S^{d}\to S^{d}\) _has odd degree._ (c) _For any odd map_ \(f\colon S^{d}\to\mathbb{R}^{d+1}\) _there is an_ \(x\in S^{d}\) _such that all coordinates of_ \(f(x)\) _are the same._ Item (b) implies item (a), which is easily seen to be equivalent to the statement that any odd map \(S^{d-1}\to S^{d-1}\) has nonzero degree. For item (c) observe that for the diagonal \(D=\{x\in\mathbb{R}^{d+1}\ :\ x_{1}=x_{2}=\cdots=x_{d+1}\}\) the composition of \(f\colon S^{d}\to\mathbb{R}^{d+1}\) with the projection \(\mathbb{R}^{d+1}\to\mathbb{R}^{d+1}/D\cong\mathbb{R}^{d}\) is an odd map. This composition has a zero if and only if there is an \(x\in S^{d}\) with \(f(x)\in D\). We will use one immediate corollary of the Borsuk-Ulam theorem, which we state below. Any non-surjective map \(S^{d}\to S^{d}\) has degree zero; thus we get: **Corollary 2.2**.: _Any odd map \(f\colon S^{d}\to S^{d}\) is surjective._ The Borsuk-Ulam theorem has been generalized to \(G\)-equivariant maps beyond \(G=\mathbb{Z}/2\); see for example Dold [16]. Here we will need the following (see [26, Cor. 2.2]): **Lemma 2.3**.: _For any free \(\mathbb{Z}/p\)-action on \(S^{d}\), any \(\mathbb{Z}/p\)-equivariant map \(f\colon S^{d}\to S^{d}\) has degree \(1\bmod p\)._ Let \(\Sigma\) be a simplicial complex on vertex set \(V\). The _deleted join_\(\Sigma_{\Delta}^{\star 2}\) of \(\Sigma\) is a simplicial complex on vertex set \(V\times\{1,2\}\), where \((\sigma_{1}\times\{1\})\cup(\sigma_{2}\times\{2\})\) is a face of \(\Sigma_{\Delta}^{\star 2}\) if \(\sigma_{1}\) and \(\sigma_{2}\) are faces of \(\Sigma\) such that \(\sigma_{1}\cap\sigma_{2}=\varnothing\). The deleted join of the \(n\)-simplex is \((\Delta_{n})_{\Delta}^{\star 2}=\partial\Diamond_{n+1}\), the boundary of the crosspolytope. Notice that any point \(z\in|(\Delta_{n})_{\Delta}^{\star 2}|\) in the geometric realization of the boundary of the crosspolytope is of the form \(\lambda x+(1-\lambda)y\) for \(\lambda\in[0,1]\) and \(x,y\in|\Delta_{n}|\) that are contained in vertex-disjoint faces. This is true more generally, for points in the geometric realization of the deleted join of any simplicial complex. We suppress bars, and denote the geometric realization of the deleted join of the simplex by \((\Delta_{n})_{\Delta}^{\star 2}\) for ease of notation. Thus the notation \(\lambda x+(1-\lambda)y\in(\Delta_{n})_{\Delta}^{\star 2}\) refers to the point in \(|(\Delta_{n})_{\Delta}^{\star 2}|\) determined by \(\lambda\in[0,1]\) and \(x,y\in|\Delta_{n}|\) in vertex-disjoint faces. The \(p\)_-fold join_\(\Sigma^{\star p}\) of \(\Sigma\) is a simplicial complex on vertex set \(V\times\{1,2,\ldots,p\}\), where \((\sigma_{1}\times\{1\})\cup(\sigma_{2}\times\{2\})\cup\cdots\cup(\sigma_{p} \times\{p\})\) is a face of \(\Sigma^{\star p}\) if for all \(i\), \(\sigma_{i}\) is a face of \(\Sigma\). If we additionally require that \(\sigma_{i}\cap\sigma_{j}=\varnothing\) when \(i\neq j\) we get the \(p\)_-fold deleted join_\(\Sigma_{\Delta}^{\star p}\). The \(p\)-fold join of \(S^{d}\) is homeomorphic to \(S^{p(d+1)-1}\).
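The identification \((\Delta_{n})_{\Delta}^{\star 2}=\partial\Diamond_{n+1}\) can also be checked by brute force; the following short sketch (our own illustration, not from the text) matches the faces of the deleted join, that is, pairs of disjoint subsets of \([n+1]\), with the faces of the crosspolytope boundary via \((v,1)\mapsto +v\) and \((v,2)\mapsto -v\).

```python
from itertools import combinations

# Illustrative check that (Delta_n)_Delta^{*2} = bd Diamond_{n+1}: a face of
# the deleted join is a pair of disjoint subsets (sigma1, sigma2) of [n+1];
# sending (v,1) -> +v and (v,2) -> -v turns it into a subset of
# {+-1,...,+-(n+1)} with no antipodal pair, and every such subset arises.
n = 2
ground = list(range(1, n + 2))

deleted_join = set()
for k1 in range(n + 2):
    for s1 in combinations(ground, k1):
        rest = [v for v in ground if v not in s1]
        for k2 in range(len(rest) + 1):
            for s2 in combinations(rest, k2):
                face = frozenset(list(s1) + [-v for v in s2])
                if face:
                    deleted_join.add(face)

crosspolytope = set()
for k in range(1, n + 2):
    for sigma in combinations(ground + [-v for v in ground], k):
        if all(-v not in sigma for v in sigma):
            crosspolytope.add(frozenset(sigma))

print(deleted_join == crosspolytope)    # True
```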
The _barycentric subdivision_\(\Sigma^{\prime}\) of \(\Sigma\) is the simplicial complex on vertex set \(\Sigma\) with \(\{\sigma_{1},\ldots,\sigma_{k}\}\) is a face of \(\Sigma^{\prime}\) if \(\sigma_{1}\subset\sigma_{2}\subset\cdots\subset\sigma_{k}\). Figure 1. A filled triangle and its barycentric subdivision, which has a vertex for every face of the triangle. ## 3. Proof of the colorful Borsuk-Ulam theorem In 1952 Ky Fan published his "combinatorial lemma" generalizing Tucker's lemma [17], which is a discretized version of the Borsuk-Ulam theorem. Ky Fan's lemma applies to iterated barycentric subdivisions of the boundary of the crosspolytope and states that if the vertices of such a subdivision are labelled with the \(2m\) numbers \(\{\pm 1,\pm 2,\ldots,\pm m\}\) such that labels of antipodal vertices sum to zero, while labels of vertices connected by an edge do not sum to zero, then the number of facets labelled \(\{k_{1},-k_{2},k_{3},\ldots,(-1)^{d}k_{d+1}\}\), where \(1\leq k_{1}<k_{2}<\cdots<k_{d+1}\), is odd. Below we give a short proof of a version of Ky Fan's lemma that applies in greater generality, that is, for any centrally symmetric triangulation, while having a slightly weaker conclusion. In fact, with more care one could derive a generalization of Ky Fan's lemma from our setup below, but we will not need this generality. The proof we present here is not new; see De Loera, Goaoc, Meunier and Mustafa [14]. Further note that generalizations of this theorem have been proven in other settings; for example, Musin proved a generalization of Fan's lemma for manifolds [29] and Zivaljevic proved a generalization for oriented matroids [34]. **Theorem 3.1**.: _Let \(\Sigma\) be a centrally symmetric triangulation of \(S^{d}\) with vertex set \(V\). Let \(\ell\colon V\to\{\pm 1,\pm 2,\ldots,\pm(d+1)\}\) be a map with \(\ell(-v)=-\ell(v)\) for all \(v\in V\). Fix signs \(s_{1},\ldots,s_{d+1}\in\{-1,+1\}\). Then either \(\Sigma\) has an edge \(e\) with \(\ell(e)=\{-j,+j\}\) for some \(j\in[d+1]\) or \(\Sigma\) has a facet \(\sigma\) with \(\ell(\sigma)=\{s_{1}\cdot 1,\ldots,s_{d+1}\cdot(d+1)\}\)._ Proof.: Assume that \(\Sigma\) has no edge \(e\) with \(\ell(e)=\{-j,+j\}\) for \(j\in[d+1]\). Then the map \(\ell\) induces a simplicial map \(L\colon\Sigma\to\partial\Diamond_{d+1}\) to the boundary of the crosspolytope \(\partial\Diamond_{d+1}\) by identifying label \(j\) with standard basis vector \(e_{j}\), and similarly identifying \(-j\) with \(-e_{j}\). This map is odd, and thus \(L\) is surjective by Corollary 2.2. In particular, there is a facet \(\sigma\) that \(L\) maps to the facet \(\{s_{1}\cdot 1,\ldots,s_{d+1}\cdot(d+1)\}\) in \(\partial\Diamond_{d+1}\), which finishes the proof. By taking limits, we derive a version of the theorem above for set coverings of the sphere instead of labellings of triangulations. Fan already derived such a set covering variant in his original paper. We include a proof for completeness. **Theorem 3.2**.: _Let \(A_{1},\ldots,A_{d+1}\subseteq S^{d}\) be closed sets such that \(S^{d}=\bigcup_{i}A_{i}\cup\bigcup_{i}(-A_{i})\). Fix signs \(s_{1},\ldots,s_{d+1}\in\{-1,+1\}\). Either there is an \(i\in[d+1]\) such that \(A_{i}\cap(-A_{i})\neq\varnothing\) or \(\bigcap_{i}s_{i}A_{i}\neq\varnothing\)._ Proof.: Assume that \(A_{i}\cap(-A_{i})=\varnothing\) for all \(i\). Let \(\varepsilon>0\) be sufficiently small that the distance between \(A_{i}\) and \(-A_{i}\) is larger than \(\varepsilon\) for all \(i\). 
Let \(T_{\varepsilon}\) be a centrally symmetric triangulation of \(S^{d}\) such that each facet has diameter less than \(\varepsilon\). This can be achieved by taking repeated barycentric subdivisions of a given centrally symmetric triangulation. Let \(\ell\colon V(T_{\varepsilon})\to\{\pm 1,\ldots,\pm(d+1)\}\) be a labelling of the vertices of \(T_{\varepsilon}\) such that \(\ell(v)=i\) only if \(v\in A_{i}\), and such that \(\ell(-v)=-\ell(v)\). Thus \(\ell(v)=-i\) only if \(v\in(-A_{i})\). By our choice of \(\varepsilon\), the sum of labels of any edge is non-zero. By Theorem 3.1 there is a facet with labels \(s_{1}\cdot 1,\ldots,s_{d+1}\cdot(d+1)\). Let \(x_{\varepsilon}\) be the barycenter of some such facet. As \(\varepsilon\) approaches zero, by compactness of \(S^{d}\), the \(x_{\varepsilon}\) have an accumulation point \(x\). Since the \(A_{i}\) are closed, we have that \(x\in\bigcap_{i}s_{i}A_{i}\). We will now derive a colorful set covering version of Fan's lemma by considering the barycentric subdivision of fine triangulations. We then label each vertex according to the \(j\)th set covering if the dimension of face it subdivides is \(j\). This idea is not new: It was originally used by Su [33] to derive a colorful version of Sperner's lemma in order to establish results on rental harmony. Recently this idea was employed in [19] to prove colorful versions of set covering results for polytopes, such as a colorful KKMS and colorful Komiya's theorem. The new ingredient there was the application to set covering (instead of vertex labelling) results. The following is a colorful generalization of Fan's sphere covering result: **Theorem 3.3**.: _For \(j\in[d+1]\) let \(A_{1}^{(j)},\dots,A_{d+1}^{(j)}\subseteq S^{d}\) be closed sets such that \(S^{d}=\bigcup_{i}A_{i}^{(j)}\cup\bigcup_{i}(-A_{i}^{(j)})\) for each \(j\). Suppose \(A_{i}^{(j)}\cap(-A_{i}^{(\ell)})=\varnothing\) for all \(i\) and for all \(j\neq\ell\). Fix signs \(s_{1},\dots,s_{d+1}\in\{-1,+1\}\). Then there is a permutation \(\pi\) of \([d+1]\) such that \(\bigcap_{i}s_{i}A_{i}^{(\pi(i))}\neq\varnothing\)._ Proof.: As before, let \(T_{\varepsilon}\) be a triangulation, where every face has diameter at most \(\varepsilon>0\). Here \(\varepsilon\) is chosen sufficiently small so that no face intersects both \(A_{i}^{(j)}\) and \(-A_{i}^{(\ell)}\) for \(j\neq\ell\) and any \(i\). Let \(T_{\varepsilon}^{\prime}\) denote the barycentric subdivision of \(T_{\varepsilon}\). Similar to the proof of Theorem 3.2, let \(\ell\colon V(T_{\varepsilon}^{\prime})\to\{\pm 1,\dots,\pm(d+1)\}\) be a labelling of the vertices of \(T_{\varepsilon}^{\prime}\) such that \(\ell(v)=i\) only if \(v\in A_{i}^{(k)}\) and \(v\) subdivides a \((k-1)\)-dimensional face of \(T_{\varepsilon}\). We may assume that \(\ell(-v)=-\ell(v)\). By our choice of \(\varepsilon\), the sum of labels of any edge is non-zero. By Theorem 3.1 there is a facet with labels \(s_{1}\cdot 1,\dots,s_{d+1}\cdot(d+1)\). Let \(x_{\varepsilon}\) be the barycenter of some such facet. As \(\varepsilon\) approaches zero, by compactness of \(S^{d}\), the \(x_{\varepsilon}\) have an accumulation point \(x\). For each small \(\varepsilon>0\), let \(\pi_{\varepsilon}\) be the permutation of \([d+1]\) with \(\pi_{\varepsilon}(i)=j\) if the (unique) vertex of the facet of \(T_{\varepsilon}^{\prime}\) that contains \(x_{\varepsilon}\) and subdivides a face of dimension \(j-1\) has label \(s_{i}\cdot i\). (If \(x_{\varepsilon}\) is in multiple facets, we choose one arbitrarily.) 
Since there are only finitely many permutations of \([d+1]\), we can choose a sequence of \(\delta_{1},\delta_{2},\dots\) with \(\delta_{n}\) converges to \(0\) and all permutations \(\pi_{\delta_{n}}\) are the same. Call this permutation \(\pi\). Since the \(A_{i}^{(j)}\) are closed, we have that \(x\in\bigcap_{i}s_{i}A_{i}^{(\pi(i))}\). We compare this colorful generalization of Fan's theorem to the result of Meunier and Su [28], who proved a multilabelled Fan's theorem. We will translate their result to a set covering result by taking limits to make a direct comparison with Theorem 3.3. **Theorem 3.4** (Meunier and Su [28]).: _Let \(m\geq 1\) and \(d\geq 1\) be integers, and let \(d_{1},\dots,d_{m}\) be non-negative integers with \(d_{1}+\dots+d_{m}=d\). For \(j\in[m]\) let \(A_{1}^{(j)},\dots,A_{d+1}^{(j)}\subseteq S^{d}\) be closed sets such that \(S^{d}=\bigcup_{i}A_{i}^{(j)}\cup\bigcup_{i}(-A_{i}^{(j)})\) for each \(j\). Suppose \(A_{i}^{(j)}\cap(-A_{i}^{(j)})=\varnothing\) for all \(i,j\). Then there are strictly increasing maps \(f_{j}\colon[d_{j}+1]\to[d+1]\) and signs \(s_{j}\in\{-1,+1\}\) for \(j\in[m]\) such that_ \[\bigcap_{j=1}^{m}\bigcap_{i=1}^{d_{j}+1}s_{j}\cdot(-1)^{i}A_{f_{j}(i)}^{(j)} \neq\varnothing.\] Think of the sets \(A_{i}^{(j)}\) recorded in a matrix, with set \(A_{i}^{(j)}\) in the \(j\)th row and \(i\)th column. Theorem 3.4 has no assumptions regarding intersections between sets in different rows (only on sets in the same row) and the conclusion gives an intersection among sets (with alternating signs) that form a row-transversal. Theorem 3.3 has no assumption regarding intersections of sets in the same row (only distinct rows) and the conclusion gives an intersection among sets (with prescribed signs) that form a row and column transversal. Proof of Theorem 1.1.: Define the following sets: \[A_{i}^{(j)}=\left\{x\in S^{d}\mid f(x)_{ji}=|f(x)_{ji}|\geq|f(x)_{j\alpha}|\text { for all }\alpha\in[d+1]\right\},\] \[-A_{i}^{(j)}=\left\{x\in S^{d}\mid\ -f(x)_{ji}=|f(x)_{ji}|\geq|f(x)_{j\alpha}| \text{ for all }\alpha\in[d+1]\right\}.\] Note that for each \(j\), the collection of sets \(A_{i}^{(j)}\) will satisfy \(S^{d}=\bigcup_{i}A_{i}^{(j)}\cup\bigcup_{i}(-A_{i}^{(j)})\) since for every \(x\in S^{d}\) the maximal entry in absolute value in the \(j\)th row of \(f(x)\) must be achieved somewhere. If \(A_{i}^{(j)}\cap(-A_{i}^{(\ell)})\neq\varnothing\) for some \(i\) and for some \(j\neq\ell\) then \(f(x)_{ji}=|f(x)_{ji}|\geq|f(x)_{j\alpha}|\) and \(-f(x)_{\ell i}=|f(x)_{\ell i}|\geq|f(x)_{\ell\alpha}|\) for all \(\alpha\in[d+1]\). Thus \(f(x)\) does not have rows in intersecting cube facets. If \(A_{i}^{(j)}\cap(-A_{i}^{(\ell)})=\varnothing\) for all \(i\) and for all \(j\neq\ell\) apply Theorem 3.3 with all signs \(s_{i}=+1\), to get that there is a permutation \(\pi\) of \([d+1]\) such that \(\bigcap_{i}A_{i}^{(\pi(i))}\neq\varnothing\). Then for \(x\in\bigcap_{i}A_{i}^{(\pi(i))}\) we have that \(f(x)_{\pi(i)i}=|f(x)_{\pi(i)i}|\geq|f(x)_{\pi(i)j}|\) for all \(i,j\in[d+1]\). ## 4. Radon-KKM alternative and the colorful KKM theorem Recall that \(\Delta_{d}=\{x\in\mathbb{R}^{d+1}\ :\ \sum_{i}x_{i}=1,\ x_{i}\geq 0\ \forall i\in[d+1]\}\) denotes the \(d\)_-simplex_. For a subset \(J\subseteq[d+1]\) the set \(\Delta_{d}^{J}=\{x\in\Delta_{d}\ :\ x_{j}=0\ \forall j\notin J\}\) is a _face_ of \(\Delta_{d}\). If \(J=[d+1]\setminus\{i\}\), then we call \(\Delta_{d}^{J}\) the \(ith\)_facet_ of \(\Delta_{d}\). 
For \(f\colon\Delta_{d}\to\mathbb{R}^{n}\), a partition \(J\sqcup J^{\prime}\) of \([d+1]\) with \(f(\Delta_{d}^{J})\cap f(\Delta_{d}^{J^{\prime}})\neq\varnothing\) is a _Radon partition_ for \(f\). The following is a "discretized" variant of the Borsuk-Ulam theorem: **Theorem 4.1** (Topological Radon theorem - Bajmoczy and Barany [3]).: _Let \(f\colon\Delta_{d}\to\mathbb{R}^{d-1}\) be continuous. Then \(f\) has a Radon partition._ It is no loss of generality to state the topological Radon theorem for maps \(f\colon\Delta_{d}\to\Delta_{d-1}\) or \(f\colon\Delta_{d}\to\partial\Delta_{d}\cong S^{d-1}\), where \(\partial\Delta_{d}\) denotes the boundary of \(\Delta_{d}\). The topological Radon theorem is derived from the Borsuk-Ulam theorem, and in fact can be seen as a discretized version of it. Before exploring the colorful extension implied by our main result, we first investigate the non-colorful version, which will apply to maps \(f\colon\Delta_{d}\to\Delta_{d}\). We will show that the topological Radon theorem, in a sense, is dual to the KKM theorem: **Theorem 4.2** (KKM theorem [24]).: _Let \(A_{1},\ldots,A_{d+1}\) be an open cover of \(\Delta_{d}\) such that for every \(J\subseteq[d+1]\) we have that \(\Delta_{d}^{J}\subseteq\bigcup_{j\in J}A_{j}\). Then \(\bigcap A_{i}\neq\varnothing\)._ We say that a finite sequence of continuous maps \(\alpha_{1},\ldots,\alpha_{n}\colon\Delta_{d}\to[0,1]\) is a _partition of unity_ if \(\sum_{i}\alpha_{i}(x)=1\) for all \(x\in\Delta_{d}\). Note that this means that \(\alpha=(\alpha_{1},\ldots,\alpha_{n})\) is a map to the \((n-1)\)-simplex \(\Delta_{n-1}\). A partition of unity \(\alpha_{1},\ldots,\alpha_{n}\colon\Delta_{d}\to[0,1]\) is _subordinate_ to an open cover \(A_{1},\ldots,A_{n}\) of \(\Delta_{d}\) if \(\alpha_{i}(x)>0\) implies \(x\in A_{i}\). Any open cover \(A_{1},\ldots,A_{n}\) of \(\Delta_{d}\) has a partition of unity subordinate to it, since the simplex is locally compact and Hausdorff. Conversely, any continuous \(\alpha\colon\Delta_{d}\to\Delta_{n-1}\) gives an open cover \(A_{i}=\{x\in\Delta_{d}\ :\ \alpha_{i}(x)>0\}\) of \(\Delta_{d}\). Having a Radon partition is a degeneracy of a map \(\alpha\colon\Delta_{d}\to\Delta_{d}\), while for a cover \(A_{1},\ldots,A_{d+1}\) having empty intersection, \(\bigcap A_{i}=\varnothing\), is a degeneracy. Theorem 4.3 shows that these degeneracies are dual to one another. **Theorem 4.3**.: _Let \(A_{1},\ldots,A_{d+1}\) be an open cover of \(\Delta_{d}\) and let \(\alpha_{1},\ldots,\alpha_{d+1}\colon\Delta_{d}\to[0,1]\) be a partition of unity subordinate to the cover \(\{A_{1},\ldots,A_{d+1}\}\). Let \(\alpha=(\alpha_{1},\ldots,\alpha_{d+1})\colon\Delta_{d}\to\Delta_{d}\). Then \(\alpha\) has a Radon partition or \(\bigcap A_{i}\neq\varnothing\)._ Proof.: Let \(F\colon(\Delta_{d})_{\Delta}^{*2}\to\mathbb{R}^{d+1},\ \lambda x+(1-\lambda)y \mapsto\lambda\alpha(x)-(1-\lambda)\alpha(y)\). Since \((\Delta_{d})_{\Delta}^{*2}\) is homeomorphic to \(S^{d}\), by the Borsuk-Ulam theorem (Theorem 2.1(c)), \(F\) must hit the diagonal \(D=\{(z_{1},\ldots,z_{d+1})\in\mathbb{R}^{d+1}\ :\ z_{1}=z_{2}=\cdots=z_{d+1}\}\). If \(F(\lambda x+(1-\lambda)y)=0\) then \(\lambda\alpha(x)=(1-\lambda)\alpha(y)\) and also \(0=\sum\lambda\alpha_{i}(x)-(1-\lambda)\alpha_{i}(y)=\lambda\sum\alpha_{i}(x)-(1 -\lambda)\sum\alpha_{i}(y)=2\lambda-1\), which implies \(\lambda=\frac{1}{2}\) and thus \(\alpha(x)=\alpha(y)\). 
If \(F(\lambda x+(1-\lambda)y)\in D\setminus\{0\}\), then \(\alpha_{i}(x)>0\) for all \(i\in[d+1]\) or \(\alpha_{i}(y)>0\) for all \(i\in[d+1]\) depending on whether the coordinates of \(F(\lambda x+(1-\lambda)y)\) are positive or negative. Thus either \(x\in\bigcap A_{i}\) or \(y\in\bigcap A_{i}\). Theorem 4.3 easily implies both the topological Radon theorem and the KKM theorem. To derive the topological Radon theorem as a consequence, note that for a map \(\alpha\colon\Delta_{d}\to\Delta_{d-1}\subset\Delta_{d}\), the coordinate functions \(\alpha_{1},\dots,\alpha_{d+1}\) are subordinate to the open cover \(A_{i}=\{x\in\Delta_{d}\ :\ \alpha_{i}(x)>0\}\), where \(A_{d+1}=\varnothing\). Thus \(\bigcap A_{i}=\varnothing\) and \(\alpha\) has a Radon partition by Theorem 4.3. We can regard Theorem 4.3 as a natural strengthening of the KKM theorem, where the KKM condition (that \(\Delta_{d}^{J}\subseteq\bigcup_{j\in J}A_{j}\) for all \(J\subseteq[d+1]\)) is replaced by the condition that a partition of unity subordinate to the cover avoids a Radon partition, which is a weaker requirement. We can now prove a colorful generalization: **Theorem 4.4**.: _Let \(\alpha^{(1)},\dots,\alpha^{(d+1)}\colon\Delta_{d}\to\Delta_{d}\) be continuous maps such that for every \(J\subseteq[d+1]\) and for every \(i\in[d+1]\) we have that \(\alpha^{(i)}(\Delta_{d}^{J})\subseteq\Delta_{d}^{J}\). Then there is an \(x\in\Delta_{d}\) and a permutation \(\pi\) of \([d+1]\) such that \(\alpha^{(i)}_{\pi(i)}(x)\geq\alpha^{(i)}_{j}(x)\) for all \(j\in[d+1]\)._ Proof.: Let \(A\colon\Delta_{d}\to\mathbb{R}^{(d+1)\times(d+1)},\ x\mapsto(\alpha^{(i)}_{j} (x))_{i,j}\). The condition \(\alpha^{(i)}(\Delta_{d}^{J})\subseteq\Delta_{d}^{J}\) implies that for \(x\in\Delta_{d}^{J}\) the matrix \(A(x)\) has non-zero entries only in the columns corresponding to \(j\in J\). Let \[F\colon(\Delta_{d})_{\Delta}^{*2}\to\mathbb{R}^{(d+1)\times(d+1)},\ \lambda x+(1- \lambda)y\mapsto\lambda A(x)-(1-\lambda)A(y).\] We observe that no column of \(F(\lambda x+(1-\lambda)y)\) has both positive and negative entries: Indeed, if \(\lambda=1\) all entries of \(F(\lambda x+(1-\lambda)y)\) are non-negative; if \(\lambda=0\) all entries of \(F(\lambda x+(1-\lambda)y)\) are non-positive; and if \(0<\lambda<1\) then \(x\) and \(y\) are in proper faces of \(\Delta_{d}\), which are disjoint by definition of deleted join, say \(x\in\Delta_{d}^{J}\) and \(y\in\Delta_{d}^{[d+1]\setminus J}\). Then since \(\alpha^{(i)}(\Delta_{d}^{J})\subseteq\Delta_{d}^{J}\), columns of \(F(\lambda x+(1-\lambda)y)\) corresponding to \(j\in J\) are non-negative and all other columns are non-positive. Since each column of \(F(\lambda x+(1-\lambda)y)\) is either entirely non-negative or entirely non-positive, \(F(\lambda x+(1-\lambda)y)\) has rows in intersecting cube facets. By Theorem 1.1 there is a point \(\lambda x+(1-\lambda)y\in(\Delta_{d})_{\Delta}^{*2}\) and a permutation \(\pi\) of \([d+1]\) such that \(F(\lambda x+(1-\lambda)y)_{\pi(i)i}=\lambda\alpha^{\pi(i)}_{i}(x)-(1-\lambda) \alpha^{\pi(i)}_{i}(y)\) is non-negative and \(|F(\lambda x+(1-\lambda)y)_{\pi(i)i}|\geq|F(\lambda x+(1-\lambda)y)_{\pi(i)j}|\) for all \(i,j\in[d+1]\). If some \(F(\lambda x+(1-\lambda)y)_{\pi(i)i}\) were zero, then since these entries maximize their respective rows, the entire row would be zero, which is impossible. Thus all \(F(\lambda x+(1-\lambda)y)_{\pi(i)i}\) are positive. In particular, since the sign of columns is constant, all columns are non-negative. 
Notice that it is also not possible for an entire column to be zero, since one entry in this column maximizes its row in absolute value and this row is not identically zero. By the above this can only be the case when \(\lambda=1\). Thus \(F(\lambda x+(1-\lambda)y)_{\pi(i)i}=\alpha^{\pi(i)}_{i}(x)>0\), and \(|F(\lambda x+(1-\lambda)y)_{\pi(i)i}|\geq|F(\lambda x+(1-\lambda)y)_{\pi(i)j}|\) for all \(i,j\in[d+1]\) gives that \(\alpha^{\pi(i)}_{i}(x)\geq\alpha^{\pi(i)}_{j}(x)\) for all \(i,j\in[d+1]\). We derive the colorful KKM theorem, originally due to Gale [21], as an immediate consequence of Theorem 4.4: **Corollary 4.5** (colorful KKM theorem).: _For each \(i\in[d+1]\) let \(A_{1}^{(i)},\dots,A_{d+1}^{(i)}\) be open covers of \(\Delta_{d}\) such that for every \(J\subseteq[d+1]\) and for every \(i\in[d+1]\) we have that \(\Delta_{d}^{J}\subseteq\bigcup_{j\in J}A_{j}^{(i)}\). Then there is a permutation \(\pi\) of \([d+1]\) such that \(\bigcap A_{\pi(i)}^{(i)}\neq\varnothing\)._ Proof.: Let \(\alpha^{(i)}\colon\Delta_{d}\to\Delta_{d}\) be a partition of unity subordinate to the cover \(\{A_{1}^{(i)},\dots,A_{d+1}^{(i)}\}\). Then for every \(J\subseteq[d+1]\) and for every \(i\in[d+1]\) we have that \(\alpha^{(i)}(\Delta_{d}^{J})\subseteq\Delta_{d}^{J}\), since \(\Delta_{d}^{J}\subseteq\bigcup_{j\in J}A_{j}^{(i)}\). Thus by Theorem 4.4 there is an \(x\in\Delta_{d}\) and a permutation \(\pi\) of \([d+1]\) such that \(\alpha_{\pi(i)}^{(i)}(x)\geq\alpha_{j}^{(i)}(x)\) for all \(j\in[d+1]\). In particular, \(\alpha_{\pi(i)}^{(i)}(x)>0\) and thus \(x\in\bigcap A_{\pi(i)}^{(i)}\). **Remark 4.6**.: In the same way that the Borsuk-Ulam theorem strengthens Brouwer's fixed point theorem, Theorem 4.4 and Corollary 4.5 exhibit the colorful Borsuk-Ulam theorem as a strengthening of Gale's colorful KKM theorem. Here it is interesting that in the proof of Theorem 4.4 we used that the columns of any matrix in the image of the map \(F\) are either non-negative or non-positive. This is a much stronger condition than is necessary for the application of Theorem 1.1. As a consequence of Corollary 4.5 we can state a colorful Brouwer's fixed point theorem that for \(d+1\) maps \(f_{i}\colon\Delta_{d}\to\Delta_{d}\) asserts the existence of a point \(x\in\Delta_{d}\) and a set of inequalities that in the case \(f_{1}=f_{2}=\cdots=f_{d+1}\) specialize to \(x\) is a fixed point. We introduce one piece of terminology: Let \(S(d)\subseteq\mathbb{R}^{d\times d}\) be the set of _stochastic matrices_, that is, \(A\in S(d)\) if all entries of \(A\) are non-negative and every row sums to one. Thus stochastic matrices are those \((d\times d)\)-matrices, where every row vector belongs to the \((d-1)\)-simplex \(\Delta_{d-1}\). **Theorem 4.7**.: _Let \(f\colon\Delta_{d}\to S(d+1)\) be continuous. Then there is an \(x\in\Delta_{d}\) and a permutation \(\pi\) of \([d+1]\) such that \(f_{i\pi(i)}(x)\leq x_{\pi(i)}\) for all \(i\in[d+1]\)._ Proof.: Let \(A_{j}^{(i)}=\{x\in\Delta_{d}\ :\ f_{ij}(x)\leq x_{j}\}\). These sets are closed by continuity of \(f\). Let \(J\subseteq[d+1]\) be some non-empty set. Then for \(x\in\Delta_{d}^{J}\) we have that \(\sum_{j\in J}x_{j}=1\) and \(\sum_{j\in J}f_{ij}(x)\leq 1\) for every \(i\in[d+1]\). This implies that for some \(j\in J\) we have that \(f_{ij}(x)\leq x_{j}\) and thus \(x\in A_{j}^{(i)}\). This shows that \(\Delta_{d}^{J}\subseteq\bigcup_{j\in J}A_{j}^{(i)}\).
A standard approximation argument shows that Corollary 4.5 also holds for collections of closed sets, and thus we get that there is a permutation \(\pi\) of \([d+1]\) such that \(\bigcap A_{\pi(i)}^{(i)}\neq\varnothing\). Any \(x\in\bigcap A_{\pi(i)}^{(i)}\) satisfies the desired set of inequalities. This is indeed a colorful generalization of Brouwer's fixed point theorem. If every row of \(f\colon\Delta_{d}\to S(d)\) is equal to \(h\colon\Delta_{d}\to\Delta_{d}\), then Theorem 4.7 asserts the existence of \(x\in\Delta_{d}\) with \(h_{i}(x)\leq x_{i}\) for all \(i\in[d+1]\). Since \(\sum_{i}h_{i}(x)=1=\sum_{i}x_{i}\), this implies \(h(x)=x\). ## 5. The colorful ham sandwich theorem We first recall the classical Ham Sandwich theorem, conjectured by Steinhaus and proved by Banach; see [7]. Here _hyperplane_ refers to an affine subspace of codimension one. We will think of every hyperplane \(H\subseteq\mathbb{R}^{d}\) as coming with a fixed orientation so that its positive halfspace \(H^{+}\) is well-defined (similarly, its negative halfspace \(H^{-}\)), that is, for \(H=\{x\in\mathbb{R}^{d}\ :\ \langle x,z\rangle=b\}\) for \(z\in\mathbb{R}^{d}\setminus\{0\}\) and \(b\geq 0\), we let \(H^{+}=\{x\in\mathbb{R}^{d}\ :\ \langle x,z\rangle\geq b\}\) and \(H^{-}=\{x\in\mathbb{R}^{d}\ :\ \langle x,z\rangle\leq b\}\). **Theorem 5.1** (Ham Sandwich theorem).: _Let \(\mu_{1},\mu_{2},\ldots\mu_{d}\) be Borel probability measures on \(\mathbb{R}^{d}\) such that for every hyperplane \(H\) we have that \(\mu_{i}(H)=0\) for all \(i\in[d]\). Then there is a hyperplane \(H\) such that \(\mu_{i}(H^{+})=\frac{1}{2}=\mu_{i}(H^{-})\) for all \(i\in[d]\)._ A colorful version of Theorem 5.1 will take several families of Borel probability measures as input and guarantee the existence of a hyperplane that separates the measures in a certain way. To be a true colorful version, such a result should specialize to Theorem 5.1 if all families of Borel measures are the same. We will discuss two natural attempts at formulating a colorful Ham Sandwich theorem, for simplicity for \(d=2\). Given two families of Borel probability measures \(\sigma_{1}=\left\{r_{1},g_{1}\right\},\sigma_{2}=\left\{r_{2},g_{2}\right\}\) in \(\mathbb{R}^{2}\), is there a line \(H\) such that \(r_{1}(H^{+})\geq\frac{1}{2},r_{2}(H^{-})\geq\frac{1}{2}\) and \(g_{1}(H^{-})\geq\frac{1}{2},g_{2}(H^{+})\geq\frac{1}{2}\)? For \(r_{1}=r_{2}\) and \(g_{1}=g_{2}\) this would reduce to the usual Ham Sandwich theorem. However, this proposed colorful version is false: Figure 2 shows two families of two Borel measures (red and green) distributed along the parabola such that no line \(H\) as above exists. Since this proposed colorful generalization of the Ham Sandwich theorem has a conclusion that is too strong to be true, a refined attempt at arriving at a colorful version might allow us to switch the roles of \(g_{2}\) and \(r_{2}\) in the second family of Borel measures. That is, given two families of Borel probability measures \(\sigma_{1}=\left\{r_{1},g_{1}\right\},\sigma_{2}=\left\{r_{2},g_{2}\right\}\), is there a line \(H\) such that \(r_{1}(H^{+})\geq\frac{1}{2},r_{2}(H^{-})\geq\frac{1}{2}\) and either \(g_{1}(H^{+})\geq\frac{1}{2},g_{2}(H^{-})\geq\frac{1}{2}\) or \(g_{1}(H^{-})\geq\frac{1}{2},g_{2}(H^{+})\geq\frac{1}{2}\)? Again, if \(r_{1}=r_{2}\) and \(g_{1}=g_{2}\) then this reduces to the usual Ham Sandwich theorem. 
Figure 2. We concentrate two families of measures along the parabola in the plane. Every line \(H\) intersects the parabola in at most two points. If at least half the measure of \(r_{1}\) is supposed to be between the intersection points of \(H\) with the parabola, and at least half the measure of \(g_{1}\) is outside the intersection points, then the intersections of \(H\) with the parabola are constrained as indicated above. If then \(r_{2}\) and \(g_{2}\) are distributed along the parabola as indicated, either more than half of \(r_{2}\) is between the intersection points, or more than half of \(g_{2}\) is outside the intersection points.

While this colorful version of the Ham Sandwich theorem is true, it is a trivial consequence of the Ham Sandwich theorem itself: Let \(H\) be a line that simultaneously bisects the measures \(r_{1}\) and \(g_{1}\). Then \(r_{1}(H^{+})=\frac{1}{2}=r_{1}(H^{-})\). By flipping the orientation of \(H\) if necessary, we can make sure that \(r_{2}(H^{-})\geq\frac{1}{2}\). Moreover, \(g_{1}(H^{+})=\frac{1}{2}=g_{1}(H^{-})\) and one of \(g_{2}(H^{+})\geq\frac{1}{2}\) or \(g_{2}(H^{-})\geq\frac{1}{2}\) has to hold as well. Nevertheless, the Ham Sandwich theorem admits a (non-trivial) colorful generalization. The example above shows that we need to impose that measures in different families are not "oppositely distributed." We make this notion precise now and then state the colorful ham sandwich theorem. Let \(\mathcal{M}=\left\{\mu_{1},\ldots,\mu_{m}\right\}\) be a family of finite Borel measures on \(\mathbb{R}^{d}\), and let \(H\) be a hyperplane. We say that \(\mu_{i}\)_maximizes_\(H^{+}\)_for_\(\mathcal{M}\) if \(\mu_{i}(H^{+})-\frac{1}{2}\mu_{i}(\mathbb{R}^{d})\geq\mu_{\alpha}(H^{+})- \frac{1}{2}\mu_{\alpha}(\mathbb{R}^{d})\) for all \(\alpha\in[m]\). Similarly, \(\mu_{i}\)_minimizes_\(H^{+}\)_for_\(\mathcal{M}\) if \(\mu_{i}(H^{+})-\frac{1}{2}\mu_{i}(\mathbb{R}^{d})\leq\mu_{\alpha}(H^{+})- \frac{1}{2}\mu_{\alpha}(\mathbb{R}^{d})\) for all \(\alpha\in[m]\). We may now state the colorful generalization of the Ham Sandwich theorem: **Theorem 5.2**.: _For each \(j\in[d+1]\) let \(\mathcal{M}_{j}=\left\{\mu_{1}^{(j)},\ldots,\mu_{d+1}^{(j)}\right\}\) be an ordered family of \(d+1\) finite Borel measures on \(\mathbb{R}^{d}\) such that for every hyperplane \(H\) we have that \(\mu_{i}^{(j)}(H)=0\) for all \(i\in[d+1]\). Suppose further that for \(j,k\in[d+1]\) with \(j\neq k\) and any hyperplane \(H\) we have that if \(\mu_{i}^{(j)}\) maximizes \(H^{+}\) for \(\mathcal{M}_{j}\) then \(\mu_{i}^{(k)}\) does not minimize \(H^{+}\) for \(\mathcal{M}_{k}\). Then there is a hyperplane \(H\) and a permutation \(\pi\) of \([d+1]\) such that \(\mu_{i}^{(\pi(i))}(H^{+})-\frac{1}{2}\mu_{i}^{(\pi(i))}(\mathbb{R}^{d})\geq\mu_{j} ^{(\pi(i))}(H^{+})-\frac{1}{2}\mu_{j}^{(\pi(i))}(\mathbb{R}^{d})\) for all \(i,j\in[d+1]\)._ Proof.: Let \(u=(u_{0},u_{1},\ldots,u_{d})\in S^{d}\). If there is an \(i\in[d]\) such that \(u_{i}\neq 0\), i.e., if \(u_{0}\neq\pm 1\), assign to \(u\) the halfspace \[H^{+}(u)=\{(x_{1},\ldots,x_{d})\in\mathbb{R}^{d}\ |\ u_{1}x_{1}+\cdots+u_{d}x_{d }\leq u_{0}\}.\] Notice that \[H^{+}(-u)=\{(x_{1},\ldots,x_{d})\in\mathbb{R}^{d}\ |\ -u_{1}x_{1}-\cdots-u_{d}x_{ d}\leq-u_{0}\}\] \[=\{(x_{1},\ldots,x_{d})\in\mathbb{R}^{d}\ |\ u_{1}x_{1}+\cdots+u_{d}x_{d}\geq u _{0}\}=H^{-}(u).\] Define \(f_{ji}(u)=\mu_{i}^{(j)}(H^{+}(u))-\frac{1}{2}\mu_{i}^{(j)}(\mathbb{R}^{d})\). These maps are continuous and admit a continuous extension into the north and south pole.
Thus the maps \(f_{ij}\) define an odd and continuous map \(F\colon S^{d}\to\mathbb{R}^{(d+1)\times(d+1)}\). If \(F(u)\) does not have rows in intersecting cube facets then (writing \(H=H(u)\)) there are \(i,j,k\in[d+1]\) with \(j\neq k\) such that \[|\mu_{i}^{(j)}(H^{+})-\frac{1}{2}\mu_{i}^{(j)}(\mathbb{R}^{d})| \geq|\mu_{\alpha}^{(j)}(H^{+})-\frac{1}{2}\mu_{\alpha}^{(j)}(\mathbb{R}^{d})|\text { and }\] \[|\mu_{i}^{(k)}(H^{+})-\frac{1}{2}\mu_{i}^{(k)}(\mathbb{R}^{d})| \geq|\mu_{\alpha}^{(k)}(H^{+})-\frac{1}{2}\mu_{\alpha}^{(k)}(\mathbb{R}^{d})| \text{ for all }\alpha\in[d+1],\] where \(\mu_{i}^{(j)}(H^{+})-\frac{1}{2}\mu_{i}^{(j)}(\mathbb{R}^{d})\geq 0\) and \(\mu_{i}^{(k)}(H^{+})-\frac{1}{2}\mu_{i}^{(k)}(\mathbb{R}^{d})\leq 0\). This means that \(\mu_{i}^{(j)}\) maximizes \(H^{+}\) for \(\mathcal{M}_{j}\) and \(\mu_{i}^{(k)}\) minimizes \(H^{+}\) for \(\mathcal{M}_{k}\), in contradiction to our assumption. Thus \(F(u)\) has rows in intersecting cube facets for every \(u\in S^{d}\). Applying Theorem 1.1 finishes the proof. The uncolored version of Theorem 5.2, that is, when \(\mu_{i}^{(1)}=\mu_{i}^{(2)}=\cdots=\mu_{i}^{(d+1)}\), is the following strengthening of the Ham Sandwich theorem: **Theorem 5.3**.: _Let \(\mu_{1},\mu_{2},\ldots\mu_{d+1}\) be finite Borel measures on \(\mathbb{R}^{d}\) such that for every hyperplane \(H\) we have that \(\mu_{i}(H)=0\) for all \(i\in[d+1]\). Then there is a hyperplane \(H\) such that \(\mu_{i}(H^{+})-\mu_{i}(H^{-})=\mu_{j}(H^{+})-\mu_{j}(H^{-})\) for all \(i,j\in[d+1]\)._ Proof.: Apply Theorem 5.2 in the case \(\mathcal{M}_{1}=\mathcal{M}_{2}=\cdots=\mathcal{M}_{d+1}=\{\mu_{1},\ldots,\mu_ {d+1}\}\). First suppose that there exists some hyperplane \(H\) such that the halfspace \(H^{+}\) both maximizes and minimizes one of the \(\mu_{i}\). This implies \(\mu_{i}(H^{+})-\frac{1}{2}\mu_{i}(\mathbb{R}^{d})=\mu_{\alpha}(H^{+})-\frac{1} {2}\mu_{\alpha}(\mathbb{R}^{d})\) for all \(\alpha\in[d+1]\). Since \(\mu_{i}(H^{+})+\mu_{i}(H^{-})=\mu_{i}(\mathbb{R}^{d})\), this implies \(\mu_{i}(H^{+})-\mu_{i}(H^{-})=\mu_{\alpha}(H^{+})-\mu_{\alpha}(H^{-})\) for all \(\alpha\in[d+1]\). If no \(\mu_{i}\) simultaneously maximizes and minimizes some halfspace \(H^{+}\), then by Theorem 5.2, we again get that \(\mu_{i}(H^{+})-\frac{1}{2}\mu_{i}(\mathbb{R}^{d})=\mu_{\alpha}(H^{+})-\frac{1} {2}\mu_{\alpha}(\mathbb{R}^{d})\) for all \(\alpha\in[d+1]\). A family \(\mathcal{F}\) of convex sets in \(\mathbb{R}^{d}\) is _well-separated_ if any collection \(x_{1},\ldots,x_{k}\) of points from pairwise distinct \(K_{1},\ldots,K_{k}\in\mathcal{F}\) is in general position, that is, for any \(k\leq d+1\), any pairwise distinct \(K_{1},\ldots,K_{k}\in\mathcal{F}\) and any \(x_{1}\in K_{1},\ldots,x_{k}\in K_{k}\) the set \(\{x_{1},\ldots,x_{k}\}\) is not contained in a common \((k-2)\)-dimensional affine subspace. See Figure 3 for an example. We call a family \(\mathcal{M}\) of Borel measures on \(\mathbb{R}^{d}\)_well-separated_ if the family of convex hulls of supports \(\mathcal{F}=\{\operatorname{conv}(\operatorname{supp}\ \mu)\ :\ \mu\in\mathcal{M}\}\) is well-separated. The _support_ of a Borel measure \(\mu\) on \(\mathbb{R}^{d}\) is \(\operatorname{supp}\ \mu=\{x\in\mathbb{R}^{d}\ :\ \forall\varepsilon>0\ \mu(B_{ \varepsilon}(x))>0\}\). In particular, if \(\mathcal{F}\) is a well-separated family of \(d+1\) sets in \(\mathbb{R}^{d}\), then no hyperplane can intersect all sets in \(\mathcal{F}\). 
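Theorem 5.3 lends itself to a quick numerical illustration. The following sketch (our own, with synthetic data; discrete point clouds only approximate the hypothesis that every hyperplane has measure zero) searches over lines in the plane for one that makes the three quantities \(\mu_{i}(H^{+})-\mu_{i}(H^{-})\) nearly agree.

```python
import numpy as np

# Rough numerical illustration of Theorem 5.3 for d = 2 (our own sketch with
# synthetic data): three finite measures, here uniform weights on random
# point clouds, which only approximate the hypothesis mu_i(H) = 0.  We grid
# search over lines H for one making mu_i(H+) - mu_i(H-) as equal as possible.
rng = np.random.default_rng(0)
clouds = [rng.normal(loc=c, scale=1.0, size=(200, 2))
          for c in ([0, 0], [4, 1], [1, 5])]

def differences(theta, b):
    u = np.array([np.cos(theta), np.sin(theta)])     # unit normal of H
    return np.array([np.sign(cloud @ u - b).sum() for cloud in clouds])

best = None
for theta in np.linspace(0, np.pi, 181):
    u = np.array([np.cos(theta), np.sin(theta)])
    proj = np.concatenate([cloud @ u for cloud in clouds])
    for b in np.quantile(proj, np.linspace(0.02, 0.98, 97)):
        D = differences(theta, b)
        spread = D.max() - D.min()
        if best is None or spread < best[0]:
            best = (spread, theta, b, D)

print("spread of the three differences:", best[0], "values:", best[3])
```

For clouds of 200 points the reported spread is typically only a few points, consistent with the theorem for the smoothed measures these clouds approximate.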
Figure 3. An example of (supports of) three well-separated measures in the plane. If the supports are bounded then no line can intersect all three measures.

Barany, Hubard, and Jeronimo [6] show that if \(\mu_{1},\ldots,\mu_{d}\) are finite Borel measures on \(\mathbb{R}^{d}\) with bounded supports that are well-separated and such that \(\mu_{i}(H)=0\) for every hyperplane \(H\) in \(\mathbb{R}^{d}\), then there is a hyperplane that cuts off a specified fraction from each measure \(\mu_{i}\), that is, for \(\alpha_{1},\dots,\alpha_{d}\in(0,1)\) there is a hyperplane \(H\) such that \(\mu_{i}(H^{+})=\alpha_{i}\mu_{i}(\mathbb{R}^{d})\) for all \(i\in[d]\). We use Theorem 5.3 to show the following variant: **Corollary 5.4** (Variant of a result of Barany, Hubard, Jeronimo [6]).: _Let \(\mu_{1},\mu_{2},\dots\mu_{d}\) be finite Borel measures on \(\mathbb{R}^{d}\) with bounded supports and \(\mu_{i}(H)=0\) for all hyperplanes \(H\) in \(\mathbb{R}^{d}\) and for all \(i\in[d]\). Suppose there is an \(x\in\mathbb{R}^{d}\) such that no hyperplane through \(x\) may intersect the supports of all \(\mu_{i}\). Then for all \(\alpha_{1},\dots,\alpha_{d}\in(0,1)\) there is a hyperplane \(H\) such that \(\mu_{i}(H^{+})=\alpha_{i}\mu_{i}(\mathbb{R}^{d})\) for all \(i\in[d]\)._ Proof.: Normalize the measures \(\mu_{i}\) such that \(\alpha_{i}\mu_{i}(\mathbb{R}^{d})=1\) for all \(i\) by dividing each \(\mu_{i}\) by \(\alpha_{i}\mu_{i}(\mathbb{R}^{d})\). In particular, after this normalization \(\mu_{i}(\mathbb{R}^{d})=\frac{1}{\alpha_{i}}>1\) for all \(i\). By a standard compactness argument there is an \(\varepsilon>0\) such that any hyperplane that intersects \(B_{\varepsilon}(x)\) does not intersect the supports of all \(\mu_{i}\). Thus we may construct a Borel measure \(\mu_{d+1}\) on \(\mathbb{R}^{d}\) supported in \(B_{\varepsilon}(x)\) with continuous density and with \(\mu_{d+1}(\mathbb{R}^{d})=1\). Now apply Theorem 5.3 to this collection, which yields a hyperplane \(H\) with \(\mu_{i}(H^{+})-\mu_{i}(H^{-})=\mu_{j}(H^{+})-\mu_{j}(H^{-})\) for all \(i,j\in[d+1]\). The hyperplane \(H\) cannot intersect the supports of all measures \(\mu_{1},\dots,\mu_{d+1}\). Let \(i\in[d+1]\) such that \(H\) is disjoint from the support of \(\mu_{i}\). If \(i\neq d+1\) then \(\mu_{i}(H^{+})-\mu_{i}(H^{-})=\mu_{i}(\mathbb{R}^{d})>1\), but \(|\mu_{d+1}(H^{+})-\mu_{d+1}(H^{-})|\leq 1\), so \(\mu_{i}(H^{+})-\mu_{i}(H^{-})\neq\mu_{d+1}(H^{+})-\mu_{d+1}(H^{-})\). Thus \(i=d+1\). This implies \(\mu_{j}(H^{+})-\mu_{j}(H^{-})=\mu_{d+1}(H^{+})-\mu_{d+1}(H^{-})=1\), which finishes the proof. Notice that Corollary 5.4 contains the result of Barany, Hubard, Jeronimo as a special case, provided that for any family \(\{K_{1},\dots,K_{d}\}\) of compact, convex sets in \(\mathbb{R}^{d}\) that are well-separated, there is an \(x\in\mathbb{R}^{d}\) such that \(\{K_{1},\dots,K_{d},\{x\}\}\) is well-separated. We have been unable to show this. ## 6. Colorful Borsuk-Ulam theorems for higher symmetry The methods we have used here to prove colorful results for symmetric set coverings of spheres generalize easily to other settings. Here we prove generalizations for free \(\mathbb{Z}/p\)-actions on spheres, \(p\) a prime. Below \(s\in\mathbb{Z}/p\) acts on \((j,t)\in[d]\times\mathbb{Z}/p\) by \(s\cdot(j,t)=(j,t+s)\). **Theorem 6.1**.: _Let \(p\) be a prime. Let \(d\geq 1\) and \(n=(p-1)d-1\) be integers. Fix some free \(\mathbb{Z}/p\)-action on \(S^{n}\). Let \(\Sigma\) be a \(\mathbb{Z}/p\)-equivariant triangulation of \(S^{n}\) with vertex set \(V\).
Let \(\ell\colon V\to[d]\times\mathbb{Z}/p\) be \(\mathbb{Z}/p\)-equivariant. Fix \(s_{1},\ldots,s_{d}\in\mathbb{Z}/p\). Then either there is a \((p-1)\)-face \(\sigma\) of \(\Sigma\) with \(\ell(\sigma)=\{j\}\times\mathbb{Z}/p\) for some \(j\in[d]\) or there is a facet \(\sigma\) of \(\Sigma\) with \(\ell(\sigma)=\{(j,s)\ :\ j\in[d],\ s\in\mathbb{Z}/p\setminus\{s_{j}\}\}\)._ Proof.: If no \((p-1)\)-face is labelled with all elements in \(\{j\}\times\mathbb{Z}/p\) then \(\ell\) induces an equivariant map \(\Sigma\to(\partial\Delta_{p-1})^{*d}\) by identifying each label \((j,s)\) with the vertex \(s\) in the \(j\)th copy \(\partial\Delta_{p-1}\). The \(d\)-fold join \((\partial\Delta_{p-1})^{*d}\) is a sphere of dimension \(n\). By Lemma 2.3 such an equivariant map will have non-zero degree and thus be surjective. In particular, some face \(\sigma\) of \(\Sigma\) maps to the facet \(\{(j,s)\ :\ j\in[d],\ s\in\mathbb{Z}/p\setminus\{s_{j}\}\}\) of \((\partial\Delta_{p-1})^{*d}\). **Theorem 6.2**.: _Let \(p\) be a prime. Let \(d\geq 1\) and \(n=(p-1)d-1\) be integers. Let \(A_{i}\subset S^{n}\) be closed sets for \(i\in[d]\) such that \(S^{n}=\bigcup_{i}(A_{i}\cup s\cdot A_{i}\cup\dots\cup s^{p-1}A_{i})\). Suppose that \(\bigcap_{k}s^{k}\cdot A_{i}=\varnothing\) for every \(i\in[d]\). Then for all \(s_{1},\ldots,s_{d}\in\mathbb{Z}/p\) we have that \(\bigcap_{i}\bigcap_{s\neq s_{i}}s\cdot A_{i}\neq\varnothing\)._ Proof.: Assume that \(\bigcap_{k}s^{k}\cdot A_{i}=\varnothing\) for all \(i\). Let \(T_{\varepsilon}\) be a \(\mathbb{Z}/p\)-symmetric triangulation of \(S^{n}\) such that each facet has diameter less than \(\varepsilon\), where \(\varepsilon>0\) is chosen such that any set of diameter less than \(\varepsilon\) intersects at most \(p-1\) of the sets \(A_{i},s\cdot A_{i},\ldots,s^{p-1}A_{i}\). This can be achieved by taking repeated barycentric subdivisions of a given \(\mathbb{Z}/p\)-symmetric triangulation. Let \(\ell\colon V(T_{\varepsilon})\to[d]\times\mathbb{Z}/p\) be a labelling of the vertices of \(T_{\varepsilon}\) such that \(\ell(v)=(i,g)\) only if \(v\in g\cdot A_{i}\). We may assume that \(\ell\) is \(\mathbb{Z}/p\)-equivariant. By our choice of \(\varepsilon\), there is no face \(\sigma\) with \(\ell(\sigma)=\{j\}\times\mathbb{Z}/p\). By Theorem 6.1 there is a facet labelled precisely by the set \(\{(i,s)\ :\ i\in[d],\ s\in\mathbb{Z}/p\setminus\{s_{i}\}\}\). Let \(x_{\varepsilon}\) be the barycenter of some such facet. As \(\varepsilon\) approaches zero, by compactness of \(S^{n}\), the \(x_{\varepsilon}\) have an accumulation point \(x\). Since the \(A_{i}\) are closed, we have that \(x\in\bigcap_{i}\bigcap_{s\neq s_{i}}s\cdot A_{i}\). **Remark 6.3**.: The proof of Theorem 6.1 shows that if in Theorem 6.1 we increase \(n\) by one, that is, \(n=(p-1)d\), then the first alternative will always occur: There is a face \(\sigma\) of \(\Sigma\) labelled with an entire \(\mathbb{Z}/p\)-orbit, that is, \(\ell(\sigma)=\{j\}\times\mathbb{Z}/p\) for some \(j\in[d]\). Similarly, for Theorem 6.2, if \(n=(p-1)d\) then it is impossible that \(\bigcap_{k}s^{k}\cdot A_{i}=\varnothing\) for every \(i\in[d]\). **Corollary 6.4**.: _Let \(p\) be a prime. Let \(d\geq 1\) and \(n=(p-1)d-1\) be integers. Let \(f\colon S^{n}\to\mathbb{R}^{d}\) be continuous.
Then there is a \(\mathbb{Z}/p\)-orbit \(x,s\cdot x,\ldots,s^{p-1}\cdot x\) such that \(f\) maps \(p-1\) points in this orbit to the same point \(y\) in \(\mathbb{R}^{d}\) and the remaining point to \(y-(\alpha,\ldots,\alpha)\) for some \(\alpha\in\mathbb{R}\)._ Proof.: For \(x\in S^{n}\) denote its \(\mathbb{Z}/p\)-orbit \(\{x,s\cdot x,\ldots,s^{p-1}\cdot x\}\) by \(G\cdot x\). Denote the \(i\)th coordinate function of \(f\colon S^{n}\to\mathbb{R}^{d}\) by \(f_{i}\colon S^{n}\to\mathbb{R}\). Let \(x\in S^{n}\). We place \(x\in A_{i}\) if \(\operatorname{diam}(f_{i}(G\cdot x))\geq\operatorname{diam}(f_{j}(G\cdot x))\) for all \(j\in[d]\) and \(f_{i}(x)\geq f_{i}(s^{k}\cdot x)\) for all \(k\in[p]\). Thus \(x\in A_{i}\) if \(f_{i}\) fluctuates at least as much on the orbit of \(x\) as any other coordinate function \(f_{j}\), and additionally \(f_{i}(x)\) is the largest value in the orbit of \(x\). As both of these quantities have to be maximized somewhere, \(S^{n}=\bigcup_{i}(A_{i}\cup s\cdot A_{i}\cup\dots\cup s^{p-1}A_{i})\). Suppose there is some point \(x\) in \(\bigcap_{k}s^{k}\cdot A_{i}\) for some \(i\in[d]\). Then \(f_{i}(x)=f_{i}(s\cdot x)=\dots=f_{i}(s^{p-1}\cdot x)\) and \(0=\operatorname{diam}(f_{i}(G\cdot x))\geq\operatorname{diam}(f_{j}(G\cdot x))\) for all \(j\in[d]\). This implies \(f(x)=f(s\cdot x)=\dots=f(s^{p-1}x)\). Otherwise by Theorem 6.2 there is some \(x\) in \(\bigcap_{i}\bigcap_{k\neq p-1}s^{k}\cdot A_{i}\). Then \(f(x)=f(s\cdot x)=\dots=f(s^{p-2}\cdot x)\) and \(\operatorname{diam}(f_{i}(G\cdot x))=\operatorname{diam}(f_{j}(G\cdot x))\) for any \(i,j\in[d]\). Since the first \(p-1\) points in \(G\cdot x\) are mapped to the same point, we have that \(f_{i}(s^{p-1}\cdot x)=f_{i}(x)-\operatorname{diam}(f_{i}(G\cdot x))\) for all \(i\in[d]\). **Remark 6.5**.: For \(p=2\) and \(f\colon S^{d-1}\to\mathbb{R}^{d}\) an odd map, Corollary 6.4 asserts that \(f\) maps a pair of antipodal points to \(f(x)=y\) and \(f(-x)=y-(\alpha,\ldots,\alpha)\). Since \(f\) is odd, \(f(-x)=-y\) and thus \((\alpha,\ldots,\alpha)=2y\), that is, the corollary asserts that \(f\) maps a pair of antipodal points to the \(1\)-dimensional diagonal in \(\mathbb{R}^{d}\). Following Remark 6.3 the proof of Corollary 6.4 shows that for \(n\geq(p-1)d\), we get that any continuous \(f\colon S^{n}\to\mathbb{R}^{d}\) maps an entire \(\mathbb{Z}/p\)-orbit to the same point. This is a classical result of Bourgin-Yang and others [16, 35, 36, 12]. Corollary 6.4 extends this orbit collapsing result in the same fashion that Ky Fan's theorem extends the Borsuk-Ulam theorem. We can now derive a colorful generalization of Theorem 6.2 in the same way that we showed the colorful generalization (Theorem 3.3) of Fan's theorem. **Theorem 6.6**.: _Let \(p\) be a prime. Let \(d\geq 1\) and \(n=(p-1)d-1\) be integers. Let \(A_{i}^{(j)}\subset S^{n}\) be closed sets with \(i\in[d]\) and \(j\in[n+1]\) such that \(S^{n}=\bigcup_{i}(A_{i}^{(j)}\cup s\cdot A_{i}^{(j)}\cup\cdots\cup s^{p-1} \cdot A_{i}^{(j)})\) for every \(j\in[n+1]\). Suppose that \(\bigcap_{k}s^{k}\cdot A_{i}^{(j_{k})}=\varnothing\) for every \(i\in[d]\) and for pairwise distinct \(j_{1},\ldots,j_{p}\in[n+1]\). Then for all \(s_{1},\ldots,s_{d}\in\mathbb{Z}/p\) we have that \(\bigcap_{i}\bigcap_{s\neq s_{i}}s\cdot A_{i}^{(\pi(i,s))}\neq\varnothing\) for some bijection \(\pi\colon\{(i,s)\in[d]\times\mathbb{Z}/p\ :\ s\neq s_{i}\}\to[n+1]\)._ Proof.: Let \(T_{\varepsilon}\) be a \(\mathbb{Z}/p\)-symmetric triangulation of \(S^{n}\), where every face has diameter at most \(\varepsilon>0\).
Here \(\varepsilon\) is chosen sufficiently small so that no face intersects \(s\cdot A_{i}^{(j_{1})},s^{2}\cdot A_{i}^{(j_{2})},\ldots,s^{p}\cdot A_{i}^{( j_{p})}\) for pairwise distinct \(j_{1},\ldots,j_{p}\in[n+1]\) and any \(i\). Let \(T_{\varepsilon}^{\prime}\) denote the barycentric subdivision of \(T_{\varepsilon}\). Let \(\ell\colon V(T_{\varepsilon}^{\prime})\to[d]\times\mathbb{Z}/p\) be a labelling of the vertices of \(T_{\varepsilon}^{\prime}\) such that \(\ell(v)=(i,g)\) only if \(v\in g\cdot A_{i}^{(k)}\) and \(v\) subdivides a \((k-1)\)-dimensional face of \(T_{\varepsilon}\). We may assume that \(\ell\) is \(\mathbb{Z}/p\)-equivariant. By our choice of \(\varepsilon\), there is no face \(\sigma\) with \(\ell(\sigma)=\{j\}\times\mathbb{Z}/p\). By Theorem 6.1 there is a facet labelled precisely by the set \(\{(i,s)\ :\ i\in[d],\ s\in\mathbb{Z}/p\setminus\{s_{i}\}\}\). Let \(x_{\varepsilon}\) be the barycenter of some such facet. As \(\varepsilon\) approaches zero, by compactness of \(S^{n}\), the \(x_{\varepsilon}\) have an accumulation point \(x\). For every \(\varepsilon>0\) we can find a bijection \(\pi_{\varepsilon}\colon\{(i,s)\in[d]\times\mathbb{Z}/p\ :\ s\neq s_{i}\}\to[n+1]\) such that \(x_{\varepsilon}\) is at distance less than \(\varepsilon\) from the sets \(g\cdot A_{i}^{(\pi_{\varepsilon}(i,g))}\) with \(i\in[d]\) and \(g\neq s_{i}\). Since there are finitely many bijections \(\pi\colon\{(i,s)\in[d]\times\mathbb{Z}/p\ :\ s\neq s_{i}\}\to[n+1]\), as \(\varepsilon\to 0\) one such bijection will be realized infinitely many times. Call this bijection \(\pi\). Since the \(A_{i}^{(j)}\) are closed, we have that \(x\in\bigcap_{i}\bigcap_{s\neq s_{i}}s\cdot A_{i}^{(\pi(i,s))}\). **Remark 6.7**.: Theorem 6.6 has the colorful Borsuk-Ulam theorem (Theorem 1.1) as a corollary: The case \(p=2\) specializes to Theorem 3.3.
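To close the section, here is a small numerical sketch (our own, not part of the argument) of the \(p=2\) case discussed in Remark 6.5: for an odd map \(f\colon S^{1}\to\mathbb{R}^{2}\), the scalar \(g=f_{1}-f_{2}\) satisfies \(g(t+\pi)=-g(t)\) in the angle parameter, so it changes sign on \([0,\pi]\) and bisection locates a point sent to the diagonal. The particular odd map below is an arbitrary example chosen for illustration.

```python
import numpy as np

# Sketch of Remark 6.5 for p = 2: an odd map f: S^1 -> R^2 sends some point
# to the 1-dimensional diagonal of R^2, i.e. there is t with f_1(t) = f_2(t).
# Since g(t) = f_1 - f_2 satisfies g(t + pi) = -g(t), a sign change on [0, pi]
# is guaranteed and plain bisection finds such a point.
def f(t):
    x, y = np.cos(t), np.sin(t)
    return np.array([x**3 - 2.0*y, np.sin(x*y)*x + y])   # odd: f(-p) = -f(p)

def g(t):
    return f(t)[0] - f(t)[1]

a, b = 0.0, np.pi                       # g(pi) = -g(0), so a sign change
if g(a) * g(b) < 0:
    for _ in range(60):                 # bisection
        m = 0.5 * (a + b)
        if g(a) * g(m) > 0:
            a = m
        else:
            b = m
t = 0.5 * (a + b)
print(f(t))                             # the two coordinates (nearly) agree
```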
2309.17030
A return-to-home model with commuting people and workers
This article proposes a new model to describe human intra-city mobility. The goal is to combine the convection-diffusion equation to describe commuting people's movement and the density of individuals at home. We propose a new model extending our previous work with a compartment of office workers. To understand such a model, we use semi-group theory and obtain a convergence result of the solutions to an equilibrium distribution. We conclude this article by presenting some numerical simulations of the model.
Pierre Magal
2023-09-29T07:34:20Z
http://arxiv.org/abs/2309.17030v1
# A return-to-home model with commuting people and workers ###### Abstract This article proposes a new model to describe human intra-city mobility. The goal is to combine the convection-diffusion equation to describe commuting people's movement and the density of individuals at home. We propose a new model extending our previous work with a compartment of office workers. To understand such a model, we use semi-group theory and obtain a convergence result of the solutions to an equilibrium distribution. We conclude this article by presenting some numerical simulations of the model. **Keywords:** Return-to-home model, Intra-City-Mobility, Diffusion convection equation ## 1 Introduction Understanding human intra-city displacement is crucial since it influences populations' dynamics. Human mobility is essential for understanding and quantifying social behavior changes. In light of the recent COVID-19 epidemic outbreak, understanding human travel is critical to knowing how a virus spreads at the scale of a city, a country, and the whole earth; see [41, 42]. We can classify human movement into: 1) short-distance movement: working, shopping, and other intra-city activities; 2) long-distance movement: intercity travels, planes, trains, cars, etc. These considerations have been developed recently in [6, 19, 24, 31]. A global description of human movement has been proposed (by extending the idea of Brownian motion) by considering the Lévy flight process. The long-distance movement can also be covered using patch models (see Cosner et al. [9] for more results). The spatial motion of populations is sometimes modeled using Brownian motion and diffusion equations. For instance, reaction-diffusion equations are widely used to model the spatial invasion of populations both in ecology and epidemiology. We refer, for example, to Cantrell and Cosner [7], Cantrell, Cosner, and Ruan [8], Murray [32], Perthame [34], Roques [40] and the references therein. In particular, the spatial propagation for the solutions of reaction-diffusion equations has been observed and studied in the 30s by Fisher [15] and Kolmogorov, Petrovski, and Piskunov [25]. Diffusion is a good representation of the process of invasion or colonization for humans and animals. Nevertheless, once the population is established, the return-to-home process (i.e., diffusion-convection combined with return-to-home) seems to be more suitable for describing the movement of human daily life. A good model for intra-city mobility should also incorporate population density in the city. Figure 1 represents the evolution of the population density in Tokyo. This type of problem has been considered by geographers for a long time, and we refer to the book [36] for a nice overview of this topic. Ducrot and Magal [12] previously proposed a return-to-home model with two classes of people: the travelers and the people at home. The present article aims to improve this previous model by introducing a third compartment of immobile individuals, composed mostly of office workers. In other words, we are trying to model commuting people in a city. This process combines several aspects; some are summarized in Figure 2. In [10], a patch model was proposed to describe commuting people. To the best of our knowledge, our approach using partial differential equations is new, and we believe that such an approach is very robust.
Figure 1: _The above figure represents the evolution of the density of individuals (at home) in Tokyo city. This figure is taken from [5]._

Here, we model the tendency of commuters to travel in a city, and the diffusion takes care of the uncertainty around this tendency (which is modeled by a transport term). For instance, people going to work may sometimes change their route (to buy something, for example). The plan of the paper is the following. In section 2, we present the model. Section 3 focuses on the motion of travelers by using a linear diffusion-convection equation in \(L^{1}\left(\mathbb{R}^{2}\right)\); there, we present an \(L^{1}\) semigroup theory and prove the positivity and the preservation of the total mass of individuals. In section 4, we investigate the asymptotic behavior of the return-to-home model. Section 5 presents a hybrid model where the home locations are discrete. Section 6 presents some numerical simulations of a hybrid model on the domain \(\Omega=[0,1]\times[0,1]\). In section 7, we conclude the paper by discussing some perspectives. The appendix section A is devoted to the model on a bounded domain and its numerical scheme. ## 2 Eulerian formulation of the model The principle of the model is described in Figure 4. After leaving home, people spend some time commuting to their working places, and after spending some time at work, they return home. In the model, the average time spent at home will be \(1/\gamma\), the average time spent commuting is \(1/\alpha\), and the average time spent at work is \(1/\chi\). The average time spent at home \(1/\gamma\) should be approximately equal to \(12\) hours (\(=0.5\) day), and the time spent at work \(1/\chi\) should be approximately equal to \(10\) hours (\(=0.41\) day); both will be much longer than the average time spent commuting \(1/\alpha\), which should be approximately equal to \(2\) hours (\(=0.08\) day). But the point is to get a "simple" model to describe the movement of people using diffusion and convection. The parameters \(1/\gamma\), \(1/\chi\), and \(1/\alpha\) may change with time, for example, during the lockdown due to an epidemic outbreak. Figure 2: Principle of the return-home model. Here, for simplicity, we focus on people who leave their homes to go to work. Therefore, the model is not focusing on people leaving their homes and spending a little time shopping, practicing their hobbies, etc. We account for this in the model by allowing some random fluctuation around the main activity, which is working. Another simplification in the model is that people at work no longer move. So here we look at people working in offices or factories, and we neglect the people moving within the city for their job (e.g., taxi drivers). So the model intends to capture only a part of the workers' movement. We define \(y\in\mathbb{R}^{2}\mapsto u(t,y)\in\mathbb{R}\) to be the distribution of the population of people staying at home at time \(t\). That is to say that, for any subdomain \(\omega\subset\mathbb{R}^{2}\), \[\int_{\omega}u(t,y)dy\in\mathbb{R},\] is the number of people staying at home with their home located in \(\omega\) at time \(t\). Let \(y\in\mathbb{R}^{2}\) be the home location of individuals. Then the distribution \(x\to v(t,x,y)\) is the distribution of travelers who are going to their working place, some shopping place, etc., and which are coming from a home located at the position \(y\).
That is to say that, for any subdomain \(\omega\subset\mathbb{R}^{2}\) \[\int_{\omega}v(t,x,y)dx,\] is the number of _travelers_ located in the region \(\omega\) at time \(t\) coming from a home located at the position \(y\). The distribution \(x\to w(t,x,y)\) is the distribution of individuals who arrived at their destination. Those people stay for a random time at their working place, a shopping place, or elsewhere before returning home. The home location of the distribution \(x\to w(t,x,y)\) is \(y\). We assume for simplicity that those people are no longer moving. That is, for any subdomain \(\omega\subset\mathbb{R}^{2}\) \[\int_{\omega}w(t,x,y)dx,\] is the number of people who arrived at their destination in the subdomain \(\omega\) at time \(t\) and are not yet back home. To simplify the analysis of the model, we consider the home location \(y\) as a parameter of the model, and we use the notations \[u_{y}(t)=u(t,y),\,v_{y}(t,x)=v(t,x,y),\text{ and }w_{y}(t,x)=w(t,x,y).\] The return-to-home model is the following: for each \(y\in\mathbb{R}^{2}\), the system \[\left\{\begin{aligned} &\partial_{t}u_{y}(t)=\chi\int_{\mathbb{R}^{2}}w _{y}(t,x)\mathrm{d}x-\gamma u_{y}(t),\\ &\partial_{t}v_{y}(t,x)=\varepsilon^{2}\Delta_{x}v_{y}(t,x)- \nabla_{x}\cdot(v_{y}\,\mathbf{C_{y}})-\alpha v_{y}+\gamma g(x-y)u_{y}(t),\\ &\partial_{t}w_{y}(t,x)=\alpha v_{y}(t,x)-\chi w_{y}(t,x),\end{aligned}\right. \tag{2.1}\] with the initial distribution \[\left\{\begin{aligned} & u_{y}(0)=u_{y0}\in\mathbb{R}_{+},\\ & v_{y}(0,x)=v_{y0}(x)\in L^{1}_{+}\left(\mathbb{R}^{2}\right),\\ &\text{and}\\ & w_{y}(0,x)=w_{y0}(x)\in L^{1}_{+}\left(\mathbb{R}^{2}\right). \end{aligned}\right. \tag{2.2}\] **Remark 2.1**.: _We refer to [12] for an approach allowing the integrability of \(u(t,x,y)\) with respect to both \(x\) and \(y\) for each \(t>0\)._ **Remark 2.2**.: _Throughout the paper, for the Banach space we use \(\mathrm{L}^{1}\left(\mathbb{R}^{2}\right)\) instead of \(\mathrm{L}^{1}\left(\mathbb{R}^{2},\mathbb{R}\right)\) to simplify the notations. We will only specify the range of maps whenever it is not equal to \(\mathbb{R}\)._ In the model, the map \(x\to g(x-y)\) is a Gaussian distribution representing the location of a house centered at the position \(y\in\mathbb{R}^{2}\). The function \(g\) is defined by \[g(x_{1},x_{2})=\frac{1}{2\pi\sigma^{2}}e^{-\frac{x_{1}^{2}+x_{2}^{2}}{2\sigma ^{2}}}. \tag{2.3}\] That is, a Gaussian distribution centered at \(0\) with standard deviation \(\sigma>0\). Note that for all \(y\in\mathbb{R}^{2}\), the translated map \(g(\cdot-y)\) satisfies \[\int_{\mathbb{R}^{2}}g(x-y)\mathrm{d}x=1\text{ and }\int_{\mathbb{R}^{2}}xg(x-y )\mathrm{d}x=y.\] In the model, \(\Delta_{x}v_{y}\) is the Laplace operator of \(v_{y}\) with respect to the variable \(x=(x_{1},x_{2})\in\mathbb{R}^{2}\). That is, \[\Delta_{x}v_{y}(t,x)=\partial_{x_{1}}^{2}v_{y}(t,x)+\partial_{x_{2}}^{2}v_{y}(t,x).\] The operator \(\nabla_{x}\cdot(v_{y}\,\mathbf{C_{y}})\) is the divergence of \(v_{y}\,\mathbf{C_{y}}\) with respect to the variable \(x=(x_{1},x_{2})\in\mathbb{R}^{2}\). That is, \[\nabla_{x}\cdot(v_{y}(t,x)\,\mathbf{C_{y}}(x))=\partial_{x_{1}}\bigg{(}v_{y}( t,x)\,\mathbf{C_{y}}(x)_{1}\bigg{)}+\partial_{x_{2}}\bigg{(}v_{y}(t,x)\, \mathbf{C_{y}}(x)_{2}\bigg{)},\] where \[\mathbf{C_{y}}(x)=\bigg{(}\begin{array}{c}\mathbf{C_{y}}(x)_{1}\\ \mathbf{C_{y}}(x)_{2}\end{array}\bigg{)}\in\mathbb{R}^{2},\] is the speed of individuals located at the position \(x\in\mathbb{R}^{2}\) and coming from a home located at the position \(y\in\mathbb{R}^{2}\).
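To make the role of the different terms in (2.1) concrete, the following is a rough numerical sketch (our own, not the scheme of the appendix): an explicit finite-difference discretization of (2.1) for a single home location \(y\) on a periodic unit square. The rates are taken from the averages quoted above (\(1/\gamma=0.5\) day, \(1/\alpha=0.08\) day, \(1/\chi=0.41\) day), while the domain, the constant speed \(\mathbf{C_{y}}\), \(\sigma\) and \(\varepsilon^{2}\) are assumptions made only for the example. The run also checks that the total number of individuals attached to the home \(y\) stays constant in time, which is the conservation property stated just below.

```python
import numpy as np

# Rough explicit finite-difference sketch of system (2.1) for one home
# location y (our own illustration; the paper's simulations use a bounded
# domain, see the appendix).  Assumptions: periodic unit square, constant
# commuting speed C_y, and illustrative values of sigma and eps^2.
N = 64
dx = 1.0 / N
xs = np.arange(N) * dx
X1, X2 = np.meshgrid(xs, xs, indexing="ij")

gamma, alpha, chi = 1/0.5, 1/0.08, 1/0.41      # rates quoted in the text
eps2, sigma = 5e-3, 0.05                       # assumed for the example
y = np.array([0.25, 0.25])                     # home location (assumed)
C = np.array([0.5, 0.3])                       # commuting speed (assumed)

g = np.exp(-((X1-y[0])**2 + (X2-y[1])**2) / (2*sigma**2)) / (2*np.pi*sigma**2)
g /= g.sum() * dx**2                           # renormalize on the grid

u, v, w = 1.0, np.zeros((N, N)), np.zeros((N, N))   # everyone starts at home

def lap(f):                                    # 5-point periodic Laplacian
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0)
            + np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4*f) / dx**2

def div_vC(f):                                 # centered div of f*C, C constant
    return (C[0]*(np.roll(f, -1, 0) - np.roll(f, 1, 0))
            + C[1]*(np.roll(f, -1, 1) - np.roll(f, 1, 1))) / (2*dx)

dt, T = 1e-4, 1.0                              # simulate one day
for _ in range(int(T/dt)):
    du = chi * w.sum()*dx**2 - gamma*u
    dv = eps2*lap(v) - div_vC(v) - alpha*v + gamma*g*u
    dw = alpha*v - chi*w
    u, v, w = u + dt*du, v + dt*dv, w + dt*dw

# total mass attached to the home y stays constant (compare with (2.4) below)
print(u + (v + w).sum()*dx**2)                 # ~ 1.0
```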
The density of individuals per house remains constant with time. That is \[n(y)=u_{y}(t)+\int_{\mathbb{R}^{2}}v_{y}(t,x)\mathrm{d}x+\int_{\mathbb{R}^{2}}w_{y}(t,x)\mathrm{d}x,\forall t\geq 0,\forall y\in\mathbb{R}^{2}, \tag{2.4}\] where \(n(y)\) is the density of homes in \(\mathbb{R}^{2}\). That is, for each subdomain \(\omega\subset\mathbb{R}^{2}\) \[\int_{\omega}n(y)dy,\] is the number of people having their home in the subdomain \(\omega\). The motion speed of individuals in a city depends on their home location \(y\). The distance to individuals' workplaces often depends on their home location in the city. For example, people living in the suburbs travel much longer than people living downtown. Therefore, the traveling speed \(\mathbf{C_{y}}(x)\) at \(x\in\mathbb{R}^{2}\) depends on the home location \(y\).

## 3 Model describing the motion of travelers

The convection term describes the tendency of individuals to move with the speed \(\mathbf{C_{y}}(x)\) at the location \(x\) when they started from the home located at the position \(y\). The diffusion describes a random movement around the tendency corresponding to the convection. In this model the displacement of individuals is described by \[\partial_{t}v_{y}(t,x) =\underbrace{\varepsilon^{2}\bigtriangleup_{x}v_{y}(t,x)}_{ \begin{subarray}{c}\text{Random}\\ \text{motion}\end{subarray}}-\underbrace{\nabla_{x}\cdot(v_{y}(t,x)\, \mathbf{C_{y}}(x))}_{\begin{subarray}{c}\text{Deterministic}\\ \text{movement}\\ \text{with speed $\mathbf{C}$}\end{subarray}}, \tag{3.1}\] where \(\varepsilon^{2}\geq 0\) is the diffusion constant (which corresponds to the standard deviation of the law of displacement after one day around the original location), and \(x\to\mathbf{C_{y}}(x)=\mathbf{C}(x,y)\in\mathbb{R}^{2}\) is a deterministic displacement speed at the location \(x\in\mathbb{R}^{2}\) for individuals having their home located at the position \(y\in\mathbb{R}^{2}\). In this section, we use semigroup theory to define the solution of (3.1). We refer to [1, 14, 16, 21, 22, 26, 27, 33, 38, 39, 43] for more results about semigroups generated by diffusive systems. The book of Lunardi provides a very detailed presentation for the case \(L^{p}\left(\mathbb{R}^{2}\right)\) (with \(1<p<\infty\)). Here, we consider the case \(p=1\).

### Purely diffusive model

In this section, we consider the equation (3.1) in the special case \(\mathbf{C_{y}}(x)\equiv 0\). That is, \[\left\{\begin{aligned} \partial_{t}v(t,x)&= \varepsilon^{2}\bigtriangleup_{x}v(t,x),\\ v(0,x)&=v_{0}(x)\in\mathrm{L}^{1}\left(\mathbb{R}^{2} \right).\end{aligned}\right. \tag{3.2}\] We consider the family of bounded linear operators \(\left\{T_{\varepsilon^{2}\bigtriangleup_{x}}(t)\right\}_{t\geq 0}\subset \mathcal{L}\left(\mathrm{L}^{1}\left(\mathbb{R}^{2}\right)\right)\) defined by \[T_{\varepsilon^{2}\bigtriangleup_{x}}(t)\left(v(.)\right)(x)=\left\{\begin{aligned} \int_{\mathbb{R}^{2}}K(t,x-z)v(z)dz,&\text{ for }t>0,\\ v(x),&\text{ for }t=0,\end{aligned}\right.\] with \[K(t,x)=\frac{1}{4\pi\varepsilon^{2}t}e^{-\frac{|x|^{2}}{4\varepsilon^{2}t}}.\] The family of bounded linear operators \(\left\{T_{\varepsilon^{2}\bigtriangleup_{x}}(t)\right\}_{t\geq 0}\subset \mathcal{L}\left(\mathrm{L}^{1}\left(\mathbb{R}^{2}\right)\right)\) is a strongly continuous semigroup on \(\mathrm{L}^{1}\left(\mathbb{R}^{2}\right)\). That is, 1. \(T_{\varepsilon^{2}\bigtriangleup_{x}}(0)=I\); 2. 
\(T_{\varepsilon^{2}\bigtriangleup_{x}}(t)T_{\varepsilon^{2}\bigtriangleup_{x }}(s)=T_{\varepsilon^{2}\bigtriangleup_{x}}(t+s),\forall t,s\geq 0\); 3. \(t\mapsto T_{\varepsilon^{2}\bigtriangleup_{x}}(t)u\) is continuous from \([0,+\infty)\) to \(\mathrm{L}^{1}\left(\mathbb{R}^{2}\right)\). Furthermore, \(\left\{T_{\varepsilon^{2}\bigtriangleup_{x}}(t)\right\}_{t\geq 0}\subset \mathcal{L}\left(\mathrm{L}^{1}\left(\mathbb{R}^{2}\right)\right)\) is a semigroup of contraction \[\left\|T_{\varepsilon^{2}\bigtriangleup_{x}}(t)\left(\phi\right)\right\|_{ \mathrm{L}^{1}\left(\mathbb{R}^{2}\right)}\leq\left\|\phi\right\|_{\mathrm{L} ^{1}\left(\mathbb{R}^{2}\right)},\forall t\geq 0,\forall\phi\in\mathrm{L}^{1} \left(\mathbb{R}^{2}\right),\] and \(\left\{T_{\varepsilon^{2}\bigtriangleup_{x}}(t)\right\}_{t\geq 0}\subset \mathcal{L}\left(\mathrm{L}^{1}\left(\mathbb{R}^{2}\right)\right)\) is a positive semigroup, that is \[T_{\varepsilon^{2}\bigtriangleup_{x}}(t)\bigg{(}\mathrm{L}^{1}_{+}\left( \mathbb{R}^{2}\right)\bigg{)}\subset\mathrm{L}^{1}_{+}\left(\mathbb{R}^{2} \right),\forall t\geq 0, \tag{3.3}\] and the total of mass of individuals in preserved \[\int_{\mathbb{R}^{2}}T_{\varepsilon^{2}\bigtriangleup_{x}}(t)\left(\phi \right)(x)\mathrm{d}x=\int_{\mathbb{R}^{2}}\phi(x)\,\mathrm{d}x,\forall t\geq 0, \forall\phi\in\mathrm{L}^{1}_{+}\left(\mathbb{R}^{2}\right). \tag{3.4}\] By using the semigroup property of \(\left\{T_{\varepsilon^{2}\triangle_{x}}(t)\right\}_{t\geq 0}\subset\mathcal{L} \left(\mathrm{L}^{1}\left(\mathbb{R}^{2}\right)\right)\), we deduces that the family of linear operator \[R_{\lambda}=\int_{0}^{\infty}e^{-\lambda t}T_{\varepsilon^{2}\triangle_{x}}(t) \mathrm{d}t,\forall\lambda\in\mathbb{C},\text{ with }\mathrm{Re}\,\lambda>0,\] is a pseudo resolvent. That is \[R_{\lambda}-R_{\mu}=\left(\mu-\lambda\right)R_{\lambda}R_{\mu},\forall\lambda, \mu\in\mathbb{C},\text{ with }\mathrm{Re}\,\lambda>0.\] From Lemma 2.2.13. in [27], we know that the null space \(\mathrm{N}\left(R_{\lambda}\right)\) and the range \(\mathrm{R}\left(R_{\lambda}\right)\) are independent of \(\lambda\in\mathbb{C}\) with \(\mathrm{Re}\,\lambda>0\), and the null space \(\mathrm{N}\left(R_{\lambda}\right)\) is closed in \(\mathrm{L}^{1}(\mathbb{R}^{2})\). Moreover, by using the strong continuity of the semigroup \(\left\{T_{\varepsilon^{2}\triangle_{x}}(t)\right\}_{t\geq 0}\subset\mathcal{L} \left(\mathrm{L}^{1}\left(\mathbb{R}^{2}\right)\right)\), one can prove that \[\lambda\,R_{\lambda}u\to u,\text{ as }\lambda\to+\infty,\] hence \[\mathrm{N}\left(R_{\lambda}\right)=\left\{0_{\mathrm{L}^{1}}\right\},\forall \lambda\in\mathbb{C},\text{ with }\mathrm{Re}\,\lambda>0.\] Consequenlty, it follows from [27, Proposition 2.2.14] that there exists a linear closed operator \(A:D(A)\subset\mathrm{L}^{1}\left(\mathbb{R}^{2}\right)\to\mathrm{L}^{1}\left( \mathbb{R}^{2}\right)\), such that \[R_{\lambda}=\left(\lambda I-A\right)^{-1},\forall\lambda\in\mathbb{C},\text{ with }\mathrm{Re}\,\lambda>0.\] Moreover \(A\) is the infinitesimal generator of \(\left\{T_{\varepsilon^{2}\triangle_{x}}(t)\right\}_{t\geq 0}\subset\mathcal{L} \left(\mathrm{L}^{1}\left(\mathbb{R}^{2}\right)\right)\). 
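The contraction, positivity (3.3), and mass-conservation (3.4) properties can be observed directly on the convolution formula defining \(T_{\varepsilon^{2}\bigtriangleup_{x}}(t)\). The following short sketch (Python with NumPy/SciPy; the grid, the evaluation time and the initial datum are arbitrary illustrative choices, and the code is not part of the paper) applies the kernel \(K(t,\cdot)\) to a nonnegative integrable datum by discrete convolution and checks these three properties numerically.

```python
import numpy as np
from scipy.signal import fftconvolve

eps2, t = 1.0, 0.01          # epsilon^2 and the evaluation time (arbitrary choices)
h = 0.02
x = np.arange(-2.0, 2.0 + h, h)
X1, X2 = np.meshgrid(x, x, indexing="ij")

# Heat kernel K(t,x) = exp(-|x|^2/(4 eps^2 t)) / (4 pi eps^2 t), centered at 0.
K = np.exp(-(X1**2 + X2**2) / (4.0 * eps2 * t)) / (4.0 * np.pi * eps2 * t)

# A nonnegative initial datum v0 in L^1 (indicator of a small square, for illustration).
v0 = ((np.abs(X1 - 0.3) < 0.2) & (np.abs(X2 + 0.1) < 0.2)).astype(float)

# T_{eps^2 Delta}(t) v0, approximated by a discrete convolution with the kernel.
vt = fftconvolve(v0, K, mode="same") * h**2

print(v0.sum() * h**2, vt.sum() * h**2)     # masses agree: property (3.4)
print(vt.min())                              # nonnegative up to FFT round-off: property (3.3)
print(np.abs(vt).sum() <= v0.sum() + 1e-8)   # L^1 contraction
```

Up to quadrature and FFT errors, the two printed masses coincide and the output stays nonnegative, as guaranteed by (3.3)–(3.4).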
That is \[D(A)=\left\{u\in\mathrm{L}^{1}\left(\mathbb{R}^{2}\right):\lim_{t\searrow 0} \frac{T_{\varepsilon^{2}\triangle_{x}}(t)u-u}{t}\text{ exists in }\mathrm{L}^{1}\left( \mathbb{R}^{2}\right)\right\},\] and \[Au=\lim_{t\searrow 0}\frac{T_{\varepsilon^{2}\triangle_{x}}(t)u-u}{t},\forall u \in D(A).\] To connect \(A\) and \(\varepsilon^{2}\triangle_{x}\), one can prove that \[\lim_{t\searrow 0}\frac{T_{\varepsilon^{2}\triangle_{x}}(t)u-u}{t}=\varepsilon ^{2}\triangle_{x}u,\forall u\in C_{c}^{2}\left(\mathbb{R}^{2}\right).\] where \(C_{c}^{2}\left(\mathbb{R}^{2}\right)\) is the space of \(C^{2}\) functions with compact support. It follows that \[C_{c}^{2}\left(\mathbb{R}^{2}\right)\subset D(A),\] and \[Au=\varepsilon^{2}\triangle_{x}u,\forall u\in C_{c}^{2}\left(\mathbb{R}^{2} \right).\] Since \(C_{c}^{2}\left(\mathbb{R}^{2}\right)\) is dense in \(\mathrm{L}^{1}\left(\mathbb{R}^{2}\right)\), it follows that the graph of \(A\) is the closure of the graph of \(\varepsilon^{2}\triangle_{x}\) considered a linear operator from \(C_{c}^{2}\left(\mathbb{R}^{2}\right)\) into \(\mathrm{L}^{1}\left(\mathbb{R}^{2}\right)\). **Remark 3.1**.: _In the above problem, the difficulty is to define the domain \(D(A)\) of \(A\) properly. This domain is not explicit in dimension \(2\), and the goal is to guarantee the invertibility of \(\lambda I-A\) from \(D(A)\) to \(\mathrm{L}^{1}\left(\mathbb{R}^{2}\right)\). The Proposition 8.1.3 p. 223 in the book Haase [21] gives_ \[W^{1,1}\left(\mathbb{R}^{2}\right)\subset D(A)\subset W^{2,1}\left(\mathbb{R}^ {2}\right).\] **Lemma 3.2**.: _The semigroup \(\left\{T_{\varepsilon^{2}\triangle_{x}}(t)\right\}_{t\geq 0}\subset\mathcal{L} \left(\mathrm{L}^{1}\left(\mathbb{R}^{2}\right)\right)\) is irreducible. That is, for each \(u\in L^{1}_{+}\left(\mathbb{R}^{2}\right)\) with \(u\neq 0\), and each \(\phi\in L^{\infty}_{+}\left(\mathbb{R}^{2}\right)\) with \(\phi\neq 0\),_ \[\int_{\mathbb{R}^{2}}\phi(x)T_{\varepsilon^{2}\triangle_{x}}(t)(u)(x)dx>0, \forall t>0.\] Proof.: Let \(u\in L^{1}_{+}\left(\mathbb{R}^{2}\right)\) with \(u\neq 0\), \(\phi\in L^{\infty}_{+}\left(\mathbb{R}^{2}\right)\) with \(\phi\neq 0\), and \(t>0\). By using Fubini theorem, we have \[\int_{\mathbb{R}^{2}}\phi(x)T_{\varepsilon^{2}\triangle_{x}}(t)(u)(x)dx\,= \int_{\mathbb{R}^{2}}\phi(x)\int_{\mathbb{R}^{2}}K(t,x-z)u(z)dzdx\] and since \[K(t,x)=\frac{1}{4\pi\varepsilon^{2}t}e^{-\frac{|x|^{2}}{4\varepsilon^{2}t}}.\] it follows that \(x\to\int_{\mathbb{R}^{2}}K(t,x-x)u(x)dx\) is continuous and strictly positive for each \(x\in\mathbb{R}^{2}\). The result follows. ### Purely convective model In this section, we consider the equation (3.1) the special case \(\varepsilon=0\). That is, \[\left\{\begin{aligned} \partial_{t}v(t,x)&=- \nabla_{x}\cdot\left(v(t,x)\,\mathbf{C_{y}}(x)\right),\\ v(0,x)&=v_{0}(x)\in\mathrm{L}^{1}\left(\mathbb{R}^{2} \right).\end{aligned}\right. \tag{3.5}\] To define the solutions integrated along the characteristics we make the following assumptions. **Assumption 3.3**.: _Let \(\mathbf{C}:\mathbb{R}^{2}\to\mathbb{R}^{2}\) be a maps. We assume that_ 1. _The map_ \(x\in\mathbb{R}^{2}\mapsto\mathbf{C}(x)\in\mathbb{R}^{2}\) _is uniformly continuous bounded;_ 2. _The map_ \(x\in\mathbb{R}^{2}\mapsto\mathbf{C}(x)\in\mathbb{R}^{2}\) _is supposed to be a_ \(C^{1}\) _function;_ 3. _For_ \(i=1,2\)_, the map_ \(x\mapsto\partial_{x_{i}}\mathbf{C}(x)\) _is bounded and uniformly continuous._ Assume that \(\mathbf{C_{y}}\) satisfies the above assumption. 
Then the map \(x\in\mathbb{R}^{2}\mapsto\mathbf{C_{y}}(x)\in\mathbb{R}^{2}\) is Lipschitz continuous, and the flow on \(\mathbb{R}^{2}\) generated by \[\left\{\begin{aligned} &\partial_{t}\Pi_{y}(t)z=\mathbf{C_{y}} \left(\Pi_{y}(t)z\right),\forall t\in\mathbb{R},\\ &\Pi_{y}(0)z=z\in\mathbb{R}^{2},\end{aligned}\right. \tag{3.6}\] is well defined. Moreover we have the following property. **Lemma 3.4**.: _Assume that \(\mathbf{C_{y}}\) satisfies Assumption 3.3.We have_ \[\det\partial_{z}\Pi_{y}(t)z=\exp\left(\int_{0}^{t}\nabla_{x}\cdot\mathbf{C_{y}} \left(\Pi_{y}(\sigma)z\right)\,\mathrm{d}\sigma\right),\forall t\geq 0, \tag{3.7}\] _and_ \[\det\partial_{z}\Pi_{y}(-t)z=\exp\left(-\int_{0}^{t}\nabla_{x}\cdot\mathbf{C_{y }}\left(\Pi_{y}(-\sigma)z\right)\,\mathrm{d}\sigma\right),\forall t\geq 0. \tag{3.8}\] Proof.: Define \(U(t):=\partial_{z}\Pi_{y}(t)z\in M_{n}\left(\mathbb{R}\right)\). We know that \[\frac{dU(t)}{dt}=\nabla_{x}\mathbf{C_{y}}\left(\Pi_{y}(t)z\right)U(t),\text{ and }U(0)=I.\] For any matrix-valued \(C^{1}\) function \(A:t\mapsto A(t)\) the Jacobi's formula reads \[\frac{d}{dt}\det A(t)=\det A(t)\operatorname{tr}(A^{-1}(t)\frac{dA(t)}{dt})\] and by using the property of the trace \(\operatorname{tr}\left(AB\right)=\operatorname{tr}\left(BA\right)\), we deduce that \[\frac{d}{dt}\det U(t)=\det U(t)\operatorname{tr}(\frac{dU(t)}{dt}\,U(t)^{-1}) =\det U(t)\operatorname{tr}(\nabla_{x}\mathbf{C_{y}}\left(\Pi_{y}(t)z\right))\] and the result follows from the fact that \[\operatorname{tr}(\nabla_{x}\mathbf{C_{y}}\left(\Pi_{y}(t)z\right))=\nabla_{ x}\cdot\mathbf{C_{y}}\left(\Pi_{y}(t)z\right).\] Consider now \(\widehat{\Pi}_{y}(t)z=\Pi_{y}(-t)z\). Then \[\left\{\begin{aligned} &\partial_{t}\widehat{\Pi}_{y}(t)z=- \mathbf{C_{y}}\left(\widehat{\Pi}_{y}(t)z\right),\forall t\in\mathbb{R},\\ &\widehat{\Pi}_{y}(0)z=z\in\mathbb{R}^{2}.\end{aligned}\right. \tag{3.9}\] Therefore (3.8) follows from (3.7). Assume first that the solution of (3.3) is \(C^{1}\). 
That is \[v\in C^{1}\left(\mathbb{R}\times\mathbb{R}^{2},\mathbb{R}\right).\] Then the right hand side of (3.3) can be expended, and (3.3) reads as \[\partial_{t}v(t,x)=-\mathbf{C_{y}}(x)\cdot\nabla_{x}v(t,x)-v(t,x)\,\nabla_{x} \cdot\mathbf{C_{y}}(x),\] where \(\nabla_{x}v(x)\) is the gradient of \(x\mapsto v(x)\) which is defined by \[\nabla_{x}\,v(x)=\left(\begin{aligned} &\partial_{x_{1}}v(x)\\ &\partial_{x_{2}}v(x)\end{aligned}\right).\] Moreover, we have \[\frac{d}{dt}v(t,\Pi_{y}(t)z) = \partial_{t}v(t,\Pi_{y}(t)z)+\nabla_{x}v(t,\Pi_{y}(t)z)\cdot \partial_{t}\Pi_{y}(t)z\] \[= -\mathbf{C_{y}}(\Pi_{y}(t)z)\cdot\nabla_{x}v(t,\Pi_{y}(t)z)-v(t, \Pi_{y}(t)z)\,\nabla_{x}\cdot\mathbf{C_{y}}(\Pi_{y}(t)z)\] \[+\nabla_{x}v(t,\Pi_{y}(t)z)\cdot\mathbf{C_{y}}\left(\Pi_{y}(t)z \right),\] and we obtain \[\frac{d}{dt}v(t,\Pi_{y}(t)z)=-v(t,\Pi_{y}(t)z)\,\nabla_{x}\cdot\mathbf{C_{y}}( \Pi_{y}(t)z)\] Therefore \[v(t,\Pi_{y}(t)z)=\exp\left(-\int_{0}^{t}\nabla_{x}\cdot\mathbf{C_{y}}(\Pi_{y} (\sigma)z)\,\,\mathrm{d}\sigma\right)v(0,z)\] by choosing \(z=\Pi_{y}(-t)x\) we obtain the following explicit formula for the solutions \[v(t,x)=\exp\left(-\int_{0}^{t}\nabla_{x}\cdot\mathbf{C_{y}}(\Pi_{y}(\sigma-t) x)d\sigma\right)v_{0}\left(\Pi_{y}(-t)x\right),\] or equivalently \[v(t,x)=\exp\left(-\int_{0}^{t}\nabla_{x}\cdot\mathbf{C_{y}}(\Pi_{y}(-\sigma)x )d\sigma\right)v_{0}\left(\Pi_{y}(-t)x\right).\] We consider the family of bounded linear operator \(\left\{T_{B_{y}}(t)\right\}_{t\geq 0}\subset\mathcal{L}\left(\mathrm{L}^{1} \left(\mathbb{R}^{2}\right)\right)\) defined by \[T_{B_{y}}(t)\left(v_{0}\right)(x)=\exp\left(-\int_{0}^{t}\nabla_{x}\cdot \mathbf{C_{y}}(\Pi_{y}(-\sigma)x)d\sigma\right)v_{0}\left(\Pi_{y}(-t)x\right). \tag{3.10}\] Similarly to the diffusion we also have the following result. **Lemma 3.5**.: _Assume that \(\mathbf{C_{y}}\) satisfies Assumption 3.3. There exists a closed linear operator \(B_{y}:D(B_{y})\in\mathrm{L}^{1}\left(\mathbb{R}^{2}\right)\to\mathrm{L}^{1} \left(\mathbb{R}^{2}\right)\) the infinitesimal generator of a strongly continuous semigroup \(\left\{T_{B_{y}}(t)\right\}_{t\geq 0}\subset\mathcal{L}\left(\mathrm{L}^{1} \left(\mathbb{R}^{2}\right)\right)\) of positive bounded linear operator on \(\mathrm{L}^{1}\left(\mathbb{R}^{2}\right)\) defined by (3.10)._ We observe that we have the following conservation of the number of individuals is preserved. That is, for each Borelian set \(\Omega\subset\mathbb{R}^{2}\), \[\int_{\Omega}T_{B_{y}}(t)\left(v_{0}\right)(x)=\int_{\Omega}\exp\left(-\int_{0 }^{t}\nabla_{x}\cdot\mathbf{C_{y}}(\Pi_{y}(-\sigma)x)d\sigma\right)v_{0}\left( \Pi_{y}(-t)x\right)dx\] and by using (3.8), we obtain \[\int_{\Omega}T_{B_{y}}(t)\left(v_{0}\right)(x)=\int_{\Omega}v_{0}\left(\Pi_{y }(-t)x\right)\det\partial_{z}\Pi_{y}(-t)xdx,\] therefore by making a change of variable \(z=\Pi_{y}(-t)x\), we obtain \[\int_{\Omega}T_{B_{y}}(t)\left(v_{0}\right)(x)=\int_{\Pi_{y}(-t)\Omega}v_{0} \left(z\right)dz,\forall t\geq 0.\] When \(\Omega=\mathbb{R}^{2}\), we deduce that the total mass of individuals is preserved. That is, \[\int_{\mathbb{R}^{2}}T_{B_{y}}(t)\left(v_{0}\right)(x)dx=\int_{\mathbb{R}^{2}} v_{0}\left(x\right)dx,\forall t\geq 0. 
\tag{3.11}\] By using the semi-explicitly formula (3.10) that \(\left\{T_{B_{y}}(t)\right\}_{t\geq 0}\subset\mathcal{L}\left(\mathrm{L}^{1} \left(\mathbb{R}^{2}\right)\right)\) is a strongly continuous semigroup on \(\mathrm{L}^{1}\left(\mathbb{R}^{2}\right)\), and \[\|T_{B_{y}}(t)v_{0}\|_{\mathrm{L}^{1}\left(\mathbb{R}^{2}\right)}=\|v_{0}\|_{ \mathrm{L}^{1}\left(\mathbb{R}^{2}\right)},\forall t\geq 0. \tag{3.12}\] Moreover, one has \[\lim_{t\searrow 0}\frac{T_{B_{y}}(t)v_{0}-v_{0}}{t}=-\nabla_{x}\cdot\left(v_{0 }(x)\,\mathbf{C_{y}}(x)\right),\forall v_{0}\in C^{1}\left(\mathbb{R}^{2} \right)\cap W^{1,1}\left(\mathbb{R}^{2}\right),\] where \[C^{1}\left(\mathbb{R}^{2}\right)\cap W^{1,1}\left(\mathbb{R}^{2}\right)=\left\{ v\in C^{1}\left(\mathbb{R}^{2}\right)\cap\mathrm{L}^{1}\left(\mathbb{R}^{2} \right):x\mapsto\partial_{x_{i}}v(x)\in\mathrm{L}^{1}\left(\mathbb{R}^{2} \right),\forall i=1,2\right\}.\] It follows that, \[C_{c}^{1}\left(\mathbb{R}^{2}\right)\subset C^{1}\left(\mathbb{R}^{2}\right) \cap W^{1,1}\left(\mathbb{R}^{2}\right)\subset D(B_{y}),\] where \(C_{c}^{1}\left(\mathbb{R}^{2}\right)\) is the space of \(C^{1}\) with compact support. Moreover, \[B_{y}v=-\nabla_{x}\cdot\left(v(x)\,\mathbf{C_{y}}(x)\right),\forall v\in C^{ 1}\left(\mathbb{R}^{2}\right)\cap W^{1,1}\left(\mathbb{R}^{2}\right),\] and since \(C_{c}^{1}\left(\mathbb{R}^{2}\right)\) is dense in \(\mathrm{L}^{1}\left(\mathbb{R}^{2}\right)\), it follows that the graph of \(B_{y}\) is the closure of the graph of \(v\mapsto-\nabla_{x}\cdot\left(v(x)\,\mathbf{C_{y}}(x)\right)\) considered a linear operator from \(C_{c}^{1}\left(\mathbb{R}^{2}\right)\) into \(\mathrm{L}^{1}\left(\mathbb{R}^{2}\right)\). ### Existence of mild solutions for the full problem with both diffusion and convection In this section, we consider the full equation (3.1) \[\left\{\begin{aligned} \partial_{t}v(t,x)&=\varepsilon^{2} \bigtriangleup_{x}v(t,x)-\nabla_{x}\cdot\left(v(t,x)\,\mathbf{C_{y}}(x) \right),\\ v(0,x)&=v_{0}(x)\in\mathrm{L}^{1}\left(\mathbb{R}^{2 }\right).\end{aligned}\right. \tag{3.13}\] By using the notations introduced in the previous sections, this problem rewrites as the following abstract Cauchy problem \[\left\{\begin{aligned} & v^{\prime}(t)=(A+B_{y})v(t),\text{ for }t \geq 0,\\ & v(0)=v_{0}\in\mathrm{L}^{1}\left(\mathbb{R}^{2}\right).\end{aligned}\right. \tag{3.14}\] In order to define the mild solutions of (3.13) as a continuous function \(t\in[0,\infty)\mapsto v(t)\in\mathrm{L}^{1}\left(\mathbb{R}^{2}\right),\) a mild solution \[v(t)=T_{\varepsilon^{2}\triangle_{x}}(t)v_{0}+\int_{0}^{t}T_{\varepsilon^{2} \triangle_{x}}(t-\sigma)B_{y}\,v(\sigma)\,\mathrm{d}\sigma. \tag{3.15}\] The existence of the solutions follows by considering the following system \[\left\{\begin{aligned} & v(t)=T_{\varepsilon^{2}\triangle_{x}}(t)v_{0 }+\int_{0}^{t}T_{\varepsilon^{2}\triangle_{x}}(t-\sigma)w(\sigma)\,\mathrm{d} \sigma,\\ & w(t)=B_{y}T_{\varepsilon^{2}\triangle_{x}}(t)v_{0}+\int_{0}^{t} B_{y}T_{\varepsilon^{2}\triangle_{x}}(t-\sigma)w(\sigma)\,\mathrm{d}\sigma.\end{aligned}\right. 
\tag{3.16}\] We observe that \[\nabla_{x}\cdot\left(w(t,x)\,\widehat{\mathbf{C}}_{\mathbf{y}}(x)\right)= \mathbf{C}_{y}(x)\cdot\nabla_{x}w(t,x)+w(t,x)\,\nabla_{x}\cdot\mathbf{C}_{ \mathbf{y}}(x), \tag{3.17}\] where \(\nabla_{x}w(t,x)\) is the gradient of \(x\mapsto w(t,x)\) which is defined by \[\nabla_{x}\,w(t,x)=\left(\begin{aligned} &\partial_{z_{1}}w(t,x)\\ &\partial_{z_{2}}w(t,x)\end{aligned}\right).\] **Lemma 3.6**.: _Assume that \(\mathbf{C}_{\mathbf{y}}\) satisfies Assumption 3.3. Let \(y\in\mathbb{R}^{2}\). There exists a constant \(\kappa>0\) such that for each \(u\in\mathrm{L}^{1}\left(\mathbb{R}^{2}\right)\),_ \[T_{\varepsilon^{2}\triangle_{z}}(t)u\subset C^{1}\left(\mathbb{R}^{2}\right) \cap W^{1,1}\left(\mathbb{R}^{2}\right)\subset D\left(B_{y}\right),\forall t>0, \tag{3.18}\] _and_ \[\|B_{y}T_{\varepsilon^{2}\triangle_{z}}(t)u\|_{\mathrm{L}^{1}\left(\mathbb{R}^ {2}\right)}\leq\kappa\left(\frac{1}{\sqrt{\varepsilon^{2}t}}+1\right)\|u\|_{ \mathrm{L}^{1}\left(\mathbb{R}^{2}\right)},\forall t>0. \tag{3.19}\] Proof.: We observe that \[K(t,x)=\frac{1}{4\pi\varepsilon^{2}t}e^{-\frac{x_{1}^{2}+x_{2}^{2}}{4 \varepsilon^{2}t}}=K_{1}(t,x_{1})K_{1}(t,x_{2}).\] where \[K_{1}(t,x)=\frac{1}{\sqrt{4\pi\varepsilon^{2}t}}e^{-\frac{x^{2}}{4\varepsilon ^{2}t}}\,.\] Moreover \[\int_{\mathbb{R}}|\partial_{x}K_{1}(t,x)|dx=\frac{2}{\sqrt{4\varepsilon^{2}t}} \int_{0}^{\infty}\frac{2x}{4\varepsilon^{2}t}e^{-\frac{x^{2}}{4\varepsilon^{2 }t}}dx=\frac{1}{\sqrt{\varepsilon^{2}t}}.\] We observe that \[\partial_{x_{1}}T_{\varepsilon^{2}\triangle_{x}}(t)\left(u\right)(x_{1},x_{2})= \int_{\mathbb{R}}\partial_{x}K_{1}(t,x_{1}-\sigma_{1})\int_{\mathbb{R}}K_{1}(t,x _{2}-\sigma_{2})u(\sigma_{1},\sigma_{2})d\sigma_{2}d\sigma_{1},\] and it follows \[\|\partial_{x_{1}}T_{\varepsilon^{2}\triangle_{x}}(t)\left(u \right)\|_{\mathrm{L}^{1}(\mathbb{R}^{2})} \leq\int_{\mathbb{R}}|\partial_{x}K_{1}(t,x)|dx\] \[\times\int_{\mathbb{R}}\int_{\mathbb{R}}K_{1}(t,x_{2}-\sigma_{2} )u(\sigma_{1},\sigma_{2})d\sigma_{2}d\sigma_{1},\] and the proof is completed. By using the previous we deduce the following result. **Proposition 3.7**.: _Assume that \(\mathbf{C_{y}}\) satisfies Assumption 3.3. We have_ \[D\left(B_{y}\right)\subset D(A), \tag{3.20}\] _and_ \[\|B_{y}\left(\lambda I-A\right)^{-1}\|_{\mathcal{L}(\mathrm{L}^{1}(\mathbb{R }^{2}))}\leq\int_{0}^{\infty}e^{-\lambda t}\kappa\left(\frac{1}{\sqrt{ \varepsilon^{2}t}}+1\right)\,\mathrm{d}t,\forall\lambda>0. 
\tag{3.21}\] Proof.: Let \(\lambda>0.\) We have \[D(A)=\left(\lambda I-A\right)^{-1}\mathrm{L}^{1}\left(\mathbb{R}^{2}\right),\] and \[\left(\lambda I-A\right)^{-1}u=\int_{0}^{\infty}e^{-\lambda t}T_{\varepsilon^ {2}\triangle_{z}}(t)u\,\mathrm{d}t,\forall u\in\mathrm{L}^{1}\left(\mathbb{R} ^{2}\right).\] Since \(B_{y}\) is a closed linear operator, we have \[B_{y}\left(\lambda I-A\right)^{-1}u=\int_{0}^{\infty}e^{-\lambda t}B_{y}T_{ \varepsilon^{2}\triangle_{z}}(t)u\,\mathrm{d}t,\forall u\in\mathrm{L}^{1} \left(\mathbb{R}^{2}\right),\] and by (3.19) we deduce that the right hand-side of the above equality is integrable, and (3.20) follows, and \[\|B_{y}\left(\lambda I-A\right)^{-1}u\|_{\mathrm{L}^{1}(\mathbb{R}^{2})}\leq \int_{0}^{\infty}e^{-\lambda t}\kappa\left(\frac{1}{\sqrt{\varepsilon^{2}t}}+1 \right)\,\mathrm{d}t\|u\|_{\mathrm{L}^{1}(\mathbb{R}^{2})}.\] Since \(D(B_{y})\subset D(A),\) we can \((A+B_{y}):D(A)\subset\mathrm{L}^{1}\left(\mathbb{R}^{2}\right)\to\mathrm{L}^{1 }\left(\mathbb{R}^{2}\right)\) is well defined by \[\left(A+B_{y}\right)u=Au+B_{y}u,\forall u\in D(A).\] **Definition 3.8**.: _We will say that a continuous map \(u\in C\left(\left[0,\infty\right),\mathrm{L}^{1}\left(\mathbb{R}^{2}\right)\right)\) is a **mild solution** of (3.14) if and only if_ \[\int_{0}^{t}v(s)ds\in D(A),\forall t\geq 0,\] _and_ \[v(t)=v_{0}+\left(A+B_{y}\right)\int_{0}^{t}v(s)ds.\] We observe that \[K\left(\alpha\right)=\kappa\int_{0}^{\infty}e^{-\alpha\sigma}\left(\frac{1}{ \sqrt{\varepsilon^{2}}t}+1\right)\,\mathrm{d}\sigma<\infty,\forall\alpha>0,\] and \[\lim_{\alpha\rightarrow+\infty}K\left(\alpha\right)=0.\] We consider the weighted space of integrable function \(L_{\alpha}^{1}\left(\left(0,\infty\right);\mathrm{L}^{1}\left(\mathbb{R}^{2} \right)\right)\) which is the space of Bochner measurable function \(t\mapsto f(t)\) from \(\left(0,\infty\right)\) to \(\mathrm{L}^{1}\left(\mathbb{R}^{2}\right)\) satisfying \[\int_{0}^{\infty}e^{-\alpha t}\|f(t)\|_{\mathrm{L}^{1}\left(\mathbb{R}^{2} \right)}\mathrm{d}t<+\infty.\] Then \(L_{\alpha}^{1}\left(\left(0,\infty\right);\mathrm{L}^{1}\left(\mathbb{R}^{2} \right)\right)\) is a Banach space endowed with the norm \[\|f\|_{L_{\alpha}^{1}}=\int_{0}^{\infty}e^{-\alpha t}\|f(t)\|_{\mathrm{L}^{1} \left(\mathbb{R}^{2}\right)}\mathrm{d}t.\] Let \(\alpha_{0}>0\) such that \(K\left(\alpha_{0}\right)<1.\) Then for each \(v_{0}\in\mathrm{L}^{1}\left(\mathbb{R}^{2}\right)\), by applying the Banach fixed theorem, we deduce that there exists a unique solution \(w\in L_{\alpha_{0}}^{1}\left(\left(0,\infty\right);\mathrm{L}^{1}\left( \mathbb{R}^{2}\right)\right)\) satisfying the fixed point problem \[w(t)=B_{y}T_{\varepsilon^{2}\triangle_{x}}(t)v_{0}+\int_{0}^{t}B_{y}T_{ \varepsilon^{2}\triangle_{x}}(t-\sigma)w(\sigma)\,\mathrm{d}\sigma. \tag{3.22}\] By using the same arguments as in Ducrot, Magal, and Prevost [13, Theorem 4.8], we obtain the following result. **Theorem 3.9**.: _Assume that \(\mathbf{C_{y}}\) satisfies Assumption 3.3. Let \(\alpha_{0}>0\) such that \(K\left(\alpha_{0}\right)<1.\) The linear operator \(\left(A+B_{y}\right):D(A)\subset\mathrm{L}^{1}\left(\mathbb{R}^{2}\right) \rightarrow\mathrm{L}^{1}\left(\mathbb{R}^{2}\right)\) is the infinitesimal generator of an analytic semigroup. 
Moreover, for each \(v_{0}\in\mathrm{L}^{1}\left(\mathbb{R}^{2}\right)\), the Cauchy problem (3.14) admits a unique mild solution \(t\to T_{A+B_{y}}(t)v_{0}.\) Furthermore, the map \(t\to v(t)=T_{A+B_{y}}(t)v_{0}\) satisfies_ \[v(t)=T_{\varepsilon^{2}\triangle_{x}}(t)v_{0}+\int_{0}^{t}T_{\varepsilon^{2} \triangle_{x}}(t-\sigma)w(\sigma)\,\mathrm{d}\sigma,\forall t\geq 0,\] _where \(w\in L^{1}_{\alpha_{0}}\left(\left(0,\infty\right);\mathrm{L}^{1}\left(\mathbb{R}^ {2}\right)\right)\) is the unique solution the fixed point problem (3.22)._ Let \(\lambda\geq\alpha_{0}\). Since \(B_{y}\) is a closed linear operator, we have \[B_{y}\left(\lambda I-A\right)^{-1}=\int_{0}^{\infty}e^{-\lambda t}B_{y}T_{ \varepsilon^{2}\triangle_{x}}(t)dt,\] and \[\left\|B_{y}\left(\lambda I-A\right)^{-1}\right\|_{\mathcal{L}(L^{1}(\mathbb{ R}^{2}))}\leq\kappa\int_{0}^{\infty}e^{-\lambda\sigma}\left(\frac{1}{\sqrt{ \varepsilon^{2}t}}+1\right)\,\mathrm{d}\sigma=K\left(\lambda\right)\leq K \left(\alpha_{0}\right)<1.\] Let \(u\in D(A)\) and \(v\in\mathrm{L}^{1}\left(\mathbb{R}^{2}\right)\). We have \[\left(\lambda I-A-B_{y}\right)u=v \Leftrightarrow\,\left[I-B_{y}\left(\lambda I-A\right)^{-1} \right]\left(\lambda I-A\right)u=v\] \[\Leftrightarrow\,u=\left(\lambda I-A\right)^{-1}\left[I-B_{y} \left(\lambda I-A\right)^{-1}\right]^{-1}v\] We obtain the following lemma. **Lemma 3.10**.: _Let Assumption 3.3 be satisfied. We have_ \[\left(\alpha_{0},+\infty\right)\subset\rho(A+B_{y}),\] _the resolvent set of \(A+B_{y}\), and for each \(\lambda>\alpha_{0}\),_ \[\left(\lambda I-A-B_{y}\right)u=v\Leftrightarrow u=\left(\lambda I-A\right)^{ -1}\sum_{k\geq 0}\left[B_{y}\left(\lambda I-A\right)^{-1}\right]^{k}v.\] ### Positivity of the solutions for the full problem with both diffusion and convection In this section, we reconsider the positivity of the solutions by using only abstract argument. Such a problem was study by Protter and Weinberger [35] by using maximum principle. Here we use the fact that \(A\) and \(B_{y}\) are both the infinitesimal generator of positive semi-groups, together with some suitable estimation on \(B_{y}T_{A}(t),\forall t>0\). Recall that the Hille-Yosida approximation of \(B_{y}\) is defined by \[B_{y}^{\lambda}=\lambda B_{y}\left(\lambda I-B_{y}\right)^{-1},\forall\lambda >0. \tag{3.23}\] Then we have \[B_{y}^{\lambda}=-\lambda I+\lambda^{2}\left(\lambda I-B_{y}\right)^{-1}, \forall\lambda>0. 
\tag{3.24}\] Recall that \[\lim_{\lambda\rightarrow+\infty}\lambda\left(\lambda I-B_{y}\right)^{-1}u=u, \,\forall u\in\mathrm{L}^{1}\left(\mathbb{R}^{2}\right),\] we deduce that \[\lim_{\lambda\rightarrow+\infty}B_{y}^{\lambda}u=B_{y}u,\,\forall u\in D \left(B_{y}\right).\] The idea of this section is to approximate the problem (3.15) \[v(t)=T_{\varepsilon^{2}\triangle_{x}}(t)v_{0}+\int_{0}^{t}T_{\varepsilon^{2} \triangle_{x}}(t-\sigma)B_{y}v(\sigma)\,\mathrm{d}\sigma.\] by using the Hille-Yosida approximation of \(B_{y}.\) That is, \[v_{\lambda}(t)=T_{\varepsilon^{2}\triangle_{x}}(t)v_{0}+\int_{0}^{t}T_{ \varepsilon^{2}\triangle_{x}}(t-\sigma)B_{y}^{\lambda}v_{\lambda}(\sigma)\, \mathrm{d}\sigma.\] #### 3.4.1 Convergence of the approximation Let \(v_{0}\in D(A).\) Define \[w_{\lambda}(t)=B_{y}^{\lambda}v_{\lambda}(t),\] which satisfies \[w_{\lambda}(t)=B_{y}^{\lambda}T_{\varepsilon^{2}\triangle_{x}}(t)v_{0}+\int_{ 0}^{t}B_{y}^{\lambda}T_{\varepsilon^{2}\triangle_{x}}(t-\sigma)w_{\lambda}( \sigma)\,\mathrm{d}\sigma.\] By computing the difference between the above equation and (3.22), we obtain \[\begin{split} w_{\lambda}(t)-w(t)&=\left[\lambda \left(\lambda I-B_{y}\right)^{-1}-I\right]B_{y}T_{\varepsilon^{2}\triangle_{x }}(t)v_{0}\\ &+\int_{0}^{t}\left[\lambda\left(\lambda I-B_{y}\right)^{-1}-I \right]B_{y}T_{\varepsilon^{2}\triangle_{x}}(t-\sigma)w(\sigma)\,\mathrm{d} \sigma\\ &+\int_{0}^{t}\lambda\left(\lambda I-B_{y}\right)^{-1}B_{y}T_{ \varepsilon^{2}\triangle_{x}}(t-\sigma)\left(w_{\lambda}(\sigma)-w(\sigma) \right)\,\mathrm{d}\sigma,\end{split} \tag{3.25}\] Let \(\tau>0.\) By using the fact that \(t\to B_{y}T_{\varepsilon^{2}\triangle_{x}}(t)w_{0}\) maps bounded interval \([0,\tau]\) into a compact subset of \(\mathrm{L}^{1}\left(\mathbb{R}^{2}\right)\), and \((t,\sigma)\to B_{y}T_{\varepsilon^{2}\triangle_{x}}(t-\sigma)w(\sigma)\) maps bounded subsets \(\left\{(t,\sigma)\in[0,\tau]:t\geq\sigma\right\}\) into a compact subset of \(\mathrm{L}^{1}\left(\mathbb{R}^{2}\right)\), we deduce that \[\lim_{\lambda\rightarrow+\infty}\sup_{t\in[0,\tau]}\left\|\left[\lambda\left( \lambda I-B_{y}\right)^{-1}-I\right]B_{y}T_{\varepsilon^{2}\triangle_{x}}(t) w_{0}\right\|_{\mathrm{L}^{1}(\mathbb{R}^{2})}=0, \tag{3.26}\] and \[\lim_{\lambda\rightarrow+\infty}\sup_{t\in[0,\tau]}\left\|\int_{0}^{t}\left[ \lambda\left(\lambda I-B_{y}\right)^{-1}-I\right]B_{y}T_{\varepsilon^{2} \triangle_{x}}(t-\sigma)w(\sigma)\,\mathrm{d}\sigma\right\|_{\mathrm{L}^{1}( \mathbb{R}^{2})}=0. 
\tag{3.27}\] Moreover we have \[\left\|\lambda\left(\lambda I-B_{y}\right)^{-1}\right\|_{\mathcal{L}(\mathrm{ L}^{1}(\mathbb{R}^{2}))}\leq 1,\forall\lambda>0,\] hence \[\begin{split}\|\int_{0}^{t}&\lambda\left(\lambda I-B_{y} \right)^{-1}B_{y}T_{\varepsilon^{2}\triangle_{x}}(t-\sigma)\left(w_{\lambda}( \sigma)-w(\sigma)\right)\,\mathrm{d}\sigma\|_{\mathrm{L}^{1}(\mathbb{R}^{2})} \\ &\leq\kappa\int_{0}^{\tau}\left(\frac{1}{\sqrt{\varepsilon^{2} \sigma}}+1\right)\,\mathrm{d}\sigma\sup_{\sigma\in[0,\tau]}\|w_{\lambda}( \sigma)-w(\sigma)\|_{\mathrm{L}^{1}(\mathbb{R}^{2})}.\end{split} \tag{3.28}\] **Lemma 3.11**.: _Assume that \(\mathbf{C_{y}}\) satisfies Assumption 3.3.Let \(\tau>0\) small enough to satisfy_ \[\kappa\int_{0}^{\tau}\left(\frac{1}{\sqrt{\varepsilon^{2}\sigma}}+1\right)\, \mathrm{d}\sigma<1.\] _Then for each \(v_{0}\in D(A)\), and each \(\tau>0\), we have_ \[\lim_{\lambda\to+\infty}\sup_{t\in[0,\tau]}\|v_{\lambda}(t)-v(t)\|_{\mathrm{L }^{1}(\mathbb{R}^{2})}=0.\] #### 3.4.2 Positivity By using (3.24), we deduce that \[v_{\lambda}(t)=T_{\varepsilon^{2}\triangle_{x}-\lambda I}(t)v_{0}+\int_{0}^{t }T_{\varepsilon^{2}\triangle_{x}-\lambda I}(t-\sigma)\lambda^{2}\left(\lambda I -B_{y}\right)^{-1}v_{\lambda}(\sigma)\,\mathrm{d}\sigma, \tag{3.29}\] where \[T_{\varepsilon^{2}\triangle_{x}-\lambda I}(t)=e^{-\lambda t}T_{\varepsilon^{2 }\triangle_{x}}(t).\] If \(u_{0}\in D(A)\cap\mathrm{L}^{1}_{+}\left(\mathbb{R}^{2}\right)\), since \(\left(\lambda I-B_{y}\right)^{-1}\) is a positive bounded linear operator, we deduce that \[v_{\lambda}(t)\geq 0,\forall t\geq 0. \tag{3.30}\] To obtain the positivity is sufficient to use the fact that \(D(A)\cap\mathrm{L}^{1}_{+}\left(\mathbb{R}^{2}\right)\) is dense in \(\mathrm{L}^{1}_{+}\left(\mathbb{R}^{2}\right)\), which follows from the following observation \[\lambda\left(\lambda I-A\right)^{-1}v_{0}\in D(A)\cap\mathrm{L}^{1}_{+}\left( \mathbb{R}^{2}\right),\forall v_{0}\in\mathrm{L}^{1}_{+}\left(\mathbb{R}^{2} \right),\forall\lambda>0,\] and \[\lim_{\lambda\to\infty}\lambda\left(\lambda I-A\right)^{-1}v_{0}=v_{0},\forall v _{0}\in\mathrm{L}^{1}\left(\mathbb{R}^{2}\right).\] By using Lemma 3.11, we obtain the following theorem. **Theorem 3.12** (Positivity).: _Assume that \(\mathbf{C_{y}}\) satisfies Assumption 3.3. For each \(v_{0}\in\mathrm{L}^{1}_{+}\left(\mathbb{R}^{2}\right),\) the solution of Cauchy problem (3.14) non-negative. That is_ \[T_{A+B_{y}}(t)v_{0}\geq 0,\forall t\geq 0. \tag{3.31}\] As a consequence of Theorem 3.12, we obtain an abstract proof of the result of Protter-Weinberger [35]. **Corollary 3.13**.: _Assume that \(\mathbf{C}:\mathbb{R}^{2}\to\mathbb{R}^{2}\) satisfies Assumption 3.3. Let \(\chi:\mathbb{R}^{2}\to\mathbb{R}\) be bounded and uniformly continuous map. Consider the system_ \[\left\{\begin{aligned} \partial_{t}v(t,x)&=\varepsilon^{2} \bigtriangleup_{x}v(t,x)\\ &\quad-\mathbf{C}(x)_{1}\,\partial_{x_{1}}v(t,x)-\mathbf{C}(x)_{ 2}\,\partial_{x_{2}}v(t,x)\\ &\quad+\chi(x)v(t,x),\\ v(0,x)&=v_{0}(x)\in\mathrm{L}_{+}^{1}\left(\mathbb{R}^ {2}\right).\end{aligned}\right. 
\tag{3.32}\] _Then the system (3.32) has a unique non-negative mild solution._ Proof.: It is sufficient to observe that the system (3.32) is equivalent to \[\left\{\begin{aligned} \partial_{t}v(t,x)&= \varepsilon^{2}\bigtriangleup_{x}v(t,x)-\nabla_{x}\cdot\left(v(t,x)\,\mathbf{C }(x)\right)\\ &+\left(\chi(x)+\partial_{x_{1}}\mathbf{C}(x)_{1}+\partial_{x_{2} }\mathbf{C}(x)_{2}\right)v(t,x),\\ v(0,x)&=v_{0}(x)\in\mathrm{L}_{+}^{1}\left(\mathbb{R}^ {2}\right),\end{aligned}\right.\] and the result follows from Theorem 3.12, and by using the variation of constant formula for \(\lambda>0\) large enough \[v(t)=T_{A+B-\lambda I}(t)v_{0}+\int_{0}^{t}T_{A+B-\lambda I}(t-\sigma)\left(L +\lambda I\right)v\left(\sigma\right)\,\mathrm{d}\sigma,\] where \(L\) is the multiplicative operator \[Lv(x)=\left(\chi(x)+\partial_{x_{1}}\mathbf{C}(x)_{1}+\partial_{x_{2}} \mathbf{C}(x)_{2}\right)v(x).\] We could obtain a stronger positivity result of \[v(t,x)=T_{A+B_{y}}(t)(v_{0})(x)\] by using the strong maximum principle for a parabolic equation in the book of Gilbarg and Trudinger [17]. Alternatively, we could have used the Harnack inequality for second-order parabolic equations obtained by Ignatova, Kukavica, and Ryzhik [23] to prove the strict positivity of the solution for all \(t>0\) (by contradiction). But the simple arguments used above are sufficient to establish the convergence result of the entire system. #### Conservation of the total mass of individuals Moreover by using again the formula (3.29), we obtain \[\int_{\mathbb{R}^{2}}v_{\lambda}(t,x)dx=e^{-\lambda t}\int_{\mathbb{R}^{2}}v_ {0}(x)dx+\int_{0}^{t}e^{-\lambda(t-s)}\lambda\int_{\mathbb{R}^{2}}v_{\lambda} (s,x)dxds,\] that is \[\frac{d}{dt}\int_{\mathbb{R}^{2}}v_{\lambda}(t,x)dx=-\lambda\int_{\mathbb{R}^{2}}v _{\lambda}(t,x)dx+\lambda\int_{\mathbb{R}^{2}}v_{\lambda}(s,x)dx=0,\] therefore \[\int_{\mathbb{R}^{2}}v_{\lambda}(t,x)dx=\int_{\mathbb{R}^{2}}v_{0}(x)dx,\forall t \geq 0.\] By using Lemma 3.11, we obtain the following theorem. **Theorem 3.14** (Conservation of the total mass of individuals).: _Assume that \(\mathbf{C_{y}}\) satisfies Assumption 3.3. For each \(v_{0}\in\mathrm{L}^{1}_{+}\left(\mathbb{R}^{2}\right)\), the Cauchy problem (3.14) conserves of the total mass of individuals. That is_ \[\int_{\mathbb{R}^{2}}v(t,x)dx=\int_{\mathbb{R}^{2}}v_{0}(x)dx,\forall t\geq 0,\] _with_ \[v(t)=T_{A+B_{y}}(t)v_{0},\forall t\geq 0.\] ## 4 Asymptotic behavior of the return-to-home model In this section, for simplicity, we drop the subscript \(y\) notation, and we consider the entire system \[\left\{\begin{array}{l}u^{\prime}(t)=\chi\int_{\mathbb{R}^{2}}w(t,x)\mathrm{ d}x-\gamma\,u(t),\\ v^{\prime}(t,x)=\left(A_{y}+B_{y}\right)v(t,x)-\alpha\,v(t,x)+\gamma\,g(x-y) \,u(t),\\ w^{\prime}(t,x)=\alpha\,v(t,x)-\chi\,w(t,x),\end{array}\right. \tag{4.1}\] with initial distribution \[u(0)=u_{0}\in\mathbb{R}_{+},v(0)=v_{0}\in L^{1}_{+}(\mathbb{R}^{2}),\text{ and }w(0)=w_{0}\in L^{1}_{+}(\mathbb{R}^{2}). 
\tag{4.2}\] ### Abstract Cauchy problem We consider the space \[X=\mathbb{R}\times\mathrm{L}^{1}\left(\mathbb{R}^{2}\right)\times\mathrm{L}^ {1}\left(\mathbb{R}^{2}\right),\] which is a Banach space endowed with the standard produce norm \[\|(u,v,w)\|=|u|+\|v\|_{\mathrm{L}^{1}(\mathbb{R}^{2})}+\|w\|_{\mathrm{L}^{1}( \mathbb{R}^{2})}.\] We consider the positive cone of \(X\) \[X_{+}=\mathbb{R}_{+}\times L^{1}_{+}\left(\mathbb{R}^{2}\right)\times L^{1}_ {+}\left(\mathbb{R}^{2}\right).\] The system (2.1) we can rewritten as an abstract Cauchy problem \[\left(\begin{array}{l}u^{\prime}(t)\\ v^{\prime}(t)\\ w^{\prime}(t)\end{array}\right)=\left(\mathcal{A}_{y}+\mathcal{C}_{y}\right) \left(\begin{array}{l}u(t)\\ v(t)\\ w(t)\end{array}\right),\text{ for }t>0, \tag{4.3}\] with initial value \[\left(\begin{array}{c}u(0)\\ v(0)\\ w(0)\end{array}\right)=\left(\begin{array}{c}u_{0}\\ v_{0}\\ w_{0}\end{array}\right)\in\mathbb{R}\times\mathrm{L}^{1}\left(\mathbb{R}^{2} \right)\times\mathrm{L}^{1}\left(\mathbb{R}^{2}\right). \tag{4.4}\] The linear operator \(\mathcal{A}_{y}:D\left(\mathcal{A}_{y}\right)\subset X\to X\) is defined by \[\mathcal{A}_{y}\left(\begin{array}{c}u\\ v\\ w\end{array}\right)=\left(\begin{array}{c}-\gamma\,u\\ \left(A+B_{y}\right)v-\alpha\,v\\ \alpha v-\chi\,w\end{array}\right)\] with the domain \[D\left(\mathcal{A}_{y}\right)=\mathbb{R}\times D(A)\times\mathrm{L}^{1}\left( \mathbb{R}^{2}\right).\] The semigroup generated by \(\mathcal{A}_{y}\) is explicitly given by \[T_{\mathcal{A}_{y}}(t)\left(\begin{array}{c}u_{0}\\ v_{0}\\ w_{0}\end{array}\right)=\left(\begin{array}{c}e^{-\gamma t}\,u_{0}\\ T_{A+B_{y}-\alpha I}(t)v_{0}\\ e^{-\chi t}w_{0}+\int_{0}^{t}e^{-\chi(t-s)}\alpha T_{A+B_{y}-\alpha I}(s)v_{0} \,\mathrm{d}s\end{array}\right). \tag{4.5}\] where \[T_{A+B_{y}-\alpha I}(t)=e^{-\alpha t}T_{A+B_{y}}(t),\forall t\geq 0.\] We also consider the compact bounded linear operator \(\mathcal{C}_{y}:X\to X,\) \[\mathcal{C}_{y}\left(\begin{array}{c}u\\ v\\ w\end{array}\right)=\left(\begin{array}{c}\chi\int_{\mathbb{R}^{2}}w(x) \mathrm{d}x\\ +\gamma g(x-y)u\\ 0_{\mathrm{L}^{1}(\mathbb{R}^{2})}\end{array}\right).\] **Theorem 4.1** (Existence and uniqueness of solutions).: _Assume that \(\mathbf{C_{y}}\) satisfies Assumption 3.3. Then for each \(y\in\mathbb{R}^{2}\), the system (2.1) generates a strongly continuous semigroup \(\left\{T_{\mathcal{A}_{y}+\mathcal{C}_{y}}(t)\right\}_{t\geq 0}\) of bounded linear operator on \(X\). We recall that_ \[\left(\begin{array}{c}u(t)\\ v(t)\\ w(t)\end{array}\right)=T_{\mathcal{A}_{y}+\mathcal{C}_{y}}(t)\left(\begin{array} []{c}u_{0}\\ v_{0}\\ w_{0}\end{array}\right)\] _is the unique mild solution satisfies the variation of constant formula_ \[\left(\begin{array}{c}u(t)\\ v(t)\\ w(t)\end{array}\right)=T_{\mathcal{A}_{y}}(t)\left(\begin{array}{c}u_{0}\\ v_{0}\\ w_{0}\end{array}\right)+\int_{0}^{t}T_{\mathcal{A}_{y}}(t-\sigma)\,\mathcal{C}_{y }\left(\begin{array}{c}u(\sigma)\\ v(\sigma)\\ w(\sigma)\end{array}\right)\,\mathrm{d}\sigma,\forall t\geq 0, \tag{4.6}\] _or equivalently_ \[\left\{\begin{array}{ll}u(t)=&\!\!e^{-\gamma t}u_{0}+\int_{0}^{t}e^{-\gamma(t- \sigma)}\chi\int_{\mathbb{R}^{2}}w(\sigma,x)\mathrm{d}x\,\mathrm{d}\sigma,\\ v(t)=&\!\!T_{A+B_{y}-\alpha I}(t)v_{0}+\int_{0}^{t}T_{A+B_{y}-\alpha I}(t- \sigma)\gamma g(.-y)u(\sigma)\,\mathrm{d}\sigma,\\ w(t)=&\!\!e^{-\chi t}w_{0}+\int_{0}^{t}e^{-\chi(t-s)}\alpha v(t,\sigma)\, \mathrm{d}\sigma.\end{array}\right. \tag{4.7}\] By Theorem 3.12, we have the following result. 
**Theorem 4.2** (Positivity).: _The semigroup \(\left\{T_{\mathcal{A}_{y}+\mathcal{B}_{y}}(t)\right\}_{t\geq 0}\) is positive. That is_ \[T_{\mathcal{A}_{y}+\mathcal{B}_{y}}(t)X_{+}\subset X_{+},\forall t\geq 0. \tag{4.8}\] By using Theorem 3.14 we have the following result. **Theorem 4.3** (Conservation of the total mass).: _Define_ \[V(t)=\int_{\mathbb{R}^{2}}v(t,x)\,\mathrm{d}x,\text{ and }W(t)=\int_{ \mathbb{R}^{2}}w(t,x)\,\mathrm{d}x.\] _Then \(t\mapsto(u(t),V(t),W(t))\) satisfies the system of linear ordinary differential equations_ \[\left\{\begin{array}{ll}u^{\prime}(t)=&\chi W(t)-\gamma u(t),\\ V^{\prime}(t)=&\!\!-\alpha V(t)+\gamma u(t),\\ W^{\prime}(t)=&\!\!\alpha V(t)-\chi W(t),\end{array}\right. \tag{4.9}\] _with initial distribution_ \[u(0)=u_{0},V(0)=\int_{\mathbb{R}^{2}}v_{0}(x)\,\mathrm{d}x,\text{ and }W(0)=\int_{\mathbb{R}^{2}}w_{0}(x)\,\mathrm{d}x. \tag{4.10}\] _The density of individuals per house remains constant with time. That is_ \[n(y)=u(t)+\int_{\mathbb{R}^{2}}v(t,x)\mathrm{d}x+\int_{\mathbb{R}^{2}}w(t,x) \mathrm{d}x,\forall t\geq 0,\forall y\in\mathbb{R}^{2}, \tag{4.11}\] _where \(n(y)\) is the density of home in \(\mathbb{R}^{2}\)._ **Definition 4.4**.: Let \(T\in\mathcal{L}\left(X\right).\) Then the _essential semi-norm_\(\left\|T\right\|_{\mathrm{ess}}\) of \(T\) is defined by \[\left\|T\right\|_{\mathrm{ess}}=\kappa\left(T\left(B_{X}(0,1)\right)\right),\] where \(B_{X}\left(0,1\right)=\left\{x\in X:\left\|x\right\|_{X}\leq 1\right\},\) and for each bounded set \(B\subset X,\) \[\kappa\left(B\right)=\inf\left\{\varepsilon>0:B\text{ can be covered by a finite number of balls of radius }\leq\varepsilon\right\}\] is the _Kuratovsky measure of non-compactness_. By using, Webb [44] (see Magal and Thieme [28, Theorem 3.2.] for more results), we deduce that \[x\mapsto\int_{0}^{t}T_{\mathcal{A}_{y}}(t-\sigma)\mathcal{C}_{y}T_{\mathcal{A} _{y}+\mathcal{C}_{y}}(\sigma)x\,\mathrm{d}\sigma,\] is compact. Therefore, we obtain the following lemma. In the following lemma, we are using the essential growth rate of semigroup, we refer to Engel and Nagel [14], or Magal and Ruan [27] for more results on this topic. **Lemma 4.5**.: _Assume that \(\mathbf{C_{y}}\) satisfies Assumption 3.3. The essential growth rate_ \[\omega_{ess}\left(\mathcal{A}_{y}+\mathcal{C}_{y}\right):=\lim_{t\to\infty} \left\|T_{\mathcal{A}_{y}+\mathcal{C}_{y}}(t)\left(B_{X}\left(0,1\right) \right)\right\|_{\rm ess}\leq-\min(\gamma,\alpha,\chi).\] Thanks to the negative essential growth rate and since the positive orbits are bounded, we deduce that the positive orbits are relatively compact (i.e., their closure is compact), and we obtain the following theorem. **Proposition 4.6**.: _Assume that \(\mathbf{C_{y}}\) satisfies Assumption 3.3. The omega-limit set of each trajectory is defined by_ \[\omega\left(\begin{array}{c}u_{0}\\ v_{0}\\ w_{0}\end{array}\right):=\bigcap_{t\geq 0}\overline{\bigcup_{s\geq t}\left\{T_{ \mathcal{A}_{y}+\mathcal{C}_{y}}(s)\left(\begin{array}{c}u_{0}\\ v_{0}\\ w_{0}\end{array}\right)\right\}}\] _in a non-empty compact subset of \(X\) and is contained in_ \[\left\{(u_{0},v_{0},w_{0})\in X_{+}:u+\int_{\mathbb{R}^{2}}v(x)\mathrm{d}x+ \int_{\mathbb{R}^{2}}w(x)\mathrm{d}x=n(y)\right\}. 
\tag{4.12}\] ### Equilibria An equilibrium solution of the model (2.1) will satisfy \[\left\{\begin{aligned} & 0=\chi\int_{\mathbb{R}^{2}}\overline{w}(x) \mathrm{d}x-\gamma\overline{u}(y),\\ & 0=\varepsilon^{2}\Delta_{x}\overline{v}(x)-\nabla_{x}\cdot( \overline{v}(x)\,\mathbf{C_{y}}(x))-\alpha\overline{v}(x)+\gamma g(x-y) \overline{u}(y),\\ & 0=\alpha\overline{v}-\chi\overline{w},\end{aligned}\right. \tag{4.13}\] From the first and the equation of (4.13), we deduce that \[\overline{w}_{y}(x)=\frac{\alpha}{\chi}\overline{v}_{y}(x),\text{ and } \overline{u}(y)=\frac{\chi}{\gamma}\int_{\mathbb{R}^{2}}\overline{w}_{y}(x) \mathrm{d}x. \tag{4.14}\] By using the conservation of the total number of individuals in each house, we have \[\overline{u}(y)+\int_{\mathbb{R}^{2}}\overline{v}_{y}(y)\mathrm{d}x+\int_{ \mathbb{R}^{2}}\overline{w}_{y}(x)\mathrm{d}x=n(y),\] and by using (4.14), we deduce that \[\overline{u}(y)=\frac{\tau}{\gamma}n(y)=\frac{1}{\left(1+\frac{\gamma}{\alpha }+\frac{\gamma}{\chi}\right)}n(y), \tag{4.15}\] where \[\tau=\frac{1}{\left(\frac{1}{\gamma}+\frac{1}{\alpha}+\frac{1}{\chi}\right)}.\] By plugging (4.15) into the \(v\)-equation of (4.13), we deduce that \[0=\varepsilon^{2}\Delta_{x}\overline{v}_{y}-\nabla_{x}\cdot\left(\overline{v} _{y}\operatorname{\mathbf{C}}_{y}\right)-\alpha\overline{v}_{y}+\tau g(x-y)n( y),\] which is equivalent \[\alpha\overline{v}_{y}-\varepsilon^{2}\Delta_{x}\overline{v}+\nabla_{x} \cdot\left(\overline{v}_{y}\operatorname{\mathbf{C}}_{y}\right)=\tau g(x-y)n( y).\] Therefore \[\overline{v}_{y}(x)=\left(\alpha I-A-B_{y}\right)^{-1}\biggl{(}\tau g(\cdot-y) n(y)\biggr{)},\] or equivalently \[\overline{v}_{y}(x)=\tau n(y)\int_{0}^{+\infty}e^{-\alpha t}T_{A+B_{y}}(t) \left(g(.-y)\right)(x)\operatorname{d}\!t. \tag{4.16}\] ### Asymptotic behavior By integrating in \(x\)\(v(t,x)\) and \(w(t,x)\) and by using \[\left\{\begin{array}{l}u^{\prime}(t)=\chi\int_{\mathbb{R}^{2}}w(t,x) \mathrm{d}x-\gamma u(t),\\ v^{\prime}(t)=\left(A_{y}+B_{y}\right)v(t)-\alpha v(t)+\gamma g(.-y)u(t),\\ w^{\prime}(t)=\alpha v(t)-\chi w(t),\end{array}\right. \tag{4.17}\] with initial distribution \[u(0)=U_{0}\in\mathbb{R}_{+},v(0)=v_{0}\in L^{1}_{+}(\mathbb{R}^{2}),\text{ and }w(0)=w_{0}\in L^{1}_{+}(\mathbb{R}^{2}). \tag{4.18}\] By using Perron-Frobenius theorem applied to the irreducible system (4.9) we obtain the following theorem. We refer to Ducrot, Griette, Liu, and Magal [11, Theorem 4.53] for more result on this subject. **Lemma 4.7**.: _Assume that \(\alpha>0\)\(\gamma>0\) and \(\chi>0\). Then the solution of system (4.9) satisfies_ \[\lim_{t\to\infty}u(t)=\overline{u},\lim_{t\to\infty}V(t)=\overline{V},\text{ and }\lim_{t\to\infty}W(t)=\overline{W}, \tag{4.19}\] _where_ \[\overline{u}\left(1+\frac{\gamma}{\alpha}+\frac{\gamma}{\chi}\right)=n(y),\] \[\overline{V}\left(\frac{\alpha}{\gamma}+1+\frac{\alpha}{\chi}\right)=n(y),\] _and_ \[\overline{W}\left(\frac{\chi}{\gamma}+\frac{\chi}{\alpha}+1\right)=n(y).\] _Moreover, the convergence in (4.19) is exponential. That is, there exists a constant \(M>0\) and \(\delta>0\) such that for each \(t\geq 0\),_ \[|u(t)-\overline{u}|\leq Me^{-\delta t},|V(t)-\overline{V}|\leq Me^{-\delta t}, \text{ and }|W(t)-\overline{W}|\leq Me^{-\delta t}. \tag{4.20}\] Proof.: The matrix of system (4.9) is \[L=\begin{pmatrix}-\gamma&0&\chi\\ \gamma&-\alpha&0\\ 0&\alpha&-\chi\end{pmatrix}.\] Therefore the system (4.9) is strongly connected (i.e., \(L+\delta I\) is irreducible for all \(\delta>0\) large enough). 
The vector \(\mathbb{1}^{\,T}=(1,1,1)^{T}\) is a strictly positive left-eigenvector associated with the eigenvalue \(0\). The Perron-Frobenius theorem shows that \(0\) is the dominant eigenvalue of \(L\) (i.e., an eigenvalue with the largest real part). The equilibrium of equation (4.9) corresponds to the right eigenvector. That is \[\chi\overline{W}=\gamma\overline{u},\alpha\overline{V}=\gamma\overline{u}, \alpha\overline{V}=\chi\overline{W}, \tag{4.21}\] and since we must impose that \[\overline{u}+\overline{V}+\overline{W}=n(y),\] the proof is completed. **Theorem 4.8**.: _Assume that \(\mathbf{C_{y}}\) satisfies Assumption 3.3. Assume that \(\alpha>0\)\(\gamma>0\) and \(\chi>0\). For each \(y\in\mathbb{R}^{2}\), the solution of system (2.1) satisfies_ \[\lim_{t\to+\infty}u_{y}(t)=\overline{u}(y),\text{ in }\mathbb{R}, \tag{4.22}\] \[\lim_{t\to+\infty}v_{y}(t,x)=\overline{v}_{y}(x),\text{ in }\mathrm{L}^{1}( \mathbb{R}^{2}), \tag{4.23}\] _and_ \[\lim_{t\to+\infty}w_{y}(t,x)=\overline{w}_{y}(x),\text{ in }\mathrm{L}^{1}( \mathbb{R}^{2}), \tag{4.24}\] _and the convergence is exponential for each limit._ Proof.: By Lemma 4.7, we already know the exponential convergence in (4.22). Let us consider the exponential convergence in (4.23). We have \[v(t,x)=T_{A_{y}+B_{y}-\alpha I}(t)v_{0}+\int_{0}^{t}T_{A_{y}+B_{y}-\alpha I}(t- \sigma)\gamma g(.-y)u(\sigma)\,\mathrm{d}\sigma,\] and \[\overline{v}(x)=T_{A_{y}+B_{y}-\alpha I}(t)\overline{v}+\int_{0}^{t}T_{A_{y}+B _{y}-\alpha I}(t-\sigma)\gamma g(.-y)\overline{u}(\sigma)\,\mathrm{d}\sigma.\] Therefore, we deduce that \[v(t,x)-\overline{v}(x) =T_{A_{y}+B_{y}-\alpha I}(t)\left(v_{0}-\overline{v}\right)\] \[+\int_{0}^{t}T_{A_{y}+B_{y}-\alpha I}(t-\sigma)\gamma g(.-y)\left(u (\sigma)-\overline{u}\right)\,\mathrm{d}\sigma,\] and we obtain \[\|v(t)-\overline{v}\|_{\mathrm{L}^{1}(\mathbb{R}^{2})} \leq\,e^{-\alpha t}\|v_{0}-\overline{v}\|_{\mathrm{L}^{1}(\mathbb{ R}^{2})}\] \[\qquad+\int_{0}^{t}e^{-\alpha(t-\sigma)}|u(\sigma)-\overline{u}| \,\mathrm{d}\sigma.\] Now, by Lemma 4.7, we have \(|u(t)-\overline{u}|\leq Me^{-\delta t},\forall t\geq 0\), we obtain \[\|v(t)-\overline{v}\|_{\mathrm{L}^{1}(\mathbb{R}^{2})} \leq\,e^{-\eta t}(1+t)\left(\|v_{0}-\overline{v}\|_{\mathrm{L}^{1}( \mathbb{R}^{2})}+M\right)\] where \(\eta=\min\left(\alpha,\delta\right)>0\). The exponential convergence in (4.24) follows by using the exponential convergence in (4.23) and using similar arguments to those above. **Remark 4.9**.: _The above result is relate the irreducibility of the semigroup \(\left\{T_{A_{y}+C_{y}}(t)\right\}_{t\geq 0}\). The difficulty would be to prove the additional result for each \(\phi\in\mathrm{L}^{\infty}_{+}\left(\mathbb{R}^{2}\right)\), with \(\phi\neq 0\), we have_ \[\int_{\mathbb{R}^{2}}\phi(x)u(t,x)dx>0,\forall t>0,\] _where_ \[u(t,x)=T_{A+B_{y}}(t)(g(.-y))(x),\forall t\geq 0. \tag{4.25}\] _The reader can find more result on this topic in the paper by Webb [45, Remark 2.2] (see also [2, 3, 4, 18, 20] for more on this subject) to prove infinite dimensional Perron-Frobenius like theorem. Here, we propose a more direct approach to study the asymptotic behavior of the system._ ## 5 Hybrid formulation of a return to home model A major difficulty in applying such a model in concrete situations is the computation time. Indeed the time of computation grows exponentially with the discretization step. In the previous section, we introduced reduction technique, that could be used to run the simulations of the return home model. 
Unfortunately such an idea does not apply to the case of epidemic model. To circumvent this difficulty, we now introduce discrete homes locations. **Assumption 5.1**.: _Assume that we can find a sequence of point \(y_{i}=\left(y_{1}^{i},y_{2}^{i}\right)\in\mathbb{R}^{2}\), and the index \(i\) belongs to a countable set \(I\)._ **Remark 5.2**.: _In the numerical simulations section, it will be convenient to use a finite number of homes_ \[I=\left\{1,\ldots,n\right\}.\] _But, we could also consider a one dimensional lattice with \(I=\mathbb{Z}\) or a two dimensional lattice with \(I=\mathbb{Z}\times\mathbb{Z}\)._ The model we consider now is the previous model in which we assume that \[n(y)=u(t,y)+\int_{\mathbb{R}}\left(v+w\right)(t,x,y)dy=\sum_{i\in I}n_{i}\delta_{ y_{i}}(y)\] where \(y\to\delta_{y_{i}}(y)\) is the Dirac mass at \(y_{i}\). Instead of considering \[u(t,y)=\sum_{i\in I}u_{i}(t)\delta_{y_{i}}(y),\] it is sufficient to consider \((u_{1}(t),\ldots,u_{n}(t))\in\mathbb{R}^{n}\) the numbers of individual staying at home with their home located at \(y_{1},\ldots,y_{n}\). We define \(v_{i}(t,x)\) (respectively \(w_{i}(t,x)\)) the density of travelers (respectively workers) with their home located at \(y_{i}\), and \(x\mapsto\mathbf{C}_{i}(x)\) the traveling speed of individual coming from the home located at \(y_{i}\). The return home model consists of a decoupled system of \(n\) sub-system of the following form \[\left\{\begin{aligned} &\partial_{t}u_{i}(t)=\chi\int_{\mathbb{R}^{2}}w_{i }(t,x)\mathrm{d}x-\gamma u_{i}(t),\\ &\partial_{t}v_{i}(t,x)=\varepsilon^{2}\Delta_{x}v_{i}-\nabla_{ x}\cdot(v_{i}\,\mathbf{C}_{i}(x))-\alpha v_{i}+\gamma g(x-y_{i})u_{i}(t),\\ &\partial_{t}w_{i}(t,x)=\alpha v_{i}(t,x)-\chi w_{i}(t,x),\end{aligned}\right. \tag{5.1}\] with \(i=1,\ldots,n\), \(t\geq 0\), \(x\in\mathbb{R}^{2}\) is the spatial location individual, and \(y_{i}\in\mathbb{R}^{2}\), is their home's location, and the initial distribution at \(t=0\), and for \(i=1,\ldots,n\), \[\left\{\begin{aligned} & u_{i}(0)=u_{i0}\in[0,+\infty),\\ & v_{i}(0,x)=v_{i0}(x)\in\mathrm{L}_{+}^{1}\left(\mathbb{R}^{2} \right),\\ &\text{and}\\ & w_{i}(0,x)=w_{i0}(x)\in\mathrm{L}_{+}^{1}\left(\mathbb{R}^{2} \right).\end{aligned}\right. \tag{5.2}\] **Conservation of individuals:** Total number of individual in each house \(i\in I\) is preserved \[n_{i}=\underbrace{u_{i}(t)}_{\begin{subarray}{c}\text{Number of}\\ \text{individuals}\\ \text{at home}\end{subarray}}+\underbrace{\int_{\mathbb{R}^{2}}v_{i}(t,x) \mathrm{d}x}_{\begin{subarray}{c}\text{Number of}\\ \text{travelers}\end{subarray}}+\underbrace{\int_{\mathbb{R}^{2}}w_{i}(t,x) \mathrm{d}x}_{\begin{subarray}{c}\text{Number of}\\ \text{workers}\end{subarray}},\] is the number of individuals in the home \(i\) at time \(t\). **Equilibria:** For each \(i\in I\), we have a unique equilibrium \[\overline{u}_{i}=\frac{1}{\left(1+\frac{\gamma}{\alpha}+\frac{\gamma}{\chi} \right)}n_{i},\] \[\overline{v}_{i}(x)=\bigg{(}\alpha I-A-B_{y_{i}}\bigg{)}^{-1}\bigg{(}\tau g(\cdot-y _{i})n_{i}\bigg{)},\] and \[\overline{w}_{i}(x)=\frac{\alpha}{\chi}\overline{v}_{i}(x).\] As a consequence of Theorem 4.8. **Corollary 5.3**.: _Assume that \(\mathbf{C}_{i}\) satisfies Assumption 3.3. Assume that \(\alpha>0\)\(\gamma>0\) and \(\chi>0\). 
For each \(i\in I\), the solution of system (5.1) satisfies_

\[\lim_{t\to+\infty}u_{i}(t)=\overline{u}_{i},\text{ in }\mathbb{R},\]

\[\lim_{t\to+\infty}v_{i}(t,x)=\overline{v}_{i}(x),\text{ in }\mathrm{L}^{1}(\mathbb{R}^{2}),\]

_and_

\[\lim_{t\to+\infty}w_{i}(t,x)=\overline{w}_{i}(x),\text{ in }\mathrm{L}^{1}(\mathbb{R}^{2}).\]

## 6 Numerical simulations of the hybrid model

In this section, we run simulations of the hybrid model (5.1) on the bounded domain \[\Omega=[0,1]\times[0,1].\] The model on a bounded domain is presented in Appendix A. Here, we use the following initial distribution \[u_{i}(0)=n_{i},v_{i}(0,x)=w_{i}(0,x)=0,\forall i\in I.\] We assume that the convection is null. That is, \[\mathbf{C}_{i}(x)=0,\forall x\in\Omega,\forall i\in I.\] It is essential to mention that the numerical results are obtained by using an Euler integration method \(\Delta x_{1}\Delta x_{2}\sum_{j}\sum_{k}w_{i}(t,x_{1}^{j},x_{2}^{k})\) for \(\int_{\Omega}w_{i}(t,x)dx\) in the \(u\)-equation of system (5.1). This method does not give a very good approximation of the integral, but this approximation is preserved through the numerical scheme used for diffusion. For example, the Simpson method does not work to compute the solution: the errors accumulate and produce a blowup of the solutions. In the numerical simulations, we use a semi-implicit numerical method to compute the diffusive part of the system (see Appendix A); a minimal sketch of this kind of scheme is given at the end of Appendix B.

In Table 1, we list the parameters used in the simulations. In Figure 4, we plot the number of people per home with the location of each home. In Figure 5, we plot \[x\mapsto\sum_{i\in I}n_{i}g(x-y_{i})\] which is the density of individuals leaving their homes at time \(t=0\). This figure gives another representation of the density of individuals at home.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline **Symbol** & **Interpretation** & **Value** & **Unit** \\ \hline \(\varepsilon\) & Diffusion coefficient & 1 & none \\ \hline \(1/\gamma\) & Average time spent at home & 12/24 & day \\ \hline \(1/\alpha\) & Average time spent traveling & 2/24 & day \\ \hline \(1/\chi\) & Average time spent at work & 10/24 & day \\ \hline \(\sigma\) & Standard deviation for the function \(g(x_{1},x_{2})\) & 0.05 & none \\ \hline \end{tabular} \end{table} Table 1: List of parameters used in the simulations.

Figure 4: In this figure, we plot bars with height \(n_{i}\) located at \(y_{i}\), the number of individuals with their home located at \(y_{i}\in\Omega\). The number of individuals per home varies randomly between \(50\) and \(200\) per home.

In Figure 6, we observe that the numerical method preserves the number of individuals.

**Fig. 6**: _We plot the total number of individuals at home \(t\to U(t)=\sum_{i\in I}u_{i}(t)\) (blue), the total number of travelers \(t\to V(t)=\sum_{i\in I}\int_{\Omega}v_{i}(t,x)dx\) (orange), the total number of workers \(t\to W(t)=\sum_{i\in I}\int_{\Omega}w_{i}(t,x)dx\) (yellow), and the total number of individuals (purple). After one day, we observe that the number of individuals in each compartment remains constant._

In Figure 7, we plot the number of people at home, travelers, and workers in each home at time \(t=2\). That is \[u_{i}(2),\ \int_{\Omega}v_{i}(2,x)dx,\ \text{and}\ \int_{\Omega}w_{i}(2,x)dx,\] and we draw a bar at their home location \(y_{i}\). We observe that each distribution (a), (b) or (c) is a multiple of the density of individuals at home \((y_{1},y_{2})\mapsto n\left(y_{1},y_{2}\right)\), and the individuals are simply subdivided between the three compartments. 
The maximal value is \(180\) in (a), \(15\) in (b), and \(70\) in (c). **Fig. 7**_In this figure, we plot the number of individuals on day \(2\): 1) at home \(y_{i}\mapsto u_{i}\left(2\right)\) (on the top left) for each home \(i\); 2) traveling \(y_{i}\mapsto\int_{\Omega}v_{i}(2,x)dx\) (on the top right) for each home \(i\); 3) at work \(y_{i}\mapsto\int_{\Omega}w_{i}(2,x)dx\) (on the bottom) for each home \(i\). The three figures look the same, but their amplitude is very different. The maximal value is \(150\) on the top left, \(15\) on the top right, and \(80\) on the bottom._ In Figure 8, we plot \[\sum_{i\in I}v_{i}(2,x_{1},x_{2}),\text{ and }\sum_{i\in I}w_{i}(2,x_{1},x_{2}).\] We observe numerically the equilibrium formula (4.14). That is, \[\sum_{i\in I}\overline{w}_{i}(2,x_{1},x_{2})=\frac{\alpha}{\chi}\sum_{i\in I} \overline{v}_{i}(2,x_{1},x_{2}),\forall(x_{1},x_{2})\in\Omega.\] ## 7 Conclusion This article presents a new model, including a compartment for people at home, traveling, and people at work. We study the model's well-posedness and obtain a convergence result to a stationary distribution. The numerical simulations have illustrated such convergence results, and we observed that only one day is necessary for the solutions of the model to converge to the equilibrium distributions. Such a model is essential because significant social differences exist between individuals depending on their home location. Intuitively, the people living in the city's center would travel a short distance to work, while those living in the suburbs would travel a long distance to their working places. The model could be complexified in many ways. We could introduce multiple groups to describe the different types of behavior for people at work. For example, some people, like taxi drivers, never stop to travel while they are working. Conversely, teleworking people stay at home to work but leave their homes to shop. Figure 8: _In the figure, we plot the distribution of all the travelers \((x_{1},x_{2})\mapsto\sum_{i\in I}v_{i}(2,x_{1},x_{2})\) (on the top), and we plot the distribution of all the workers \((x_{1},x_{2})\mapsto\sum_{i\in I}w_{i}(2,x_{1},x_{2})\) (on the bottom). Both figures, left and right, look the same, and only the amplitude changes from the left to the right._ We could also consider multiple transport speeds \(\mathbf{C}_{\mathbf{y}}^{\mathbf{k}}(x)\) for people leaving their homes at \(y\in\mathbb{R}^{2}\). Different speeds can match different means of transportation, car, bus, subway, etc. Assuming, for example, that \(m\)-types of transport speed are involve, then for each group \(k=1,\ldots,m\), we would have the following model to describe the travelers \[\left\{\begin{aligned} \partial_{t}v_{k}(t,x)&=\varepsilon^{2} \bigtriangleup_{x}v_{k}(t,x)-\nabla_{x}\cdot\left(v_{k}(t,x)\,\mathbf{C}_{ \mathbf{y}}^{\mathbf{k}}(x)\right),\\ v_{k}(0,x)&=v_{0}^{k}(x)\in\mathrm{L}^{1}\left( \mathbb{R}^{2}\right).\end{aligned}\right. \tag{7.1}\] Suppose we consider now an epidemic spreading in a city. In that case, the most critical compartments are those staying at home and work, where most pathogens' transmissions occur. The return-to-home model could compute the distribution of people at work and home depending on their home locations in a given city. Return-to-home models could be used to study various phenomena in the cities. 
We can extend this model to study air pollution, the spread of epidemics, and other important problems for understanding the population dynamics at the level of a single city. The model we use here to describe travelers' movement is relatively simplistic. For example, people travel on roads, not through buildings. Another question would be how to include the streets or a map in such a model. To conclude the paper, we should mention that animals also have a home. An important example is the bee, and we refer to [29, 30] for more results on this topic. Many species of animals live around their home, so modeling return-to-home is probably essential to understand the dynamics of many living populations. This article considers the case where the model's parameters are constant in time. But people mostly leave home in the morning, and the parameter \(\gamma\) must be larger in the morning than during the rest of the day. Similarly, since people return home late in the afternoon, the parameter \(\chi\) must be larger during that period than during the rest of the day. For each \(y\in\mathbb{R}^{2}\), therefore, the return-to-home model with circadian rhythm (one-day periodic parameters) reads as follows \[\left\{\begin{aligned} &\partial_{t}u_{y}(t)=\chi(t)\int_{\mathbb{R}^{2}}w_{y}(t,x)\mathrm{d}x-\gamma(t)u_{y}(t),\\ &\partial_{t}v_{y}(t,x)=\varepsilon^{2}\Delta_{x}v_{y}-\nabla_{x}\cdot(v_{y}\,\mathbf{C}_{\mathbf{y}})-\alpha v_{y}+\gamma(t)g(x-y)u_{y}(t),\\ &\partial_{t}w_{y}(t,x)=\alpha v_{y}(t,x)-\chi(t)w_{y}(t,x),\end{aligned}\right. \tag{7.2}\] where the functions \(t\to\gamma(t)\) and \(t\to\chi(t)\) are one-day periodic. To conclude, we should insist on the fact that in the model, the individuals return home instantaneously. So here, we use diffusion and convection processes to derive the distribution of individuals at work from the distribution of individuals at home. In most practical problems, such as epidemic outbreaks and others, the two distributions will be sufficient to understand the major interactions between individuals. **Appendix** ## Appendix A The return-to-home model on a bounded domain We consider the rectangular domain of \(\mathbb{R}^{2}\) \[\Omega=(a_{1},b_{1})\times(a_{2},b_{2})=\{(x_{1},x_{2})\in\mathbb{R}^{2}:a_{1}<x_{1}<b_{1},\text{ and }a_{2}<x_{2}<b_{2}\}.\] The return-to-home model with no flux at the boundary (i.e. with Neumann boundary conditions) is the following \[\left\{\begin{aligned} &\partial_{t}u(t,y)=\chi\int_{\Omega}w(t,x,y)\mathrm{d}x-\gamma u(t,y),\\ &\partial_{t}v(t,x,y)=\varepsilon^{2}\Delta_{x}v(t,x,y)-\alpha v(t,x,y)+\gamma\rho(x,y)u(t,y),\\ &\partial_{t}w(t,x,y)=\alpha v(t,x,y)-\chi w(t,x,y),\end{aligned}\right.\] (A.1) with \(t\geq 0,x\in\Omega,y\in\Omega\), and in order to preserve the \(\mathrm{L}^{1}\) norm in space, we impose Neumann boundary conditions. 
As \(\Omega\) is assumed to be a rectangle, these boundary conditions read \[\left\{\begin{aligned} &\partial_{x_{1}}v(t,x,y)=0,t\geq 0,x_{1}=a_{1}\text{ or }x_{1}=b_{1},\\ &\partial_{x_{2}}v(t,x,y)=0,t\geq 0,x_{2}=a_{2}\text{ or }x_{2}=b_{2},\end{aligned}\right.\] (A.2) and the initial distribution at \(t=0\) is \[\left\{\begin{aligned} & u(0,y)=u_{0}(y)\in L^{1}_{+}(\Omega,\mathbb{R}),\\ & v(0,x,y)=v_{0}(x,y)\in L^{1}_{+}(\Omega\times\Omega,\mathbb{R}),\\ &\text{ and }\\ & w(0,x,y)=w_{0}(x,y)\in L^{1}_{+}(\Omega\times\Omega,\mathbb{R}).\end{aligned}\right.\] (A.3) In order to preserve the total number of individuals, we define, for \(x=(x_{1},x_{2})\in\Omega\) and \(y=(y_{1},y_{2})\in\Omega\), \[\rho(x,y)=\frac{g(x-y)}{G(y)},\] where \(G(y)\) is a normalization constant, which is defined by \[G(y)=\int_{\Omega}g(x-y)\mathrm{d}x,\forall y\in\Omega.\] **Remark A.1**.: _In the formula for \(\rho(x,y)\) we divide \(g(x-y)\) by \(G(y)\), in order to obtain_ \[\int_{\Omega}\rho(x,y)dx=1,\forall y\in\Omega.\] In Figure 9 we plot the function \((y_{1},y_{2})\to G(y_{1},y_{2})\), and we use the two-dimensional Simpson method to compute the integrals. ## Appendix B Matrix form of the numerical scheme From Appendix A, we know that the unknowns and equations are stored "naturally" as components of a vector for the one-dimensional case. However, for the two-dimensional case, we need to deal directly with the components of a matrix. Rearranging the values as a column vector raises the delicate issue of grid point renumbering. For each \(i=1,\cdots,n_{1}\), \(j=1,\cdots,n_{2}\), \(k=1,\cdots,n_{1}\), and \(l=1,\cdots,n_{2}\), we set \[m_{1}=(j-1)n_{1}+i\in[1,n_{1}n_{2}]\Leftrightarrow i=\text{mod}(m_{1},n_{1}),\text{ and }j=\frac{m_{1}-i}{n_{1}}+1,\] and \[m_{2}=(l-1)n_{1}+k\in[1,n_{1}n_{2}]\Leftrightarrow k=\text{mod}(m_{2},n_{1}),\text{ and }l=\frac{m_{2}-k}{n_{1}}+1,\] and \[m=(m_{1}-1)(n_{1}n_{2})+m_{2}\in\left[1,(n_{1}n_{2})^{2}\right]\Leftrightarrow m_{2}=\text{mod}(m,n_{1}n_{2}),\text{ and }m_{1}=\frac{m-m_{2}}{n_{1}n_{2}}+1.\] We agree to number the grid points from "the left to the right" and from "the bottom to the top", i.e., according to the increasing order of the \(i\), \(j\) and \(k\), \(l\) indices, respectively. Hence, \(m_{1}\) and \(m_{2}\) are the numbers corresponding to the points \((x_{1i},x_{2j})\) and \((y_{1k},y_{2l})\), respectively. Figure 9: _In this figure we plot \((y_{1},y_{2})\to G(y_{1},y_{2})\). 
Here we use \(\Omega=[0,1]\times[0,1]\) and the Gaussian function \(g(x_{1},x_{2})\) with \(\sigma=0.05\)._ The vector \(v\) is then defined by its components \[v(m_{1})^{n}=v(t^{n},x_{1i},x_{2j}),\quad\forall i=1,\cdots,n_{1},\forall j=1,\cdots,n_{2}.\] It follows from Appendix A that the discrete problem can be written in the vector form as follows: \[\Delta_{x}v(t^{n},x_{1i},x_{2j})=Av(m_{1})^{n},\] where \(A\in M_{n_{1}\times n_{2}}\left(\mathbb{R}\right)\) is the block tridiagonal matrix defined as \[A=\begin{pmatrix}\dfrac{B}{\Delta x_{1}^{2}}-\dfrac{I}{\Delta x_{2}^{2}}&\dfrac{I}{\Delta x_{2}^{2}}&0&\cdots&0\\ \dfrac{I}{\Delta x_{2}^{2}}&\dfrac{B}{\Delta x_{1}^{2}}-\dfrac{2I}{\Delta x_{2}^{2}}&\dfrac{I}{\Delta x_{2}^{2}}&\ddots&\vdots\\ 0&\ddots&\ddots&\ddots&0\\ \vdots&\ddots&\dfrac{I}{\Delta x_{2}^{2}}&\dfrac{B}{\Delta x_{1}^{2}}-\dfrac{2I}{\Delta x_{2}^{2}}&\dfrac{I}{\Delta x_{2}^{2}}\\ 0&\cdots&0&\dfrac{I}{\Delta x_{2}^{2}}&\dfrac{B}{\Delta x_{1}^{2}}-\dfrac{I}{\Delta x_{2}^{2}}\end{pmatrix}.\] For each \(m_{2}\), we set \[R_{m_{2}}=\left[\rho\left(x_{m_{1}},y_{m_{2}}\right)\right]_{m_{1}=1}^{m_{1}=n_{1}\times n_{2}}.\] Then system (B) can be written as a semi-implicit numerical scheme \[\left\{\begin{array}{rl}u_{m_{2}}^{n+1}=&u_{m_{2}}^{n}+\Delta t\Delta x_{1}\Delta x_{2}\chi\,\sum W_{m_{2}}^{n}-\Delta t\,\gamma\,u_{m_{2}}^{n},\\ V_{m_{2}}^{n+1}=&V_{m_{2}}^{n}+\Delta t\,\varepsilon AV_{m_{2}}^{n+1}-\Delta t\,\alpha\,V_{m_{2}}^{n}+\Delta t\,\gamma\,\mathrm{diag}\left(R_{m_{2}}\right)u_{m_{2}}^{n},\\ W_{m_{2}}^{n+1}=&W_{m_{2}}^{n}+\Delta t\,\alpha\,V_{m_{2}}^{n}-\Delta t\,\chi\,W_{m_{2}}^{n}.\end{array}\right.\] (B.1) The complete problem with convection is more challenging to simulate. Nevertheless, it is possible to use splitting methods in that case. We refer to Speth, Green, MacNamara, and Strang [37] for more on this topic.
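To make the scheme (B.1) concrete, the following is a minimal NumPy/SciPy sketch of one possible implementation for a single home location \(y\), with no convection and Neumann boundary conditions on \(\Omega=[0,1]\times[0,1]\). The grid size, the time step, the variable names, and the restriction to a single home are illustrative assumptions and do not reproduce the authors' code; the parameter values follow Table 1.

```python
# A minimal NumPy/SciPy sketch of the semi-implicit scheme (B.1) for a single home
# located at y, on Omega = [0,1] x [0,1], with Neumann boundary conditions and no
# convection.  Grid size, time step and variable names are illustrative choices.
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import factorized

n1 = n2 = 50
dx1, dx2 = 1.0 / n1, 1.0 / n2
dt, eps = 1e-3, 1.0                           # time step (days), diffusion coefficient
gamma, alpha, chi = 24 / 12, 24 / 2, 24 / 10  # 1/gamma = 12/24, 1/alpha = 2/24, 1/chi = 10/24
sigma, y = 0.05, np.array([0.3, 0.7])         # Gaussian width and home location

def laplacian_1d(n, dx):
    # Second-difference matrix with no-flux (Neumann) rows at both ends
    main = -2.0 * np.ones(n)
    main[0] = main[-1] = -1.0
    return diags([np.ones(n - 1), main, np.ones(n - 1)], [-1, 0, 1]) / dx**2

# 2D Laplacian as a Kronecker sum (x1 acts on the slow index, x2 on the fast index)
A = kron(laplacian_1d(n1, dx1), identity(n2)) + kron(identity(n1), laplacian_1d(n2, dx2))

x1, x2 = np.meshgrid((np.arange(n1) + 0.5) * dx1, (np.arange(n2) + 0.5) * dx2, indexing="ij")
g = np.exp(-((x1 - y[0]) ** 2 + (x2 - y[1]) ** 2) / (2 * sigma**2))
rho = (g / (g.sum() * dx1 * dx2)).ravel()     # normalized so that its discrete integral is 1

solve = factorized((identity(n1 * n2) - dt * eps * A).tocsc())  # LU factorization, reused
u, V, W = 100.0, np.zeros(n1 * n2), np.zeros(n1 * n2)           # n_i = 100 people at home
for _ in range(int(2.0 / dt)):                # simulate two days
    u_new = u + dt * chi * dx1 * dx2 * W.sum() - dt * gamma * u
    V_new = solve(V - dt * alpha * V + dt * gamma * rho * u)    # implicit diffusion step
    W_new = W + dt * alpha * V - dt * chi * W
    u, V, W = u_new, V_new, W_new

total = u + dx1 * dx2 * (V.sum() + W.sum())   # total population
print(f"total after two days: {total:.4f}")   # stays close to 100
```

In this sketch the quantity \(u+\Delta x_{1}\Delta x_{2}(\sum V+\sum W)\) is conserved up to round-off, which mirrors the conservation of the number of individuals observed in Figure 6.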
2301.13474
Generalized Fruit Diophantine equation and Hyperelliptic curves
We show the insolvability of the Diophantine equation $ax^d-y^2-z^2+xyz-b=0$ in $\mathbb{Z}$ for fixed $a$ and $b$ such that $a\equiv 1 \pmod {12}$ and $b=2^da-3$, where $d$ is an odd integer and is a multiple of $3$. Further, we investigate the more general family with $b=2^da-3^r$, where $r$ is a positive odd integer. As a consequence, we found an infinite family of hyperelliptic curves with trivial torsion over $\mathbb{Q}$. We conclude by providing some numerical evidence corroborating the main results.
Om Prakash, Kalyan Chakraborty
2023-01-31T08:46:17Z
http://arxiv.org/abs/2301.13474v1
# Generalized Fruit Diophantine Equation and Hyperelliptic Curves ###### Abstract. We show the insolvability of the Diophantine equation \(ax^{d}-y^{2}-z^{2}+xyz-b=0\) in \(\mathbb{Z}\) for fixed \(a\) and \(b\) such that \(a\equiv 1\pmod{12}\) and \(b=2^{d}a-3\), where \(d\) is an odd integer and is a multiple of \(3\). Further, we investigate the more general family with \(b=2^{d}a-3^{r}\), where \(r\) is a positive odd integer. As a consequence, we found an infinite family of hyperelliptic curves with trivial torsion over \(\mathbb{Q}\). We conclude by providing some numerical evidence corroborating the main results. Key words and phrases: Diophantine equation, Quadratic residue, Elliptic curves, Hyperelliptic curves 2010 Mathematics Subject Classification: Primary: 11D41, 11D72. Secondary: 11G30 ## 1. Introduction One of the earliest topics in number theory is the study of Diophantine equations. In the third century, the Greek mathematician Diophantus of Alexandria began this study. A polynomial equation of the form \[P(x_{1},x_{2},\cdots,x_{n})=0\] is known as a Diophantine equation. Finding all of its integer solutions, that is, all of the \(n\)-tuples \((x_{1},x_{2},\cdots,x_{n})\in\mathbb{Z}^{n}\) that satisfy the above equation, is of prime interest. The main task is to investigate whether solutions exist for a given Diophantine equation. If they do, the aim is to determine how many there are and how to find them all. There are certain Diophantine equations which have no nonzero integer solutions, for example, Fermat's equation \(x^{n}+y^{n}=z^{n}\) for \(n\geq 3\). The tenth of Hilbert's 23 problems, which he presented in 1900, dealt with Diophantine equations. Hilbert asked whether there is an algorithm to determine if a given Diophantine equation has a solution or not, and Matiyasevich answered this question negatively in 1970. We investigate a class of Diophantine equations of the form \(ax^{d}-y^{2}-z^{2}+xyz-b=0\) for fixed \(a\) and \(b\). Due to its emergence when attempting to solve an equation involving fruits, this type of Diophantine equation was given the name "Fruit Diophantine equation" by B. Sury and D. Majumdar [5], who proved the following: **Theorem 1.1**.: _[_5_]_ _The equation_ \[y^{2}-xyz+z^{2}=x^{3}-5\] _has no integer solution in \(x\), \(y\) and \(z\)._ Equations of a similar type were previously studied by F. Luca and A. Togbe. In particular, Luca and Togbe [4] studied the solutions of the Diophantine equation \(x^{3}+by+1-xyz=0\), and later Togbe [7] independently studied the equation \(x^{3}+by+4-xyz=0\). As a consequence of Theorem 1.1, Majumdar and Sury proved the following: **Theorem 1.2**.: _[_5_]_ _For any integer \(m\), the elliptic curve_ \[E_{m}:y^{2}-mxy=x^{3}+m^{2}+5\] _has no integral point._ L. Vaishya and R. Sharma expanded on Majumdar and Sury's work in [8], where they found a class of fruit Diophantine equations without an integer solution. In particular, Vaishya and Sharma showed the following: **Theorem 1.3**.: _[_8_]_ _For fixed integers \(a\) and \(b\) with \(a\equiv 1\pmod{12}\) and \(b=8a-3\), the Diophantine equation_ \[ax^{3}-y^{2}-z^{2}+xyz-b=0\] _has no integer solution._ Using the Nagell-Lutz theorem [6] and Theorem 1.3, they obtained an infinite family of elliptic curves with torsion-free Mordell-Weil group over \(\mathbb{Q}\). 
**Theorem 1.4**.: _[_8_]_ _Let \(a\) and \(b\) be as in Theorem 1.3._ * _For any even integer_ \(m\) _the elliptic curve_ \[E^{e}_{m,a,b}:y^{2}=x^{3}+\frac{1}{4}m^{2}x^{2}-a^{2}\left(m^{2}+b\right)\] _has torsion-free Mordell-Weil group._ * _For any odd integer_ \(m\) _the elliptic curve_ \[E^{o}_{m,a,b}:y^{2}=x^{3}+m^{2}x^{2}-64a^{2}\left(m^{2}+b\right)\] _has torsion-free Mordell-Weil group._ We extend Vaishya and Sharma's results [8] for higher exponents. We obtain a family of hyperelliptic curves, by carrying out some appropriate transformations. In 2013, D. Grant gave an analogue of Nagell-Lutz theorem for hyperelliptic curves [3], using which we conclude that the Mordell-Weil group of each member of the corresponding family of hyperelliptic curves is torsion-free. ## 2. Insolvability Here we state and prove the main theorem and derive a couple of interesting corollaries. We end this section by looking into a couple of examples. **Theorem 2.1**.: _The equation_ \[ax^{d}-y^{2}-z^{2}+xyz-b=0\] _has no integer solutions for fixed \(a\) and \(b\) such that \(a\equiv 1\pmod{12}\) and \(b=2^{d}a-3\), where \(d\) is an odd integer and divisible by \(3\)._ Proof.: Consider \[ax^{d}-y^{2}-z^{2}+xyz-b=0. \tag{2.1}\] If possible, let \((x,y,z)\) be an integer solution of (2.1). Let us fix \(x=\alpha\). Then (2.1) can be re-written as, \[y^{2}+z^{2}+b=a\alpha^{d}+\alpha yz. \tag{2.2}\] We consider the cases of \(\alpha\) being even or odd separately. **Case 1**.: _If \(\alpha\) is even. Then, we write (2.2) as:_ \[\left(y-\frac{\alpha z}{2}\right)^{2}-\left(\frac{\alpha^{2}}{4}-1\right)z^{2 }=a\alpha^{d}-b \tag{2.3}\] _and set \(Y=y-\frac{\alpha z}{2}\), \(\beta=\frac{\alpha}{2}\) and \(z=Z\). Thus (2.3) becomes,_ \[Y^{2}-\left(\beta^{2}-1\right)Z^{2}=a\alpha^{d}-b=2^{d}\beta^{d}a-b. \tag{2.4}\] * _If_ \(\beta\) _is even, say_ \(\beta=2n\) _for some integer_ \(n\)_, then reducing (_2.4_) modulo_ \(4\) _gives,_ \[Y^{2}+Z^{2}\equiv 3\pmod{4},\] (2.5) _which is not possible in_ \(\mathbb{Z}/4\mathbb{Z}\)_._ * _If_ \(\beta\) _is odd, then_ \(\beta=2n+1\) _for some integer_ \(n\)_. Reduction of (_2.4_) modulo_ \(4\) _entails,_ \[Y^{2}\equiv 3\pmod{4}\] (2.6) _which is impossible._ **Case 2**.: _If_ \(\alpha\) _is odd, say,_ \(\alpha=2n+1\) _for some integer_ \(n\)_. Then,_ \[y^{2}+z^{2}+b = a\alpha^{d}+\alpha yz\] \[y^{2}+z^{2}+a2^{d}-3 = a\left(2n+1\right)^{d}+\alpha yz\] \[y^{2}+z^{2}-\left(2n+1\right)yz = a\left(2n+1\right)^{d}-a2^{d}+3.\] _Now_ \[y^{2}+z^{2}+yz \equiv a+3\pmod{2},\] \[\Rightarrow y^{2}+z^{2}+yz \equiv 0\pmod{2}.\] _Note that \(y^{2}+z^{2}+yz\equiv a+3\pmod{2}\) has only solution \(y\equiv 0\equiv z\) in \(\mathbb{Z}/2\mathbb{Z}\), that is, \(y\) and \(z\) are even. Thus (2.3) becomes_ \[a\alpha^{d}-b\equiv 0\pmod{4}.\] _If we write \(a=12l+1\) for some integer \(l\), then,_ \[\alpha^{d}-\left(a2^{d}-3\right) \equiv 0\pmod{4},\] \[\Rightarrow\alpha^{d}+3 \equiv 0\pmod{4},\] \[\Rightarrow\alpha^{d} \equiv 1\pmod{4},\] \[\Rightarrow\alpha \equiv 1\pmod{4}.\] _Let us consider_ \[\left(y-\frac{\alpha z}{2}\right)^{2}-\left(\frac{\alpha^{2}}{4}-1 \right)z^{2} = a\alpha^{d}-b,\] \[\text{i.e. }\left(y-\frac{\alpha z}{2}\right)^{2}-\left(\alpha^{2}-4 \right)\left(\frac{z}{2}\right)^{2} = a\alpha^{d}-b.\] _Further, we set \(Y=y-\frac{\alpha z}{2}\) and \(Z=\frac{z}{2}\). Then,_ \[Y^{2}-\left(\alpha^{2}-4\right)Z^{2}=a\alpha^{d}-b \tag{2.7}\] _where \(\alpha\equiv 1\pmod{4}\), \(a\equiv 1\pmod{12}\) and \(b=a2^{d}-3\). 
Three sub-cases need to be considered._ **Sub-case 1**.: _If \(\alpha\equiv 1\pmod{12}\), write \(\alpha=12l+1\) for some integer \(l\). Then,_ \[\alpha\equiv 1\pmod{3}\] \[\Rightarrow\alpha+2\equiv 0\pmod{3}.\] _Substituting \(\alpha=12l+1\) in (2.7), we get_ \[Y^{2}-\left(\left(12l+1\right)^{2}-4\right)Z^{2} = a\alpha^{d}-b,\] \[\Rightarrow Y^{2}\equiv a\alpha^{d}-b\pmod{3},\] \[\Rightarrow Y^{2}\equiv a\left(12l+1\right)^{d}-a2^{d}+3\pmod{3},\] \[\Rightarrow Y^{2}\equiv 1-2^{d}\pmod{3},\] \[\Rightarrow Y^{2}\equiv 2\pmod{3}.\] _This is a contradiction, as \(2\) is not a square modulo \(3\)._ **Sub-case 2**.: _If \(\alpha\equiv 9\pmod{12}\), then \(\alpha-2\equiv 7\pmod{12}\), so \((\alpha-2)\) has a prime factor \(p\equiv 5\) or \(7\pmod{12}\). Thus,_ \[Y^{2}\equiv a\alpha^{d}-b\pmod{p}.\] _Let \(\alpha=pl+2\) for some integer \(l\). Then,_ \[Y^{2} \equiv a\left(pl+2\right)^{d}-b\pmod{p},\] \[\Rightarrow Y^{2} \equiv 3\pmod{p}.\] _This leads to a contradiction, as \(3\) is not a quadratic residue modulo \(p\)._ **Sub-case 3**.: _When \(\alpha\equiv 5\pmod{12}\), we substitute \(\alpha=3k+2\) for some integer \(k\) and get,_ \[Y^{2}-\left(\left(3k+2\right)^{2}-4\right)Z^{2} = \left(12l+1\right)\left(3k+2\right)^{d}-2^{d}\left(12l+1\right)+3,\] \[\Rightarrow Y^{2} \equiv 2-2^{d}\equiv 0\pmod{3},\] \[\Rightarrow Y \equiv 0\pmod{3}.\] _Further, we substitute \(Y=3m\) and \(\alpha=12n+5\) for some integers \(n\) and \(m\) in (2.7) and arrive at,_ \[9m^{2}-\left(12n+3\right)\left(12n+7\right)Z^{2} = a\left(12n+5\right)^{d}-b = a\left(12n+3\right)\sum_{i=0}^{d-1}\left(12n+5\right)^{d-1-i}2^{i}+3,\] \[\Rightarrow-\left(n+1\right)Z^{2} \equiv \left(n+1\right)\sum_{i=0}^{d-1}\left(12n+5\right)^{d-1-i}2^{i}+1\pmod{3},\] \[\Rightarrow-\left(n+1\right)Z^{2} \equiv 1\pmod{3},\] \[\Rightarrow n \equiv 1\pmod{3},\] _where the second implication follows by dividing by \(3\) and reducing modulo \(3\), and the third uses \(\sum_{i=0}^{d-1}\left(12n+5\right)^{d-1-i}2^{i}\equiv d\,2^{d-1}\equiv 0\pmod{3}\), since \(3\) divides \(d\). Hence, \(\alpha\equiv 17\pmod{36}\)._ _Note that \(3\) divides \((\alpha-2)\). Thus there is a prime factor \(p\equiv 5\) or \(7\pmod{12}\) of \(\frac{(\alpha-2)}{3}\), otherwise it would mean that \(\frac{\alpha-2}{3}\) is congruent to \(\pm 1\) modulo \(12\), which is not the case. Therefore,_ \[\alpha-2\equiv 0\pmod{p}.\] _Thus,_ \[Y^{2}\equiv a\alpha^{d}-b\pmod{p}.\] _Substituting \(\alpha=pl+2\) for some integer \(l\), we have_ \[Y^{2}\equiv 3\pmod{p},\] _which is a contradiction, since \(3\) is a quadratic residue modulo \(p\) only if \(p\equiv\pm 1\pmod{12}\)._ _Remark 1_.: The result of Sury and Majumdar [5] follows by substituting \(a=1\) and \(d=3\) in Theorem 2.1. The particular case \(d=3\) of the same theorem recovers the results of Vaishya and Sharma [8]. By increasing the exponent of \(3\) in the expression for \(b\), we will now examine the Diophantine equation in a little more generality. The possibility of a solution in this scenario is described by the following two corollaries, along with a few examples. **Corollary 2.1**.: _The equation_ \[ax^{d}-y^{2}-z^{2}+xyz-b=0\] _has no integer solution \((x,y,z)\) with \(x\) even for fixed integers \(a\) and \(b\) such that \(a\equiv 1\pmod{12}\) and \(b=2^{d}a-3^{r}\), with \(r\) a positive odd integer and \(d\) as in Theorem 2.1._ Proof.: We follow exactly the same steps as in Case 1 of Theorem 2.1. Suppose there is a solution with \(x=\alpha\) even. Then we write (2.2) as: \[\left(y-\frac{\alpha z}{2}\right)^{2}-\left(\frac{\alpha^{2}}{4}-1\right)z^{2}=a\alpha^{d}-b. \tag{2.8}\] Let \(Y=y-\frac{\alpha z}{2},\beta=\frac{\alpha}{2}\) and \(z=Z\). Then (2.8) can be written as, \[Y^{2}-\left(\beta^{2}-1\right)Z^{2}=a\alpha^{d}-b=2^{d}\beta^{d}a-b. 
\tag{2.9}\] * If \(\beta\) is even, say \(\beta=2n\) for some integer \(n\), then the reduction modulo 4 of (2.9) gives, \[Y^{2}+Z^{2}\equiv 3^{r}\equiv 3\pmod{4},\] (2.10) which is not possible in \(\mathbb{Z}/4\mathbb{Z}\). * If \(\beta\) is odd, say \(\beta=2n+1\) for some integer \(n\), then the reduction modulo 4 of (2.9) gives, \[Y^{2}\equiv 3^{r}\equiv 3\pmod{4},\] (2.11) which again is not possible. The following corollary deals with solutions in which \(x\) is an odd integer: **Corollary 2.2**.: _The equation_ \[ax^{d}-y^{2}-z^{2}+xyz-b=0\] _has no integer solution in \(x\), \(y\) and \(z\) with \(x\equiv 1\) or \(9\pmod{12}\), for fixed integers \(a,b\) such that \(a\equiv 1\pmod{12}\) and \(b=2^{d}a-3^{r}\), for \(r\) and \(d\) as in Corollary 2.1._ Proof.: Steps analogous to Sub-cases 2 and 3 of Theorem 2.1 give the proof. _Remark 2_.: Corollary 2.2 says that, if there is a solution of \(ax^{d}-y^{2}-z^{2}+xyz-b=0\) with \(a\) and \(b\) as described in Corollary 2.2, then \(x\) must be congruent to \(5\) modulo \(12\). We will see some examples. **Example 1**.: _For \(a=25\), \(d=3\) and \(r=3\), the equation_ \[25x^{3}-y^{2}-z^{2}+xyz-173=0 \tag{2.12}\] _has no integer solution._ Example 1 shows that the equation may have no solution even when \(x\equiv 5\pmod{12}\) is allowed. However, the next examples tell us that the other possibility occurs as well. **Example 2**.: _If \(a=13\), \(d=3\) and \(r=3\), then_ \[13x^{3}-y^{2}-z^{2}+xyz-77=0 \tag{2.13}\] _has an integer solution \(\left(5,-18,-102\right)\)._ _Remark 3_.: The condition that \(r\) should be odd cannot be relaxed. **Example 3**.: _For \(a=13\), \(d=3\) and \(r=2\), the equation_ \[13x^{3}-y^{2}-z^{2}+xyz-95=0 \tag{2.14}\] _has an integer solution \(\left(2,-10,-7\right)\)._ ## 3. Hyperelliptic curves A hyperelliptic curve \(H\) over \(\mathbb{Q}\) is a smooth projective curve associated to an affine plane curve given by the equation \(y^{2}=f\left(x\right)\), where \(f\) is a square-free polynomial of degree at least \(5\). If the degree of \(f\) is \(2g+1\) or \(2g+2\), then the curve has genus \(g\). We write \(H\left(\mathbb{Q}\right)\) for the set of \(\mathbb{Q}\)-points on \(H\). Determining the rational points on hyperelliptic curves is one of the major problems in mathematics. The following is the general result regarding the size of \(H\left(\mathbb{Q}\right)\), which was conjectured by Mordell and was proved by Faltings: **Theorem 3.1**.: _[_2_]_ _If \(C\) is a smooth, projective and absolutely irreducible curve over \(\mathbb{Q}\) of genus at least \(2\), then \(C\left(\mathbb{Q}\right)\) is finite._ We may thus, at least theoretically, write down the finite set \(C\left(\mathbb{Q}\right)\). It is still a significant unresolved problem to perform this practically for a given curve. Given a hyperelliptic curve \(H\), we can define its (classical) _height_ to be the maximum of the absolute values of the coefficients. The Northcott property tells us that there are finitely many equations with bounded height. Thus, one may talk about densities and averages. In this regard, Bhargava [1] has proved that most hyperelliptic curves over \(\mathbb{Q}\) have no rational point. So, most of the time, calculating \(H\left(\mathbb{Q}\right)\) means proving \(H\left(\mathbb{Q}\right)=\emptyset\). In this section, we construct hyperelliptic curves corresponding to the equation \(ax^{d}-y^{2}-z^{2}+xyz-b=0\) with \(a\) and \(b\) as mentioned in Theorem 2.1. Then, we prove that \(H\left(\mathbb{Q}\right)=\emptyset\) (corroborating Bhargava [1]). 
The main ingredient to prove this is the following Nagell-Lutz type theorem (Theorem 3, [3]) proved by D. Grant. **Theorem 3.2**.: _[_3_]_ _Let \(C\) be a nonsingular projective curve of genus \(g\geq 1\) given by \(y^{2}=x^{2g+1}+b_{1}x^{2g}+\cdots+b_{2g}x+b_{2g+1}\), where \(b_{i}\in\mathbb{Z}\). Suppose_ \[\psi:C\left(\mathbb{Q}\right)\to J\left(\mathbb{Q}\right)\] _be the Abel-Jacobi map, defined by \(\psi\left(p\right)=\left[p-\infty\right]\), where \(J\left(\mathbb{Q}\right)\) is the Jacobian variety. If \(p=\left(x,y\right)\in C\left(\mathbb{Q}\right)\setminus\left\{\infty\right\}\) and \(\psi\left(p\right)\in J\left(\mathbb{Q}\right)_{\text{tors}}\), then, \(x,y\in\mathbb{Z}\) and either \(y=0\) or \(y^{2}\) divides discriminant of the polynomial \(x^{2g+1}+b_{1}x^{2g}+\cdots+b_{2g}x+b_{2g+1}\)._ For fixed \(m\) we define hyperelliptic curves, \[H_{m,a,b}:y^{2}-mxy=ax^{d}-m^{2}-b.\] * Suppose \(m\) is even. Then write (2.1) as: \[\left(y-\frac{mx}{2}\right)^{2}-\frac{m^{2}x^{2}}{4}=ax^{d}-m^{2}-b.\] (3.1) Multiplying (3.1) by \(a^{d-1}\) throughout, and using the fact that \(d\) is odd and divisible by \(3\), we have, \[\left(\left(y-\frac{mx}{2}\right)a^{\frac{d-1}{2}}\right)^{2}-a^{d-1}\frac{m^{ 2}x^{2}}{4}=(ax)^{d}-m^{2}a^{d-1}-ba^{d-1}.\] (3.2) We get the following hyperelliptic curve by substituting \(\left(\left(y-\frac{mx}{2}\right)a^{\frac{d-1}{2}}\right)=Y\) and \(ax=X\), \[H_{m,a,b}^{e}:Y^{2}-a^{d-3}\frac{m^{2}X^{2}}{4}=X^{d}-m^{2}a^{d-1}-ba^{d-1}.\] (3.3) * Now if \(m\) is odd, multiply (3.2) by \(4^{d}\) throughout to get \[\left(\left(y-\frac{mx}{2}\right)a^{\frac{d-1}{2}}2^{d}\right)^{2}-(4a)^{d-1} \,m^{2}x^{2} =(4ax)^{d}-m^{2}a^{d-1}4^{d}-ba^{d-1}4^{d}.\] Finally substitute \(\left(\left(y-\frac{mx}{2}\right)a^{\frac{d-1}{2}}2^{d}\right)=Y\) and \(4ax=X\), to get \[H_{m,a,b}^{o}:Y^{2}-(4a)^{d-3}\,m^{2}X^{2}=X^{d}-m^{2}a^{d-1}4^{d}-ba^{d-1}4^{d}.\] (3.4) Let, \[H_{m,a,b}=\begin{cases}H_{m,a,b}^{e}&\text{ if }m\text{ is even}\\ H_{m,a,b}^{o}&\text{ if }m\text{ is odd},\end{cases} \tag{3.5}\] be the hyperelliptic curves. **Theorem 3.3**.: _Let \(a\) and \(b\) be as defined in Theorem 2.1. For any \(m\in\mathbb{N}\), the hyperelliptic curve \(H_{m,a,b}\) has torsion-free Mordell-Weil group over \(\mathbb{Q}\)._ Proof.: Let \(a\) and \(b\) be fixed positive integers with \(a\equiv 1\pmod{12}\) and \(b=2^{d}a-3\). * For any even integer \(m\), consider the hyperelliptic curve \[H^{e}_{m,a,b}:Y^{2}-a^{d-3}\frac{m^{2}X^{2}}{4}=X^{d}-m^{2}a^{d-1}-ba^{d-1}.\] (3.6) By Theorem 3 of [3], if (3.6) has an integer solution \((X_{0},Y_{0})\), then \(\left(aX_{0},\left(\left(Y_{0}-\frac{mX_{0}}{2}\right)a^{\frac{d-1}{2}}\right),m\right)\) is a solution of (2.1). However, in Theorem 2.1 we have proved that it has no integer solutions. * For an odd integer \(m\), consider the hyperelliptic curve \[H^{o}_{m,a,b}:Y^{2}-(4a)^{d-3}\,m^{2}X^{2}=X^{d}-m^{2}a^{d-1}4^{d}-ba^{d-1}4^{d}.\] (3.7) Suppose (3.7) has a solution \((X_{0},Y_{0})\), then \(\left(4aX_{0},\left(\left(Y_{0}-\frac{mX_{0}}{2}\right)a^{\frac{d-1}{2}}2^{d }\right),m\right)\) is a solution of (2.1), which is a contradiction. ## 4. Numerical examples In this section we give some numerical examples corroborating our results in Corollary 2.2 and Remark 2. 
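The integer solutions reported in the table below can be checked by direct substitution. The following short Python sketch performs such a check for a few of the rows and for Examples 2 and 3; the function name and the selection of rows are ours, and this is only a verification aid, not the computation used to produce the table.

```python
# Verify, by direct substitution, some of the integer solutions quoted in this
# section, where b = 2^d * a - 3^r for each row (a, d, r, (x, y, z)).
def fruit(a, d, b, x, y, z):
    return a * x**d - y**2 - z**2 + x * y * z - b

rows = [
    (1, 3, 3, (5, 0, -12)),
    (1, 3, 5, (29, 12, -60)),
    (1, 3, 7, (5, 0, -48)),
    (13, 3, 3, (5, -18, -102)),   # Example 2
    (13, 3, 2, (2, -10, -7)),     # Example 3 (r even)
]
for a, d, r, (x, y, z) in rows:
    b = 2**d * a - 3**r
    print((a, d, r), fruit(a, d, b, x, y, z) == 0)  # prints True for each row above
```

Note that the listed solutions with \(r\) odd all have \(x\equiv 5\pmod{12}\), consistent with Corollary 2.1 and Remark 2.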
\begin{tabular}{||c c c c c||} \hline \(a\) & \(d\) & \(r\) & Equation & Solution \\ \hline \hline 1 & 3 & 3 & \(x^{3}-y^{2}-z^{2}+xyz+19=0\) & \((5,0,-12)\) \\ \hline 1 & 3 & 5 & \(x^{3}-y^{2}-z^{2}+xyz+235=0\) & \((29,12,-60)\) \\ \hline 1 & 3 & 7 & \(x^{3}-y^{2}-z^{2}+xyz+2179=0\) & \((5,0,-48)\) \\ \hline 1 & 3 & 9 & \(x^{3}-y^{2}-z^{2}+xyz+19675=0\) & \((-31,12,-30)\) \\ \hline 13 & 3 & 3 & \(13x^{3}-y^{2}-z^{2}+xyz-77=0\) & \((5,-18,-102)\) \\ \hline 13 & 3 & 5 & \(13x^{3}-y^{2}-z^{2}+xyz+139=0\) & \((5,0,-42)\) \\ \hline 13 & 3 & 7 & \(13x^{3}-y^{2}-z^{2}+xyz+2083=0\) &? \\ \hline 25 & 3 & 3 & \(25x^{3}-y^{2}-z^{2}+xyz-173=0\) & \((5,0,-42)\) \\ \hline \end{tabular} ## Acknowledgement This work was done during the first author's visit to the Institute of Mathematical Sciences (IMSc), Chennai, and he is grateful to the Institute for the hospitality and the wonderful working ambience. Both the authors are grateful to the Kerala School of Mathematics (KSoM), Kozhikode, for its support and wonderful ambience.
2308.16761
Learning Category Trees for ID-Based Recommendation: Exploring the Power of Differentiable Vector Quantization
Category information plays a crucial role in enhancing the quality and personalization of recommender systems. Nevertheless, the availability of item category information is not consistently present, particularly in the context of ID-based recommendations. In this work, we propose a novel approach to automatically learn and generate entity (i.e., user or item) category trees for ID-based recommendation. Specifically, we devise a differentiable vector quantization framework for automatic category tree generation, namely CAGE, which enables the simultaneous learning and refinement of categorical code representations and entity embeddings in an end-to-end manner, starting from the randomly initialized states. With its high adaptability, CAGE can be easily integrated into both sequential and non-sequential recommender systems. We validate the effectiveness of CAGE on various recommendation tasks including list completion, collaborative filtering, and click-through rate prediction, across different recommendation models. We release the code and data for others to reproduce the reported results.
Qijiong Liu, Lu Fan, Jiaren Xiao, Jieming Zhu, Xiao-Ming Wu
2023-08-31T14:29:10Z
http://arxiv.org/abs/2308.16761v6
# Co-evolving Vector Quantization for ID-based Recommendation ###### Abstract Category information plays a crucial role in enhancing the quality and personalization of recommendations. Nevertheless, the availability of item category information is not consistently present, particularly in the context of ID-based recommendations. In this work, we propose an alternative approach to automatically learn and generate entity (i.e., user and item) categorical information at different levels of granularity, specifically for ID-based recommendation. Specifically, we devise a co-evolving vector quantization framework, namely COVE, which enables the simultaneous learning and refinement of code representation and entity embedding in an end-to-end manner, starting from the randomly initialized states. With its high adaptability, COVE can be easily integrated into existing recommendation models. We validate the effectiveness of COVE on various recommendation tasks including list completion, collaborative filtering, and click-through rate prediction, across different recommendation models. We will publish the code and data2 for other researchers to reproduce our work. Footnote 2: [https://github.com/Jyonn/Cove](https://github.com/Jyonn/Cove) ## 1 Introduction Recommender systems Wang et al. (2017); Liu et al. (2022); Wu et al. (2022) aim to ease the burden of decision-making by automatically suggesting personalized item recommendations tailored to a user's preferences and historical behavior. They cater to diverse objectives such as list completion, collaborative filtering, and click-through rate prediction The varied objectives underscore the importance of devising methodologies that can adapt to different recommendation scenarios and deliver improved recommendations. When crafting recommendation models and algorithms, the integration of categorical information, such as product types Cai et al. (2021) and user locations Liu et al. (2022); Moreira, Jannach, and da Cunha (2019), emerges as a pivotal consideration. These categorical attributes adeptly capture inherent characteristics of users or items, establishing meaningful associations. Consequently, recommender systems can learn diverse granularities of entities (e.g., user or item) representations. Furthermore, category features serve to mitigate the cold-start problem, providing an additional layer of information for less active (sparsely interacting) entities Gogna and Majumdar (2015); Barman, Hasan, and Roy (2019). This supplementary information is progressively refined by interactions from active users or items during training, thereby aiding less active entities in obtaining more robust representations. However, not all scenarios provide category features, as many recommendation datasets only include ID information. To address the absence of category attributes in ID-based recommendation contexts, we employ vector quantization (VQ) techniques as the core clustering algorithm to automatically generate categorical features. However, the application of VQ poses two challenges, as detailed below. Firstly, previous vector quantization methods for recommendation Zhang et al. (2023); Rajput et al. (2023) often rely on _meaningful and fixed_ entity (e.g., user or item) embeddings, derived from side information like content-aware item embeddings using pretrained models. They usually adopt a two-stage design, where clustering and recommendation training are carried out separately. 
However, the lack of side information in ID-based recommendation hinders the generation of meaningful entity embeddings during the initial training phrase, making the two-stage approach impractical. To tackle this challenge, we propose a "co-evolving vector quantization" framework, as illustrated in Figure 1. It enables dynamic adjustments of both entity embeddings and code representations (from the quantization codebook), starting from their initial random states, through internal inter-dependencies and external recommendation tasks, resulting in a robust and stable form. To achieve this, we employ differentiable vector quantization techniques Van Den Oord, Vinyals et al. (2017) and introduce a commitment loss (Section 3.1) to the quantization loss. Secondly, it is crucial to select the appropriate level of detail for categories. Utilizing fine-grained categories may lead to sparse data for recommendations, whereas using coarse-grained categories may obscure important distinctions between entities. To address this challenge, we propose a cascaded clustering structure that can capture different levels of granularity in categorical attributes. To summarize, we introduce a co-evolving vector quantization framework for ID-based recommendation scenario, namely **COVE**, which offers several notable advantages and capabilities as outlined below. * **Easy adoption and high adaptability.** COVE is a pluggable module that can be conveniently integrated into a wide range of existing recommendation models for accommodating different recommendation scenarios, including list completion, collaborative filtering, and click-through rate prediction. * **End-to-end framework.** Unlike previous works Liu et al. (2023); Rajput et al. (2023) that adopt a multi-stage training approach and pre-extract categories (or "semantic ids") using clustering algorithms, COVE offers an end-to-end solution by training together with the recommender system. The end-to-end training allows for refining and optimizing the cate Figure 1: Illustration of our proposed co-evolving training. “Internal Learning” indicates code representations and entity embeddings are learned mutually through the internal quantization task, while “External Learning” refers to they are both supervised by the external recommendation task. The co-evolving training strategy enables their iterative upgrade to meaningful vectors from randomly initialized ones. gorization to align with specific recommendation objectives and improve the performance of the model over time. * **Effectiveness.** We conduct a comprehensive evaluation of COVE on multiple recommendation tasks, including list completion, collaborative filtering, and click-through rate prediction. The evaluation involves seven datasets and a comparison with 14 baseline methods. The results demonstrate the effectiveness of COVE, showcasing significant improvements across most scenarios. Notably, COVE demonstrates a relative improvement of up to 21.41% over state-of-the-art baselines in list completion tasks, highlighting its high effectiveness in this area. ## 2 Related Work ### Recommender Systems Recommender systems have been extensively studied in various application scenarios including (1) list completion, which aims to continue the user-curated list by sequence generation, (2) collaborative filtering (CF) that makes recommendation based on user-item interactions, and (3) click-through rate (CTR) prediction, which is a crucial task in the ranking phase of the recommendation pipeline. 
**List completion.** Pioneer works based on Markov chain McFee and Lanckriet (2011, 2012); Chen et al. (2012) or neural networks Chen et al. (2018); Volkovs et al. (2018); Gatzioura et al. (2019); Tran et al. (2019) are mostly proposed for automatic playlist continuation. In recent years, sequential recommenders Tang and Wang (2018); Hidasi et al. (2016); Sun et al. (2019); He et al. (2020) have been proposed to use an autoregressive way to generation items for list completion task, while FANS Liu et al. (2023) leverages non-autoregressive generation to improve both quality and efficiency. **Collaborative filtering.** To overcome the scalability and sparsity issues in large-scale systems, matrix factorization Koren et al. (2009) techniques and deep learning-based methods have been widely adopted to capture underlying preferences and characteristics for personalized recommendation. **Click-through rate prediction.** In recent years, deep learning-based CTR prediction models Cheng et al. (2016); Wang et al. (2017); Guo et al. (2018); Huang et al. (2019); Mao et al. (2023) have gained popularity. These models have demonstrated improved performance by leveraging the expressive power of neural networks to capture intricate patterns in user-item interactions. ### Vector Quantization Vector quantization (VQ) techniques Gray (1984) map a large set of input vectors into a small set of vectors (i.e., a codebook), which have been widely studied in computer vision Xia et al. (2013); Babenko and Lempitsky (2014); Razavi et al. (2019) and speech coding Buzo et al. (1980); Juang and Gray (1982) domains. Vector Quantization for Recommendation.To date, only a few studies explore the application of vector quantization techniques in the field of recommender systems. One line of research aims to improve the recommendation efficiency Ko et al. (2021); Lian et al. (2020); Van Balen and Levy (2019), while the other line targets in improving recommendation quality for content-based meaningful entity embedding Hou et al. (2023); Rajput et al. (2023). To the best of our knowledge, our COVE is the first to improve the recommendation quality in the ID-based recommendation context where side information is not available. ## 3 Proposed Framework: COVE Figure 1(a) illustrates our differentiable and cascaded vector quantization (COVE) framework, which is designed to enhance _id-based_ representations of both items and users. It involves a series of cascaded vector quantizers for extracting category-aware information at multiple levels of granularity. The vector quantizers are interconnected in a successive manner, with the output of one quantizer be the input to the next. The quantized multi-level code vectors are then fused and fed to a recommender system to facilitate downstream recommendation tasks, as shown in Figure 1(b) and 1(c). COVE and the recommender system are trained together in an end-to-end manner. ### Cascaded Vector Quantization Vector quantization Wu and Yu (2019) targets at grouping similar vectors into clusters by representing them with a small set of prototype vectors. We use a vector quantizer to locate the code vector within a codebook that closely matches the input embedding. The code vector is anticipated to capture and represent the categorical information associated with the input embedding. The vector quantizer includes a \(k\)-entry codebook \(\mathbf{E}\in\mathbb{R}^{k\times d}\), where \(k\) is the number of the code vectors and \(d\) is the dimension of each code vector. 
Given an input embedding \(\mathbf{z}\in\mathbb{R}^{d}\), nearest neighbour search is performed to find the most similar code to \(\mathbf{z}\) within \(\mathbf{E}\): \[j=\arg\min_{i\in\{1,2,\ldots,k\}}\|\mathbf{z}-\mathbf{e}_{i}\|_{2}^{2}, \tag{1}\] where \(\mathbf{e}_{i}(1\leq i\geq k)\) is any code vector in the codebook \(\mathbf{E}\), and \(j\) is the index of the matched code vector \(\mathbf{e}_{j}\). #### Cascaded Quantization Flow COVE employs a series of cascaded vector quantizers to capture categorical information at multiple levels of granularity. Figure 1(a) shows an example with three quantizers. Let \(H\) be the number of quantizers (or levels of granularity). Each quantizer \(Q^{(i)}\) has a \(v^{i}\)-entry codebook \(\mathbf{E}^{(i)}\), where \(i=1,2,\ldots,H\). The quantizers are interconnected in a cascaded fashion, generating _fine-to-coarse_ code vectors, i.e., \(v^{i}>v^{j}\) for \(i<j\). Each quantizer \(Q^{(i)}\) takes the output of the previous quantizer (i.e., \(Q^{(i-1)}\)) as input, creating a quantization flow defined as follows. \[\mathbf{z}_{\text{q}}^{(i)} =Q^{(i)}\left(\mathbf{z}^{(i-1)}\right), \tag{2}\] \[\text{where}\quad\mathbf{z}^{(i-1)} =\begin{cases}\mathbf{z},&\text{if }i=1\\ \mathbf{z}_{\text{q}}^{(i-1)}.&\text{else}\end{cases}\] where \(\mathbf{z}_{\text{q}}^{(i)}\) is the output of quantizer \(Q^{(i)}\). Figure 2: (a) Overview of our proposed co-evolving vector quantization (COVE) framework. (b) Integration of COVE to non-sequential recommenders. (c) Integration of COVE to sequential recommenders. #### Code Fusion Layer As shown in Figure 2b and 2c, with the quantized multi-level code vectors \(\mathbf{z}_{\text{q}}^{(i)}(i=1,2,\cdots,H)\), we employ an average pooling operation (\(\mathcal{P}\)) to combine them into a single vector: \[\mathbf{\bar{z}}=\frac{1}{H}\sum_{i}^{H}\mathbf{z}_{\text{q}}^{(i)}. \tag{3}\] Further, we use a weighted residual connection (\(\mathcal{A}\)) to add the original vector \(\mathbf{z}\) to obtain the final category-aware representation \(\mathbf{z}_{\text{c}}\), i.e., \[\mathbf{z}_{\text{c}}=\mathbf{z}+\alpha\mathbf{\bar{z}}, \tag{4}\] where \(\alpha\) is a hyperparameter that balances the two terms. #### Differentiable Back Propagation Since the nearest neighbour search algorithm is not differentiable, we utilize the straight-through estimator (STE) Bengio, Leonard, and Courville (2013) to approximate the gradient of each quantizer. Specifically, the gradient of the quantizer is approximated by the gradient of the identity function, which is defined as: \[\frac{\partial\mathbf{z}_{\text{q}}^{(i)}}{\partial\mathbf{z}^{(i-1)}}\approx \frac{\partial\mathbf{z}^{(i-1)}}{\partial\mathbf{z}^{(i-1)}}=\mathbf{I}, \tag{5}\] where \(\mathbf{I}\) is the identity matrix. Therefore, the quantization loss can be defined as: \[\begin{split} L_{\text{quant}}=&\sum_{i}^{H}\left( \|sg[\mathbf{z}^{(i-1)}]-\mathbf{z}_{\text{q}}^{(i)}\|_{2}^{2}\right)+\\ &\beta\sum_{i}^{H}\left(\|\mathbf{z}^{(i-1)}-sg[\mathbf{z}_{ \text{q}}^{(i)}]\|_{2}^{2}\right),\end{split} \tag{6}\] where \(sg\) is the stop gradient operation, and \(\beta\) is a hyper-parameter that controls the trade-off between the two losses, i.e., the vector quantization loss (the first term) and the commitment cost (the second term). The vector quantization loss encourages the quantizer to select the closest vector in the codebook, while the commitment loss forces the quantizer to commit to a particular codebook entry. 
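The quantization machinery in Eqs. (1)-(6) can be summarized in a short PyTorch-style sketch. The class and variable names below are illustrative assumptions and the released code may be organized differently; the sketch only mirrors the nearest-neighbour lookup of Eq. (1), the cascaded flow of Eq. (2), the fusion of Eqs. (3)-(4), and the straight-through estimator with the loss of Eqs. (5)-(6).

```python
# A minimal PyTorch sketch of the cascaded, differentiable vector quantization.
# Class/argument names are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoveQuantizer(nn.Module):
    def __init__(self, dim: int, codebook_sizes=(100, 10), alpha=1.0, beta=1.0):
        super().__init__()
        # One randomly initialized codebook E^(i) per level, fine-to-coarse (v^1 > v^2 > ...)
        self.codebooks = nn.ModuleList(nn.Embedding(v, dim) for v in codebook_sizes)
        self.alpha, self.beta = alpha, beta

    def forward(self, z: torch.Tensor):
        outputs, quant_loss, current = [], z.new_zeros(()), z
        for codebook in self.codebooks:
            # Eq. (1): nearest-neighbour search within the codebook
            dists = torch.cdist(current, codebook.weight)          # (batch, v^i)
            z_q = codebook(dists.argmin(dim=-1))                   # matched codes e_j
            # Eq. (6): VQ loss + beta * commitment loss (sg = detach), mean-reduced here
            quant_loss = quant_loss + F.mse_loss(z_q, current.detach()) \
                         + self.beta * F.mse_loss(current, z_q.detach())
            # Eq. (5): straight-through estimator, gradients flow through `current`
            z_q = current + (z_q - current).detach()
            outputs.append(z_q)
            current = z_q                                          # Eq. (2): cascade
        z_bar = torch.stack(outputs).mean(dim=0)                   # Eq. (3): average pooling
        z_c = z + self.alpha * z_bar                               # Eq. (4): weighted residual
        return z_c, quant_loss

# Usage: z_c replaces the raw entity embedding in the recommender, and quant_loss is
# added to the recommendation loss with weight omega_q, as in Eqs. (7)-(8).
```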
### Co-Evolving Training As shown in Figure 2b and 2c, our COVE can be easily integrated into a wide range of recommender systems, including non-sequential recommenders and sequential recommenders. More precisely, COVE performs quantization on entity embeddings, extracting categorical information termed as cascaded code representation. Furthermore, it combines the initial entity embeddings, producing category-aware entity embeddings that seamlessly integrate within the recommender system. It's worth noting that we have designed both user COVE and item COVE modules, each meticulously tailored to process user and item embeddings, respectively. For the sequential recommendation context, user embedding will be learned by item history list, so only item COVE is required. The cascaded codebook of COVE and entity embeddings are both initialized randomly prior to training. Initially, entity embeddings lack meaningful information, leading to insignificant quantization outcomes. As training progresses, the COVE module and the recommender model are jointly optimized through an external recommendation task (i.e., recommendation loss), gradually imbuing entity embeddings with semantic context. Furthermore, an internal quantization loss is introduced to enhance the clustering effectiveness of the codebook. The enriched category information (code representation) subsequently contributes to improved recommendation performance for entity embeddings in subsequent training batches. This cyclic iteration results in a double-helix refinement process, where the codebook and entity embeddings continuously enhance their representation learning throughout the training process. Specifically, the loss objective of the sequential and non-sequential recommenders can be respectively defined as: \[L_{\text{seq}}=L_{\text{rec}}+\omega_{q}L_{\text{quant}}^{\text{item}}, \tag{7}\] \[L_{\text{non\_seq}}=L_{\text{seq}}+\omega_{q}L_{\text{quant}}^{\text{user}}, \tag{8}\] where the hyperparameter \(\omega_{q}\) balances the objectives of accurate recommendation and codebook quantization during the training process. ## 4 Experiment ### Experimental Setup #### Datasets. We conducted offline experiments on three recommendation tasks, namely list completion, collaborative filtering (CF), and click-through rate (CTR) prediction. For the list completion task, we use three real-world datasets: Zhihu, Spotify, and Goodreads, which were crawled and compiled by He et al. (2020). For the collaborative filtering task, we utilize two public datasets: Amazon Toys and Amazon Kindle Store, namely Toys and Kindle, respectively. Regarding the CTR prediction task, we employ two public datasets: MIND Wu et al. (2020) (small version) and MovieLens Harper and Konstan (2015) (100K version). The dataset statistics can be found in Table 1. #### Preprocessing. For the list completion task, we adopt the data preprocessing steps proposed by Liu et al. (2023). We iteratively perform the following two operations until the data no longer changes: 1) remove items with a frequency less than 10 from all lists; 2) truncate or filter the item list according to the maximum and minimum lengths specific to each dataset. Furthermore, we uniformly divide a qualifying list into two segments, namely the input and target lists. The lists are then partitioned into training, validation, and testing sets using an 8:1:1 ratio. For the CF and CTR prediction datasets, we only use the user-item interaction data without any additional information. 
To be specific, for the MIND dataset, user historical behaviors are transformed into a list of user-item pairs, which are subsequently included in the training set. More details about the dataset preprocessing will be provided in the public code repository upon accepted. #### Baselines and Variants of Our Method. **List completion.** We take the state-of-the-art sequential recommendation methods and the item list completion models as baselines, including Caser Tang and Wang (2018), GRU4Rec Hidasi et al. (2016), SASRec Kang and McAuley (2018), BERT4Rec Sun et al. (2019), CAR He et al. (2020)) and FANS Liu et al. (2023). We integrate COVE into BERT4Rec and FANS to obtain \(\text{COVE}_{\text{BERT4Rec}}\) and \(\text{COVE}_{\text{FANS}}\) models, respectively. It is worth noting that FANS Liu et al. (2023) pre-extracts categorical item features based on the curated item lists among training, validation, and testing sets. These categorical knowledge is also added into baseline models for a fair comparison in the FANS paper. Since we learn the cascaded categorical features in an end-to-end manner, we do not use the pre-extracted categorical entity features in our experiments for both our method variants and baselines. **Collaborative filtering.** We compare our method with representative CF models as baselines, including BPRMF Rendle et al. (2012), NeuMF He et al. (2017), CFKG Zhang et al. (2018) and LGCN He et al. (2020) We integrate our proposed COVE module into these baselines and denote them as \(\text{COVE}_{\text{BPRMF}}\), \(\text{COVE}_{\text{NeuMF}}\), \(\text{COVE}_{\text{CFKG}}\), and \(\text{COVE}_{\text{LGCN}}\), respectively. \begin{table} \begin{tabular}{c|c c c|c c|c c} \hline \hline & \multicolumn{3}{c|}{**List Completion**} & \multicolumn{2}{c|}{**CTR**} & \multicolumn{2}{c}{**CF**} \\ \cline{2-9} **Datasets** & **Zhihu** & **Spotify** & **Goodreads** & **MIND** & **MovieLens** & **Toys** & **Kindle** \\ \hline \(\#\)**Lists** & 18,704 & 72,152 & 15,426 & 94,057 & 943 & 19,413 & 68,224 \\ \(\#\)**Items** & 36,005 & 104,695 & 47,877 & 65,238 & 1,682 & 11,925 & 61,935 \\ \(\#\)**Interactions** & 927,781 & 6,809,820 & 1,589,480 & 1,756,555 & 52,480 & 623,023 & 2,664,795 \\ **Items per list** & 49.59 & 94.38 & 103.04 & - & - & - & - \\ **List Range** & \(10\sim 200\) & \(20\sim 300\) & \(20\sim 300\) & - & - & - & - \\ **Samples** & - & - & - & 9,993,270 & 69,881 & 1,246,064 & 5,329,590 \\ **Density** & 0.138\% & 0.089\% & 0.215\% & 0.163\% & 4.406\% & 0.538\% & 0.136\% \\ \hline \hline \end{tabular} \end{table} Table 1: Dataset statistics. The density is defined as the ratio of the number of interactions to the number of all possible interactions. **Click-through rate prediction.** We compare our method with the widely used and state-of-the-art deep CTR models, including DeepFM Guo et al. (2018), DCN Wang et al. (2017), FiBiNET Huang, Zhang, and Zhang (2019), and FinalMLP Mao et al. (2023). We integrate our proposed COVE module into these baselines and denote the integrated models as \(\text{COVE}_{\text{DeepFM}}\), \(\text{COVE}_{\text{DCN}}\), \(\text{COVE}_{\text{FinalMLP}}\), and \(\text{COVE}_{\text{FiBiNET}}\), respectively. #### Evaluation Protocols. We follow the common practice Shi et al. (2019) to evaluate the effectiveness of recommendation models with the widely used metrics, i.e., Normalized Discounted Cumulative Gain Jarvelin and Kekalainen (2002) (NDCG@k) and Hit Ratio (HR@k). In this work, we set \(k=\{5,10\}\). #### Implementation Details. 
During training, we adopt the Adam optimizer as the gradient descent algorithm. For all models, the embedding dimension is set to 64. **For the list completion task**, we set the batch size to 256 and the learning rate to 0.01 following Liu et al. (2023). We use 3 Transformer layers for all Transformer-based models and 3 hidden layers for the GRU4Rec model. For the Caser model, we follow the original implementation and settings, and set the max sequence length to 5. We set the number of attention heads to 8 for all Transformer-based methods on the three datasets of list completion. **For the collaborative filtering task**, we set the batch size to 1024 and the learning rate to 0.001. For the LGCN model, we set the number of GCN layers to 3. **For the CTR prediction task**, we set the batch size to 5000, the learning rate to 0.001, the number of DNN layers to 3, the size of each hidden layer to 1000, and the dropout rate to 0.1 for all models. For the DCN model, we set the number of cross layers to 3. For the FiBiNET model, we set the number of feature interaction blocks to 3. We carefully tune the hyper-parameters of all models on the validation set and report the best results achieved on the test set. The results are averaged over 5 runs. Due to space constraints, we will furnish the details in future publications. All the methods were trained using NVIDIA GeForce RTX 3090 with 24GB memory. \begin{table} \begin{tabular}{c c|c c c c c c|c c c c|c c} \hline \hline \multirow{2}{*}{**Models**} & \multirow{2}{*}{**Models**} & **Caser** & **GRU4Rec** & **SASRec** & **CAR** & **BERT4Rec** & **COVE\({}_{\text{BERT4Rec}}\) & **FANS\({}^{*}\)** & **FANS\({}_{\text{TSC}}\)** & **COVE\({}_{\text{EANS}}\)** & **Imp.** \\ & & (2018) & (2016) & (2018) & (2020) & (2019) & (ours) & (2023) & (2023) & (ours) & **Imp.** \\ \hline \hline \multirow{8}{*}{**Baselines**} & **No\(\mathbf{\leqslant}\)** & 0.0065 & 0.0058 & 0.0046 & 0.0050 & 0.0136 & **0.0220** & 0.0256 & 0.0223 & **0.0301** & 17.58\% \\ & **No\(\mathbf{10}\)** & 0.0105 & 0.0085 & 0.0074 & 0.0087 & 0.0198 & **0.0305** & 0.0389 & 0.0337 & **0.0428** & 10.03\% \\ & **HR@5** & 0.0926 & 0.0819 & 0.0728 & 0.0770 & 0.1664 & **0.2333** & **0.2857** & 0.2670 & **0.3043** & 6.20\% \\ & **HR@10** & 0.1812 & 0.1597 & 0.1423 & 0.1664 & 0.2933 & **0.3987** & 0.4819 & 0.4604 & **0.4859** & 0.83\% \\ \hline \multirow{8}{*}{**Baselines**} & **No\(\mathbf{\leqslant}\)** & 0.0187 & 0.0041 & 0.0037 & 0.0040 & 0.0136 & **0.0202** & 0.0313 & 0.0315 & **0.0352** & 11.75\% \\ & **No\(\mathbf{10}\)** & 0.0262 & 0.0057 & 0.0054 & 0.0057 & 0.0229 & **0.0298** & 0.0461 & 0.0438 & **0.0519** & 12.58\% \\ & **HR@5** & 0.2786 & 0.0805 & 0.0825 & 0.0793 & 0.2350 & **0.3242** & 0.4071 & 0.3999 & **0.4385** & 7.71\% \\ & **HR@10** & 0.3983 & 0.1236 & 0.1257 & 0.1227 & 0.3212 & **0.4559** & 0.5927 & 0.5552 & **0.6282** & 5.99\% \\ \hline \multirow{8}{*}{**Baselines**} & **No\(\mathbf{\leqslant}\)** & 0.0039 & 0.0053 & 0.0049 & 0.0040 & 0.0108 & **0.0130** & 0.0334 & 0.0293 & **0.0399** & 19.46\% \\ & **No\(\mathbf{10}\)** & 0.0053 & 0.0068 & 0.0064 & 0.0058 & 0.0160 & **0.0180** & 0.0467 & 0.0418 & **0.0567** & 21.41\% \\ \cline{1-1} & **HR@5** & 0.0694 & 0.0856 & 0.0830 & 0.0726 & 0.1634 & **0.1829** & 0.3819 & 0.3268 & **0.4275** & 11.94\% \\ \cline{1-1} & **HR@10** & 0.1109 & 0.1252 & 0.1187 & 0.1109 & **0.2678** & **0.2678** & 0.5149 & 0.4514 & **0.5473** & 6.29\% \\ \hline \hline \end{tabular} \end{table} Table 2: Effectiveness of our COVE method in the list completion 
scenario. We bold the best results. The asterisk (*) indicates that the method uses pre-extracted categorical features which are learned from the overall dataset, including the test set.

[Results table garbled in extraction; it reports N@5, N@10, HR@5 and HR@10 for baseline recommenders and their COVE variants, together with the relative improvements.]

### List Completion

Table 2 presents a comparison of state-of-the-art sequential recommenders with our proposed COVE variants on the list completion task. Based on the results, we can make the following observations. **Firstly**, for both autoregressive and non-autoregressive models, our proposed COVE module significantly improves the performance of the baseline models. For example, \(\text{COVE}_{\text{BERT4Rec}}\) achieves an average improvement of 38% and 31% in terms of NDCG@5 and HR@5 across all datasets, compared with BERT4Rec. **Secondly**, since the FANS models leverage item category information in their design, they outperform the other autoregressive baselines. However, our COVE-integrated variant \(\text{COVE}_{\text{FANS}}\) still achieves better performance than FANS, which implies that end-to-end training with differentiable vector quantization can learn better clustering features than the word2vec+kmeans approach of Liu et al. (2023), which relies on pre-extracted features. **Thirdly**, on the Spotify dataset, the CNN-based Caser model performs better than the Transformer-based BERT4Rec model, which is aligned with the observation in Liu et al. (2023). One possible reason is that local patterns in the Spotify dataset matter more than global information.

Table 4: Effectiveness of our COVE method in the click-through rate prediction scenario. We bold the best results. [Table body garbled in extraction.]

Figure 3: Influence of the use of user and item COVE in non-sequential recommendation.

Table 5: Impact of the number of COVE layers (\(H\)) and the number of entries of each layer (\(v^{i}\)). The best results are indicated in bold, while the second-best results are underlined. A hyphen (-) indicates the absence of a layer. For example, “100(\(v^{1}\)) 10(\(v^{2}\)) -(\(v^{3}\))” means that the COVE has only two layers, and the first and second layers correspond to a 100-entry and a 10-entry codebook, respectively. We fix \(\alpha,\beta,\omega_{\text{c}},\omega_{\text{q}}\) to \(1.0\) in this experiment. [Table body garbled in extraction.]

### Collaborative Filtering

Table 3 displays the results of popular CF models, along with our proposed COVE variants, on the collaborative filtering task. Our proposed COVE consistently enhances the performance on the two datasets, resulting in significant improvements over the baseline models.

### Click-Through Rate Prediction

Table 4 shows the results of widely used CTR prediction models and our proposed COVE variants on the CTR prediction task. Across all CTR prediction models, our COVE variants outperform their baseline counterparts.

### Ablation Study

We study the effects of the number of layers and the number of entries (i.e., the codebook size) in COVE. We vary the number of layers from 1 to 3 and the number of entries within a range from 10 to 8,000. We fix the other hyper-parameters and report the results of \(\text{COVE}_{\text{BERT4Rec}}\) and \(\text{COVE}_{\text{FANS}}\).
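COVE's quantizer is specified earlier in the paper; purely to make the knobs varied in this ablation concrete, the sketch below implements a generic multi-level (residual) vector quantizer in PyTorch, with one codebook per layer (entry counts play the role of \(v^{i}\)), a straight-through estimator, a commitment term weighted by \(\beta\), and a residual connection weighted by \(\alpha\). The loss weights \(\omega_{\text{q}}\) and \(\omega_{\text{c}}\) would scale the returned quantization loss and a separate codebook classification loss in the overall objective (not shown). Class and variable names are illustrative, not the authors' code, and the exact COVE formulation may differ.

```python
# Illustrative sketch only: a generic H-level residual vector quantizer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiLevelVQ(nn.Module):
    def __init__(self, dim, entries_per_layer=(500, 10), alpha=1.0, beta=1.0):
        super().__init__()
        # one codebook per layer; the entry counts correspond to v^i in Table 5
        self.codebooks = nn.ModuleList([nn.Embedding(n, dim) for n in entries_per_layer])
        self.alpha, self.beta = alpha, beta

    def forward(self, x):                                   # x: (batch, dim) entity embeddings
        residual, quantized, vq_loss = x, torch.zeros_like(x), x.new_zeros(())
        codes = []
        for book in self.codebooks:
            d = torch.cdist(residual, book.weight)           # distances to every codebook entry
            idx = d.argmin(dim=-1)                           # nearest entry per example
            codes.append(idx)
            q = book(idx)
            # VQ-VAE-style codebook and commitment losses (commitment weighted by beta)
            vq_loss = vq_loss + F.mse_loss(q, residual.detach()) \
                              + self.beta * F.mse_loss(residual, q.detach())
            # straight-through estimator: forward uses q, gradients flow to the encoder
            q = residual + (q - residual).detach()
            quantized = quantized + q
            residual = residual - q
        return x + self.alpha * quantized, codes, vq_loss    # residual connection weighted by alpha

# Toy usage: quantize a batch of 64-dimensional embeddings with a (500, 10) codebook hierarchy.
vq = MultiLevelVQ(dim=64, entries_per_layer=(500, 10))
out, codes, vq_loss = vq(torch.randn(32, 64))
```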
As illustrated in Table 5, we conduct experiments on the Zhihu and Goodreads datasets. From the results, we can make the following observations. **Firstly**, the best results of the two-layer COVE variants are better than those of the one-layer variants on both datasets, indicating that COVE can effectively capture hierarchical category information to further improve the entity representations. **Secondly**, different variants prefer different numbers of entries. For example, on the Zhihu dataset, \(\text{COVE}_{\text{BERT4Rec}}\) prefers a small number of entries in the first layer (i.e., 200), while \(\text{COVE}_{\text{FANS}}\) prefers a large number of entries in the same layer (i.e., 500). **Thirdly**, different datasets prefer different numbers of entries. For example, for the \(\text{COVE}_{\text{BERT4Rec}}\) variant, the best number of entries is 20 on the Zhihu dataset and 50 on the Goodreads dataset. **Fourthly**, as the number of entries increases, the performance of the COVE variants first increases and then decreases. One possible reason is that a small number of entries may exhibit boundary effects, and as the entry size increases, the boundaries of the clusters gradually become blurred. However, when the number of entries is too large, the number of entities in each entry becomes too small, which may lead to insufficient learning of the categorical features. Moreover, the layer and entry numbers need to be carefully adjusted, otherwise they may have a negative effect. We also test the effectiveness of the dual COVE (i.e., using both user and item COVE) in the CTR prediction scenario. As shown in Figure 3, we test four base models on the MovieLens dataset, and the results show that both user and item COVE can boost the performance of the baselines.

Figure 4: Impact of the residual connection weight \(\alpha\), the quantization commitment cost \(\beta\), the codebook classification loss weight \(\omega_{\text{c}}\), and the quantization loss weight \(\omega_{\text{q}}\). We use the model with \(\alpha=0\) as the reference baseline for (a) and (b), and measure the _relative improvement_ of each metric compared to the baseline for various values of \(\alpha\), defined as \((m_{\alpha}-m_{0})/m_{0}\times 100\%\), where \(m\) is one of the metrics in {N@5, N@10, HR@5, HR@10}. Therefore, the relative improvement of \(\alpha=0\) is constant at 0%. Similarly, we use the model with \(\beta=0\) as the reference baseline for (c) and (d), \(\omega_{\text{q}}=0\) for (e) and (f), and \(\omega_{\text{c}}=0\) for (g) and (h).

### Impact of Hyper-parameters

We explore the impacts of the residual connection weight \(\alpha\), the quantization commitment cost \(\beta\), the quantization loss weight \(\omega_{\text{q}}\), and the codebook classification loss weight \(\omega_{\text{c}}\). The experiments are conducted on two list completion datasets, i.e., Zhihu and Goodreads. Based on the results from Section 4.5, we take the best COVE configuration of the \(\text{COVE}_{\text{FANS}}\) model, i.e., (500, 10) for the Zhihu dataset and (500, 50) for the Goodreads dataset. Based on the results from Figure 4, we can make the following observations. **Firstly**, the performance of the baselines (i.e., when a hyper-parameter is set to \(0\)) is inferior in most cases, indicating the effectiveness of these hyper-parameters. **Secondly**, different datasets achieve the best performance at different hyper-parameter settings.
For example, the Zhihu dataset reaches the best performance at \(\alpha=0.6\), while for the Goodreads dataset, \(\alpha=1.0\). **Thirdly**, unlike the computer vision domain, where the quantization commitment cost \(\beta\) is usually set to \(0.25\) (Van Den Oord, Vinyals et al., 2017), in the recommendation domain a higher \(\beta\) (i.e., \(1.0\) for the Zhihu dataset or \(0.50\) for the Goodreads dataset) yields higher performance. **Fourthly**, for the Zhihu dataset the hyper-parameters give equivalent performance when set to \(1\) in Figure 4(a)(c)(e)(g) (e.g., the performance at \(\alpha=1\) in Figure 4(a) equals that at \(\beta=1\) in Figure 4(c)), so we can assess the performance when each hyper-parameter is set to \(0\) by examining the range of the vertical axis (e.g., comparing the performance at \(\alpha=0\) in Figure 4(a) with that at \(\beta=0\) in Figure 4(c)). A wider range signifies a larger disparity between the performance at \(0\) and at \(1\): when that particular hyper-parameter is set to \(0\) the resulting performance is poorer, highlighting its greater significance. We can therefore observe that the ranking of importance for these four hyper-parameters is \(\omega_{\text{q}}>\beta>\alpha\approx\omega_{\text{c}}\). Similarly, according to Figure 4(b)(d)(f)(h) for the Goodreads dataset, the importance ranking is \(\omega_{\text{q}}\approx\beta>\alpha>\omega_{\text{c}}\).

### Visualization

We present the results of our visualization experiments conducted on the MIND news dataset, where each news article is assigned a specific category from a set of 18 categories (previously unused in our experiments). Our evaluation focuses on two variants, \(\text{COVE}_{\text{FibNET}}\) and \(\text{COVE}_{\text{DeepFM}}\), which are trained using a single-level vector quantizer with a 100-entry codebook. To assess the effectiveness of the clustering, we randomly select four real categories, namely "lifestyle", "travel", "weather", and "foodanddrink". For each category, we count the number of news articles of that category assigned to each codebook entry, which yields, after sorting, an array of length 100. Next, we perform a cumulative sum to calculate the percentage of the selected category covered by the first \(i\) codebook entries.

Figure 5: Visualization of the learned code representations.

Figure 5 presents the visualization of these results. Each line indicates a real category, and each point \((i,j)\) on a line corresponds to the percentage of the selected category covered by considering \(i\) codebook entries. We can make the following observations. **Firstly**, the lines representing the four selected categories perfectly overlap with each other, demonstrating the fairness of COVE towards different categories and the robustness of the learned code representations. **Secondly**, a steeper line indicates a higher clustering performance, as it implies that a greater proportion of the news articles under the corresponding category can be represented using fewer codebook entries. In \(\text{COVE}_{\text{DeepFM}}\), the top-20 entries cover almost 100% of the news articles of the current category, while \(\text{COVE}_{\text{FibNET}}\) needs the top-40 entries to achieve a similar coverage, indicating that \(\text{COVE}_{\text{DeepFM}}\) performs better than \(\text{COVE}_{\text{FibNET}}\).
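The coverage curves of Figure 5 follow from a sort and a cumulative sum over the per-entry counts. The snippet below is a minimal NumPy sketch of this computation; it is illustrative only, and the array of per-entry counts is a random placeholder rather than the authors' data or code.

```python
import numpy as np

def coverage_curve(counts_per_entry):
    """counts_per_entry[e]: number of articles of one real category assigned to codebook entry e.
    Returns the fraction of that category covered by the top-i entries, for i = 1..num_entries."""
    sorted_counts = np.sort(counts_per_entry)[::-1]   # most-used entries first
    cumulative = np.cumsum(sorted_counts)
    return cumulative / cumulative[-1]                # fraction covered by the first i entries

# Illustrative usage with random counts for a 100-entry codebook.
rng = np.random.default_rng(0)
counts = rng.integers(0, 50, size=100)
curve = coverage_curve(counts)
print(curve[19])   # fraction of the category covered by the top-20 entries
```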
## 5 Conclusion We have proposed a novel vector quantization approach, namely COVE, to learn category-aware entity representations at multiple granularity levels in ID-based recommender systems. The flexibility of COVE allows for its seamless integration into a variety of existing recommender systems. Through comprehensive experiments conducted across diverse recommendation scenarios, we have demonstrated the effectiveness of COVE in enhancing the performance of various recommendation models. Additionally, our visualization experiments have further validated the robustness of the learned category-aware code representations.
2309.11327
Leveraging Data Collection and Unsupervised Learning for Code-switched Tunisian Arabic Automatic Speech Recognition
Crafting an effective Automatic Speech Recognition (ASR) solution for dialects demands innovative approaches that not only address the data scarcity issue but also navigate the intricacies of linguistic diversity. In this paper, we address the aforementioned ASR challenge, focusing on the Tunisian dialect. First, textual and audio data is collected and in some cases annotated. Second, we explore self-supervision, semi-supervision and few-shot code-switching approaches to push the state-of-the-art on different Tunisian test sets, covering different acoustic, linguistic and prosodic conditions. Finally, and given the absence of conventional spelling, we produce a human evaluation of our transcripts to avoid the noise coming from spelling inadequacies in our testing references. Our models, which can transcribe audio samples in a linguistic mix involving Tunisian Arabic, English and French, are released for public use and further improvements, along with all the data used during training and testing.
Ahmed Amine Ben Abdallah, Ata Kabboudi, Amir Kanoun, Salah Zaiem
2023-09-20T13:56:27Z
http://arxiv.org/abs/2309.11327v2
Leveraging Data Collection and Unsupervised Learning for Code-Switched Tunisian Arabic Automatic Speech Recognition

###### Abstract

Crafting an effective Automatic Speech Recognition (ASR) solution for dialects demands innovative approaches that not only address the data scarcity issue but also navigate the intricacies of linguistic diversity. In this paper, we address the aforementioned ASR challenge, focusing on the Tunisian dialect. First, textual and audio data is collected and in some cases annotated. Second, we explore self-supervision, semi-supervision and few-shot code-switching approaches to push the state-of-the-art on different Tunisian test sets, covering different acoustic, linguistic and prosodic conditions. Finally, and given the absence of conventional spelling, we produce a human evaluation of our transcripts to avoid the noise coming from spelling inadequacies in our testing references. Our models, which can transcribe audio samples in a linguistic mix involving Tunisian Arabic, English and French, are released for public use and further improvements, along with all the data used during training and testing.

Ahmed Amine Ben Abdallah\({}^{1}\)†, Ata Kabboudi\({}^{2}\), Amir Kanoun\({}^{3}\), Salah Zaiem\({}^{4}\)† \({}^{1}\)Tunis Business School; \({}^{2}\)University of Michigan-Dearborn; \({}^{3}\)Abshore; \({}^{4}\)LTCI, Telecom Paris, Institut Polytechnique de Paris

Keywords: speech recognition, code-switching

Footnote †: These authors contributed equally to this work.

## 1 Introduction

Several recent works have been trying to extend the number of languages and dialects covered by high-performance speech recognition technology, with models covering hundreds and even thousands of languages [1, 2]. We evaluated a few released models on Tunisian data and collected the results in Table 1. The results show that even massively multilingual models fail to reach reasonable performance on Tunisian ASR test sets, code-switched or not. This justifies the need for local models tackling the needs of specific idioms. In this context, Tunisian ASR has been explored in the last decade, mainly by Tunisian scholars. Linguists first focused on developing orthographic conventions for annotators [3, 4]. Then, from hybrid techniques [5] to end-to-end approaches [6], the models have suffered from the lack of annotated resources, and thus from poor generalization. This work tries to overcome these issues through, first, an effort on multi-source data collection and annotation, and, second, the exploitation of recent unsupervised techniques. Getting closer to realistic Tunisian speech, this work also proposes a first dataset for Tunisian code-switched ASR. A large part of Tunisians use French and English words and expressions in formal or informal settings [8]. The dataset, collected from radio broadcasts and podcasts, shows the extent of this phenomenon and offers a challenging real-conditions low-resource ASR task for the code-switching community. Code-switching, _i.e._ the practice of alternating between two or more languages or dialects within a single conversation or discourse, has been an active research domain in speech recognition [9]. However, with the exception of a few works involving Arabic dialects [10, 11], a major part of code-switching research has been focusing on English-Mandarin or English-Hindi situations [12, 9]. This work presents datasets and methods handling dialect code-switching with three languages involved, in real-world spontaneous conversations.
Thus, our contributions are fourfold:

* We collect and release Tunisian audio, annotated or not, and textual data1. These cover different conditions (spontaneous versus read speech, code-switched versus non code-switched), allowing the establishment of diverse benchmarks to foster research in the community. Footnote 1: Data is available here: [https://zenodo.org/record/8370566](https://zenodo.org/record/8370566)
* We explore self-supervision, semi-supervision and few-shot code-switching techniques, pushing the boundaries of Tunisian ASR and reaching reasonable performance in code-switched scenarios.
* All the models are released together with their code and can be used publicly 2 with permissive licenses 3. Footnote 2: Demo spaces available here: [https://huggingface.co/SalahZa](https://huggingface.co/SalahZa)
* A human evaluation is conducted to assess the impact of the absence of spelling conventions in Tunisian Arabic. Footnote 3: Models are available here: [https://huggingface.co/SalahZa](https://huggingface.co/SalahZa)

First, Section 2 describes the data collection and annotation process, and the public data used in our experiments. Second, Section 3 details the training approaches and choices leading to the released baseline models. Finally, Section 4 covers the results obtained, and describes the human evaluation process and its conclusions.

## 2 Data Collection and Preprocessing

This section presents the collection process of the textual and audio data used in the remainder of the paper.

| Test Sets | TARIC | TunSwitch TO | TunSwitch CS |
| --- | --- | --- | --- |
| MMS 1B All. [2] | 139.4 | 104.7 | 102.0 |
| Wav2Vec2.0 Ar. [7] | 95.3 | 89.7 | 96.4 |
| Whisper Large v2 [1] | 119.5 | 127.3 | 105.8 |
| Whisper Large v2 Ar. | 81.8 | 74.1 | 85.9 |

Table 1: Failure of multilingual or Standard Arabic models on a few Tunisian Arabic testing settings. The results show the Word Error Rate (WER). “TunSwitch CS” contains code-switching while the other two do not.

### Textual Data

Given the scarcity of good-quality Tunisian textual data, previous ASR works have only been using data from the training and validation sets for language model training [6]. In this work, we incorporate Tunisian text data sourced from Tunisiya [13], a vast, openly accessible corpus of Tunisian Arabic. We also scraped code-switched data from various online sources and public forums. To refine the dataset, we systematically eliminate diacritics, punctuation, special characters, and phrases containing numerical values. Statistics about the two resulting sets are available in Table 3.

### Audio Data

#### 2.2.1 TunSwitch Collection Tool

We developed a tool for collecting Tunisian dialect data, prompting users to record themselves reading provided phrases. We sourced sentences from Tunisiya [13]. These sentences are consequently removed from the LM training corpus. 89 persons participated, leading to the collection of 2631 distinct phrases. This set will be called TunSwitch TO, "TO" standing for Tunisian Only, as these sentences do not contain non-Tunisian words.

#### 2.2.2 TunSwitch CS

In response to the limited availability of paired Text-Speech Tunisian datasets with code-switching, we have built a corpus through meticulous manual annotation. This process was facilitated by using the Doccano annotation tool [14].
Whenever encountered, French and English words are enclosed within "<fr>...</fr>" or "<en>...</en>" tags. Tunisian words are left without any enclosing tags. While these tags have not been used in the proposed models, they allow computing language-usage statistics and may be useful for further approaches handling code-switching. The resulting set is released as TunSwitch CS, "CS" standing for Code-Switched. As shown in Figure 1, TunSwitch CS, with \(13.9\%\) French and \(13.3\%\) English words, contains \(5\) times more code-switching than the STAC dataset, the only previously available code-switched resource. The TunSwitch CS dataset samples come from a set of radio shows and podcasts, representing diverse topics and a large number of unique speakers. The audio files are first segmented into chunks, prioritizing word integrity, using the WebRTC-VAD algorithm for silence detection. Afterward, we used a pyannote [15] overlap detection model to remove overlapping speech sections. Then, a music detection model is employed to eliminate music-containing chunks that could disrupt ASR model accuracy.

#### 2.2.3 Unlabeled Data

To explore self-training with unlabeled audio data, we have curated a vast collection of national TV show videos spanning a total duration of 317 hours. This dataset encompasses a diverse range of topics, speakers and accents, faithfully mirroring the diversity of speech encountered in real-world scenarios. From these initial 317 hours, only 153 hours are kept after VAD-based segmentation and the removal of audio samples containing music or overlapping speech.

#### 2.2.4 Publicly Available Datasets

In addition to the TunSwitch dataset, we included three additional publicly available datasets: TARIC [6], a dataset of conversations in train stations; STAC [3], a radio-broadcast-based dataset with slight code-switching; and the IWSLT translation dataset [17], consisting of telephonic conversations. Table 2 summarizes the different datasets used in our experiments. Now that the datasets are introduced, the remaining sections describe a solid baseline involving several unsupervised approaches.

| | Dataset | Prosody | Code-Switching | Train | Dev | Test |
| --- | --- | --- | --- | --- | --- | --- |
| Public data | IWSLT | Spontaneous | ✗ | 151h 24m 47s | 4h 55m 51s | 4h 36m 28s |
| Public data | STAC | Spontaneous | ✓ (very slight) | 2h 29m 8s | n/a | n/a |
| Public data | TARIC | Spontaneous | ✗ | 9h 25m 44s | 17m 29s | 12m 5s |
| Collected data | TunSwitch TO | Read | ✗ | 2h 29m 29s | 4m 25s | 23m 39s |
| Collected data | TunSwitch CS | Spontaneous | ✓ | 8h 15m 35s | 15m 43s | 25m 12s |
| Unlabeled data | TunSwitch TO | Spontaneous | ✓ | 153h 18m 22s | n/a | n/a |

Table 2: Description of public and newly collected Tunisian speech datasets.

Figure 1: Code-switching presence in train sets: TunSwitch CS vs. STAC. The released TunSwitch CS exhibits large English and French parts.

## 3 Models

This section describes the different architectures and training policies adopted for the development of the "Tunisian only" and "Code-switched" models released.
### Base Model

Given the Tunisian-only annotated training data described in the previous section, we first train a model handling only non code-switched audio, therefore outputting only Arabic characters. Building on other works involving low-resource languages [18, 2], we opt for a pretrained encoder trained with a self-supervision objective. While wav2vec2.0 XLSR [19], trained on 53 languages, seems to be the go-to option in the literature, the WavLM [20] model, although trained only on English data, performed better in our experiments. The downstream head, mapping the representations to Arabic characters, consists of three dense layers with LeakyReLU activations and batch normalization between layers, and is trained with the Connectionist Temporal Classification (CTC) [21] loss. The WavLM encoder parameters are fine-tuned, except for the convolutional front-end, which is kept frozen. During evaluation, candidate sentences are rescored using a 4-gram language model trained with the KenLM toolkit [22] and implemented with the PyCTCDecode library. Different language models based on different textual corpora are tested, as we detail in Section 4.

### Self-Training

Given a first trained Tunisian ASR model, the unlabeled collected and cleaned data samples can be used within a semi-supervised approach. Transcriptions are obtained using the aforementioned model and added to the training set. Two options are tested: fine-tuning the previous model with the new training points, or retraining from scratch. The latter led to the best results. This remains a very naive approach to self-training, with the recent literature exploring better schedules for unlabeled data incorporation [23]. It is performed to show that the released data can lead to improvements; we leave more advanced techniques for exploiting these unlabeled resources to future work.

### Few-Shot Code-Switching

As stated in the description of our datasets, Tunisian speech often involves a dynamic interplay between three distinct languages: Tunisian Arabic, French, and English. The code-switched data in our training set is not sufficient to ensure robust large-vocabulary transcriptions in English and French. To overcome this issue, we followed the few-shot code-switching approach developed by Yan et al. [24]. This approach allows the combination of Tunisian, French and English ASR models, individually trained on monolingual datasets. The three models are similar, consisting of a self-supervised encoder and the same decoder outputting character-level posteriorgrams. The three posteriorgrams are first concatenated along with the encoder outputs. Then, a "mixer" model, consisting in our case of two BiLSTM layers followed by a linear layer, generates a final posteriorgram that encapsulates aggregated character probabilities across all three languages. The process is represented in Figure 2. During this phase of training, the three models are frozen and the "mixer" is trained using only the code-switched training data. The French and English models are trained in a previous phase, using their respective CommonVoice [25] 14.0 datasets. The CommonVoice dataset consists of challenging crowd-collected read sentences. On their monolingual test sets, the French and English models reached WERs of 10.24 and 18.01, respectively.

## 4 Results and Discussion

### Non Code-Switched Results

Table 4 shows the performance obtained with our models trained and tested without code-switching.
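All of the "With ... LM" configurations compared in Table 4 share the decoding mechanics described in Section 3.1: CTC posteriors from the acoustic model are beam-searched against a KenLM 4-gram through the PyCTCDecode library. The snippet below is an illustrative sketch of that rescoring step, not the authors' code: the label set, file name and LM weights are placeholders, and the 4-gram model is assumed to have been built beforehand with KenLM.

```python
# Illustrative sketch: beam-search decoding of CTC log-posteriors with a KenLM 4-gram.
import numpy as np
from pyctcdecode import build_ctcdecoder

# Placeholder label set; it must match the acoustic model's output vocabulary,
# including the CTC blank ("") and the word separator (" ").
labels = ["", " ", "ا", "ب", "ت", "a", "b", "c"]

decoder = build_ctcdecoder(
    labels,
    kenlm_model_path="tunisian_4gram.arpa",  # 4-gram LM built beforehand with KenLM
    alpha=0.5,   # LM weight (illustrative value, not the paper's)
    beta=1.0,    # word insertion bonus (illustrative value)
)

# Random stand-in for the (time_steps, vocab_size) CTC log-posteriors of the acoustic model.
log_probs = np.log(np.random.dirichlet(np.ones(len(labels)), size=200)).astype(np.float32)
hypothesis = decoder.decode(log_probs)
print(hypothesis)
```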
For the TARIC and IWSLT datasets, we reported the best results we found in the literature, while we introduce the first results on the collected dataset TunSwitch TO. The upper part of the table shows the performance without self-training, _i.e._ without the weakly supervised samples in the training set. Each line corresponds to a different textual corpus used for LM training. "InDomain" indicates that the textual data only comes from the train and validation sets of the different considered audio corpora, while "OutDomain" indicates that external textual sentences are added to the training corpus.

| | TARIC CER | TARIC WER | IWSLT CER | IWSLT WER | TunSwitch TO CER | TunSwitch TO WER |
| --- | --- | --- | --- | --- | --- | --- |
| Previous works | N/A | 22.6 [5] | N/A | 41.5 [16] | N/A | N/A |
| _Without self-training_ | | | | | | |
| Without LM | 6.44 | 12.84 | 20.28 | 42.74 | 13.34 | 41.45 |
| With InDomainLM | 6.23 | 10.81 | 20.27 | **38.86** | 12.50 | 36.18 |
| With OutDomainLM | **6.13** | **10.55** | **20.32** | 39.01 | 10.08 | 26.64 |
| _With self-training_ | | | | | | |
| Without LM | 6.33 | 11.82 | 20.49 | 42.49 | 12.65 | 38.25 |
| With InDomainLM | 6.29 | 10.83 | 21.18 | 39.46 | 12.42 | 36.07 |
| With OutDomainLM | 6.22 | **10.55** | 21.18 | 39.53 | **9.67** | **25.54** |

Table 4: ASR results on non code-switched data. Character and Word Error Rates (CER and WER) are shown for models trained with or without self-training. Proper language modelling appears to be crucial towards better performance.

Figure 2: Code-switching: three monolingual models are involved.

First, significant discrepancies in results between datasets are observed. This is natural given the different settings. The TARIC dataset, consisting of very similar discussions around buying train tickets, displays a reduced vocabulary, leading to low WERs. The IWSLT set consists of telephonic (8 kHz) spontaneous conversations with multiple hesitations and represents the hardest task in our benchmark. The TunSwitch TO dataset, although read, contains the richest vocabulary and is openly crowd-sourced, leading to very different recording and noise conditions. It is the one closest to an industrial, user-oriented ASR use case. In our setting, self-training improves the performance on the three datasets, especially when no LM is used for rescoring, gaining a little over 1 point of WER on TARIC and 3 points on the collected data. When using language modelling, this gain is reduced, reaching a 1.1 WER improvement on the TunSwitch TO set. Concerning language modelling, all three datasets see a substantial gain in performance when adding InDomain LM rescoring: 2, 3.9, and 5.3 absolute WER improvement for TARIC, IWSLT and TunSwitch, respectively. Adding the external textual sentences to the LM training corpus significantly improves the performance for the TunSwitch set, with a 9.5 absolute WER gain in the self-training setting. This is expected, as the read testing sentences were sampled from the same source and thus may cover similar topics.

### Code-Switching Results

Table 6 shows the performance obtained with the "Mixer"-based approach on code-switched data. The table again shows the importance of properly calibrated language models for rescoring. Using more easily available "Tunisian only" corpora for training LMs harms the ASR performance.
Using our released code-switched textual data allows for 10 points of absolute WER progress. For the final line, we enrich the textual corpus with ten thousand English and French monolingual sentences, leading to around 1 point of WER improvement. Our best model reaches 29.47 WER on very challenging, spontaneous, trilingual code-switched radio-broadcast data, establishing a solid baseline on the collected TunSwitch CS dataset.

| | TunSwitch CS CER | TunSwitch CS WER |
| --- | --- | --- |
| Without LM | 13.71 | 40.65 |
| With TunisianOnly LM | 17.57 | 47.45 |
| With CodeSwitched LM | 12.77 | 30.41 |
| With EN-FR enriched LM | **12.44** | **29.47** |

Table 6: Results on code-switched data. Character and Word Error Rates (CER and WER) are shown.

### Human Evaluation

Table 5 shows two examples, one with code-switching and one without, of references and transcriptions. The two examples display spelling errors, one in the English part and the other in the Tunisian Arabic one. However, a Tunisian reader is likely to accept the second transcript. This is because the Tunisian dialect does not have clear spelling conventions. Annotators, especially in the case of multiple datasets, may choose to write words differently. Reading the error reports, we observed that a non-negligible part of the errors were due to the absence of spelling conventions and may not be considered false by a human evaluator. This motivated a human evaluation of the model outputs.

Table 5: Examples of two transcriptions with their reference sentences. [The example sentences, largely in Arabic script, were garbled in extraction and are not reproduced here.]

25 Tunisian evaluators, reasonably fluent in English and French, were recruited and tasked to evaluate the transcriptions of 50 audio samples each. To make the task easier, evaluators were only asked to judge whether the full transcription of the sentence was correct or not, as a binary decision for each test sample. Evaluators were handed a document showing how to use the validation website and a few examples of good and bad transcriptions with the corresponding audio. Every audio sample in the test set of TunSwitch (TO and CS) is proposed to two different annotators. A sentence is considered correct if the two evaluators agree on accepting it. The results are reported in Table 7, showing large differences between the human and automatic sentence-level evaluations. The human Sentence Error Rate (SER) is \(42\%\) and \(29.5\%\) lower, respectively, for the sentences without and with code-switching. This being said, human evaluations should be taken with a pinch of salt, as agreement between annotators reached only 80.4% and evaluators may not be attentive enough to small errors. We think the large difference is still attributable in part to the absence of spelling conventions, questioning the way dialectal ASR should be properly evaluated.

## 5 Acknowledgements

This work has benefited from funding from l'Agence de l'Innovation de Défense, and was performed using HPC resources from GENCI-IDRIS (Grant 2023-AD011012801R2).
## 6 Conclusion

This paper introduces new resources for code-switched Tunisian Arabic, defining a very challenging ASR task on spontaneous audio involving three languages. Using self-supervised representations, self-training and other monolingual ASR models, a solid baseline is proposed. We hope the code-switched speech recognition community will find this resource useful and build upon the baseline.
2309.09568
The effect of initial texture on multiple necking formation in polycrystalline thin rings subjected to dynamic expansion
In this paper, we have investigated, using finite element calculations, the effect of initial texture on the formation of multiple necking patterns in ductile metallic rings subjected to rapid radial expansion. The mechanical behavior of the material has been modeled with the elasto-viscoplastic single crystal constitutive model developed by \citet{marin2006}. The polycrystalline microstructure of the ring has been generated using random Voronoi seeds. Both $5000$ grain and $15000$ grain aggregates have been investigated, and for each polycrystalline aggregate three different spatial distributions of grains have been considered. The calculations have been performed within a wide range of strain rates varying from $1.66 \cdot 10^4 ~ \text{s}^{-1}$ to $3.33 \cdot 10^5 ~ \text{s}^{-1}$, and the rings have been modeled with four different initial textures: isotropic texture, $\left\langle 001\right\rangle\parallel\Theta$ Goss texture, $\left\langle 001\right\rangle\parallel$ R Goss texture and $\left\langle 111\right\rangle\parallel$ Z fiber texture. The finite element results show that: (i) the spatial distribution of grains affects the location of the necks, (ii) the decrease of the grain size delays the formation of the necking pattern and increases the number of necks, (iii) the initial texture affects the number of necks, the location of the necks, and the necking time, (iv) the development of the necks is accompanied by a local increase of the slip activity. This work provides new insights into the effect of crystallographic microstructure on dynamic plastic localization and guidelines to tailor the initial texture in order to delay dynamic necking formation and, thus, to improve the energy absorption capacity of ductile metallic materials at high strain rates.
K. Espoir N'souglo, Katarzyna Kowalczyk-Gajewska, Mohammad Marvi-Mashhadi, Jose A. Rodriguez-Martinez
2023-09-18T08:26:47Z
http://arxiv.org/abs/2309.09568v1
The effect of initial texture on multiple necking formation in polycrystalline thin rings subjected to dynamic expansion ###### Abstract In this paper, we have investigated, using finite element calculations, the effect of initial texture on the formation of multiple necking patterns in ductile metallic rings subjected to rapid radial expansion. The mechanical behavior of the material has been modeled with the elasto-viscoplastic single crystal constitutive model developed by Marin (2006). The polycrystalline microstructure of the ring has been generated using random Voronoi seeds. Both 5000 grain and 15000 grain aggregates have been investigated, and for each polycrystalline aggregate three different spatial distributions of grains have been considered. The calculations have been performed within a wide range of strain rates varying from \(1.66\cdot 10^{4}\) s\({}^{-1}\) to \(3.33\cdot 10^{5}\) s\({}^{-1}\), and the rings have been modeled with four different initial textures: isotropic texture, \(\langle 001\rangle\parallel\Theta\) Goss texture, \(\langle 001\rangle\parallel\) R Goss texture and \(\langle 111\rangle\parallel\) Z fiber texture. The finite element results show that: (i) the spatial distribution of grains affects the location of the necks, (ii) the decrease of the grain size delays the formation of the necking pattern and increases the number of necks, (iii) the initial texture affects the number of necks, the location of the necks, and the necking time, (iv) the development of the necks is accompanied by a local increase of the slip activity. This work provides new insights into the effect of crystallographic microstructure on dynamic plastic localization and guidelines to tailor the initial texture in order to delay dynamic necking formation and, thus, to improve the energy absorption capacity of ductile metallic materials at high strain rates. keywords: Dynamic necking, Inertia, Crystal plasticity, Texture, Finite elements + Footnote †: journal: Mechanics of Materials ## 1 Introduction The ring expansion experiment developed by Niordson (1965) has become a reference benchmark problem to investigate dynamic necking localization and fragmentation of ductile metallic materials. The test consists of a thin
2309.10039
Figuring Out Gas & Galaxies In Enzo (FOGGIE) VII: The (Dis)Assembly of Stellar Halos
Over the next decade, the astronomical community will be commissioning multiple wide-field observatories well-suited for studying stellar halos in both integrated light and resolved stars. In preparation for this, we use five high-resolution cosmological simulations of Milky Way-like galaxies from the FOGGIE suite to explore the properties and components of stellar halos. These simulations are run with high time (5 Myr) and stellar mass (1000 M$_\odot$) resolution to better model the properties and origins of low density regions like stellar halos. We find that the FOGGIE stellar halos have masses, metallicity gradients, and surface brightness profiles that are consistent with observations. In agreement with other simulations, the FOGGIE stellar halos receive 30-40% of their mass from in situ stars. However, this population is more centrally concentrated in the FOGGIE simulations and therefore does not contribute excess light to the halo outskirts. The remaining stars are accreted from 10-50 other galaxies, with the majority of the accreted mass originating in 2-4 galaxies. While the inner halo ($r<50$ kpc) of each FOGGIE galaxy has a large number of contributors, the halo outskirts of three of the five galaxies are primarily made up of stars from only a few contributors. We predict that upcoming wide-field observatories, like the Nancy Grace Roman Space Telescope, will probe stellar halos around Milky Way-like galaxies out to ~100 kpc in integrated light and will be able to distinguish the debris of dwarf galaxies with extended star formation histories from the underlying halo with resolved color-magnitude diagrams.
Anna C. Wright, Jason Tumlinson, Molly S. Peeples, Brian W. O'Shea, Cassandra Lochhaas, Lauren Corlies, Britton D. Smith, Nguyen Binh, Ramona Augustin, Raymond C. Simons
2023-09-18T18:00:06Z
http://arxiv.org/abs/2309.10039v2
# Figuring Out Gas & Galaxies In Enzo (FOGGIE) VII: The (Dis)Assembly of Stellar Halos ###### Abstract Over the next decade, the astronomical community will be commissioning multiple wide-field observatories well-suited for studying stellar halos in both integrated light and resolved stars. In preparation for this, we use five high-resolution cosmological simulations of Milky Way-like galaxies from the Figuring Out Gas & Galaxies in Enzo (FOGGIE) suite to explore the properties and components of stellar halos. At \(z=0\), we find that the FOGGIE stellar halos have masses, metallicity gradients, and surface brightness profiles that are consistent with observations. In agreement with other simulations, the FOGGIE stellar halos receive 30-40% of their mass from stars that formed in situ. However, this population tends to be more centrally concentrated in the FOGGIE simulations and therefore does not contribute excess light or mass to the outskirts of the halos. The rest of the stars in each stellar halo are accreted from \(\sim 10\)-50 other galaxies, with the majority of the accreted mass originating in 2-4 of these contributors. While the phase-mixed inner halo (\(r<50\,\)kpc) of each FOGGIE galaxy includes stars from a large number of contributors, the halo outskirts of three of the five galaxies are primarily made up of stars from only a few contributors. We predict that upcoming wide-field observatories, like the _Nancy Grace Roman Space Telescope_, will probe stellar halos around Milky Way-like galaxies out to \(\sim 100\,\)kpc in integrated light and will be able to distinguish the debris of dwarf galaxies with extended star formation histories from the underlying halo with resolved color-magnitude diagrams. + Footnote †: journal: ApJ ## 1 Introduction Observations of the stellar populations of Milky Way-like galaxies are almost always limited to the galaxy's disk and brightest satellites (e.g., Tollerud et al., 2011; Geha et al., 2017; Carlsten et al., 2022). However, this is only a small part of a much larger whole. Observations that probe lower luminosities, fainter surface brightnesses, and/or wider fields have revealed that the majority of Milky Way-like galaxies are surrounded by not only large populations of faint dwarf galaxies and globular clusters, but also extended and diffuse structures of stars known as stellar halos (e.g., Mouhcine et al., 2005; Merritt et al., 2016; Harmsen et al., 2017). Although these components contain only a small fraction of the galaxy's total stellar mass, they preserve a detailed record of the early universe and the assembly history of the system as a whole. In the \(\Lambda\)CDM model of cosmic structure formation, galaxies like the Milky Way are built from the mergers of many much smaller objects (e.g., White & Rees, 1978; Searle & Zinn, 1978). While the bulk of the baryonic material from this assembly process ultimately becomes part of the disk, stars that are stripped from infalling satellites (e.g., Johnston et al., 1995; Ibata et al., 2001) or perturbed during mergers (e.g., Zolotov et al., 2009; Purcell et al., 2010) frequently adopt non-disk orbits that may take them far from the center of the galaxy. Low densities at large galactocentric distances result in long dynamical times, allowing structures in the stellar halo to persist for sometimes billions of years (e.g., Johnston et al., 1996). 
The global properties of the stellar halo and the distribution of substructure within it therefore contain information about the mass accretion history of the central galaxy and the many galaxies that have contributed to the system over time (e.g., Johnston, 1998; Helmi & White, 1999; Johnston et al., 2008; Amorisco, 2017). Because of their proximity, the stellar halos of the Milky Way and M31 have thus far been our primary sources of data. Both halos are predominantly old and metal-poor (e.g., Unavane et al., 1996; Chiba & Beers, 2000; Kalirai et al., 2006; Carollo et al., 2007), but have been found to contain a substantial amount of substructure (Ibata et al., 1994; Majewski et al., 1996; Chiba & Beers, 2000; Ivezic et al., 2000; Yanny et al., 2000; Ibata et al., 2001; Newberg et al., 2002; Ferguson et al., 2002; Ibata et al., 2007; Juric et al., 2008; Bell et al., 2008; Gilbert et al., 2012; Naidu et al., 2020), some of which is chemically distinct from the bulk of the halo (e.g., Cohen et al., 2018; Deason et al., 2023). Although stars that likely originated in the central disks have been found in the stellar halos of both the Milky Way and M31 (e.g., Carollo et al., 2007; Bonaca et al., 2017), the general consensus is that most of their stars formed in satellites that were eventually tidally disrupted (e.g., Searle & Zinn, 1978; Majewski et al., 1996; Bullock et al., 2001; Purcell et al., 2007; Bell et al., 2008), and differences between the halos can therefore generally be attributed to differences in their accretion histories. The nearly 100 stellar streams that have been discovered in the Milky Way halo are evidence that our galaxy has accreted and disrupted many dwarf galaxies and globular clusters throughout its history (e.g., Belokurov et al., 2006; Shipp et al., 2018; Ibata et al., 2019). However, the unbroken density profile of M31's halo, combined with its higher mass, higher metallicity, and steeper metallicity gradient suggest that M31 has had a considerably more active and extended accretion history than the Milky Way (e.g., Deason et al., 2013; Gilbert et al., 2014). Our ability to measure the detailed kinematics and chemical compositions of these nearby halos has also allowed us to derive substantial information about the population of galaxies that produced them. For instance, differences between the abundances found in Milky Way halo stars and present-day satellites were initially thought to rule out destroyed dwarf galaxies as the main contributors to the halo (Unavane et al., 1996; Venn et al., 2004). However, this disparity is now thought to indicate that the surviving classical satellites of the Milky Way are a biased subset of the many companions our galaxy has had throughout its history. While present-day dwarfs were typically accreted fairly recently, the primary building blocks of the halo were early infalling dwarfs that assembled relatively close to the Milky Way. These early contributors to the stellar halo formed their stars rapidly and therefore with little enrichment from Type Ia supernovae, making them \(\alpha\)-enhanced relative to surviving dwarfs (e.g., Robertson et al., 2005; Corlies et al., 2013; Fattahi et al., 2020; Naidu et al., 2022). 
Many authors (e.g., Helmi et al., 1999; Johnston et al., 2001; Lee et al., 2015; Li et al., 2022) have also performed detailed decompositions of stellar halos to associate stars with individual accretion events and thereby derive the likely infall times, star formation histories, and masses of individual dwarf contributors. Naidu et al. (2020) recently used kinematic and chemical data from the H3 Survey (Conroy et al., 2019) and the _Gaia_ mission (Gaia Collaboration et al., 2018) to associate \(\geq\)95% of a sample of giant stars in the inner (\(r<50\,\mathrm{kpc}\)) Milky Way stellar halo with specific structures, the vast majority of which are believed to be the remnants of disrupted dwarfs. In M31, data from the Pan-Andromeda Archaeological Survey (PAndAS; McConnachie et al., 2009), Project AMIGA (Absorption Maps in the Gas of Andromeda; Cohen et al., 2018), and other surveys have been used to derive the likely properties of the dwarf progenitor of the Giant Stellar Stream (GSS; e.g., Ibata et al., 2001; Conn et al., 2016; D'Souza & Bell, 2018; Gilbert et al., 2019). Beyond the Local Group, the large sizes and low surface brightnesses of stellar halos make them particularly challenging observational targets. Studies of more distant stellar halos typically either use integrated light to detect brighter substructure or use pencil-beam surveys of multiple fields to derive the properties of small patches of resolved stars in the halo. These observations have shown that stellar halos around other spiral galaxies have much in common with the Milky Way and M31's halos: most consist primarily of old stars (e.g, Mouhcine et al., 2005; Monachesi et al., 2013) and many feature relatively bright stellar streams (e.g., Malin and Hadley, 1997; Shang et al., 1998; Martinez-Delgado et al., 2010; Hood et al., 2018). However, there is also considerable diversity in the mass, luminosity, extent, and radial trends of these halos (Merritt et al., 2016; Harmsen et al., 2017; Gilhuly et al., 2022) that is thought to result from their varying accretion histories (e.g., Smercina et al., 2020, 2022). For instance, the metallicities of stellar halos appear to reflect the mass of their most significant contributor. The strong mass-metallicity trend observed in dwarf galaxies means that more massive stellar halos, which typically have more massive contributors, are more metal-rich than less massive halos (e.g., Mouhcine et al., 2005; Harmsen et al., 2017; D'Souza and Bell, 2018). Observations of diverse stellar halos therefore provide a broader context for the differences that we observe between the Milky Way and M31's halos. However, the sample size of observed stellar halos remains relatively small, and observations of only the brightest features or small patches of resolved stars are inherently limited. Over the next decade, the astronomical community will be commissioning multiple wide-field observatories that have the potential to observe stellar halos in much greater detail and much larger numbers than ever before (Johnston et al., 2001). The Vera C. Rubin Observatory (LSST Science Collaboration et al., 2009), _Euclid_(Laureijs et al., 2011), and the _Nancy Grace Roman Space Telescope_(Spergel et al., 2015) will image large patches of the sky at significant depth and in a variety of wavelengths. _Roman_, in particular, combines a wide field-of-view with the sensitivity and resolution necessary to resolve individual stars in stellar halos out to \(D\geq 10\,\mathrm{Mpc}\)(Lancaster et al., 2022). 
This means that we are entering an era where we will have access to the kind of detailed data previously only achievable in the Local Group over a much larger volume. We will have the ability to consider the resolved stellar populations of stellar halos as a standard part of the larger picture, rather than as an exception. With the necessity of optimizing the data from these future observatories in mind, we analyze a suite of high-resolution zoom-in simulations of Milky Way-like galaxies and their stellar halos. We discuss the FOGGIE simulations and our stellar halo selection method in § 2 and § 3, respectively, then explore how the properties of the FOGGIE halos compare to observed stellar halos in § 4. In § 5, we look at the characteristics and distributions of the contributors to the FOGGIE stellar halos, and in § 6 we examine the implications of our findings for future wide-field surveys.

## 2 The FOGGIE Simulation Suite

The galaxies we analyze in this paper are from the FOGGIE (Figuring Out Gas & Galaxies in Enzo) simulation suite. The FOGGIE simulations were first introduced in Peeples et al. (2019) and Corlies et al. (2020), with the runs we analyze here discussed in Simons et al. (2020) and Lochhaas et al. (2021). We summarize the properties of these simulations in § 2.1, the unique refinement scheme we use and its importance in § 2.2, the physical prescriptions for the UV background, star formation, and feedback in § 2.3-2.5, and our methods for subhalo finding in § 2.6. We also include a brief discussion of the caveats of the simulations in § 2.7.

### Halo Selection

The FOGGIE production simulations consist of six high-resolution cosmological simulations of Milky Way-like galaxies run with the adaptive mesh refinement (AMR) code Enzo (Bryan et al., 2014; Brummel-Smith et al., 2019)1. In Enzo, the gravitational potential is computed via the Particle-Mesh method on the root grid and a multigrid Poisson solver on adaptively-refined grids, using a total density field calculated from all particles (stars and dark matter) and the gas density field. Gas is evolved by solving Euler's equations of hydrodynamics on the grid using the piecewise parabolic method (PPM). Each simulation is evolved to \(z=0\) using a flat \(\Lambda\)CDM cosmology with \(1-\Omega_{\Lambda}=\Omega_{\mathrm{m}}=0.285\), \(\Omega_{\mathrm{b}}=0.0461\), and \(h=0.695\). The five FOGGIE simulations that have reached \(z=0\) are included in our analysis.

Footnote 1: [http://enzo-project.org](http://enzo-project.org)

The central galaxies of the FOGGIE simulations were drawn from a dark-matter-only run in a cubic domain 100 Mpc/\(h\) on a side (in comoving coordinates). We selected these halos based on two criteria: 1) a \(z=0\) virial mass similar to that estimated for the Milky Way (\(\sim\)\(10^{12}\,\mathrm{M}_{\odot}\); Bland-Hawthorn and Gerhard, 2016), and 2) no major (mass ratio \(>10\):1) mergers after \(z\approx 2\), the time at which the Milky Way is thought to have experienced its last major merger (Helmi et al., 2018). Each galaxy is then re-simulated from \(z=99\) at much higher resolution and with full hydrodynamics using the "zoom-in" technique. All dark matter particles located within \(3R_{\mathrm{vir}}\) of the central galaxy at \(z=0\) are re-simulated with a mass of \(M_{\mathrm{dm,part}}=1.39\times 10^{6}\,\mathrm{M}_{\odot}\).
The masses of the other dark matter particles increase with distance from the region of interest, up to a maximum of \(5.69\times 10^{9}\,\mathrm{M}_{\odot}\) far away from the zoom region. ### Natural, Forced, and Cooling Refinement Throughout the majority of the simulation volume, the refinement of the underlying grid is determined by the local mass density. When the baryonic or dark matter mass of a cell exceeds a threshold mass corresponding to 8 times the mean mass of the highest resolution cells in the simulation initial conditions, the cell is divided in half along each dimension such that \[\ell_{\rm cell}(N_{\rm ref})=2^{-N_{\rm ref}}\times\frac{\ell_{\rm box}}{N_{ \rm root}}, \tag{1}\] where \(\ell_{\rm cell}\) is the new length of the cell, \(N_{\rm ref}\) is the level to which it is being refined, \(\ell_{\rm box}\) is the length of the simulation box, and \(N_{\rm root}\) is the number of root grid cells on a side. This ensures that, when this refinement criterion is dominant, grid cells maintain roughly similar masses across refinement levels. The FOGGIE simulations contain \(256^{3}\) root grid cells and are permitted to refine up to 11 levels, so the minimum cell size in each simulation is 274 comoving pc (cpc). The FOGGIE simulations also employ additional refinement schemes to improve resolution in and around their central galaxies. Those low mass dark matter particles that lie within the zoom region are designated as "must refine particles" (Simpson et al., 2013). Using a cloud-in-cell algorithm, Enzo flags the cells nearest a given must refine particle and forces the cells to refine to a minimum of \(N_{\rm ref}=4\) (\(35\,\rm ckpc\)). However, a much more stringent resolution requirement is placed on the cells nearest to the central galaxy. Beginning at \(z=6\), the simulations employ a "forced refinement" scheme, which consists of a \((288\,\rm ckpc)^{3}\) box that tracks the center of mass of the central galaxy throughout the domain and enforces a minimum of 9 levels of refinement (\(1.10\,\rm ckpc\)) within its boundaries. These cells may refine up to two more levels if the cooling length of the gas they contain is less than the length of the cell. The combination of the forced and cooling refinement schemes greatly improves the spatial and mass resolution in the warm and hot gas that fills most of the volume of the circumgalactic medium (CGM). In a traditional density-based refinement scheme, the interstellar medium (ISM) of the galaxy would reach \(N_{\rm ref}=11\) (\(274\,\rm cpc\)), while the CGM would only reach \(N_{\rm ref}=6\)-\(8\) (\(2.20\)-\(8.78\,\rm ckpc\); Corlies et al., 2020). By enforcing a higher level of refinement in the CGM, the FOGGIE simulations are able to reduce artificial mixing and resolve the detailed kinematics of the CGM (Peeples et al., 2019; Corlies et al., 2020; Lochhaas et al., 2021). The cooling refinement scheme further improves the resolution of thermally unstable gas, resulting in the cooling length being resolved in \(>99\%\) of the CGM by volume and \(>90\%\) of the CGM by mass at \(z=2\)(Simons et al., 2020). The effects of these refinement schemes on the CGM have already been explored in FOGGIE Papers I-VI (Peeples et al., 2019; Corlies et al., 2020; Zheng et al., 2020; Simons et al., 2020; Lochhaas et al., 2021, 2023). However, these techniques are also relevant to studies of the stellar halo because of their effects on satellites. Simons et al. 
(2020) showed that the wide range of density and velocity structures in the FOGGIE CGM causes satellites to experience ram pressure that varies over five orders of magnitude as they orbit the central galaxy. Ram pressure stripping is also highly stochastic, with 90% of the total surface momentum from ram pressure being imparted in less than 20% of a satellite's orbital time at \(z\geq 2\). As a result, satellites in the FOGGIE simulations experience less ram pressure stripping on average compared to satellites in a spherically averaged hydrostatic CGM. The resolution of the CGM may therefore affect the formation of stars in dwarfs that contribute to the stellar halo. ### Cooling and Background Radiation In order to approximate the effects of reionization, the FOGGIE simulations include a redshift-dependent UV background following Haardt and Madau (2012) with HI self-shielding following Emerick et al. (2019). The simulations compute primordial cooling by solving a non-equilibrium chemical reaction network for H, H\({}^{+}\), He, He\({}^{+}\), He\({}^{++}\), and e\({}^{-}\) using the Grackle chemical and cooling library (Smith et al., 2017). All metal species are grouped together, so the FOGGIE simulations also include metallicity-dependent cooling assuming ionization equilibrium and solar abundances. ### Star Formation Star formation within the simulations is based on the properties of local gas and broadly follows Cen and Ostriker (1992) and Cen and Ostriker (2006). Star particles form from gas that fulfills the following criteria (Enzo StarParticleCreation method 0): 1. the gas cell does not have a higher AMR level inside of it and the gas density exceeds \(10^{4}\times\) the mean density of all matter within the simulation (\(n\geq 0.016\,\rm\ cm^{-3}\) at \(z=0\)), 2. the divergence of the gas cell's velocity is negative, 3. the cooling time of the gas cell is less than its dynamical time or the temperature of the gas is \(<\)11,000 K, 4. the gas cell is Jeans unstable, and 5. the gas cell contains sufficient mass that the star particle that would form from it would exceed a minimum mass threshold. In the smallest of the FOGGIE simulations, Tempest, the minimum star particle mass is held at 1000 M\({}_{\odot}\) over the full run to \(z=0\). However, in the other simulations, this criterion varies with time. The initial value is 1000 M\({}_{\odot}\), but the threshold increases linearly with time to \(10^{4}\) M\({}_{\odot}\) between \(z=2\) and \(z=1\). This means that regions composed primarily of old stellar populations, like the stellar halo, end up with higher mass resolution than e.g., the younger spiral arms. If all of the requirements are met, a star particle will form at the center of the gas cell with \[M_{\star,\mathrm{part}}=c^{\star}M_{\mathrm{gas}}, \tag{2}\] where \(c^{\star}\) is a star formation efficiency factor: \[c^{\star}=\min\left(0.2\frac{\Delta t}{t_{\mathrm{dyn}}},0.9\right). \tag{3}\] Here, \(\Delta t\) is the length of the gas cell's timestep and \(t_{\mathrm{dyn}}\) is the dynamical time of the gas cell (although note that a minimum value of \(t_{\mathrm{dyn}}\)=1 Myr is imposed). The star particle also inherits the velocity and metallicity of its parent gas cell. ### Stellar Feedback Stellar feedback in the form of Type II supernovae (SNe) is implemented in the FOGGIE simulations following Cen and Ostriker (1992) and Cen and Ostriker (2006), with distributed feedback modifications from Smith et al. (2011). 
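As a concrete illustration of the star-particle creation step described in § 2.4, the sketch below evaluates Equations (2)-(3) for a single candidate gas cell. This is a minimal sketch, not the Enzo implementation: the function name and the example numbers are ours, and the fixed 1000 M\({}_{\odot}\) threshold applies only to Tempest (in the other runs it grows with time, as noted above).

```python
import numpy as np

def star_particle_mass(m_gas, dt, t_dyn, m_min=1000.0):
    """Mass of the star particle formed from a gas cell (Eqs. 2-3).

    m_gas : gas mass in the cell [Msun]
    dt    : length of the cell's timestep [Myr]
    t_dyn : dynamical time of the cell [Myr]; a 1 Myr floor is imposed
    m_min : minimum star particle mass threshold [Msun] (illustrative fixed value)
    Returns the star particle mass, or None if the mass criterion is not met.
    """
    t_dyn = max(t_dyn, 1.0)                  # minimum dynamical time of 1 Myr
    c_star = min(0.2 * dt / t_dyn, 0.9)      # star formation efficiency (Eq. 3)
    m_star = c_star * m_gas                  # Eq. 2
    return m_star if m_star >= m_min else None

# Example: a 5.4 Myr timestep, a 20 Myr dynamical time, and 2e4 Msun of gas
print(star_particle_mass(2.0e4, 5.4, 20.0))  # ~1080 Msun, above the 1000 Msun floor
```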
Over the course of 12 dynamical times following its formation, a star particle injects a total of \(10^{-5}\times M_{\star,\mathrm{part}}c^{2}\) of thermal energy into the surrounding 27 gas cells. Note that we do not attempt to account for more delayed feedback (e.g., Type Ia SNe). During each timestep while \(t<12\,t_{\mathrm{dyn}}\), the star particle also returns a fraction of its mass to the nearest 27 gas cells: \[M_{\mathrm{ret}}=M_{\star,0}[(1+x_{1})e^{-x_{1}}-(1+x_{2})e^{-x_{2}}], \tag{4}\] where \(M_{\star,0}\) is the initial mass of the star particle, \[x_{1}=\frac{t-t_{0}}{t_{\mathrm{dyn}}}, \tag{5}\] \[x_{2}=\frac{t+\Delta t-t_{0}}{t_{\mathrm{dyn}}}, \tag{6}\] and \(t_{0}\) is the time at which the star particle was formed. By the end of this period (typically \(\lesssim\)100 Myr) the star particle will have lost 25% of its initial mass. The mass of metals returned to the gas, accounting for the recycling of gas back into stars, is \[M_{\mathrm{met}}=0.025\,M_{\star,0}(1-Z_{\star})+0.25\,Z_{\star}, \tag{7}\] where \(Z_{\star}\) is the metallicity of the star particle.

### Subhalo Finding and Merger Trees

Snapshots from the FOGGIE simulations are saved at a cadence of 5.4 Myr. In each snapshot beginning at \(z\approx 6\), dark matter halos within the zoom region are identified with the ROCKSTAR halo finder (Behroozi et al., 2013), which uses a friends-of-friends algorithm in combination with temporal and 6D phase-space information. Virial quantities \(R_{\mathrm{vir}}\) and \(M_{\mathrm{vir}}\) for each halo are also calculated by ROCKSTAR using the redshift-dependent \(\rho_{\mathrm{vir}}\) of Bryan and Norman (1998). Table 1 lists the basic properties of each of the central FOGGIE galaxies in ascending order of \(M_{\star}\). Unless otherwise specified, halo properties (e.g., \(M_{\star}\)) throughout this paper are based on all particles within \(R_{\mathrm{vir}}\). Merger histories for each halo are assembled with Consistent Trees (Behroozi et al., 2013) and halo properties are collated across time using tangos (Pontzen and Tremmel, 2018).

\begin{table} \begin{tabular}{c c c c c} \hline \hline Name & \(R_{\mathrm{vir}}\)1 & \(M_{\mathrm{vir}}\) & \(M_{\star}\) & \(M_{\mathrm{SH}}\) \\ & [kpc] & [\(10^{12}\) M\({}_{\odot}\)] & [\(10^{10}\) M\({}_{\odot}\)] & [\(10^{10}\) M\({}_{\odot}\)] \\ \hline Tempest & 201 & 0.45 & 5.44 & 0.32 \\ Maelstrom & 253 & 0.90 & 11.6 & 0.86 \\ Squall & 235 & 0.76 & 12.6 & 1.20 \\ Blizzard & 261 & 0.99 & 14.7 & 1.76 \\ Hurricane & 301 & 1.05 & 25.7 & 2.50 \\ \hline \end{tabular} \end{table} Table 1: Properties of central FOGGIE galaxies at \(z=0\)

### Caveats

Although the mass resolution of the star particles and the gas in the FOGGIE simulations is state-of-the-art for cosmological simulations, the subgrid routines--particularly those associated with star formation and feedback--are imperfect and we consider here how this may impact our findings. As noted in § 2.5, these simulations employ exclusively thermal feedback and do not attempt to account for sources of feedback other than Type II SNe. Because no sources of delayed feedback, such as Type Ia SNe, are included, feedback occurs only over a relatively short period of time (\(\sim 100\) Myr) following the formation of a star particle. Additionally, each individual star particle injects less energy into the surrounding medium than it would in a simulation that incorporated more complex feedback routines. The FOGGIE simulations also do not include a prescription for AGN feedback.
While this likely has little impact on the many dwarf galaxies that contribute to the stellar halo (although see, e.g., Sharma et al., 2020), the central FOGGIE galaxies occupy a mass regime where AGN feedback may play a role in regulating galaxy growth (e.g., Shankar et al., 2006; Keller et al., 2016). Underpowered feedback has been shown to result in overproduction of stars and runaway growth of bulges, as galaxies cannot eject enough low angular momentum gas to effectively regulate their star formation (e.g., Maller & Dekel, 2002; Ceverino & Klypin, 2009; Governato et al., 2010). In the FOGGIE simulations, this is likely also compounded by a recently discovered issue with the star formation recipe described in SS 2.4 that causes star formation to be slightly overefficient in dense regions and slightly underefficient in more diffuse regions. As a result, the galaxies are forming too many star particles in regions where the gas is most enriched and then failing to eject enough of this metal-rich gas, further enriching future generations of stars. We can therefore also anticipate that this combination of issues will produce galaxies with above average metallicities (e.g., Brook et al., 2004; Brooks et al., 2007). We also note that metal yields remain uncertain (e.g., Peeples et al., 2014; Weinberg et al., 2023) and the normalization of the simulated vs observed mass-metallicity relation is therefore also uncertain (e.g., D'Souza & Bell, 2018). We will refer back to these caveats throughout the paper when they are relevant to the results being discussed. However, as we will show, the FOGGIE stellar halos are typically consistent with observations. Additionally, the simulations do not need to be perfect for the relative differences between the simulated stellar halos to yield insights about the factors that influence the structure and assembly of stellar halos or inform plans for future observational strategies. ## 3 Selection of stellar halos At \(z=0\), all five of the central FOGGIE galaxies are bulge-dominated disks surrounded by a diffuse and extended halo of stars. In this section, we describe how we separate out the star particles that populate the stellar halo from those that belong to the bulge or disk. In order to identify the star particles that make up the stellar halo, we use a combination of kinematic information and position. We first identify the plane of the disk for each galaxy. In three of our five galaxies (Tempest, Maelstrom, and Squall), the disk is defined by the orbital angular momentum vector of young stars (age \(<10\) Myr) in the inner 15 kpc of the galaxy. Blizzard is in the process of rejuvenating following a period of quiescence and therefore has very few extremely young stars, so its stellar disk is defined by the orbital angular momentum vector of stars with age \(<500\) Myr in its inner 15 kpc. The final galaxy, Hurricane, is a polar ring galaxy at \(z=0\), so we identify two distinct disks: a central disk and a polar disk. Because both disks contain fairly young stars, but are tilted \(\sim\)80\({}^{\circ}\) with respect to one another, we cannot effectively use the orbital angular momentum of the young stars to identify the plane of either disk. Instead, we use the orbital angular momentum of cold gas (\(T<10^{4}\) K) within the inner 7 kpc to identify the plane of the central disk and the orbital angular momentum of all gas with \(r=7\)-25 kpc to identify the plane of the polar disk. 
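The disk-plane determination described above can be summarized in a short sketch. This is a minimal version, assuming star particle positions and velocities have already been recentered on the galaxy; the array names and the mass-weighted angular momentum sum are our choices for illustration.

```python
import numpy as np

def disk_normal(pos, vel, mass, age, r_max_kpc=15.0, age_max_myr=10.0):
    """Unit vector along the net orbital angular momentum of young stars.

    pos, vel : (N, 3) star particle positions [kpc] and velocities [km/s],
               measured relative to the galaxy center
    mass     : (N,) star particle masses [Msun]
    age      : (N,) stellar ages [Myr]
    """
    r = np.linalg.norm(pos, axis=1)
    sel = (r < r_max_kpc) & (age < age_max_myr)   # young stars in the inner 15 kpc
    L = np.sum(mass[sel][:, None] * np.cross(pos[sel], vel[sel]), axis=0)
    return L / np.linalg.norm(L)
```

Rotating the galaxy so that this vector points along the z-axis places the disk in the x-y plane, which is the orientation used for the circularity calculation that follows.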
For each galaxy, we then place the disk in the x-y plane and calculate the orbital circularity for each star particle following Stinson et al. (2010): \[\epsilon=j_{\rm z}/j_{\rm circ}, \tag{8}\] where \(j_{\rm z}\) is the specific angular momentum of the star particle within the plane of the disk and \(j_{\rm circ}\) is the specific angular momentum of a star in an ideal circular orbit located at the same radius within the plane of the disk. That is, \[j_{\rm circ}=v_{\rm circ}r_{\rm xy}, \tag{9}\] where \(r_{\rm xy}\) is the distance of the star particle from the center of the galaxy within the x-y plane and \(v_{\rm circ}\) is the circular velocity of a star orbiting at this radius. Note that this is distinct from the more commonly-used orbital circularity parameter defined by Abadi et al. (2003), which defines \(j_{\rm circ}\) as the specific angular momentum of a star in a circular orbit with the same binding energy as the star particle in question (used by, e.g., Zolotov et al., 2009; Font et al., 2011; Cooper et al., 2015; Monachesi et al., 2016). We choose to use the Stinson et al. (2010) parameter for the sake of computational efficiency. However, the use of one definition as opposed to the other has only a minor impact on the assignments of stars within the inner halo (and no impact on stars with \(r>r_{\rm disk}\)) and has no effect on any of our primary conclusions. We expect star particles belonging to the disk to have \(j_{\rm z}\approx j_{\rm circ}\), so we consider any star particle with \(\epsilon=0.65-1.3\) and \(r<30\) kpc to be a disk star. Following Stinson et al. (2010), Cooper et al. (2015), and Monachesi et al. (2019), star particles with non-disk orbits (i.e., with any value of \(\epsilon>1.3\) or \(<0.65\)) located within 5 kpc of the center of the galaxy are considered to be members of the bulge. All other star particles within 350 kpc of each galaxy at \(z=0\) are classified as members of the stellar halo. We choose to limit our analysis to star particles within 350 kpc because this radius comfortably encloses the virial radii of all of our galaxies (see Table 1) without intersecting the stellar halos of any nearby massive neighbors. However, we note that all of our galaxies (and particularly the more massive galaxies, Blizzard and Hurricane) have star particles at larger radii that could reasonably be included in the stellar halo. As there is very little mass in the outskirts of the halo, including these distant star particles would have very little effect on any of our findings. We show how these definitions separate out different components of a galaxy in Figure 1. Each star particle within 100 kpc of the center of Tempest is plotted in the radius-circularity plane and colored by its age at \(z=0\). While the circularity varies from \(\epsilon\approx-2\) to 2, the majority of the star particles with \(\epsilon\approx 1\) (disk stars) are relatively young. By contrast, the bulk of the star particles that make up the bulge and the stellar halo are \(\geq 10\) Gyr old. Two satellites with intact cores are also apparent as distinct substructures within the stellar halo. The star particles bound to each occupy a fairly broad range of circularities due to the satellites' internal motions, but only a narrow range of radii, so they appear as vertical bands composed of somewhat younger stars than those that make up the majority of the halo. 
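The decomposition just described amounts to a few cuts in the distance-circularity plane. A minimal sketch, assuming the circular velocity \(v_{\rm circ}(r_{\rm xy})\) is supplied externally (e.g., from the enclosed mass profile); the function name is illustrative.

```python
def classify_star(r_xy, r_3d, j_z, v_circ):
    """Assign a star particle to the disk, bulge, or stellar halo.

    r_xy   : distance from the galaxy center within the disk plane [kpc]
    r_3d   : 3D galactocentric distance [kpc]
    j_z    : specific angular momentum perpendicular to the disk [kpc km/s]
    v_circ : circular velocity at r_xy [km/s]
    """
    eps = j_z / (v_circ * r_xy)               # orbital circularity (Eqs. 8-9)
    if 0.65 <= eps <= 1.3 and r_3d < 30.0:
        return "disk"
    if r_3d < 5.0:
        return "bulge"                        # non-disk orbits within 5 kpc
    if r_3d < 350.0:
        return "halo"
    return "unassigned"                       # beyond the 350 kpc boundary used here
```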
Both satellites are also in the process of being disrupted by Tempest's tidal forces, so their recently stripped star particles form nearly horizontal bands in the radius-circularity plane as they spread out along their original hosts' orbits. In order to better mimic observations, which typically mask out the cores of bright satellites when analyzing stellar halos (e.g., Gilbert et al., 2009; Jang et al., 2020; Gilhuly et al., 2022), we will generally exclude those star particles still associated with satellites from our analysis. We find that masking out all star particles within the radius at which the g-band surface brightness of the satellite drops below 31.5 mag arcsec\({}^{-2}\) allows us to remove the core of the satellite without eliminating extended envelopes or other tidal features that are generally considered to be part of the stellar halo. The resulting stellar halo masses are listed in the final column of Table 1. Note that we will also use alternative, observationally-motivated definitions for the stellar halo when appropriate (see Section 4.1).

Figure 1: Galactocentric distance and orbital circularity (\(j_{\rm z}/j_{\rm circ}\)) for all stars within 100 kpc of the center of FOGGIE galaxy Tempest. Each star is colored according to its age at \(z=0\). Stars are classified as part of the disk, the bulge, or the halo according to their location in the distance-circularity plane. As we might expect, the majority of the stars that make up the disk are relatively young, while those that compose the bulge and the stellar halo tend to be older. Satellites and stellar streams are also apparent as distinct substructures within the stellar halo.

## 4 Properties of FOGGIE Stellar Halos

In this section, we summarize the properties of the FOGGIE stellar halos at \(z=0\) and compare them to observed stellar halos. In § 4.1, we examine how different stellar halo definitions influence the measured mass of the stellar halo. We present surface brightness maps and profiles of the stellar halos in § 4.2, with a description of the metallicity and color gradients of the halos in § 4.3.

### Mass

In Figure 2, we show the stellar masses, virial masses, and stellar halo masses of the five FOGGIE galaxies compared to abundance matching results from Kravtsov et al. (2018). In order to make a direct comparison to Kravtsov et al. (2018), we multiply the virial mass of each galaxy by a factor of 1.25 to compensate for mass loss due to baryonic processes (e.g., supernova-driven gas outflows), following Munshi et al. (2013). We also multiply our total stellar masses by a factor of 0.6, again following Munshi et al. (2013), who find that observational measurements of stellar masses tend to neglect the contributions from old and/or low surface brightness populations, leading to a \(\sim\)40% underestimate of the true stellar mass. For consistency, we also apply this correction to the stellar halo masses shown in this figure. As we noted in § 2.7 and as is evident in this figure, the FOGGIE galaxies have higher total stellar masses than we would expect for galaxies of their virial mass. They occupy dark matter halos spanning less than 0.5 dex in mass, with only slightly more scatter (\(\sim 0.7\) dex) in total stellar mass.
Their stellar halos range over nearly 1 dex in mass--from \(\sim\)3\(\times 10^{9}\) M\({}_{\odot}\) (similar to recent estimates of the mass of the Milky Way's anemic stellar halo--see, e.g., Deason et al., 2019; Mackereth and Bovy, 2020) to \(\sim\)3\(\times 10^{10}\) M\({}_{\odot}\) (more akin to the rich stellar halo of M31 as measured by, e.g., Ibata et al. (2014)). In agreement with both observations (e.g., D'Souza et al., 2014) and previous simulations (e.g., Purcell et al., 2007; Elias et al., 2018), the stellar halo masses of the FOGGIE galaxies are correlated with their stellar and virial masses. Observations find that stellar halos typically make up 0.2-14% of a galaxy's total stellar mass (e.g., Harmsen et al., 2017; Jang et al., 2020; Gilhuly et al., 2022), although it's worth noting that some galaxies appear to have no stellar halo at all (e.g., Merritt et al., 2016). We include a dashed line corresponding to 10% of the predicted Kravtsov et al. (2018) stellar mass in Figure 2 to guide the eye of the reader in gauging the fraction of mass contributed by each stellar halo. The FOGGIE stellar halos make up between 6% (Tempest) and 13% (Hurricane) of their galaxy's total stellar mass. This is broadly consistent with observations, although we do not have any stellar halos at the extreme low end of the observed range. There is no consensus method to define a stellar halo for either observations or simulations, which complicates any comparisons made between them (e.g., Sanderson et al., 2018; Merritt et al., 2020; Gilhuly et al., 2022). We make a more careful comparison to observations in Figure 3 by reproducing Figure 13 of Gilhuly et al. (2022) with the FOGGIE galaxies. In order to account for uncertainty in the proper definition, we follow Gilhuly et al. (2022) and adopt four different commonly used definitions for the stellar halo. While disk stars and bulge stars are removed via kinematic information in our fiducial definition, we make only position-based cuts for these comparisons to better mimic observations. Note, however, that we still remove star particles bound to satellites from the calculation of the stellar halo masses, as observations typically mask these out. In each panel of Figure 3, the \(x\)-axis is the total stellar mass, which we again correct in the simulations following Munshi et al. (2013), and the \(y\)-axis is the fraction of this stellar mass that is identified as part of the stellar halo by each observational definition. We include both the FOGGIE galaxies and samples of observed galaxies, which are plotted in the panel that most closely resembles each observational definition of a stellar halo. Note that the FOGGIE galaxies (with the exception of Tempest) are generally slightly more massive than most of the observations, although where possible we include data from M104 (Cohen et al., 2020; Karachentsev et al., 2020) and UGC 00180 (Trujillo and Fliri, 2016), which are similar in mass to Hurricane and Maelstrom, Squall, and Blizzard, respectively. In the top left panel of Figure 3, the stellar halo is defined as the stellar mass located beyond 5 times the scale radius of the disk (\(R_{\rm d}\)). We calculate \(R_{\rm d}\) by fitting an exponential profile to the face-on g-band surface brightness profile of each galaxy, excluding the bulge. The equation for an exponential fit is given by \[\mu(r)=\mu_{0}+1.086\frac{r}{R_{d}}, \tag{10}\] where \(\mu_{0}\) is the central surface brightness of the galaxy in \(\rm mag\,arcsec^{-2}\)(Freeman, 1970). 
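Because Equation (10) is linear in radius, \(R_{\rm d}\) can be recovered from a simple least-squares fit to the bulge-excluded profile. A minimal sketch with illustrative variable names:

```python
import numpy as np

def fit_disk_scale_radius(r_kpc, mu_g):
    """Fit mu(r) = mu_0 + 1.086 r / R_d (Eq. 10) to a face-on g-band profile.

    r_kpc : radii of the profile points, with the bulge region excluded [kpc]
    mu_g  : azimuthally averaged surface brightness at those radii [mag arcsec^-2]
    Returns (mu_0, R_d in kpc).
    """
    slope, mu_0 = np.polyfit(r_kpc, mu_g, 1)   # mu = slope * r + mu_0
    return mu_0, 1.086 / slope
```

Applied to the face-on profiles of the FOGGIE galaxies, a fit of this kind yields the scale radii quoted in the next paragraph.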
The disk scale radius ranges from \(R_{\rm d}=2.7\)-\(3.5\) kpc for the FOGGIE galaxies, which is consistent with the \(R_{\rm d}=2\)-\(3.5\) kpc estimated for the Milky Way (Bovy et al., 2012). We compare the resulting stellar halo mass fractions to those found for the Dragonfly Edge-on Galaxies Survey and the Dragonfly Nearby Galaxies Survey (which we will combine as DE/NGS; Gilhuly et al., 2022), both of which are integrated light surveys of the outskirts of local (\(D<25\) Mpc) spiral galaxies. DEGS (Gilhuly et al., 2022) analyzed the stellar halos of 12 edge-on galaxies with masses greater than that of the LMC, while DNGS (Merritt et al., 2016) studied the stellar halos of 8 Milky Way analogs without regard to orientation. Stellar masses for the DE/NGS galaxies come from the _Spitzer_ Survey of Stellar Structure in Galaxies (S\({}^{4}\)G; Querejeta et al., 2015), while the stellar halo mass fractions come from Gilhuly et al. (2022). The FOGGIE galaxies are generally consistent with the observations, although they do not have as much diversity in either stellar mass or stellar halo mass fraction as the DE/NGS sample.

Figure 2: FOGGIE stellar masses (solid stars) and stellar halo masses (outlined stars) compared to abundance matching results from Kravtsov et al. (2018). Note that the values from the simulations are adjusted following Munshi et al. (2013) and therefore differ slightly from those listed in Table 1. Vertical dashed lines connect stellar masses and stellar halo masses for individual galaxies, the names of which are written alongside the vertical lines. We will use the same colors to refer to the same galaxies throughout this paper. The black dashed line shows 10% of the predicted stellar mass for a given virial mass assuming Kravtsov et al. (2018) values. Each FOGGIE stellar halo makes up 6-13% of its galaxy’s total stellar mass.

In the top right panel of Figure 3, the stellar halo is defined as the stellar mass located beyond 20 kpc. In addition to the DE/NGS galaxies, we compare the FOGGIE galaxies to M101 (Jang et al., 2020), M104 (Cohen et al., 2020), and the galaxies of the GHOSTS survey (Radburn-Smith et al., 2011; Harmsen et al., 2017), plus the values Harmsen et al. (2017) adopted for the Milky Way (Licquia and Newman, 2015; Bland-Hawthorn and Gerhard, 2016) and M31 (Sick et al., 2015; Ibata et al., 2014). The GHOSTS sample includes 6 nearly edge-on galaxies with stellar masses similar to that of the Milky Way. Following Gilhuly et al. (2022), we use the uncorrected stellar halo masses for the GHOSTS galaxies and M104. The FOGGIE galaxies are again consistent with the observations and display a slight trend in stellar halo mass fraction as stellar mass increases.

Figure 3: FOGGIE stellar halo mass fractions compared to observations that adopt different definitions of the stellar halo. In each panel, the FOGGIE values are recalculated according to each observational definition: _Top left:_ all stellar mass beyond 5 times the scale radius of the disk; _Top right:_ all stellar mass beyond 20 kpc; _Bottom left:_ all stellar mass beyond the radius at which the stellar mass density drops below \(10^{6}\) M\({}_{\odot}\,\rm kpc^{-2}\); _Bottom right:_ all stellar mass beyond 5 times the stellar half mass radius. The FOGGIE stellar halos are generally consistent with observations. Compare to Figure 13 of Gilhuly et al. (2022).
In the lower left panel of Figure 3, the stellar halo is defined as the stellar mass located beyond the point at which the stellar surface density drops below \(10^{6}\,\mathrm{M}_{\odot}\,\mathrm{kpc}^{-2}\). This radius ranges from 15-30 kpc for the FOGGIE galaxies. We compare our galaxies to the DE/NGS galaxies as well as UGC 00180 (Trujillo and Fliri, 2016). This definition of the stellar halo produces a stellar halo mass fraction that is nearly constant across 2 dex in stellar mass in both the observations and the FOGGIE galaxies. In the lower right panel of Figure 3, the stellar halo is defined as the stellar mass located beyond 5 times the stellar half mass radius (\(R_{1/2}\)) of each galaxy. The FOGGIE galaxies are within the scatter of the DE/NGS sample, but they sit at the very top of this range. As we discussed in SS 2.7, the bulges in the central FOGGIE galaxies tend to be slightly overmassive. The stellar half-mass radius of each galaxy is therefore only \(\sim\)1 kpc--much smaller than it otherwise would be (and smaller than it typically is for observed galaxies). Accordingly, \(r>5R_{1/2}\) includes the bulk of the stellar disk mass in the calculation of the stellar halo mass. Harmsen et al. (2017) find that current simulations, regardless of resolution, tend to produce overly massive stellar halos--a trend that is also supported by the findings of Merritt et al. (2020) and Gilhuly et al. (2022). The FOGGIE stellar halos are towards the high mass end of the observations when stellar halo definitions that rely on a measurement that includes the galaxy's bulge are used. Outside of this, however, the FOGGIE simulations appear to produce stellar halos with realistic masses. We discuss possible explanations for this in SS 5.2. ### Surface Brightness #### 4.2.1 Simulated Surface Brightness Limits Although surface brightness limits are frequently taken into account when the completeness of a survey is being calculated, it is less commonly appreciated that simulations are also limited in modeling low stellar densities by their discretization of stellar populations with star particles. Canas et al. (2020), for instance, show that the vast majority of Milky Way-mass systems in Horizon-AGN have fewer than 100 star particles in what they call the "intra-halo stellar component". Assuming a stellar halo that fills the volume between 20 and 200 kpc, this implies an average of only one star particle per (70 kpc)\({}^{3}\). Given the amount of substructure that has been observed in real stellar halos, it is unlikely that large volume simulations--or even many zoom-in simulations--will be able to model the complex components of stellar halos in a realistic way. The star particle formation scheme used in FOGGIE, and which we described in SS 2.4, preferentially places star particles with lower masses in regions that primarily consist of old stars, like the stellar halo. Accordingly, our stellar halos are made up of large numbers of particles: each comprises 2.9\(\times\)10\({}^{6}\)-1.7\(\times\)10\({}^{7}\) star particles (including particles bound to satellites increases these counts by \(\lesssim 10\%\)). 
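A quick arithmetic check of the Horizon-AGN figure quoted above, assuming the 100 star particles are spread over a spherical shell between 20 and 200 kpc, confirms the "one particle per \((70\,\mathrm{kpc})^{3}\)" estimate:

```python
import numpy as np

# Volume of a spherical shell between 20 and 200 kpc
volume = 4.0 / 3.0 * np.pi * (200.0**3 - 20.0**3)   # ~3.3e7 kpc^3

# Dividing among 100 star particles gives the volume "owned" by each one
side = (volume / 100.0) ** (1.0 / 3.0)
print(f"{side:.0f} kpc")   # ~69 kpc, i.e. roughly one particle per (70 kpc)^3
```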
At \(z=0\), the median star particle mass in the FOGGIE stellar halos ranges from 919 M\({}_{\odot}\) (Tempest) to 1216 M\({}_{\odot}\) (Squall)--similar to the mass resolution achieved in the highest-resolution Milky Way zoom-in simulations at the time of writing: NIHAO's UHD simulations (Buck et al., 2020), ChaNGa's Mint Condition DC Justice League simulations (Applebaum et al., 2021), and Auriga's Level 2 simulation (Grand et al., 2021). We can convert star particle masses to approximate surface brightness limits by calculating the luminosity of each star particle and choosing a "pixel" area over which to measure the surface brightness. The latter is somewhat arbitrary, but effectively provides a normalization for the relationship between star particle masses and surface brightness. We use (1.5 kpc)\({}^{2}\) areas throughout this paper because this is roughly the size of (10 arcsec)\({}^{2}\)--an area frequently used for standard surface brightness measurements (Roman et al., 2020)--at the distance of M81. To calculate the luminosity of a star particle of a given mass, we use FSPS (Conroy et al., 2009; Conroy and Gunn, 2010) with MIST models (Dotter, 2016; Choi et al., 2016; Paxton et al., 2011, 2013, 2015) and a Kroupa (2001) IMF. We show the \(g\)-band surface brightness produced by a typical stellar halo star particle with age = 12 Gyr and metallicity [Fe/H] = \(-1.2\) as a dark red line in the left-hand panel of Figure 4. Because star particles are discrete, this line indicates the lowest surface brightness that a simulation using a given star particle mass can "detect". We show the interquartile range for the star particle masses (at formation time) in the FOGGIE stellar halos as a gray vertical band. The location of the intersection between the low end of this band (\(\sim\)10\({}^{3}\) M\({}_{\odot}\)) and the 1 particle/(1.5 kpc)\({}^{2}\) line indicates that the detection limit of the FOGGIE simulations is \(\sim\)37 mag arcsec\({}^{-2}\). In the right-hand panel of Figure 4, we show the median number of star particles contained within a (1.5 kpc)\({}^{2}\) area as a function of galactocentric distance in the FOGGIE simulations. This value typically exceeds \(10^{4}\) star particles within the disk. However, those "pixels" within the stellar halo that contain star particles typically include only 10-100 star particles (shown as peach and lavender lines, respectively, in the left-hand plot) in the inner halo and 1-10 in the outer halo. While the low densities of star particles in outer halos remains a limitation, the FOGGIE galaxies and other simulations using \(\mathrm{M_{*,part}<}1000\,\mathrm{M_{\odot}}\) offer a substantial improvement over previous generations of simulations. As we will show in SS 4.2.2, these simulations are resolving the surface brightness limits that will be most relevant for the wide-field surveys of the 2020s. #### 4.2.2 Surface Brightness Maps and Profiles Despite spanning a narrow range of stellar and virial masses, the FOGGIE galaxies have diverse stellar halos. In Figure 5, we show \(g\)-band surface brightness maps ranging from 38 \(\mathrm{mag\,arcsec^{-2}}\) to 23 \(\mathrm{mag\,arcsec^{-2}}\) for each of our 5 galaxies (cf. Figures 13 & 14 of Bullock & Johnston 2005). Note that we include all star particles falling within these projections out to a distance of 500 kpc of the center of each galaxy, so that bulge stars, disk stars, and stars bound to satellites are included. 
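The particle-mass-to-surface-brightness conversion described above can be sketched as follows. The \(g\)-band mass-to-light ratio used here is a rough placeholder for an old, metal-poor population (the values in this paper come from FSPS with MIST isochrones), and the solar absolute magnitude is the standard AB value, so the exact numbers are illustrative rather than the paper's.

```python
import numpy as np

M_SUN_G = 5.11          # absolute g-band magnitude of the Sun (AB)
ML_RATIO = 4.0          # placeholder g-band M/L for an old, metal-poor population

def surface_brightness(m_star, n_particles=1, pixel_kpc=1.5):
    """g-band surface brightness of one pixel holding n star particles of mass m_star [Msun]."""
    lum = n_particles * m_star / ML_RATIO             # Lsun in the pixel
    sigma = lum / (pixel_kpc * 1.0e3) ** 2            # Lsun per pc^2
    return M_SUN_G + 21.572 - 2.5 * np.log10(sigma)   # mag arcsec^-2

# A single ~1000 Msun particle in a (1.5 kpc)^2 pixel:
print(surface_brightness(1.0e3))   # ~36.6 mag/arcsec^2 here; ~37 with the FSPS-based luminosities
```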
Figure 4: _Left:_ Formation mass of star particles used in a simulation and the g-band surface brightness that these particles produce. Lines of different colors show the surface brightness of a single \((1.5\,\mathrm{kpc})^{2}\) area containing 1 (dark red), 10 (peach), or 100 (lavender) star particles. The dark gray vertical band indicates the interquartile range for star particle masses (at formation) in the stellar halos of the FOGGIE galaxies. _Right:_ The number of star particles contained within a \((1.5\,\mathrm{kpc})^{2}\) area as a function of radius in the various FOGGIE galaxies. The solid lines indicate the median value, while the shaded regions indicate the interquartile range _assuming a non-zero detection_. While the disks of the FOGGIE galaxies typically contain \(10^{4-8}\) star particles per unit area, the stellar halos typically only contain 1–10 particles per unit area. Lower mass star particles allow simulations to model lower surface brightness regions.

Surface brightness is calculated as described in Section 4.2.1 according to the mass, age, and metallicity of each star particle and using \((1.5\,\mathrm{kpc})^{2}\) pixels. Each galaxy is oriented such that its disk is edge-on (for Hurricane, we orient the image relative to its central disk). Below the surface brightness maps, we also show the approximate 10\(\times\)10 \(\mathrm{arcsec^{2}}\) surface brightness limits for a number of surveys (assuming 3\(\sigma\) significance). We mark future surveys with uncertain final limits in gray. We use \(g\)-band (central wavelength \(\approx\) 477 nm) limits wherever possible so that a direct comparison to the FOGGIE surface brightness maps can be made. However, for _Euclid_, we use limits for the VIS instrument, which uses a single broad band filter that covers 550-900 nm. For _Roman_, we use limits appropriate for filters F106 and F129, as these are the bluest filters that are currently planned for use in the High Latitude Wide Area Survey (HLWAS). For a more detailed discussion of these filters, see Section 6.1. The FOGGIE galaxies are shown in order of increasing total stellar mass. While there is a broad trend of stellar halo extent and general complexity increasing with the stellar mass of the galaxy it surrounds, we have only a small sample of galaxies and there is certainly scatter arising from their diverse histories. Tempest has the most compact stellar halo, with only a few streams extending beyond 100 kpc. Maelstrom's stellar halo is dominated by shell structures, but it also has a large stream resulting from the ongoing tidal disruption of an LMC-mass dwarf, the core of which is still visible in the upper left of the image. Squall's stellar halo is relatively smooth, with only a few shell structures evident. However, it has four prominent satellites that have fallen in recently and are just beginning to be tidally stripped. Blizzard has a number of extended streams--primarily stemming from the destruction of a single dwarf satellite--coupled with a series of shells. The most massive galaxy, Hurricane, also has the richest stellar halo. This is due in part to the large number of fairly bright satellites that have survived to \(z=0\) with intact cores and extended tidal streams. However, Hurricane has also simply accreted and disrupted considerably more satellites than any of the other galaxies, leading to a more massive and complex halo.
Figure 5: Surface brightness maps of the five galaxies rendered in the \(g\) band and oriented such that the stellar disks are edge-on. Each image is 700 kpc across and shows everything within 500 kpc of the center of each galaxy. The galaxies are ordered by increasing total stellar mass. The surface brightness ranges from 38 mag arcsec\({}^{-2}\) to 23 mag arcsec\({}^{-2}\) following Figures 13 and 14 of Bullock and Johnston (2005). Labels below the colorbar mark 3\(\sigma\) detection limits integrated over \(10\times 10\) arcsec\({}^{2}\) for SDSS (Pohlen and Trujillo, 2006), the Dragonfly Edge-on/Nearby Galaxies Surveys (DE/NGS; Merritt et al., 2016; Gilhuly et al., 2022), IAC Stripe 82 (Fliri and Trujillo, 2016; Roman and Trujillo, 2018), Rubin LSST 1 and 10 year co-added data (Yoachim, 2022), the _Euclid_ VIS Wide and Deep Surveys (Euclid Collaboration et al., 2022), DECaLS (Dey et al., 2019; Roman et al., 2021), and the _Roman_ High Latitude Wide Area Survey (HLWAS; Martinez-Delgado et al., 2023; Montes et al., 2023). Estimates for future surveys are shown in gray. Note that all limits except for those shown for _Roman_ and _Euclid_ are for \(g\)-band filters.

The differences between the FOGGIE stellar halos are also evident in Figure 6, where we show 1D surface brightness profiles of our galaxies. The overall shapes of the profiles are quite similar, but they vary significantly in brightness. In the left-hand panel, we show the azimuthally-averaged \(g\)-band surface brightness profile of each galaxy out to \(350\,\mathrm{kpc}\). Note that stars still bound to satellites have been removed as described in Section 3. While Blizzard and Squall have nearly identical profiles, Hurricane is considerably brighter (1.5-2 \(\,\mathrm{mag\,arcsec^{-2}}\)) than the other halos and Tempest is slightly dimmer (1-1.5 \(\,\mathrm{mag\,arcsec^{-2}}\)). This is unsurprising given both the relative masses of the stellar halos (Figure 2) and their appearances (Figure 5). Maelstrom's profile is nearly identical to those of Squall and Blizzard out to \(\approx 200\,\mathrm{kpc}\), but flattens out at this radius, rather than continuing to decline. This is due to the disruption of its LMC-mass satellite, the tidal debris from which dominates its outer halo.
Figure 6: _Left:_ Face-on g-band surface brightness profiles of the FOGGIE galaxies with star particles belonging to satellites excluded. The grey dotted rectangle indicates the range of the right-hand plot. _Right:_ V-band surface brightness profiles of the inner regions of the FOGGIE stellar halos. As shown in the cartoon in the bottom left, the profiles in this plot are measured only within a \(15^{\circ}\) wedge along the minor axis of the disk. This is done to provide a better comparison to the stellar halos in the GHOSTS survey (Harmsen et al., 2017), which are shown as dashed grey lines. The five FOGGIE halos typically span a range of \(\sim\)\(3\,\mathrm{mag\,arcsec^{-2}}\) at any given radius, but, with the exception of the brightest halo, Hurricane, are broadly consistent with the GHOSTS galaxies.

In the right-hand panel of Figure 6, we compare the V-band surface brightness profiles of the FOGGIE galaxies to the stellar halos of the GHOSTS sample (Harmsen et al., 2017). The range covered by this plot is identified as a dotted rectangle in the left-hand panel of this figure. In order to make a direct comparison to the observations, we calculate this profile for only a subset of the stars. The GHOSTS profiles we compare to are measured along the minor axis of each observed galaxy between 5 and 40-75 kpc in order to minimize contamination by disk stars. Following Monachesi et al. (2016), we mimic this by orienting each FOGGIE galaxy such that the disk is edge-on and then select only those stars that fall within a projected 15 degree wedge above or below the disk, starting 5 kpc from the center of the galaxy and excluding any stars bound to satellites (see the cartoon in the lower left-hand corner). The V-band profiles shown here include only these stars. Because Hurricane's polar ring is tilted at \(\sim 80^{\circ}\) relative to its central disk, we rotate the minor axis wedges an extra \(30^{\circ}\) to avoid intersecting the polar ring. However, this has only a minor effect on the surface brightness profile that we derive. All of the FOGGIE galaxies except Hurricane match the observations fairly well in shape and general normalization. As in Figure 6, Hurricane is considerably brighter than any of the other galaxies, including the observed ones. This is likely primarily due to the fact that Hurricane is the most massive of the galaxies and is more massive than any of the GHOSTS galaxies by \(\sim 0.5\,\mathrm{dex}\). Maelstrom also has a larger upturn in its surface brightness profile than we see in the observations starting at \(\sim 40\,\mathrm{kpc}\). This is the result of the intersection of its lower minor axis wedge with a stellar stream from a satellite with \(\mathrm{M_{\star}}\sim 10^{9}\,\mathrm{M_{\odot}}\) at this radius. There is also a subtler flattening in the surface brightness profiles of the other three FOGGIE galaxies at \(r\geq 40\) kpc that may be inconsistent with the GHOSTS sample. Some of this can likely be attributed to the fact that Tempest is the only FOGGIE galaxy that is not above the mass range of the GHOSTS galaxies (see the upper right-hand panel of Figure 3). However, this slight excess of light may also be linked to broader inconsistencies that have been found between simulations and observations. Merritt et al.
(2020) carefully compared the stellar halos of the DNGS to a mass-matched sample from TNG100 and found that the simulated galaxies had stellar surface densities 1-2 dex higher than the observed galaxies at \(r>20\) kpc--a disparity that they dubbed the "missing outskirts" problem. Keller (2022) found a potential explanation for this issue, showing that simulations run with feedback schemes that efficiently regulate star formation in high redshift and low mass halos produce stellar halos more in line with the observations than do simulations with more traditional feedback schemes. Given the simplicity of the feedback implemented in the FOGGIE simulations, it is perhaps not surprising that we see some evidence that our stellar halos have excess light. However, our simulated stellar halos are considerably more in line with both observed masses and observed surface brightness profiles than those studied in either Merritt et al. (2020) or Elias et al. (2018) (see Figure 13 of Gilhuly et al. (2022)). If our stellar halos do suffer from the missing outskirts problem, it is likely a relatively minor concern. We address a possible explanation for this in SS 5.2. ### Metallicity and Color The stars that make up the stellar halos of Milky Way-like galaxies are generally observed to be relatively metal-poor (e.g., Mouhcine et al., 2005; Harmsen et al., 2017). However, considerable variation exists both between stellar halos and within them. The Milky Way and M31, for instance, appear to lie on opposite ends of the spectrum: the Milky Way's halo is diffuse and exceptionally metal-poor (e.g., Bell et al., 2008), while M31's halo is brighter and contains more metal-enriched stars (e.g., Ibata et al., 2014). The metallicities of other nearby galaxies of similar mass typically lie somewhere between the two (e.g., Mouhcine et al., 2005; Harmsen et al., 2017). Both observations (e.g., Mouhcine et al., 2005) and simulations (e.g., Renda et al., 2005; Robertson et al., 2005; Font et al., 2006; Purcell et al., 2008) suggest that these variations in metallicity are the result of differences in how the stellar halos are assembled. Deason et al. (2016) and D'Souza & Bell (2018) show that the metallicity of the accreted component of stellar halos reflects the metallicity of their dominant contributor(s)--typically one to two dwarf galaxies with \(M_{\star}=10^{8-10}\) M\({}_{\odot}\). The tight relationship that exists between the mass of a dwarf galaxy and its metallicity (e.g., Gallazzi et al., 2005; Kirby et al., 2013) produces a similarly strong correlation between the mass of a stellar halo and its metallicity. In the left-hand panel of Figure 7, we compare the masses and metallicities of the FOGGIE stellar halos to M101 (Jang et al., 2020), M104 (Cohen et al., 2020), and the galaxies in the GHOSTS sample (Monachesi et al., 2016; Harmsen et al., 2017). As in Figure 3, we also include the values used by Monachesi et al. (2016) and Harmsen et al. (2017) for the Milky Way and M31: the metallicity adopted for the Milky Way is the mean of the values found by Sesar et al. (2011) and Xue et al. (2015) and the metallicity used for M31 is from Gilbert et al. (2014). The latter value has been adjusted assuming \([\alpha/\mathrm{Fe}]\) = 0.3, appropriate for an old, metal-poor population of stars (e.g., Venn et al., 2004; Robertson et al., 2005). 
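For reference, the minor-axis selection described in § 4.2.2 (and used again for the metallicity measurements below) reduces to a simple geometric cut. This sketch assumes the galaxy has been rotated so the disk lies in the x-y plane and is viewed along the y-axis, and it treats the quoted 15° as an opening angle measured from the minor axis; both conventions are our assumptions.

```python
import numpy as np

def minor_axis_wedge(pos_edgeon, wedge_angle_deg=15.0, r_min_kpc=5.0):
    """Select star particles in a projected wedge along the disk minor axis.

    pos_edgeon : (N, 3) positions [kpc] with the disk in the x-y plane, so the
                 minor axis lies along z and the projected coordinates are (x, z)
    Returns a boolean mask of particles inside the wedge, beyond r_min_kpc.
    """
    x, z = pos_edgeon[:, 0], pos_edgeon[:, 2]
    r_proj = np.hypot(x, z)
    # angle measured from the minor (z) axis in projection
    angle = np.degrees(np.arctan2(np.abs(x), np.abs(z)))
    return (angle < wedge_angle_deg) & (r_proj > r_min_kpc)
```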
In order to make a direct comparison to the observations, we measure the metallicities of the FOGGIE stellar halos at \(r=30\) kpc within the minor axis wedges defined in Section 4.2.2. Note that we do not track the abundances of individual elements in the FOGGIE simulations, so we have converted total metallicity to [Fe/H] following Thomas et al. (2003) and assuming \([\alpha/\mathrm{Fe}]\) = 0.3. The observed galaxies show a strong correlation between stellar halo metallicity and mass, as do the FOGGIE galaxies. However, as we noted in SS 2.7, the FOGGIE galaxies are somewhat metal-rich with respect to the observations (\(\sim 0.4\) dex above the expected values). For this reason, we will largely abstain from commenting on absolute metallicities in the rest of our analysis. However, it is worth noting that the FOGGIE galaxies reproduce the slope of the observed \(M_{\mathrm{SH}}\)-metallicity relation, and it is therefore likely that relative differences between the FOGGIE stellar halos can be trusted. In the right-hand panel of Figure 7, we explore how metallicity varies within the FOGGIE stellar halos. A combination of smaller galaxy sizes at high redshift and the mass-dependence of dynamical friction are thought to cause the inner halos of galaxies to be dominated by ancient and/or massive accretion events. The outskirts of the halos, by contrast, are thought to be primarily populated by stars from lower mass and/or more recent accretion events (e.g., Bullock & Johnston, 2005; Johnston et al., 2008; Horta et al., 2023). We might, therefore, expect the dwarf galaxy mass-metallicity relation to produce a negative metallicity gradient in most stellar halos. However, the stochastic history of satellite accretion and the variations in decay time inherent in different satellite orbits lead to significant halo-to-halo variation. Flatter metallicity gradients generally indicate that many dwarfs have contributed fairly equally to a stellar halo, while a significant gradient is more indicative of a stellar halo dominated by one or two massive contributors (e.g., Monachesi et al., 2016). In order to compare to observations of metallicity gradients in stellar halos, we follow Monachesi et al. (2016) in using the \(F606W-F814W\) color gradient of red giant branch (RGB) stars as a proxy for metallicity. Although age and metallicity are degenerate (e.g., Worthey, 1994), metallicity has a stronger influence on the color of RGB stars than age does (e.g., Streich et al., 2014), and using color gradients, rather than metallicity gradients, avoids the large uncertainties (0.2-0.3 dex) associated with a conversion. We calculate the \(F606W-F814W\) color of RGB stars in the FOGGIE simulations by subsampling each star particle that falls within a minor axis wedge assuming a Kroupa (2001) IMF. We use MIST models to identify those "stars" that would be RGB stars and to calculate their \(F606W\) and \(F814W\) luminosities. The gradients plotted in Figure 7 are based on a linear fit to the \(F606W-F814W\) color of these "RGB stars" between 5 and 40 kpc. The \(F606W-F814W\) color gradients we measure are generally in good agreement with metallicity gradients over the same radial range. Squall, Blizzard, and Hurricane all have nearly flat gradients throughout most of their stellar halos, although we note that Hurricane has a negative metallicity gradient (\(-0.008\) dex/kpc) in its outer halo (\(r=170\)-220 kpc), where debris from a low mass satellite passes through its minor axis. 
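A minimal sketch of the two measurements just described: the conversion from the single tracked metal fraction to [Fe/H] via the Thomas et al. (2003) relation \([Z/\mathrm{H}]=[\mathrm{Fe/H}]+0.94\,[\alpha/\mathrm{Fe}]\), and a linear gradient fit over 5-40 kpc. The adopted solar metal fraction and the function names are our assumptions.

```python
import numpy as np

Z_SUN = 0.0134   # assumed solar metal mass fraction; the exact value is a convention

def total_z_to_feh(z_total, alpha_fe=0.3):
    """Convert a total metal mass fraction to [Fe/H] assuming a fixed [alpha/Fe]."""
    z_over_h = np.log10(z_total / Z_SUN)       # [Z/H]
    return z_over_h - 0.94 * alpha_fe          # Thomas et al. (2003)

def radial_gradient(r_kpc, quantity, r_min=5.0, r_max=40.0):
    """Linear gradient (per kpc) of a quantity (e.g., color or [Fe/H]) over 5-40 kpc."""
    sel = (r_kpc >= r_min) & (r_kpc <= r_max)
    slope, _ = np.polyfit(r_kpc[sel], quantity[sel], 1)
    return slope
```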
Tempest is the only galaxy with a substantial color gradient in its inner halo, although the change in the metallicity of its stellar halo within this region is very similar to that of Maelstrom (both are \(-0.01\) dex/kpc), which has a nearly flat color gradient. The presence of an inner halo gradient in both Tempest and Maelstrom likely reflects the fact that both have had relatively recent accretion events that left debris at low impact parameters. The FOGGIE stellar halos are broadly in line with observations, although we do not have any stellar halos with extremely negative gradients. Monachesi et al. (2016) and Harmsen et al. (2017) find that roughly half of the GHOSTS stellar halos have no color/metallicity gradients, while the remainder have slightly negative gradients. In Figure 7, we also show a \(F606W-F814W\) gradient for M101 based on a linear fit to data from Jang et al. (2020). M101 has a much lighter stellar halo with a much steeper negative gradient than we see in any of the FOGGIE galaxies. This is likely partially due to the fact that our simulated sample is small. However, this may also reflect the physical prescriptions used in the simulations. Models that result in stellar halos with high in situ fractions tend to also produce substantial negative metallicity gradients (e.g., Font et al., 2011; Tissera et al., 2013), while those that are exclusively accretion-based typically produce no gradients (e.g., Bullock & Johnston, 2005; Font et al., 2006a). As we will discuss in Section 5.2, the FOGGIE stellar halos generally have small contributions from in situ stars, particularly along their minor axes.

Figure 7: _Left:_ The masses and metallicities of the FOGGIE stellar halos compared to observed samples. [Fe/H] is measured at 30 kpc from the center of each galaxy based on the stars that fall within a 15\({}^{\circ}\) wedge along the minor axis of the disk. Although the FOGGIE galaxies reproduce the slope of the observed relation, they are biased high by \(\sim\)0.4 dex. _Right:_ The masses of the FOGGIE stellar halos and the \(F606W-F814W\) color gradient of RGB stars measured between 5 and 40 kpc along their minor axes, compared to observed samples. The FOGGIE galaxies typically have relatively flat color gradients, although Tempest has a slight negative color gradient.

## 5 (Dis)assembling stellar halos

Evidence from both theory and observations suggests that stellar halos are predominantly composed of stars that originally formed in other galaxies, outside of the main progenitor of the \(z=0\) central (e.g., Bullock et al., 2001; Bullock & Johnston, 2005; Naidu et al., 2020). By disassembling stellar halos into the various galaxies that contributed to them, we have the opportunity to learn about a wide variety of dwarf galaxies--not merely the small and biased subset that survived as gravitationally self-bound entities until the present day. In this section, we will explore the origins of the star particles that contribute to the \(z=0\) stellar halos of the FOGGIE galaxies. We describe the method by which we identify and classify the sources of stellar halo star particles in § 5.1, then discuss the in situ and ex situ star particles in § 5.2 and § 5.3, respectively. Note that § 5.1 is fairly technical, so those uninterested in the details of how stellar halo star particles are assigned to a given source should feel free to skim or skip it.
### Identifying Contributors to the Stellar Halo In order to trace each individual star particle in each \(z=0\) stellar halo back to the (likely no longer gravitationally self-bound) galaxy in which it formed, we compare the location at which it formed to the locations of dark matter halos identified by Rockstar at that snapshot. The extremely fine time cadence of the FOGGIE simulations typically makes this fairly simple, as star particles are unlikely to move far from their birth sites in the \(\sim 5\) Myr that separate snapshots. If a star particle is within \(0.2R_{\rm vir}\) of the center of a dark matter halo, we assume it formed within that dark matter halo. If a star particle is within \(0.2R_{\rm vir}\) of the centers of multiple dark matter halos, we assume it formed within the one whose absolute distance to it is smallest. In instances where no appropriate host halo is found for a particular star particle, we employ several "clean-up" strategies. The first technique makes use of Consistent-Tree's "phantom" halo feature, which identifies when Rockstar has temporarily lost track of a halo and interpolates the likely position and velocity of that halo during the time when it was lost. This is particularly common during mergers--including infall to a larger halo--which is incidentally one of the most critical times for galaxies contributing stars to a stellar halo. We use the estimated positions from Consistent-Trees to determine whether a hostless star particle was within \(0.2R_{\rm vir}\) of the center of a phantom halo when it formed. If this procedure does not yield a host, we identify one manually. For each snapshot in which a hostless star particle forms, we plot the positions of the hostless star particle and any nearby star particles. In nearly all cases, the hostless star particle is obviously associated with a group of previously assigned star particles at the time that it forms and it is assigned to the same galaxy as its companions. The failure of the earlier procedures to identify hosts for these star particles is typically due to one of three issues: 1) the host halo has fallen below the limit where Rockstar is able to keep track of it, but is still forming stars; 2) the host halo is a phantom during the time when this star particle formed and Consistent-Trees did not correctly predict its position; 3) the host halo's dark matter halo has merged with that of another galaxy, but the stellar components of the galaxies are still separated by more than \(0.2R_{\rm vir}\). In rare instances, we also see star formation occurring in tidal dwarfs or dark matter halos that were never massive enough for Rockstar to identify them. A star particle that formed in one of these locations cannot be assigned to a Rockstar halo, but it still receives the same host ID as any other star particles that formed in the same location. After each star particle that is part of a stellar halo at \(z=0\) has been assigned a host ID corresponding to the galaxy that it originally formed in, we consolidate any hosts that merged prior to infall. We do this for two primary reasons: 1) If two galaxies have fully merged prior to infall to the central, their debris should occupy the same phase-space. Our results concerning contributors to the stellar halo are therefore more directly comparable to observations if we treat the two merged galaxies as a single entity. 2) The granularity of our initial host ID assignments are limited by our particle mass resolution. 
The mass of dark matter particles in the high resolution region of these simulations is \(\sim 10^{6}\) M\({}_{\odot}\), so we do not consider dark matter halos with a total mass \(<10^{9}\) M\({}_{\odot}\) (\(\sim 1000\) dark matter particles) to be resolved. Accordingly, all analysis involving the decomposition of our stellar halos into their component parts should be assumed to lack ultra-faint dwarfs. The dwarf galaxies that contribute to our stellar halo are almost certainly composed of smaller dwarf building blocks and many should likely possess ultra-faint satellites of their own. However, simulations from Deason et al. (2016) suggest that stars from ultra-faint dwarfs make up \(\ll\)1% of the mass of stellar halos around Milky Way-like galaxies. This dominance of more massive dwarfs is also supported by observations of the relative abundances of different populations of stars in the stellar halo (e.g., Fiorentino et al., 2015; Deason et al., 2015). By associating star particles with their host at the time at which they first fall into our central galaxy, we are limiting our analysis to galaxies that we can resolve without omitting significant contributors. As a final pass, we check that all star particles identified as belonging to a particular host halo make up only a single galaxy at infall (or prior to disruption if disruption precedes infall) and that no star particles have been incorrectly assigned to low-mass dark matter subhalos that happened to be closer to them than their true host at the time of formation. ### The in situ halo Although the majority of stars that populate the stellar halos of Milky Way-like galaxies are thought to have formed in disrupted dwarfs, there is evidence to suggest that some may have formed in situ--either within the disk of the main progenitor of the central galaxy or within the halo itself (i.e., in dense clumps within the CGM or in gas recently stripped from infalling dwarfs). Using SDSS, Carollo et al. (2007) found that the Milky Way appeared to have a more metal-rich inner halo (\(r<15\) kpc) in addition to a metal-poor outer halo and theorized that these stars may have formed dissipatively, rather than arriving through accretion. More recent observations with _Gaia_ and the H3 Survey have also found a population of relatively metal-rich stars within the Milky Way's inner halo that are thought to have formed in situ (e.g., Bonaca et al., 2017; Haywood et al., 2018; Conroy et al., 2019). Cosmological simulations typically predict that in situ stars contribute 20-50% of the mass in stellar halos around Milky Way-mass galaxies and dominate the stellar halo mass budget out to 30-40 kpc (e.g., Abadi et al., 2006; Zolotov et al., 2009; McCarthy et al., 2012; Cooper et al., 2015; Font et al., 2020). Cooper et al. (2015) find that in situ halo stars usually have one of three origins: some form as part of the central galaxy's disk and are displaced to larger radii via violent relaxation during a major merger (see also Zolotov et al., 2009; Purcell et al., 2010), but others form either in gas that has been recently stripped from dwarf galaxies or in smoothly accreted gas. Another origin has been suggested by Yu et al. (2020), who find that 5-40% of the stars that populate the outer (\(r>50\) kpc) stellar halos of the galaxies in FIRE's Latte suite were formed in outflows from the central galaxy. The FOGGIE stellar halos also include a contribution from in situ star particles. 
In line with other simulations, we find that 30-40% of the mass in the stellar halos of Tempest, Maelstrom, Squall, and Hurricane and 58% of the mass in Blizzard's stellar halo come from in situ star particles. We also find that the majority of these star particles either formed in the central disk of the main progenitor and were perturbed into halo orbits by mergers or formed in gas recently stripped from infalling dwarf galaxies. However, the in situ star particles in the FOGGIE stellar halos tend to be more centrally concentrated than those in other simulations. In Figure 8, we show the in situ mass fraction of each stellar halo as a function of radius. We compute this by randomly orienting each galaxy 120 times and calculating the fraction of the mass within a given projected annulus (0.5 kpc in width) that is contributed by in situ star particles. The only FOGGIE stellar halo that has an extended in situ population is Squall, and the vast majority of these star particles were originally in the disk or inner halo and were perturbed by a single event. At \(z\approx 0.7\), Squall undergoes a 4:1 prograde merger and the final coalescence of the two galaxies propels a shell of star particles that originally formed in Squall's disk into the stellar halo. The edge of this shell is clearly visible at \(r\approx 230\) kpc in Figure 8, where Squall's in situ contribution drops from a nearly constant value of 13-16% to below 10%. The in situ star particles perturbed during this merger, as well as a number that formed in the merging galaxy, also make significant contributions to the shell structures visible around Squall in Figure 5. That being said, there is no radius at which Squall's halo is in situ-dominated. The same is true of Maelstrom, and Tempest and Hurricane are accretion-dominated beyond 6 kpc and 11 kpc, respectively. Even Blizzard, which has the most substantial in situ contribution, is only in situ-dominated out to 16 kpc. Given the generally low contribution that in situ populations make to the FOGGIE stellar halos, it is somewhat unsurprising that they have little influence on the metallicity/color gradients that we derived in Section 4.3. The in situ contribution to the stellar halo along the minor axis is consistently substantially lower than the spherical average, so the negative metallicity gradients that we measure for Tempest and Maelstrom trace radial variations in the accreted population, rather than a transition from a metal-rich in situ-dominated population to a metal-poor accretion-dominated one. Although the more substantial in situ populations of Blizzard and Hurricane do cause the total metallicity profile to deviate slightly from that of the accreted star particles, the metallicities of the in situ and ex situ star particles are not different enough to produce a gradient. The metallicities of the in situ and ex situ star particles in the inner portions of Squall's minor axis wedge are also quite similar. However, while we do not observe a color/metallicity gradient in Squall's inner halo, the left-hand panel of Figure 7 shows that the metallicity of Squall's stellar halo is slightly elevated relative to the relation followed by the other FOGGIE galaxies. This is due to Squall's relatively late major merger and reflects the fact that, at \(r=30\) kpc (the location at which we measure the metallicity), \(>\)50% of the stars along Squall's minor axis formed in either Squall or the massive galaxy with which it merged and are thus comparatively metal-rich. 
Elevated stellar halo metallicity may therefore be an indication that a galaxy has experienced a major merger since \(z=1\), even in cases where the halo and disk otherwise show no obvious signs of disturbance at \(z=0\). The fact that Squall's extended in situ halo population results from a late major merger suggests that the mass accretion histories of the other FOGGIE galaxies are at least partially responsible for their more centrally concentrated in situ distributions. In §2.1, we noted that all of the FOGGIE galaxies were selected to have completed their last major merger prior to \(z\approx 2\) in order to mimic the Milky Way. While Squall's final major merger was delayed in the production simulation, the other FOGGIE galaxies assembled the majority of their mass at relatively high redshifts. Their disk and inner halo regions therefore haven't been substantially perturbed since the halos themselves were much smaller and the in situ populations thus remain centrally concentrated. This is consistent with the findings of Rey & Starkenburg (2022), who used genetically modified simulations to show that later, more violent major mergers scatter in situ stellar halo populations outward. However, variations in mass accretion history cannot fully account for how compact the in situ stellar halo populations of the FOGGIE galaxies are compared to those in other simulations. Although large in situ populations seem to be most common in simulated galaxies with recent major mergers, many simulations of Milky Way-mass galaxies have extended in situ populations, even when the central galaxy has had a quiescent merger history (e.g., Monachesi et al., 2019; Font et al., 2020; Yu et al., 2020). This discrepancy is likely partially due to differences in feedback and star formation prescriptions (e.g., Font et al., 2020). As noted in §2.7, the FOGGIE simulations employ relatively simple routines that tend to produce small disks with tightly bound stars. However, some of the difference that we see may also be the result of our high temporal resolution. Because the snapshots that we use to identify the galaxy in which a star particle formed are only \(\sim\)5 Myr apart, it is less likely that we will misclassify ex situ star particles that formed during pericentric passages or in soon-to-be-stripped ram pressure compressed gas as in situ. The distributions of in situ stars in the FOGGIE halos seem to be most similar to those found by Pillepich et al. (2015) in the Eris simulation, which also has closely spaced snapshots (\(\sim\)30 Myr apart). In Eris, in situ star particles cease to dominate the mass budget beyond 10 kpc and fall below 5% of the mass at \(r>30\) kpc--similar to what we see in the halos of Tempest and Maelstrom. More compact in situ stellar populations, like those found in FOGGIE and Eris, appear to be favored by observations. Naidu et al. (2020) find that there is a substantial in situ contribution to the Milky Way's stellar halo only within the inner 15 kpc. Additionally, a number of authors have found that stellar halo simulations more closely match observations when in situ populations are entirely absent: Harmsen et al. (2017) found that the Bullock & Johnston (2005) stellar halos, which include no in situ component, are consistent with the GHOSTS stellar halos, and Monachesi et al. (2019) showed that the Auriga stellar halos could be brought into closer agreement with GHOSTS by eliminating their in situ populations. Figure 8: The fraction of the stellar halo mass in a given annulus that comes from star particles that formed in the main progenitor of the central galaxy as a function of radius. The global in situ mass fractions of the FOGGIE stellar halos are similar to those found in other cosmological simulations, but the in situ populations of the FOGGIE stellar halos tend to be more centrally concentrated. The only exception is Squall, which experiences a relatively late major merger that perturbs a number of in situ star particles from its disk and inner halo into wider orbits.
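The projected in situ fraction shown in Figure 8 can be reproduced, in outline, with a short NumPy routine: draw random orientations, project the star particles onto a plane, and take the in situ share of the mass in 0.5 kpc annuli. The sketch below is an illustrative stand-in for the FOGGIE analysis code, with names chosen by us; random orientations come from QR decompositions of Gaussian matrices, and the combination over orientations is taken as a median (the text does not specify mean versus median).

```python
import numpy as np

def insitu_profile(pos, mass, is_insitu, r_max=300.0, dr=0.5, n_orient=120, seed=0):
    """In situ mass fraction in projected annuli, combined over random orientations.

    pos       : (N, 3) star-particle positions relative to the galaxy center [kpc]
    mass      : (N,)   star-particle masses
    is_insitu : (N,)   True for star particles formed in the main progenitor
    """
    rng = np.random.default_rng(seed)
    edges = np.arange(0.0, r_max + dr, dr)
    fractions = np.full((n_orient, len(edges) - 1), np.nan)
    for k in range(n_orient):
        # Random orthogonal matrix from the QR decomposition of a Gaussian matrix.
        q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
        xy = pos @ q[:, :2]                          # project onto a random plane
        r_proj = np.hypot(xy[:, 0], xy[:, 1])
        m_tot, _ = np.histogram(r_proj, bins=edges, weights=mass)
        m_ins, _ = np.histogram(r_proj, bins=edges, weights=mass * is_insitu)
        with np.errstate(invalid="ignore", divide="ignore"):
            fractions[k] = np.where(m_tot > 0, m_ins / m_tot, np.nan)
    r_mid = 0.5 * (edges[:-1] + edges[1:])
    return r_mid, np.nanmedian(fractions, axis=0)
```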
In their exploration of the "missing outskirts" problem (see § 4.2.2), Merritt et al. (2020) experimented with a variety of changes to the TNG100 galaxies to make them consistent with the DNGS sample and found that one of the most effective methods was reducing the spatial extent of the in situ halo. The relatively compact in situ populations of the FOGGIE stellar halos may therefore provide an explanation for why we do not see substantial excess light in the halo outskirts relative to observations. Additionally, because the in situ populations of the FOGGIE stellar halos are so centrally concentrated, they are considerably more sensitive to the method that we use to define the stellar halo than accreted stars are. Most of the simulations mentioned earlier in this subsection use kinematic selection criteria similar to ours, but were we to use a stellar halo definition that includes all non-disk stars with \(r>20\,\)kpc, for instance, the in situ contribution would drop substantially for every halo except Squall (down to \(<\)5% for Tempest and Maelstrom and \(<25\%\) for Blizzard and Hurricane). Accordingly, most of the observationally-motivated stellar halo definitions that we employ in Figure 3 exclude the majority of stellar halo mass that is contributed by in situ star particles. This may help to explain why the FOGGIE stellar halo mass fractions appear to be more consistent with observations than many other simulations are. ### The accreted halo The remainder of the star particles that populate the stellar halos of the FOGGIE galaxies are star particles that originally formed in other (mostly dwarf) galaxies. The number of galaxies that contribute star particles to the FOGGIE stellar halos ranges from 14 (Squall) to 48 (Hurricane) and roughly scales with the mass of the central host. As stated in § 5.1, our resolution is limited by the dark matter particle mass in the simulations: we do not resolve galaxies with \(M_{\rm vir}<\)10\({}^{9}\,\)M\({}_{\odot}\) and this contributes to our choice to consolidate galaxies that merge prior to infall. The numbers quoted here should, therefore, be considered a lower limit on the number of individual contributors. In Figure 9, we show the evolution of the accreted mass in each FOGGIE stellar halo. We plot the fraction of the total accreted stellar halo mass that exists at \(z=0\) as a function of time, indicating each satellite accretion event using a dot. Note that we do not consider star particles that only temporarily contribute to the mass of the stellar halo or star particles that are still part of a gravitationally self-bound satellite core at \(z=0\).
Additionally, we assume that whatever mass a satellite will ultimately contribute to the \(z=0\) stellar halo is instantaneously added to the stellar halo at the time at which the satellite first crosses the virial radius of the central galaxy, rather than when each individual star particle is unbound from its original host. The time at which each star particle would be considered to contribute to the stellar halo by the definition that we applied in Section 3 is therefore typically slightly later than what is shown here, particularly in cases where the satellite continues to form stars after infall. However, we use the time of infall for the sake of practicality, given the number of individual star particles that make up each halo. The growth of stellar halo mass broadly tracks that of total mass, although the latter tends to be more gradual--particularly at later times--presumably as a result of smooth dark matter accretion. We see a general trend of the more massive FOGGIE galaxies (Blizzard and Hurricane) building up their accreted stellar halo mass earlier than the less massive ones (Tempest Figure 9: The build-up of accreted stellar halo mass in the FOGGIE stellar halos over time. Each dot indicates an individual satellite accretion event. Only mass that exists within the stellar halo at \(z=0\) is considered and the time at which the mass is accreted is assumed to be the time at which the satellite it belongs to first crosses the virial radius of the central galaxy. More massive galaxies generally build up their accreted stellar halos fastest, but all of the FOGGIE stellar halos have assembled the majority of their accreted mass by \(z=1\). and Maelstrom). However, there is considerable scatter at any given time. For instance, Maelstrom is the first galaxy to amass more than 20% of its accreted stellar halo while Hurricane is the last. All of the FOGGIE galaxies have assembled half of their final accreted stellar halo mass by \(z=1\), with nearly all of the remaining assembly taking place by \(z=0.6\). The relatively early growth of our stellar halos is likely largely due to our deliberate selection of galaxies that complete their last major merger at \(z\gtrsim 2\). Notably, Squall, the only halo which does not fulfill this criterion, acquires most of its accreted stellar halo mass later than the other galaxies. Deason et al. (2016), who do not take merger history into account when selecting their simulated galaxies, see a much wider variety in the growth histories of their stellar halos (cf. their Figure 4). While all of the FOGGIE galaxies assemble the first \(\sim 15\%\) of their accreted stellar halo mass from a relatively large number of low-mass dwarfs, there is considerable variation in where the remaining accreted mass comes from. We see hints of a mass trend: Tempest and Maelstrom's stellar halos are primarily built from many smaller accretion events, while the mass that makes up the stellar halos of Blizzard and Squall is heavily dominated by star particles from 1 or 2 more substantial satellites. Hurricane has the highest number of contributors to its stellar halo overall, but much of its mass still comes from just a few of them. We show the main sources of accreted stellar halo mass more clearly in Figure 10, where we plot the cumulative fraction of accreted mass that comes from each stellar halo's five most significant contributors. 
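The quantity plotted in Figure 10 reduces to a simple aggregation over the host IDs assigned in Section 5.1: sum the accreted mass per contributor, rank the contributors, and accumulate. The snippet below is an illustrative NumPy version of that bookkeeping; the names and toy inputs are ours rather than the FOGGIE analysis code.

```python
import numpy as np

def top_contributor_fractions(host_id, mass, n_top=5):
    """Cumulative fraction of the accreted stellar halo mass supplied by the
    1st..n_top most significant contributors.

    host_id : (N,) ID of the galaxy each accreted star particle formed in
    mass    : (N,) star-particle masses
    """
    ids = np.unique(host_id)
    per_host = np.array([mass[host_id == h].sum() for h in ids])
    ranked = np.sort(per_host)[::-1]                 # largest contributor first
    return np.cumsum(ranked[:n_top]) / per_host.sum()

# Toy example: three contributors with unequal masses.
host = np.array([1, 1, 1, 2, 2, 3])
m    = np.array([5.0, 4.0, 3.0, 2.0, 1.0, 0.5])
print(top_contributor_fractions(host, m))            # -> [0.774, 0.968, 1.0]
```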
While no stellar halo gets more than 50% of its accreted mass from a single satellite, Squall, Blizzard, and Hurricane all acquire the majority of their mass in just two accretion events. If we include the five most significant contributors to each stellar halo, we can account for 60-90% of the accreted mass of all five FOGGIE stellar halos. Figure 10: Cumulative fraction of total accreted stellar halo mass contributed by the 1st–5th most significant contributors to each stellar halo. Although Squall and Blizzard are the most dominated by one to two accretion events, all five FOGGIE halos receive 60–90% of their total accreted stellar halo mass from their five most significant contributors. The predominance of stars from just a few accreted satellites is consistent with findings from previous simulations (e.g., Bullock and Johnston, 2005; Abadi et al., 2006; Deason et al., 2016) and estimates from observations (e.g., Belokurov et al., 2018; Naidu et al., 2020). Deason et al. (2016), in particular, find that the dominant contributors to stellar halos are typically one to two relatively massive dwarfs with M\({}_{\star}=10^{8-10}\) M\({}_{\odot}\), similar to what we see in FOGGIE. The star particles that make up the accreted portions of stellar halos are not uniformly distributed at \(z=0\). As we noted in Section 4.3, the mass and orbit of an infalling satellite have a strong influence on where its stars ultimately end up. Dynamical friction is proportional to \(M_{\rm sat}^{2}\) (e.g., Binney and Tremaine, 1987), so more massive satellites will tend to sink more deeply into the gravitational potentials of their hosts and therefore deposit their stars at smaller radii (e.g., Amorisco, 2017). Another significant factor is the time at which a satellite is accreted. At earlier times, the central galaxy (and its dark matter halo) are smaller, leading to inside-out growth of the stellar halo (e.g., Bullock and Johnston, 2005; Font et al., 2006; Johnston et al., 2008; Font et al., 2011; Pillepich et al., 2014; Amorisco, 2017; Horta et al., 2023). We show the influence of accretion time on the contributors to the FOGGIE stellar halos in Figure 11, where we divide each stellar halo up into debris from its ancient (\(t_{\rm infall}<3\) Gyr) and recent (\(t_{\rm infall}>7\) Gyr) accretion events (cf. Figure 16 of Johnston et al., 2008). Note that we include only star particles classified as belonging to either the ex situ stellar halo or a satellite embedded within it (i.e., with \(r<350\) kpc). Star particles that were accreted at earlier times tend to be more centrally concentrated than those that fell in later. Additionally, debris from earlier accretion tends to be more phase-mixed than debris from more recent accretion, which frequently includes still-bound satellite cores. This is due in part to the fact that the former has had more time in which to phase-mix and is located within a higher density area where dynamical times are relatively short. However, the rapid nonadiabatic growth of the central galaxy at early times also leads to faster phase-mixing (e.g., Panithanpaisal et al., 2021). Accordingly, we are most likely to find spatially distinct debris, like streams, at large radii. Differences in concentration and orbital circularity have also been found to have more minor influences on the radial distribution of a satellite's debris (e.g., Johnston et al., 2008; Amorisco, 2017).
More concentrated satellites will be more affected by dynamical friction, while satellites on more eccentric orbits will experience more rapid orbital decay, so stars from a more concentrated dwarf and/or a dwarf on a highly radial orbit are more likely to wind up at small radii. In Figure 12, we show how the contributors to the FOGGIE stellar halos vary with galactocentric distance. In the top panel, we plot the median number of accreted galaxies that contribute star particles to a given annulus within the stellar halo, based on 120 random orientations and annuli spaced \(0.5\,\mathrm{kpc}\) apart. As we might expect, the number of contributors peaks at small radii, where the stellar halo is most concentrated and phase-mixed, and where our line-of-sight passes through a larger portion of it (although we note that the trends are the same even if we use spherical shells, rather than projected annuli). Hurricane, which has the largest total number of contributors, has the highest peak by a significant margin (40 galaxies contributing to a single annulus), while Squall, which has the smallest total number of contributors, has the lowest. The number of galaxies contributing to the halos of Tempest, Maelstrom, and Blizzard drops off rapidly after the central peak, then begins to even out at \(r\approx 50\,\mathrm{kpc}\) for Tempest and \(r\approx 80\,\mathrm{kpc}\) for Maelstrom and Blizzard. These distances roughly correspond to the outer edge of the ancient, phase-mixed debris that we see in the left-hand panel of Figure 11. We see more gradual drop-off in Hurricane, which has a more extended phase-mixed inner halo. The number of galaxies that contribute to Squall's halo remains constant at 10-12 out to \(200\,\mathrm{kpc}\). This is due to the fact that Squall does not have any contributors whose star particles have stayed exclusively at small radii. The merger that caused many of the in situ star particles that populated Squall's disk and inner halo to scatter outward also perturbed many of the ex situ star particles that contributed to the same regions; note that the location where we finally see the number of contributors fall off is also where we see the fraction of in situ star particles decrease in Figure 8. In the bottom panel of Figure 12, we plot the median fraction of accreted mass within an annulus that originates from the dominant contributor to that annu Figure 11: Surface brightness maps of each FOGGIE halo divided into \(g\)-band light from ancient accretion events (\(t<3\) Gyr; _left_) and light from more recent accretion events (\(t>7\) Gyr; _right_), including satellite cores. Debris from early accretions tends to be centrally concentrated and well mixed while debris from more recently accreted satellites is more radially extended and is often still spatially distinct. lus (\(f_{\rm max,mass}\)), again based on 120 random orientations of each halo. The values in this panel are generally inversely proportional to those in the top panel, since more contributors to a given annulus means that each individual galaxy contributes a smaller fraction of the mass. Accordingly, \(f_{\rm max,mass}\) is typically low (\(\sim 0.2\)) at the center of the halo and increases towards 1 in the outskirts. 
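Both panels of Figure 12 come from the same per-annulus bookkeeping: count the distinct contributors represented in each projected radial bin and record the mass fraction supplied by the largest one. The sketch below is a single-orientation NumPy version of that calculation (the figure takes the median over 120 random orientations of each halo); the function and argument names are assumptions of ours.

```python
import numpy as np

def contributors_per_annulus(r_proj, mass, host_id, edges):
    """Per-annulus contributor statistics for one projection.

    Returns, for each annulus, the number of distinct galaxies contributing
    star particles and the fraction of the mass supplied by the single
    largest contributor (f_max,mass).
    """
    n_bins = len(edges) - 1
    n_contrib = np.zeros(n_bins, dtype=int)
    f_max = np.full(n_bins, np.nan)
    which = np.digitize(r_proj, edges) - 1           # annulus index of each particle
    for i in range(n_bins):
        sel = which == i
        if not sel.any():
            continue
        hosts, inverse = np.unique(host_id[sel], return_inverse=True)
        per_host = np.bincount(inverse, weights=mass[sel])
        n_contrib[i] = len(hosts)
        f_max[i] = per_host.max() / per_host.sum()
    return n_contrib, f_max
```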
However, this is not absolute: Blizzard and Maelstrom have nearly identical distributions of contributors, but have opposite trends in fraction of mass contributed for \(r>200\,\)kpc, while Tempest and Hurricane have nearly identical \(f_{\rm max,mass}\) profiles, but vastly different numbers of contributors to their stellar halos. Taken together, the top and bottom panels of Figure 12 show the dual roles that accretion time and dynamical friction play in the location of debris from infalling satellites. Low mass satellites are less affected by dynamical friction and can therefore deposit their debris at large radii even at fairly early times, but they are unlikely to dominate the mass budget in the outskirts if a more massive satellite has fallen in more recently. Maelstrom is clearly illustrative of this: the mass in its outskirts is almost exclusively from the recent accretion of the LMC-mass satellite that we see debris from in the right-hand panel of Figure 11. Blizzard has two fairly significant contributors to its outskirts, leading to lower \(f_{\rm max,mass}\) values, while both Tempest and Hurricane have a larger number of more equal-mass contributors at large radii, resulting in relatively low \(f_{\rm max,mass}\), even at r=300 kpc. ## 6 Implications for Future Wide-Field Surveys While we have been able to learn a lot about the merger histories and past dwarf companions of the Milky Way and M31 by studying their stellar halos in depth, stellar halo studies outside of the Local Group have typically been limited to either pencil-beam surveys with telescopes like _HST_ (e.g., Mouhcine et al. 2005a; Harmsen et al. 2017) or low resolution integrated light studies with instruments like the Dragonfly Telephoto Array (e.g., Merritt et al. 2016; Gilhuly et al. 2022). However, over the next decade, the astronomical community will commission a number of instruments that will combine wide fields-of-view with high resolution, making it possible for us to study large numbers of stellar halos in considerable detail for the first time. In this section, we explore how we can apply our findings to future stellar halo observations with a focus on three of these new observatories: the Vera C. Rubin Observatory, _Euclid_, and the _Nancy Grace Roman Space Telescope_. ### Integrated Light We will start in the limit of unresolved stars, where we observe stellar halos in integrated light. In Figure 13, we show surface brightness maps of Hurricane using the filters and surface brightness limits of different surveys. Each map is constructed in the same way as Figure 5: Hurricane is oriented such that its central disk is edge-on and the luminosity of each star particle in a given filter is calculated with MIST models and FSPS. Any pixels with surface brightness values fainter than the 3 \(\sigma\) 10\(\times\)10 Figure 12: _Top_: The median number of dwarfs that contribute star particles to a given annulus within a stellar halo as a function of radius. _Bottom_: The median fraction of accreted mass within an annulus that comes from the dominant dwarf contributor to that annulus. Many dwarfs contribute to the stellar halo at low galactocentric distances, but the outskirts of halos tend to be dominated by a small number of dwarfs. arcsec\({}^{2}\) surface brightness limit of the survey are shown in black to indicate that the light they contain would not be detected. Five of the six maps we show are for future surveys. 
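A simplified version of the map construction is sketched below: the per-particle band luminosities (computed in the paper with MIST and FSPS, and taken as given here) are binned onto a projected grid, converted to mag arcsec\({}^{-2}\) through the standard relation \(\mu=M_{\odot,\rm band}+21.572-2.5\log_{10}\Sigma\) with \(\Sigma\) in L\({}_{\odot}\,\)pc\({}^{-2}\), and pixels fainter than the survey limit are masked. Applying the quoted 3\(\sigma\), 10\(\times\)10 arcsec\({}^{2}\) limits as a per-pixel cut, the 1 kpc pixel scale, and the adopted solar absolute magnitude are all simplifying assumptions of ours.

```python
import numpy as np

def sb_map(x, y, lum, extent=300.0, pix=1.0, M_sun_band=5.11):
    """Surface brightness map in mag arcsec^-2 from projected star particles.

    x, y : projected positions [kpc];  lum : band luminosities [L_sun]
    M_sun_band : absolute magnitude of the Sun in the band (assumed value).
    """
    edges = np.arange(-extent, extent + pix, pix)
    img, _, _ = np.histogram2d(x, y, bins=[edges, edges], weights=lum)
    sigma = img / (pix * 1.0e3) ** 2                 # L_sun per pc^2
    with np.errstate(divide="ignore"):
        mu = M_sun_band + 21.572 - 2.5 * np.log10(sigma)
    return mu                                        # larger mu = fainter pixel

def apply_survey_limit(mu, mu_lim):
    """Blank out pixels fainter than a survey's surface brightness limit."""
    return np.where(mu <= mu_lim, mu, np.nan)
```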
However, in the upper left, we also include a mock SDSS map for comparison, using its \(g\)-band filter and a surface brightness limit of 27 mag arcsec\({}^{-2}\)(Pohlen and Trujillo, 2006). The other maps use the parameters of the Rubin Observatory's Legacy Survey of Space and Time (LSST) in the \(g\)-band filter (\(\mu_{\rm lim}=29\) mag arcsec\({}^{-2}\) for 1-year data, and \(\mu_{\rm lim}=30.3\) mag arcsec\({}^{-2}\) for 10-year data; Yoachim, 2022), _Euclid's_ Wide and Deep Surveys in the VIS filter (\(\mu_{\rm lim}=29.5\) & 31.5 mag arcsec\({}^{-2}\), respectively; Euclid Collaboration et al., 2022), and _Roman_'s High Latitude Wide Area Survey (HLWAS) in the F129 filter (\(\mu_{\rm lim}=30.5\) mag arcsec\({}^{-2}\); Martinez-Delgado et al., 2023; Montes et al., 2023). While the SDSS map shows only the central disk of Hurricane and the bright cores of its satellites, LSST and _Euclid_'s Wide Survey reveal Hurricane's inner halo and slight asymmetries in the outskirts of its satellites that might indicate tidal disruption. Shipp et al. (2023) similarly find that many of the satellites of Milky Way-like galaxies in the FIRE simulations have tidal tails Figure 13: Surface brightness maps of the FOGGIE galaxy Hurricane made using the filters and surface brightness limits of different surveys. The relevant survey and filter are listed in the bottom right corner of each image and the values and sources of the surface brightness limits are the same as those listed in Figure 5. Any pixel with a surface brightness below the detection limit of the survey in question is shown in black. Although much of the stellar halo is still too low surface brightness to be detected, future wide-field surveys—particularly _Roman_’s HLWAS and _Euclid_’s Deep Survey—will allow us to probe stellar halos in far greater numbers than ever before. that would have gone undetected by previous surveys, but which should be detectable with Rubin and _Euclid_. Both _Roman_'s HLWAS and _Euclid_'s Deep Survey probe Hurricane's halo out to \(\sim 100\) kpc and detect multiple stellar streams. If we compare Figure 13 to the deeper map of Hurricane in Figure 5, we can see that much of the halo still lies below the detection limits of all of these surveys; observing the full complexity of the halo requires deeper observations (\(\mu>33\) mag arcsec\({}^{-2}\)). However, the greater depth of these surveys--particularly those that reach \(\mu>\)30 mag arcsec\({}^{-2}\)--will yield considerable information about the infall times and orbits of the many dwarfs that contribute to stellar halos and therefore about the accretion histories of these systems. ### Resolved Stellar Populations In the more local universe, we will be able to use wide-field surveys to resolve individual stars in stellar halos in far greater numbers than ever before. Previous resolved star surveys of stellar halos beyond the Local Group have typically been limited to small fields because a telescope like _HST_ was required to detect and resolve the faint halo population. However, high-resolution wide-field telescopes will enable us to acquire detailed panoramic data more akin to what surveys like PAndAS have achieved with M31's halo. Because they are space-based, both _Roman_ and _Euclid_ are expected to have better galaxy-star separation than Rubin, which is limited by atmospheric seeing (although see, e.g., Mutlu-Pakdil et al., 2021; Martin et al., 2022). 
However, while _Euclid_'s resolution in its optical bands is very similar to that of _Roman_, its NIR resolution is slightly worse. This is particularly relevant for older stellar populations, like those that make up the vast majority of the stellar halo, because their SEDs peak in the NIR (e.g., Martinez-Delgado et al., 2023), making them easier to observe. We can see this effect in Figure 13 when comparing the surface brightness map in _Euclid_'s VIS Deep Survey to that in _Roman_'s F129 filter. Although the VIS Deep Survey is expected to probe 1 mag arcsec\({}^{-2}\) deeper than _Roman_'s HLWAS, the two maps are almost identical. This occurs, not because there are few features with surface brightnesses in between 30.5 and 31.5 mag arcsec\({}^{-2}\), but because the stellar populations that make up the stellar halo are brighter in the NIR. A _Roman_ F129 surface brightness map with \(\mu_{\rm lim}=31.5\) mag arcsec\({}^{-2}\) shows at least two streams that are not visible in the _Euclid_ VIS Deep map. We will therefore primarily focus on _Roman_ for this part of the discussion, although it is worth noting that, if the footprint of the _Roman_ HLWAS overlaps with LSST or _Euclid_'s surveys, we will likely be able to glean more information about observed stellar populations than we could get from any individual survey (e.g., Eifler et al., 2021; Gezari et al., 2022). We will also primarily be focusing on information that can be derived from the positions of stars and their luminosities in various filters. Although _Roman_ is expected to yield some kinematic data for stars in the nearby universe (WFIRST Astrometry Working Group et al., 2019), we expect to get far more precise data with the upcoming thirty meter-class telescopes (e.g., the Giant Magellan Telescope; Johns et al., 2012). We therefore leave kinematic-based diagnostics (including orbit characterization) for a future paper. However, stellar positions and luminosities can provide a significant amount of information on their own through color-magnitude diagrams (CMDs). With deep enough CMDs, we may be able to derive ages, metallicities, and star formation histories for stellar halos. While this data provides useful information even if we can only come up with global values for the stellar halo (see, e.g., Figure 7), we may also be able to use it to identify debris from different contributors and infer the properties of their progenitors. In Figure 14, we explore how the ages of stars vary within individual stellar halos. In the top left panel, we show Tempest's stellar halo, with each \((1.5\,{\rm kpc})^{2}\) pixel colored by the width of the age interquartile range of the stellar halo star particles that fall within it. As we noted in Figure 1, the bulk of Tempest's stellar halo is very old (age\(>\)10 Gyr) with variations on the order of \(\pm\)1 Gyr. As observational uncertainties concerning the age of a star correlate with the value of that age (e.g., Weisz et al., 2011), we cannot expect to be able to reliably distinguish a 10 Gyr old star from an 11 Gyr old one, so this offers little extra information. However, there are two wedge-shaped regions close to the center of the stellar halo in which the age interquartile range is \(\sim\)6 Gyr. To determine whether this difference in age is detectable, we show representative theoretical isochrones from MIST in _Roman_ filters in the adjacent panel. 
In orange, we show the isochrone for a stellar population with an age of 5 Gyr--the median age of the star particles that make up these structures--and in purple we show the isochrone for a stellar population with an age of 11 Gyr--the median age of the underlying stellar halo. The separation between the two isochrones in the regions surrounding the giant branch and the main sequence turn-off are considerable and it is therefore likely that the younger stars populating these regions could be identified as belonging to a distinct structure--in this case debris from a late infalling satellite. The expected 5\(\sigma\) depth of _Roman_'s HLWAS is \(\sim\,26.5\)(Montes et al., 2023)--shown as a dashed line on the CMD at the distance of M81--so we will not detect the main sequence turn-off for distances greater than \(\approx\)1 Mpc. We note, though, that deeper measurements are possible with General Observer (GO) programs. Additionally, the HLWAS is expected to resolve individual RGB stars out to \(\sim\)10 Mpc (Lancaster et al., 2022), with the tip of the red giant branch and bright asymptotic giant branch (AGB) stars distinguishable to even greater distances. Harmsen et al. (2023) show that the ratio of AGB to RGB stars alone can be used to constrain the age of a population in a stellar halo. It is therefore possible that structures in age-space will prove to be a key component of _Roman_'s survey of stellar halos throughout the Local Volume. In the bottom panels of Figure 14, we show the age interquartile ranges for the other four halos. Although Blizzard and Squall have nearly uniformly old stellar halos, Maelstrom and Hurricane both have structures with diverse enough ages that they are likely to be detectable within a CMD. Like Tempest, Maelstrom has a single dwarf contributor that produces most of this structure, but Hurricane's age diversity comes from at least 4 different recently accreted dwarfs. By combining these anomalies in age-space with position data, we can pick out structures that likely originated from the same dwarf contributor and potentially reconstruct the infall time, orbit, and star formation history of that dwarf. In regions where there are no discernible structures in age-space, we may be compelled to rely largely on position data. Fortunately, certain types of debris, like streams, can remain intact and spatially distinct for long periods of time (e.g., Johnston et al., 1996; Pearson et al., 2015). While many of the stream-finding algorithms that have been developed for use with _Gaia_ data are designed to take advantage of the 6D phase-space information available in much of the Milky Way's stellar halo (e.g., Malhan and Ibata, 2018; Shih et al., 2022), other algorithms, such as the Hough Stream Spotter(Pearson et al., 2019, 2022), work by looking for linear structures in stellar position data. Such streamfinders can (and have) been used to identify structures in stellar halos around external galaxies and will likely prove to be crucial in the quest to disentangle structures in distant stellar halos. Even when debris does not form structures that look like streams along a given line-of-sight, stars from indi Figure 14: _Top_: In the left panel, we show Tempest’s stellar halo, colored by the width of the age interquartile range of the star particles that contribute to each \((1.5\,\mathrm{kpc})^{2}\) pixel. Gray pixels contain no star particles. While the majority of the halo is uniformly old, there is a clear structure populated by younger stars. 
In the right panel, we show theoretical isochrones for this younger population (orange) and the older population (purple) that makes up the bulk of the halo. The giant branches and main sequence turn-offs are distinguishable. _Bottom_: Age interquartile ranges for the other four halos. While Squall and Blizzard are almost uniformly old, Maelstrom and Hurricane both have structures with noticeable variations in age. vidual satellites--particularly from either low mass or recently accreted objects--are often still spatially distinct. In Figure 15, we show examples of debris from three different dwarf galaxies that contribute to Tempest's stellar halo. Each panel shows the same line-of-sight on the entire \(z=0\) halo and each \((1.5\,\mathrm{kpc})^{2}\) pixel is colored by the fraction of halo mass within it that comes from star particles that formed in each individual dwarf. The dwarfs are arranged in order of latest to earliest infall time. Even though Dwarf 2 was accreted \(\approx\)9 Gyr ago, much of its debris is still thin and structured enough to be identified either by eye or with a traditional stream-finding algorithm. By contrast, the debris from Dwarfs 1 and 3 appears to be relatively diffuse and would likely be difficult to disentangle with typical stream-finders. Dwarf 3 is a fairly massive dwarf that was accreted \(>\)10 Gyr ago and is therefore largely phase-mixed, but Dwarf 1 fell in quite recently (\(\approx\)4 Gyr ago) and appears more structured when viewed from other lines-of-sight. However, much of the debris from these dwarfs is still spatially clustered, to the extent that there are regions, particularly in the outskirts of Tempest, where the stellar halo is dominated by debris from just one of them. This is consistent with results from Font et al. (2008), who find that bright features in the outer halos of the Bullock and Johnston (2005) simulations tend to originate from a single dwarf. We have already demonstrated the clustering of debris in halo outskirts, in a more general sense, in the bottom panel of Figure 12: three of the five FOGGIE stellar halos get more than 50% of their mass within a given annulus from a single contributor at most galactocentric distances beyond 100 kpc. However, looking at Figure 15, we can see that Figure 12 fails to take into account how azimuthally clustered much of the debris from a single dwarf may be. The logical next step is to develop a more observationally-motivated version of Figure 12, which uses the 2D positions of star particles (as they appear when projected onto an arbitrary plane) to determine how large of an area is typically dominated by debris from a single dwarf - an "optimal search area". We could then potentially use this information to place statistical constraints on the properties of the dwarfs that contributed to an observed stellar halo, even in regions where we cannot pick out individual structures in either age- or position-space. This sort of measurement would, however, be impacted by the use of discrete star particles in our simulations. We assign a single position and velocity to each star particle, but, in truth, each star that a particle represents should be spread over a phase-space volume, the size of which is likely impacted by a number of factors. Figure 15: Fraction of mass contributed to each \((1.5\,\mathrm{kpc})^{2}\) pixel by three different dwarfs along the same line of sight in Tempest's stellar halo. Gray pixels contain no star particles. Although most of the debris does not form linear structures from this line-of-sight and is therefore unlikely to be identified with a stream-finding algorithm, it still remains clumped together, particularly in the outskirts of the stellar halo.
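The per-dwarf maps in Figure 15 amount to a weighted two-dimensional histogram of the projected star-particle positions. The sketch below is a minimal NumPy version of that binning, with the grid size and names chosen by us rather than taken from the FOGGIE analysis code; swapping the mass-fraction statistic for the interquartile range of stellar ages in each pixel yields maps like those in Figure 14.

```python
import numpy as np

def dwarf_fraction_map(x, y, mass, host_id, target_host, extent=300.0, pix=1.5):
    """Fraction of the stellar halo mass in each (pix x pix) kpc^2 pixel that
    formed in a single chosen dwarf.
    """
    edges = np.arange(-extent, extent + pix, pix)
    m_all, _, _ = np.histogram2d(x, y, bins=[edges, edges], weights=mass)
    sel = host_id == target_host
    m_one, _, _ = np.histogram2d(x[sel], y[sel], bins=[edges, edges],
                                 weights=mass[sel])
    with np.errstate(invalid="ignore"):
        return np.where(m_all > 0, m_one / m_all, np.nan)   # empty pixels -> NaN
```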
In the next phase of this research, we plan to generate more realistic synthetic data by using software like ananke (Sanderson et al., 2020) and GalaxyFlow (Lim et al., 2022) to better model the small-scale structure of our stellar halos so that we can more effectively test the efficacy of tools like streamfinders on simulated data and use the FOGGIE simulations to identify optimal search areas for telescopes like _Roman_. ## 7 Summary We use the FOGGIE suite, a set of high-resolution cosmological simulations of Milky Way-like galaxies, to study the properties of stellar halos and the galaxies that they are built from. We summarize our primary findings below: * The masses, surface brightness profiles, and metallicity/color gradients of the FOGGIE stellar halos are generally consistent with those of observed stellar halos. We see only slight evidence that the FOGGIE stellar halos may have excess light in their outskirts, like that which has been found in a number of other simulations (e.g., Merritt et al., 2020; Keller, 2022). We largely attribute this to the relatively compact in situ populations of the FOGGIE stellar halos. * Although the FOGGIE simulations were selected to cover a small range of virial mass and to have somewhat similar merger histories, their stellar halos have diverse properties. They vary considerably in appearance--from the stream- and shell-dominated halos of Hurricane and Maelstrom to the relatively smooth halos of Blizzard and Squall--and range over \(\approx\)3 dex in surface brightness at any given radius. * While the majority of the stars that make up the FOGGIE stellar halos originate in dwarf galaxies disrupted by the more massive central, 30-40% are formed either in the central disk or in the halo itself. Although the overall mass fraction of in situ stars is consistent with other simulations, in situ populations in the FOGGIE simulations tend to be more centrally concentrated, in line with recent observations of the Milky Way (e.g., Naidu et al., 2020). This difference appears to be due to a combination of the high temporal resolution and conservative star formation and feedback prescriptions employed in the FOGGIE simulations and the quiescent merger histories that characterize most of the galaxies. * The only FOGGIE galaxy that experiences a late major merger (Squall) also has a more extended in situ population and sits higher on the M\({}_{\rm SH}\)-metallicity relation than the other galaxies. High stellar halo metallicity may therefore be an indication that a galaxy has experienced a major merger at \(z<1\), even when the disk and stellar halo appear undisturbed at \(z=0\). * Each FOGGIE stellar halo contains stars that originally formed in 14 (Squall) to 48 (Hurricane) other galaxies and the majority of the mass contributed by these galaxies is accreted prior to \(z=1\). The more massive FOGGIE galaxies tend to build up their accreted mass more quickly than the lower mass FOGGIE galaxies, and the bulk of this mass comes from fewer, more massive galaxies.
However, the five most massive accreted objects contribute the majority (60-90%) of the accreted mass in all five of the FOGGIE stellar halos. * The number of contributors to a stellar halo, the masses of those contributors, and the times at which they are accreted all play a significant role in the composition of a stellar halo at any given radius. The FOGGIE stellar halos tend to have the most contributors at small galactocentric distances and do not receive more than \(\approx\)20% of their accreted mass from any individual contributor within this region. Beyond the phase-mixed inner halo, however, the number of contributors drops off and the accreted mass fraction is dominated by a single contributor at galactocentric distances \(>\)100 kpc in three of the five halos. The remaining two halos have a large number of more equal-mass contributors at large galactocentric distances. * Future surveys by high-resolution wide-field telescopes like Rubin, _Roman_, and _Euclid_ will probe the outskirts of large numbers of galaxies to much greater depths than previous large surveys, allowing us to study the stellar halos of far more galaxies than ever before. Mock observations of the FOGGIE galaxies based on _Roman_'s HLWAS and _Euclid_'s Deep Survey suggest that these surveys will detect stellar halos out to \(\approx\)100 kpc and identify stellar streams at even larger galactocentric distances in integrated light. * Three of the five FOGGIE stellar halos contain structures with sufficiently diverse ages that stars belonging to them should be identifiable in CMDs made with _Roman_ throughout the Local Volume. The methods that we have explored in this paper for disassembling stellar halos in order to better understand their contributors and overall histories are truly just the tip of the iceberg. High-resolution wide-field observatories like Rubin, _Euclid_, and _Roman_ will open up an entirely new parameter space for understanding stellar halos and the dwarf galaxies that create them. Resolving stellar populations in external galaxies is a particularly promising avenue and we are working on creating more realistic synthetic data from the FOGGIE stellar halos in order to prepare for the multitude of data that will soon be available to the astronomical community. The authors thank Sebastian Gomez, Ayan Acharyya, Karoline Gilbert, Eric Bell, Sarah Loebman, Claire Kopenhafer, Erik Tollerud, Marla Geha, Jillian Bellovary, and Ferah Munshi for encouragement and useful discussions related to this work. During the course of this work ACW and JT were supported by the _Nancy Grace Roman Space Telescope_ Project, under the Milky Way Science Investigation Team. RA, CL, and MSP were supported for this work in part by NASA via an Astrophysics Theory Program grant 80NSSC18K1105. RA and CL also acknowledge financial support from the STScI Director's Discretionary Research Fund (DDRF). BWO acknowledges support from NSF grants #1908109 and #2106575 and NASA ATP grants NNX15AP39G and 80NSSC18K1105. RA's efforts for this work were additionally supported by HST GO #16730. NB acknowledges support from the 2022 STScI Space Astronomy Summer Program. BDS is supported by Science and Technology Facilities Council Consolidated Grant RA5496. Computations described in this work were performed using the publicly-available Enzo code ([http://enzo-project.org](http://enzo-project.org)), which is the product of a collaborative effort of many independent scientists from numerous institutions around the world. 
Their commitment to open science has helped make this work possible. The python packages matplotlib(Hunter, 2007), numpy(Walt et al., 2011), tangos(Pontzen and Tremmel, 2018), scipy(Virtanen et al., 2020), yt(Turk et al., 2011), datashader(Bednar et al., 2022), and Astropy(Astropy Collaboration et al., 2013, 2018, 2022) were all used in parts of this analysis. Resources supporting this work were provided by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center and were sponsored by NASA's Science Mission Directorate; we are grateful for the superb user-support provided by NAS. ## Data Availability These results were generated from the FOGGIE cosmological simulation suite. Tangos databases containing the global properties of each galaxy in each snapshot are available upon email request.
2309.07545
DBLPLink: An Entity Linker for the DBLP Scholarly Knowledge Graph
In this work, we present a web application named DBLPLink, which performs entity linking over the DBLP scholarly knowledge graph. DBLPLink uses text-to-text pre-trained language models, such as T5, to produce entity label spans from an input text question. Entity candidates are fetched from a database based on the labels, and an entity re-ranker sorts them based on entity embeddings, such as TransE, DistMult and ComplEx. The results are displayed so that users may compare and contrast the results between T5-small, T5-base and the different KG embeddings used. The demo can be accessed at https://ltdemos.informatik.uni-hamburg.de/dblplink/.
Debayan Banerjee, Arefa, Ricardo Usbeck, Chris Biemann
2023-09-14T09:15:36Z
http://arxiv.org/abs/2309.07545v2
# DBLPLink: An Entity Linker for the DBLP Scholarly Knowledge Graph ###### Abstract In this work, we present a web application named DBLPLink, which performs entity linking over the DBLP scholarly knowledge graph. DBLPLink uses text-to-text pre-trained language models, such as T5, to produce entity label spans from an input text question. Entity candidates are fetched from a database based on the labels, and an entity re-ranker sorts them based on entity embeddings, such as TransE, DistMult and ComplEx. The results are displayed so that users may compare and contrast the results between T5-small, T5-base and the different KG embeddings used. The demo can be accessed at [https://ltdemos.informatik.uni-hamburg.de/dblplink/](https://ltdemos.informatik.uni-hamburg.de/dblplink/). Code and data shall be made available at [https://github.com/uhh-lt/dblplink](https://github.com/uhh-lt/dblplink). ## 1 Introduction and Related Work Entity Linking (EL) is a natural language processing (NLP) task that involves associating named entities mentioned in text to their corresponding unique identifiers in a knowledge graph (KG). For example, in the question: _Who is the president of USA?_, the named entity span of _USA_ has to be linked to the unique identifier Q30 in the Wikidata KG [1]. Several entity linkers exist [2] over general purpose KGs such as Wikidata, and more specialized KGs, such as bio-medical [3] or financial KGs [4]; however, to the best of our knowledge, no working entity linker exists for scholarly KGs. Footnote 1: [https://www.wikidata.org/wiki/Q30](https://www.wikidata.org/wiki/Q30) Footnote 2: [http://openalex.org/](http://openalex.org/) A scholarly KG is a special sub-class of KGs, which contains bibliographic information about research publications, authors, institutions etc. Some well-known scholarly KGs are the OpenAlex2, ORKG3 and DBLP4. In this work, we focus on the DBLP KG, which caters specifically to computer science, and as a result, is smaller in size than other scholarly KGs. DBLP, which used to stand for Data Bases and Logic Programming5, was created in 1993 by Michael Ley at the University of Trier, Germany [5]. At the time of its release6, the RDF dump consisted of 2,941,316 person entities, 6,010,605 publication entities, and 252,573,199 RDF triples. Footnote 3: [https://orkg.org/](https://orkg.org/) Footnote 4: [https://dblp.org/](https://dblp.org/) Footnote 5: [https://en.wikipedia.org/wiki/DBLP](https://en.wikipedia.org/wiki/DBLP) Footnote 6: [https://blog.dblp.org/2022/03/02/dblp-in-rdf/](https://blog.dblp.org/2022/03/02/dblp-in-rdf/) DBLPLink can handle simple and complex questions pertaining to authorship, venues, institutions and other information available in the DBLP KG. ## 2 Web Interface As shown in Figure 1, the UI consists of three main parts. In **Section A**, the user can either type a question as input or select a question from the drop-down menu. Further, the user can select which model to use for label span detection, and which embeddings to use for re-ranking of entities. In **Section B**, the results of DBLPLink are displayed. First, the top-ranked entity for each detected span is displayed, with a corresponding label and type from the DBLP KG. A hyperlink to the entity, which points to the original DBLP entity web page is also shown. Additionally, a distance metric is shown which denotes how close a match this entity is to the input question. A lower distance means a better match.
Towards the bottom of the UI, we can briefly see collapsible boxes called "Ranked Entities", which further display the top 10 ranked entities for each of the detected label spans. Lastly, in **Section C**, the user has an option to remove certain combinations of results from the screen, if the UI becomes too cluttered. Our expectation is that the user shall try multiple combinations of T5 and entity embeddings to compare and contrast the results, which may need occasional cleanup from the UI. ## 3 Architecture ### Label and Type Generation As seen in Figure 2, the first step is to produce salient labels and types from the given input question. For this purpose, we use the DBLP-QuAD [6] dataset to fine-tune T5-small and T5-base [7] models, on the task of producing entity labels and types from the input question. Figure 1: User interface of DBLPLink. The question reads: “Who were the co-authors of Ashish Vaswani in the paper ’Attention is all you need’?” ### Candidate Generation With the entity labels and types produced in the previous step, a free-text-search is performed on an Elasticsearch7 instance, which contains entity URLs with their corresponding labels. The results are further filtered by the types. This gives us a list of candidate entities. In normal operation of the demo application, we present the top-ranked candidate as the final linked entity. We only proceed to the disambiguation stage if the top entity candidate has a label, that is the same as another entity in the candidate list. Footnote 7: [https://www.elastic.co/](https://www.elastic.co/) ### Disambiguation In case two entities in the candidate list share the same label, we proceed with disambiguation, which requires a further re-ranking of the candidate list. For this, we follow a common approach of using Siamese neural networks [8] for learning text similarity between text pairs [9]. We embed the input question and the candidate entities in a common embedding space. For this purpose, we create a 969-dimensional embedding, where for a given question, we use the first 768 dimensions for the BERT embedding. We fill the remaining 201 dimensions with zeros. For the entity candidates, we fill the first 768 dimensions with the BERT embedding of the entity label, while the next 200 dimensions are reserved for the entity embeddings. We use three different kinds of embeddings in our experiments, namely TransE [10], ComplEx [11], and DistMult [12]. For the remaining 969th dimension, we store the degree of string similarity match between the entity label and the input question. For training, pairs of positive and negative samples are used with a triplet ranking loss function and L2 distance metric. During inference, a question and an entity candidate are vectorised and passed through the trained Siamese network. The cosine distance between the two resulting embeddings is computed, and the pair with the lowest distance is considered the most suitable match. ## 4 Evaluation We evaluate our entity linker on the 2.000 questions of the test split of the DBLP-QuAD dataset and measure the F1-score. In Table 1, under the heading 'Label Sorting', we consider the top Figure 2: Architecture of DBLPLink. ranked candidate after the label sorting phase as the linked entity. We perform no further disambiguation. Under the 'conditional-disambiguation' setting, we perform disambiguation only if two entities in the candidate list share the same label. 
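To make the vector layout above concrete, the sketch below assembles the 969-dimensional question and entity representations and orders candidates by cosine distance. It assumes the 768-dimensional BERT vectors and 200-dimensional KG embeddings are precomputed, uses Python's SequenceMatcher as a stand-in for the unspecified string-similarity measure, and omits the trained Siamese network, so it illustrates the feature layout rather than the DBLPLink implementation itself.

```python
import numpy as np
from difflib import SequenceMatcher

def question_vector(q_bert):
    """969-d question vector: BERT embedding (768) padded with 201 zeros."""
    return np.concatenate([q_bert, np.zeros(201)])

def entity_vector(label_bert, kg_emb, label, question):
    """969-d entity vector: BERT(label) (768) + KG embedding (200) + string match (1).

    SequenceMatcher is an assumed stand-in for the string-similarity score.
    """
    sim = SequenceMatcher(None, label.lower(), question.lower()).ratio()
    return np.concatenate([label_bert, kg_emb, [sim]])

def rank_candidates(q_vec, cand_vecs):
    """Order candidates by cosine distance to the question (smallest first).

    DBLPLink first maps both vectors through the trained Siamese network;
    here the raw feature vectors are compared for illustration only.
    """
    def cos_dist(a, b):
        return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    dists = np.array([cos_dist(q_vec, c) for c in cand_vecs])
    return np.argsort(dists), dists
```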
Under the 'hard-disambiguation' setting, re-ranking based on Siamese network cosine distances is always run after the candidate generation phase, essentially ignoring the label sorting order. We see that hard-disambiguation lags behind significantly in performance when compared to plain label sorting, which points to the learning that for DBLP KG, degree of string match of an author or a publication is more important than the KG embeddings. Based on this finding, we allow the web application to run in 'conditional-disambiguation' mode for better performance. In the case of conditional disambiguation, performance is marginally better when using TransE and DistMult when compared to label sorting, because not many cases of ambiguous labels exist in the DBLP-QuAD test set. However, it is evident from the hard disambiguation case, that DistMult performs the best on a pure disambiguation task. This may be explained by the inherent suitability of DistMult for 1-to-N relationships, which is close to the nature of the DBLP KG model, where one author may have several papers. On the contrary, TransE expects 1-to-1 relationships, while ComplEx works better for symmetric relationships. Another interesting outcome of the experiments is that the difference in parameter sizes of T5-small and T5-base does not produce any difference in performance. This may be explained by the fact that in the span label production task, much of the focus is on copying the right part of the input to the output. Since the learned knowledge of the model weights from the pre-training task is not being exploited, the larger size of T5-base does not seem to matter. ## 5 Conclusion In this work, we presented DBLPLink, which is a web-based demonstration of an entity linker over the DBLP scholarly KG. In the future, we would like to add further interactivity to the UI where users can provide feedback on quality of the results. Additionally, a conversational interface for question answering would be desirable for question answering tasks, and we would like to build it in a future version. ## 6 Acknowledgements This research is performed as a part of the ARDIAS project, funded by the "Idea and Venture Fund" research grant by Universitat Hamburg, which is part of the Excellence Strategy of the Federal and State Governments. This work has additionally received funding through the German Research Foundation (DFG) project NFDI4DS (no. 460234259). \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|} \hline & & \multicolumn{3}{c|}{conditional-disambiguation} & \multicolumn{3}{c|}{hard-disambiguation} \\ \hline & Label Sorting & TransE & ComplEx & DistMult & TransE & ComplEx & DistMult \\ \hline T5-small & 0.698 & 0.700 & 0.692 & 0.699 & 0.511 & 0.482 & 0.537 \\ T5-base & 0.698 & **0.701** & 0.692 & **0.701** & 0.521 & 0.484 & 0.547 \\ \hline \end{tabular} \end{table} Table 1: F1-scores for the entity linking task across different combinations of span detector and entity re-ranker
2308.00103
A Unified Treatment of Kepler Occurrence to Trace Planet Evolution I: Methodology
We present Kepler exoplanet occurrence rates for planets between $0.5-16$ R$_\oplus$ and between $1-400$ days. To measure occurrence, we use a non-parametric method via a kernel density estimator and use bootstrap random sampling for uncertainty estimation. We use a full characterization of completeness and reliability measurements from the Kepler DR25 catalog, including detection efficiency, vetting completeness, astrophysical- and false alarm reliability. We also include more accurate and homogeneous stellar radii from Gaia DR2. In order to see the impact of these final Kepler properties, we revisit benchmark exoplanet occurrence rate measurements from the literature. We compare our measurements with previous studies to both validate our method and observe the dependence of these benchmarks on updated stellar and planet properties. For FGK stars, between $0.5-16$ R$_\oplus$ and between $1-400$ days, we find an occurrence of $1.52\pm0.08$ planets per star. We investigate the dependence of occurrence as a function of radius, orbital period, and stellar type and compare with previous studies with excellent agreement. We measure the minimum of the radius valley to be $1.78^{+0.14}_{-0.16}$ R$_\oplus$ for FGK stars and find it to move to smaller radii for cooler stars. We also present new measurements of the slope of the occurrence cliff at $3-4$ R$_\oplus$, and find that the cliff becomes less steep at long orbital period. Our methodology will enable us to constrain theoretical models of planet formation and evolution in the future.
Anne Dattilo, Natalie M. Batalha, Steve Bryson
2023-07-31T19:26:18Z
http://arxiv.org/abs/2308.00103v1
# A Unified Treatment of Kepler Occurrence to Trace Planet Evolution I: Methodology ###### Abstract We present _Kepler_ exoplanet occurrence rates for planets between \(0.5-16\) R\({}_{\oplus}\) and between \(1-400\) days. To measure occurrence, we use a non-parametric method via a kernel density estimator and use bootstrap random sampling for uncertainty estimation. We use a full characterization of completeness and reliability measurements from the _Kepler_ DR25 catalog, including detection efficiency, vetting completeness, astrophysical- and false alarm reliability. We also include more accurate and homogeneous stellar radii from _Gaia_ DR2. In order to see the impact of these final _Kepler_ properties, we revisit benchmark exoplanet occurrence rate measurements from the literature. We compare our measurements with previous studies to both validate our method and observe the dependence of these benchmarks on updated stellar and planet properties. For FGK stars, between \(0.5-16\) R\({}_{\oplus}\) and between \(1-400\) days, we find an occurrence of \(1.52\pm 0.08\) planets per star. We investigate the dependence of occurrence as a function of radius, orbital period, and stellar type and compare with previous studies with excellent agreement. We measure the minimum of the radius valley to be \(1.78^{+0.14}_{-0.16}\) R\({}_{\oplus}\) for FGK stars and find it to move to smaller radii for cooler stars. We also present new measurements of the slope of the occurrence cliff at \(3-4\) R\({}_{\oplus}\), and find that the cliff becomes less steep at long orbital period. Our methodology will enable us to constrain theoretical models of planet formation and evolution in the future. Anne Dattilo, Natalie M. Batalha, Steve Bryson ## 1 Introduction _Kepler_'s foremost legacy has been enabling detailed exoplanet demographic studies. Launched in 2009, _Kepler_ was designed as a demographics mission to study the population of planets in our galaxy orbiting within 1 au of their host stars (Borucki et al., 2010). It also aimed to measure the frequency of Earth-like planets within their star's habitable zone. These small planets are difficult to detect around Sun-like stars due to both their small size and long orbital period. To detect these planets, _Kepler_ observed a single field continuously for almost four years. In order to enable studies of planetary demographics, the _Kepler_ Mission built a homogeneous planet catalog. A considerable amount of resources went into characterizing this catalog: both in the completeness (the fraction of planets that were correctly identified and vetted as planet candidates) and in the reliability (the fraction of planet candidates that are truly planets). The final uniform planet catalog, Data Release 25 (hereafter DR25), was released in 2018
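The occurrence methodology summarized above (a non-parametric, completeness- and reliability-weighted kernel density estimate with bootstrap uncertainties) can be sketched in a few lines. The snippet below is only a schematic under simplifying assumptions: a Gaussian KDE in log period and log radius, per-planet weights of the form reliability/completeness, and resampling of the planets alone; it is not the paper's actual pipeline, and the variable names are illustrative.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(42)

def occurrence_kde(log_period, log_radius, weights, n_stars, grid):
    # weighted KDE in (log P, log Rp); weights ~ reliability / completeness
    kde = gaussian_kde(np.vstack([log_period, log_radius]), weights=weights)
    # scale so the grid integral equals the weighted planet count per star
    return kde(grid) * weights.sum() / n_stars

def bootstrap_occurrence(log_period, log_radius, weights, n_stars, grid, n_boot=200):
    # bootstrap over the detected planets to propagate sampling uncertainty
    n = len(log_period)
    draws = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)        # resample planets with replacement
        draws.append(occurrence_kde(log_period[idx], log_radius[idx],
                                    weights[idx], n_stars, grid))
    draws = np.asarray(draws)
    return draws.mean(axis=0), draws.std(axis=0)   # central value and 1-sigma spread
```

Integrating the returned density over a box in (log P, log Rp) then gives the expected number of planets per star in that box, which is the kind of quantity quoted above for FGK stars.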
2309.13530
Singly Generated Radical Operator Algebras
We examine two nonselfadjoint operator algebras: the weighted shift algebra, and the Volterra operator algebra. In both cases, the operator algebra is the norm closure of the polynomials in the operator norm. In the case of the weighted shift algebra, the existence of a gauge action allows us to apply Fourier analysis to study the ideals of the algebra. In the case of the Volterra operator algebra, there is no gauge action, and other methods are needed to study the norm structure and the ideals.
Justin R. Peters
2023-09-24T02:41:28Z
http://arxiv.org/abs/2309.13530v3
# Singly generated radical operator algebras ###### Abstract. We examine two nonselfadjoint operator algebras: the weighted shift algebra, and the Volterra operator algebra. In both cases, the operator algebra is the norm closure of the polynomials in the operator norm. In the case of the weighted shift algebra, the existence of a gauge action allows us to apply Fourier analysis to study the ideals of the algebra. In the case of the Volterra operator algebra, there is no gauge action, and other methods are needed to study the norm structure and the ideals. Key words and phrases: operator algebra, C\({}^{*}\)-cover, completely isometric isomorphism, gauge automorphism, weighted shift operator, Volterra integral operator Here we consider commutative operator algebras, which need not be self-adjoint. In the case of semi-simple commutative operator algebras, the Gelfand theory provides a complete description. At the other extreme, there is at this point no comprehensive theory of commutative radical operator algebras. This paper deals primarily with two types of commutative radical operator algebras: namely, those generated by weighted shifts, and the one generated by the Volterra integral operator. Given a bounded linear operator \(T\) on a complex Hilbert space \(H\), there are various topologies in which one can take the closure of the polynomials in \(T\) to form an operator algebra. In this paper, we deal with the operator-norm closure. Thus, by the operator algebra \(\mathcal{A}_{T}\) we mean the operator-norm closure of the linear subspace of \(\mathcal{B}(H)\) generated by \(\{T,T^{2},T^{3},\dots\}\). If \(T\) is a bounded linear operator on the Hilbert space \(H\), then by definition the operator algebra \(\mathcal{A}_{T}\) is completely isometrically represented on \(H\), and C\({}^{*}(T)\), the C\({}^{*}\)-algebra generated by \(T\) in \(\mathcal{B}(H)\), is a C\({}^{*}\)-cover. The coordinate-free study of \(\mathcal{A}_{T}\) would include, say, the determination of the closed ideals of \(\mathcal{A}_{T}\), rather than the invariant subspaces arising from the action of \(\mathcal{A}_{T}\) on the Hilbert space \(H\), or, say, the existence of certain automorphisms, which is a property of the abstract operator algebra and not a particular representation. The coordinate-free study of operator algebras was stimulated by [1], which gave internal 'matrix-norm' conditions for a Banach algebra to be an operator algebra. Our approach here is necessarily a hybrid, as most properties of the operator algebra can only be deduced from the given representation, at least with the tools we have available. One automorphism that has proved fruitful in the C\({}^{*}\)-theory is the gauge automorphism. For example, this is useful in proving the simplicity of the Cuntz algebras \(\mathcal{O}_{n}\) (e.g., [3], Theorem V.4.6). However, gauge actions have been employed in nonselfadjoint operator algebras as well ([5]). For an operator \(T\in\mathcal{B}(H),\) we say that the operator algebra \(\mathcal{A}_{T}\) admits a gauge action if the map \(T\mapsto zT\) (\(|z|=1\)) extends to an isometric isomorphism of \(\mathcal{A}_{T}.\) (See Definition 1.) The existence of a gauge action on \(\mathcal{A}_{T}\) allows for the application of Fourier analysis on the elements \(S\in\mathcal{A}_{T},\) which in turn has applications to the ideal structure of the algebra. For some operators \(T\in\mathcal{B}(H)\) the associated operator algebra \(\mathcal{A}_{T}\) will admit a gauge action, while others will not.
We show that if \(T\) is a weighted shift operator, then \(\mathcal{A}_{T}\) admits a gauge action. However if \(V\) is the Volterra integral operator, then \(\mathcal{A}_{V}\) fails to admit a gauge action. This distinction implies that \(\mathcal{A}_{T}\) and \(\mathcal{A}_{V}\) are not isomorphic as operator algebras. (Corollary 6) Section 2 provides some background results regarding gauge actions and applications of gauge automorphisms to singly generated operator algebras \(\mathcal{A}_{T},\) and some basic examples of operators \(T\) for which the associated algebra \(\mathcal{A}_{T}\) either does, or does not, admit a gauge action. In Section 3 we consider operator algebras generated by weighted shift operators \(T.\) In addition to the operator norm on \(\mathcal{A}_{T},\) there is a norm arising from a cyclic and separating unit vector for \(\mathcal{A}_{T}.\) But in general, the Hilbert space norm and the operator norm are inequivalent. (Remark 4) But if the weight sequence is square summable, then the two norms are equivalent. (Proposition 6) Proposition 5 gives a sufficient condition for an element \(S\) in the unit ball of \(\mathcal{A}_{T}\) to be an extreme point. In particular, the normalized powers of \(T,\ T_{n}=\frac{1}{||T^{n}||}T^{n}\) are extreme points of the unit ball of \(\mathcal{A}_{T}.\) Under the same conditions on the weights, there is an isomorphism of the lattice of closed ideals of \(\mathcal{A}_{T}\) and closed invariant subspaces. (Proposition 7) We give two results describing which elements \(S\in\mathcal{A}_{T}\) generate gauge-invariant ideals. We conclude this section showing that the operator algebra \(\mathcal{A}_{T}\) is a (nonunital) integral domain. The final section of the paper deals with \(\mathcal{A}_{V},\) the operator algebra generated by the classical Volterra integral operator on \(L^{2}[0,1].\) The closure of the polynomials in \(V\) in the strong operator topology turns out to be the commutant of \(V,\) and hence corresponds also to the weak and weak\({}^{*}\) closed algebras generated by \(V.\) ([2] Theorem 5.10) Another weakly closed algebra associated with \(V\) is \(\operatorname{Alg}(\operatorname{Lat}V),\) the weakly closed algebra of operators in \(\mathcal{B}(L^{2}[0,1])\) which leaves the lattice of subspaces \(\operatorname{Lat}V\) invariant. This algebra, which contains the commutant \(\{V\}^{c},\) is non-commutative ([2] Theorem 5.12). The operator norm closed algebra generated by \(V,\ \mathcal{A}_{V}\) by contrast is composed of operators which share important properties of \(V\): any \(S\in\mathcal{A}_{V}\) is quasinilpotent and compact. Furthermore, given \(S\in\mathcal{A}_{V}\) there is a measurable function \(f\) on \([0,1],\) integrable over compact subsets of \([0,1),\) such that if \(\rho\in L^{2}[0,1],\) \[S\rho(x)=\int_{0}^{x}f(x-t)\rho(t)\,dt\text{ for almost all }x\in[0,1].(\text{Theorem \ }1)\] However, \(f\) need not be integrable over \([0,1],\) as shown in Example 11. If \(f\) is in \(L^{1}[0,1],\) then the \(L^{1}\) norm of \(f\) dominates the operator norm \(||S||.\) While Theorem 1 allows us to represent an arbitrary \(S\in\mathcal{A}_{V}\) as a operator defined by a kernel, it does not provide another tool to calculate or estimate the norm. Even in the case of polynomials of low degree in \(V\) little is known. Remarkably, the recent paper [12] appears to be the first to have obtained an exact value for \(||V^{2}||,\) expressed as the solution to a transcendental equation ([12], Corollary 3.2 1). 
Their computations are limited to polynomials of degree \(2\) in \(V.\) As a consequence of this lack of computational tools, we are not able to make any assertions as to the extreme points of the unit ball of \(\mathcal{A}_{V},\) as we did for the unit ball of the radical weighted shift algebra. Footnote 1: \(||V^{2}||=\eta_{0}^{-2},\) where \(\eta_{0}\) is the least positive solution \(\eta\) to the equation \(\cosh(\eta)\cos(\eta)=-1.\) I recall many years ago hearing that Paul Halmos had obtained an expression for \(||V^{2}||\) as the solution of a transcendental equation, but cannot find a reference for it. In [11] it is shown that the nilpotent elements are dense in \(\mathcal{A}_{V}.\) We obtain the same result here as a consequence of Theorem 1. The last result is an extension of Titschmarsh's theorem on zero divisors of \(L^{1}[0,1]\) to the operator algebra \(\mathcal{A}_{V}\) (Corollary 9). ## 1. Background, notation and examples ### Gauge Actions and Fourier analysis on Singly Generated Algebras Let \(T\) be a bounded linear operator on a complex Hilbert space \(H,\) and \(\mathcal{A}_{T}\) the operator algebra in \(\mathcal{B}(H)\) which is the operator norm closure of the polynomials in \(T\) which vanish at the origin. Let \(\mathbb{T}=\{z\in\mathbb{C}:|z|=1\}.\) **Definition 1**.: Let \(Aut(\mathcal{A}_{T})\) denote the group of isometric automorphisms of \(\mathcal{A}_{T}.\) We say that \(\mathcal{A}_{T}\) admits a _gauge action_ if there exists a continuous homomorphism \(\gamma:\mathbb{T}\to Aut(\mathcal{A}_{T})\) such that \(\gamma_{\lambda}(T)=\lambda T\quad(\lambda\in\mathbb{T}).\) By a _continuous homomorphism_ we mean that for each \(\lambda_{0}\in\mathbb{T}\) and \(S\in\mathcal{A}_{T},\) \[||\gamma_{\lambda}(S)-\gamma_{\lambda_{0}}(S)||\to 0\text{ as }\lambda\to \lambda_{0}\text{ in }\mathbb{T}.\] _Remark 1_.: In the examples of gauge actions that arise here, the gauge automorphisms are completely isometric. Assume that \(\mathcal{A}_{T}\) admits a gauge action. Then if \(p\) is any polynomial with \(p(0)=0,\ \gamma_{\lambda}(p(T))=p(\lambda T).\) Since, by definition, the algebra \(\mathcal{A}_{T}\) is the norm closure of such polynomials in \(T,\) it follows that the action of \(\gamma_{\lambda}\) on polynomials in \(T\) determines the action of \(\gamma_{\lambda}\) on \(\mathcal{A}_{T}.\) Now if \(p\) is a polynomial, \(p(z)=\sum_{j=1}^{n}a_{j}z^{j},\) then \[\hat{p}(k)T^{k}=\int_{\mathbb{T}}\gamma_{\lambda}(p(T))\,\lambda^{-k}\,d| \lambda|=\begin{cases}a_{k}T^{k}\text{ if }1\leq k\leq n\\ 0\text{ otherwise}\end{cases}\] Thus, for \(S\in\mathcal{A}_{T},\) \[\hat{S}(k)T^{k}=\int_{\mathbb{T}}\gamma_{\lambda}(S)\ \lambda^{-k}\,d|\lambda|\] is well-defined. We say that \(\hat{S}(k)\in\mathbb{C}\) is the \(k^{\text{th}}\) Fourier coefficient of \(S.\) **Lemma 1**.: _If \(\mathcal{A}_{T}\) admits a gauge action then \(S\in\mathcal{A}_{T}\) is uniquely determined by its Fourier series._ Proof.: It is enough to prove that if \(S\in\mathcal{A}_{T}\) is nonzero, then \(\{\hat{S}(k)\}\) is not the zero sequence. Suppose \(S\neq 0,\) and that \(\hat{S}(k)=0\) for all \(k.\) There is a continuous linear functional \(\varphi\) on \(\mathcal{A}_{T}\) for which \(\varphi(S)\neq 0,\) and hence the continuous function \(f(\lambda)=\varphi(\gamma_{\lambda}(S))\) is nonzero. 
However, \[\hat{f}(k) =\int_{\mathbb{T}}\varphi(\gamma_{\lambda}(S))\,\lambda^{-k}\,d| \lambda|\] \[=\varphi(\int_{\mathbb{T}}\gamma_{\lambda}(S)\,\lambda^{-k}\,d| \lambda|)\] \[=\varphi(\hat{S}(k)T^{k})\] \[=\hat{S}(k)\varphi(T^{k})\] \[=0\] This holds for \(k=1,2\dots\), but also for \(k\leq 0\), since the Fourier coefficients \(\hat{S}(k)=0\) for all polynomials \(S\) and hence for all \(S\in\mathcal{A}_{T}\). This implies \(f\) is identically zero, which is a contradiction. Just as with classical Fourier series, we associate with \(S\in\mathcal{A}_{T}\) the formal series \[S\sim\sum_{j=1}^{\infty}\hat{S}(j)T^{j} \tag{1}\] We would like to construct a sequence of polynomials in \(T\) which converges to \(S\) in some sense. To this end, let \(\varphi\) be a continuous linear functional on \(\mathcal{A}_{T}\) and \(p(z)=\sum_{j=1}^{n}a_{j}z^{j}\) a polynomial. Then \[\widehat{\varphi(p(T))}(k)=\int_{\mathbb{T}}\varphi(\gamma_{\lambda}(p(T)))\, \lambda^{-k}\,d|\lambda|=a_{k}\varphi(T^{k})\] Since an arbitrary \(S\in\mathcal{A}_{T}\) is a norm limit of polynomials, we have that \[\widehat{\varphi(S)}(k):=\int_{\mathbb{T}}\varphi(\gamma_{\lambda}(S))\, \lambda^{-k}\,d|\lambda|=a_{k}\varphi(T^{k})\] where \(S\sim\sum_{j=1}^{\infty}a_{j}T^{j}\). **Proposition 1**.: _With notation as in the above paragraph, define the function \(f:\mathbb{T}\to\mathbb{C},\ f(\lambda)=\varphi(\gamma_{\lambda}(S)).\) Then_ 1. \(f\) _is a continuous function on_ \(\mathbb{T}\) _with_ \(\hat{f}(n)=0\) _for_ \(n\leq 0.\)__ 2. _The sequence of functions_ \[s_{n}(\lambda) =\sum_{j=1}^{n}\frac{n-j}{n}\hat{f}(j)\lambda^{j}\] \[=\sum_{j=1}^{n}\frac{n-j}{n}a_{j}\varphi(T^{j})\lambda^{j}\ ( \lambda\in\mathbb{T})\] _converges uniformly in_ \(\lambda\) _to_ \(f.\)__ 3. _The sequence_ \(\{S_{n}(\lambda)=\sum_{j=1}^{n}\frac{n-j}{n}a_{j}T^{j}\lambda^{j}\}\) _converges weakly to_ \(\gamma_{\lambda}(S),\) _uniformly in_ \(\lambda\in\mathbb{T}.\)__ 4. _There is a sequence_ \(R_{n}\) _in the convex hull of the sequences_ \(\{S_{n}(1):n=1,2,\dots\}\) _which converges in norm to S._ Proof.: 1. By assumption, the map \(\lambda\in\mathbb{T}\mapsto\gamma_{\lambda}(S)\in\mathcal{A}_{T}\) is norm continuous, and since \(\varphi\) is norm continuous, it follows that \(f:\mathbb{T}\to\mathbb{C}\) is continuous. Now \(\hat{S}(k)=0\) for \(k\leq 0\), so the same holds for \(f.\) 2. By Fejer's Theorem, the sequence of arithmetic means of the partial sums of the Fourier series for \(f\) converges uniformly to \(f\) on \(\mathbb{T}\). 3. Since, for an arbitrary continuous linear functional \(\varphi,\ \varphi(S_{n}(\lambda))=s_{n}(\lambda),\) this is just a restatement of [2]. 4. Follows from [3] by taking \(\lambda=1\) and applying the Hahn-Banach separation theorem. _Notation._ The unitization of \(\mathcal{A}_{T}\) will be denoted \(\tilde{\mathcal{A}}_{T}.\) At times it will be convenient to work in \(\tilde{\mathcal{A}}_{T}.\) The gauge action \(\gamma\) extends naturally to \(\tilde{\mathcal{A}}_{T}\) with \(\gamma_{\lambda}(I)=1.\) Of course for \(S\in\tilde{\mathcal{A}}_{T},\ \hat{S}(0)\) may be nonzero. ### Nonselfadjoint operator algebras which admit a gauge action _Example 1_.: Let \(\mathcal{M}_{2}\) be the C\({}^{*}\)-algebra of \(2\times 2\) matrices, with standard matrix units \(e_{i,j},\ 1\leq i,j\leq 2.\) Let \(T=e_{1,2}.\) Then \(\mathcal{A}_{T}\) admits a gauge action. 
Indeed, since \(T^{2}=0,\) the operator space \(\mathcal{A}_{T}=\mathbb{C}\cdot T\) is one-dimensional, and the map \(\gamma_{\lambda}\) is a linear map with \(\gamma_{\lambda}(aT)=\lambda aT,\ a\in\mathbb{C}.\) To see that \(\gamma\) is completely isometric, it suffices to show that it extends to \(\mathcal{M}_{2}.\) Define \(U=e_{1,1}+\lambda e_{2,2}.\) Then \(\gamma_{\lambda}(A)=U^{*}AU\ A\in\mathcal{M}_{2},\lambda\in\mathbb{T}\) extends the action of \(\gamma\) on \(\mathcal{A}_{T}\) to the C\({}^{*}\)-envelope, \(\mathcal{M}_{2}.\) Alternatively, we can invoke the description of \(\mathcal{M}_{2}\) as the universal C\({}^{*}\)-algebra generated by an operator \(T\) which is nilpotent of index \(2\) satisfying \[T^{*}T+TT^{*}=I\] \(T\in\mathcal{B}(H)\) for some Hilbert space \(H,\) and \(I\) the identity in \(\mathcal{B}(H).\) Since \(\lambda T\) satisfies these same conditions for \(\lambda\in\mathbb{T},\) it follows from the universal property that \(T\mapsto\lambda T\) is automorphism of \(\mathcal{M}_{2}.\) _Example 2_.: Let \(T\) be the multiplication operator on \(L^{2}(\mathbb{T}),\ T\xi(z)=z\xi(z).\) The unital algebra \(\tilde{\mathcal{A}}_{T}\) is the disc algebra \(\mathcal{A}(\mathbb{D}),\) and the algebra \(\mathcal{A}_{T}\) is the subalgebra of functions \(f\) satisfying \(f\perp 1,\) where \(1\) is the constant function in \(L^{2}(\mathbb{T}).\) The gauge action \(\gamma_{\lambda}\) is given by \(\gamma_{\lambda}(T)\xi(z)=\lambda z\xi(z).\) Thus for \(f\in\mathcal{A}(\mathbb{D}),\ \gamma_{\lambda}f(z)=f(\lambda z).\) This is isometric, even completely isometric. Indeed, the C\({}^{*}-\)envelope of \(\mathcal{A}(\mathbb{D})\) is \(C(\mathbb{T}),\) and the gauge action on the disc algebra is the restriction of the gauge action on \(C(T),\ \gamma_{\lambda}(f)(z)=f(\lambda z).\) The Fourier series (as defined in equation 1 ) of \(f\in\mathcal{A}(\mathbb{D})\) is the usual Fourier series of the function \(f.\) _Example 3_.: Let \(\{S_{1},\ldots S_{d}\}\) be isometries which satisfy the Cuntz relation \(\sum_{j=1}^{d}S_{j}S_{j}^{*}=I.\) Now if \(i_{1},\ldots,i_{n}\in\{1,\ldots,d\}\) and \(\mu=(i_{1},\ldots,i_{n})\) we write \(S_{\mu}=S_{i_{1}}\ldots S_{i_{n}}\) and \(|\mu|=n.\) Let \(\mathcal{A}\) be the Dirichlet algebra generated by the "monomials" \(S_{\mu}S_{\nu}^{*}\) with \(|\mu|\geq|\nu|.\) Then \(\mathcal{A}\) is a nonself-adjoint subalgebra of the Cuntz algebra \(\mathcal{O}_{d}.\) Note that \(\mathcal{A}\) is invariant under the canonical gauge action on \(\mathcal{O}_{d}.\) Thus, \(\mathcal{A}\) admits a gauge action. The gauge action on this subalgebra of \(\mathfrak{O}_{n}\) was considered in [5]. _Example 4_.: Let \(\{S_{1},\ldots,S_{d}\}\) be the isometries of Example 3. If \(\mathcal{A}\) is the nonself-adjoint algebra generated by \(\{S_{1},\ldots,S_{d}\}\subset\mathfrak{O}_{n},\) then \(\mathcal{A}\) admits a gauge action, since it is invariant under the canonical gauge action on \(\mathcal{O}_{d}.\) This algebra is known as Popescu's noncommutative disc algebra. _Example 5_.: Let \(\{S_{1},\ldots,S_{d}\}\) be as in Example 3. 
Here we assume that these operators are represented in some Hilbert space \(\mathcal{B}(H).\) Choose one of the isometries, say \(S_{1},\) and let \(\gamma\) be the canonical gauge action on \(\mathcal{O}_{d}.\) Since \(\gamma_{\lambda}(S_{1})=\lambda S_{1},\) it follows that the subalgebra \(\mathcal{A}_{S_{1}}\) generated by \(S_{1}\) of the Cuntz algebra \(\mathcal{O}_{d}\) is invariant under \(\gamma.\) Hence the gauge action on the Cuntz algebra \(\mathcal{O}_{d}\) restricts to a gauge action on \(\mathcal{A}_{S_{1}}.\) _Example 6_.: A variety of examples can be constructed as subalgebras of graph C\({}^{*}\)-algebras which admit gauge actions. In this context one can obtain examples which are analogues of examples 3, 4 and 5, and where the generating isometries are replaced by Cuntz-Krieger partial isometries. Let \(\mathcal{A}\) be an operator algebra, and \(\mathfrak{A}=\mathrm{C}_{env}^{*}(\mathcal{A})\) be its C\({}^{*}\)-envelope. Then \(\mathcal{A}^{*}\) is an operator algebra defined as a subalgebra of \(\mathfrak{A}.\) **Proposition 2**.: _If \(\gamma\) is a gauge action on the operator algebra \(\mathcal{A},\) then the adjoint algebra \(\mathcal{A}^{*}\) admits a gauge action, also denoted by \(\gamma\) defined by_ \[\gamma_{\lambda}(A^{*})=(\gamma_{\bar{\lambda}}(A))^{*}\ \lambda\in\mathbb{T},\ A\in \mathcal{A}\] The proof is routine. **Proposition 3**.: _Every completely isometric automorphism of a unital operator algebra \(\mathcal{A}\) lifts to a \(*\)-automorphism of the C\({}^{*}\)-envelope \(\mathrm{C}_{env}^{*}(\mathcal{A}),\) which fixes \(\mathcal{A}\) as a set._ This is Proposition 10.1 of [4]. This tells us that if a unital operator algebra \(\mathcal{A}\) admits a gauge action \(\gamma,\) then each \(\gamma_{\lambda}\) extends to an automorphism, which we also denote by \(\gamma_{\lambda},\) of the C\({}^{*}\)-envelope, but does not immediately imply that the map \(\lambda\in\mathbb{T}\mapsto\gamma_{\lambda}\) is continuous on the C\({}^{*}\)-envelope. ### Examples of operators in Hilbert space which do not admit a gauge action _Example 7_.: Let \(0\neq P\) be a projection in \(\mathcal{B}(H).\) As in Example 1\(\mathcal{A}_{P}\) is one-dimensional, but in this case does not admit a gauge action. Indeed, since \(P=P^{2},\) if \(\gamma\) were a gauge action on \(\mathcal{A}_{P}\) we would have \[\lambda P=\gamma_{\lambda}(P)=\gamma_{\lambda}(P^{2})=\gamma_{\lambda}(P) \lambda_{\lambda}(P)=\lambda^{2}P,\ \lambda\in\mathbb{T}\] which is absurd. _Example 8_.: More generally, suppose that \(T\in\mathcal{B}(H)\) is such that, for some \(n>1,0\neq T^{n}\) and the set \(\{T,T^{2},\dots T^{n}\}\) is linearly dependent. Then \(\mathcal{A}_{T}\) does not admit a gauge action. Indeed, suppose to the contrary that \(\mathcal{A}_{T}\) admits a gauge action \(\gamma,\) and, choosing a dependence relation of minimal degree, we can assume that \(a_{1}T+\dots+a_{m}T^{m}=0,\ m\leq n\) and \(a_{m}\neq 0.\) Then \[0=\int_{\mathbb{T}}\gamma_{\lambda}(\sum_{k=1}^{m}(a_{k}T^{k})\lambda^{-m}\,d |\lambda|=a_{m}T^{m}\] Since \(T^{m}\neq 0,\) it follows that \(a_{m}=0,\) a contradiction. _Example 9_.: Let \(H\) be a Hilbert space with orthonormal basis \(\{e_{n}\}_{n=1}^{\infty},\) and let \(T\in\mathcal{B}(H)\) be the operator defined by \(Te_{n}=\frac{1}{n}e_{n},\ n\geq 1.\) We claim that the operator algebra \(\mathcal{A}_{T}\) does not admit a gauge action. 
Consider the operator \(T-T^{2}\in\mathcal{A}_{T}.\) This is a compact, self-adjoint operator in \(\mathcal{B}(H),\) so its norm is the maximum of the absolute values of the eigenvalues. \(||T-T^{2}||=||(T-T^{2})e_{2}||_{2}=\frac{1}{4}.\) Suppose that \(\mathcal{A}_{T}\) admits a gauge action \(\gamma.\) Then \(\gamma_{\lambda}(T-T^{2})=\lambda T-\lambda^{2}T^{2},\) so for \(\lambda=-1\) we obtain \(-T-T^{2}.\) Computing \(||-T-T^{2}||\) we have \(||-T-T^{2}||=||(-T-T^{2})e_{1}||_{2}=2.\) This is a contradiction, since by definition the gauge action is isometric on \(\mathcal{A}_{T}.\) ### Gauge invariant Ideals in Operator algebras with gauge actions Let \(\mathcal{A}_{T}\) be the operator algebra generated by an operator \(T\in\mathcal{B}(H),\) and suppose \(\mathcal{A}_{T}\) admits a gauge action \(\gamma.\) A closed ideal \(\mathcal{J}\subset\mathcal{A}_{T}\) is _gauge invariant_ if, whenever \(S\in\mathcal{J},\) then \(\gamma_{\lambda}(S)\in\mathcal{J}\ (\lambda\in\mathbb{T}).\) **Proposition 4**.: _Let \(\mathcal{J}\neq(0)\) be a gauge invariant ideal in \(\mathcal{A}_{T}.\) Then there exists \(n\in\mathbb{N}\) such that \(\mathcal{J}=\text{$<\!T^{n}\!>$}.\) That is, \(\mathcal{J}\) is the closed ideal in \(\mathcal{A}_{T}\) generated by \(T^{n}.\)_ Proof.: Let \(n=\inf\{k\geq 1:\hat{S}(k)\neq 0\text{ for some }S\in\mathcal{J}\}.\) Thus, there exists \(S\in\mathcal{J}\) with \(\int_{\mathbb{T}}\gamma_{\lambda}(S)\lambda^{-n}\,d|\lambda|=a_{n}T^{n}\neq 0.\) Since \(\mathcal{J}\) is closed and gauge invariant, \(T^{n}\in\mathcal{J}.\) It follows that any \(S\in\mathcal{A}_{T}\) with Fourier series \(S\sim\sum_{k=n}^{\infty}c_{k}T^{k}\in\mathcal{J}.\) Thus, \(<\)\(T^{n}\)\(>\subset\mathcal{J}.\) That is, the closed ideal generated by \(T^{n}\) is contained in \(\mathcal{J}.\) On the other hand, let \(S\in\mathcal{J}.\) Then, by definition of \(n,\ S\) has Fourier series of the form \(\sum_{k=n}^{\infty}c_{k}T^{k},\) so that \(\mathcal{J}\subset<\)\(T^{n}\)\(>\). One ideal which is invariant under the gauge action is the Jacobson radical; indeed, it is invariant under all isometric automorphisms. **Corollary 1**.: _Let \(T\in\mathcal{B}(H)\) be an operator such that \(\mathcal{A}_{T}\) admits a gauge action. Then either \(\mathcal{A}_{T}\) is semi-simple, or \(\mathcal{A}_{T}\) is radical._ Proof.: Let \(\mathcal{J}\neq(0)\) denote the Jacobson radical of \(\mathcal{A}_{T}.\) Since the Jacobson radical is invariant under all isometric automorphisms, by Proposition 4 it follows that if the Jacobson radical is nonzero, there is an \(n\in\mathbb{N}\) such that \(\mathcal{J}=<\)\(T^{n}\)\(>.\) But if \(T^{n}\) is quasinilpotent, that is, has spectrum \(\{0\},\) it follows from the Spectral Mapping Theorem that \(T\) has spectrum \(\{0\}.\) Hence, the ideal generated by \(T,\) which is \(\mathcal{A}_{T}\) is in the Jacobson radical. _Example 10_.: Here we note that it can happen that if \(T\in\mathcal{B}(H)\) does not admit a gauge action, then we can have \((0)\neq Rad(\mathcal{A}_{T})\neq\mathcal{A}_{T}.\) Let \(H=H_{1}\oplus H_{2}\) and \(T=I_{1}\oplus N,\) where \(I_{1}\) is the identity on \(H_{1}\) and \(N\in\mathcal{B}(H_{2})\) is a nonzero nilpotent, with \(N^{2}=0.\) Let \(p(z)=z-z^{2}.\) Then \(p(T)=0\oplus N\in Rad(\mathcal{A}_{T}),\) so that while \(\mathcal{A}_{T}\) is not a radical algebra, it has a non-trival Jacobson radical. The disc algebra \(\mathcal{A}(\mathbb{D})\) has a rich lattice of ideals. ([6]) Not unexpectedly, there are few gauge invariant ideals. 
**Corollary 2**.: _If \(\mathcal{J}\) is a gauge invariant closed ideal of \(\mathcal{A}(\mathbb{D}),\) then (in the notation of Example 2) \(\mathcal{J}=<\)\(z^{n}\)\(>\) for some \(n\in\mathbb{N}.\)_ Proof.: The conclusion follows immediately from Proposition 4. ## 2. Operator algebras generated by weighted shifts In this section, \(T\) will denote a weighted shift operator. Let \(\{e_{n}\}_{n\geq 0}\) be an orthonormal basis for the Hilbert space \(H,\) with \(Te_{n}=a_{n}e_{n+1},\ n\geq 0,\) and \(a_{n}\neq 0\) for all \(n.\) Since \(T\) is bounded, the sequence \(\{a_{n}\}\) is bounded, and \(||T||=\sup_{n}|a_{n}|.\) We begin by showing that the operator algebra admits a gauge action. **Lemma 2**.: _Let \(T\) be as above. Then \(\mathcal{A}_{T}\) admits a gauge action._ Proof.: With \(\{e_{n}\}_{n\geq 0}\) as above, define the unitary \(W_{\lambda},\ \lambda\in\mathbb{T},\) by \(W_{\lambda}e_{n}=\lambda^{n}e_{n}.\) Now \[W_{\lambda}TW_{\lambda}^{*}e_{n}=W_{\lambda}T(\bar{\lambda^{n}}e_{n})=\bar{ \lambda^{n}}a_{n}W_{\lambda}e_{n+1}=\bar{\lambda^{n}}\lambda^{n+1}a_{n}e_{n+1}= \lambda Te_{n}\] holds for any \(n\geq 0,\) and since the \(\{e_{n}\}_{n\geq 0}\) form a basis, we have \(W_{\lambda}TW_{\lambda}^{*}=\lambda T.\) Thus the map \(\lambda\in\mathbb{T}\mapsto W_{\lambda}TW_{\lambda}^{*}\in\mathcal{B}(H)\) is continuous., and so \(\lambda\mapsto(W_{\lambda}TW_{\lambda}^{*})^{n}=W_{\lambda}T^{n}W_{\lambda}^{*}\) is continuous, and hence \(\lambda\mapsto W_{\lambda}p(T)W_{\lambda}^{*}\) for any polynomial \(p\) with \(p(0)=0.\) Now if \(S\in\mathcal{A}_{T}\) and \(\epsilon>0\) is given, there is a polynomial \(p\) with \(||p(T)-S||<\epsilon/3.\) Now let \(\lambda_{0}\in\mathbb{T}\) and \(\delta>0\) be such that if \(|\lambda-\lambda_{0}|<\delta,\) then \(||W_{\lambda}p(T)W_{\lambda}^{*}-W_{\lambda_{0}}p(T)W_{\lambda_{0}}^{*}||< \epsilon/3.\) Then \[||W_{\lambda}SW_{\lambda}^{*}-W_{\lambda_{0}}SW_{\lambda_{0}}^{*}|| \leq||W_{\lambda}(S-p(T))W_{\lambda}^{*}||+\] \[||W_{\lambda}p(T)W_{\lambda}^{*}-W_{\lambda_{0}}p(T)W_{\lambda_{ 0}}^{*}||+||W_{\lambda_{0}}(p(T)-S)W_{\lambda_{0}}^{*}||\] \[<\epsilon/3+\epsilon/3+\epsilon/3\] Thus \(\gamma_{\lambda}(S)=W_{\lambda}SW_{\lambda}^{*}\) is a gauge action on \(\mathcal{A}_{T}.\) _Remark 2_.: We claim, furthermore, that the action is completely isometric. Now the C\({}^{*}\)-algebra generated by \(T\) in \(\mathcal{B}(H),\ C^{*}(T),\) is a C\({}^{*}\)-cover for \(\mathcal{A}_{T},\) and the action of \(\gamma_{\lambda}\) on \(\mathcal{A}_{T}\) is the restriction to \(\mathcal{A}_{T}\) of the automorphism \(S\in\mathrm{C}^{*}(T)\mapsto\gamma_{\lambda}(S):=W_{\lambda}SW_{\lambda}^{*}.\) While it is clear that \(\gamma_{\lambda}\) is isometric on the C\({}^{*}\)-cover, it is not obvious that the map \(\lambda\in\mathbb{T}\mapsto\gamma_{\lambda}\) is continuous, since \(\lambda\mapsto W_{\lambda}\) is not continuous. It is more convenient to work with the unital algebras \(\tilde{\mathcal{A}}_{T},\ \tilde{\mathcal{A}}_{T}^{*}.\) The C\({}^{*}\) cover of \(\tilde{\mathcal{A}}_{T}\subset\mathcal{B}(H)\) is the closure in \(\mathcal{B}(H)\) of the union \[\bigcup_{n=1}^{\infty}(\tilde{\mathcal{A}}_{T}^{*}\tilde{\mathcal{A}}_{T})^{ n}\subset\mathcal{B}(H)\] Now since the action is continuous on \(\tilde{\mathcal{A}}_{T}\) and \(\tilde{\mathcal{A}}_{T}^{*},\) (Proposition 2) it is continuous on \((\tilde{\mathcal{A}}_{T}^{*}\tilde{\mathcal{A}}_{T})^{n}.\) And since it is isometric, it is thus continuous on the closure of the union. 
Thus, the gauge action on \(\mathcal{A}_{T}\) is the restriction of a gauge action on a C\({}^{*}\)-cover. **Lemma 3**.: _Let \(T\) be as in Lemma 2 If \(S\in\mathcal{A}_{T},\) and_ \[Se_{0}=\sum_{n=1}^{\infty}c_{n}e_{n},\text{ then }c_{n}=\hat{S}(n)a_{0}\dots a_{n-1}\] Proof.: Let \(W_{\lambda}\) (\(\lambda\in\mathbb{T}\)) be the family of unitary operators from Lemma 2, so that \(W_{\lambda}e_{n}=\lambda^{n}e_{n}.\) Let \(v\) be a linear combination of basis vectors. Since \(\int_{\mathbb{T}}\lambda^{-n}W_{\lambda}v\,d|\lambda|\) is a multiple of \(e_{n},\) it follows that, \[\int_{\mathbb{T}}\lambda^{-n}W_{\lambda}v\,d|\lambda|=<v,e_{n}>e_{n}\] This holds for arbitrary vectors in \(H.\) Suppose \(Se_{0}=\sum_{k=1}^{\infty}c_{k}e_{k}.\) Then \[c_{n}e_{n} =<Se_{0},e_{n}>e_{n}\] \[=\int_{\mathbb{T}}W_{\lambda}(Se_{0})\,\lambda^{-n}\,d|\lambda|\] \[=\int_{\mathbb{T}}W_{\lambda}SW_{\lambda}^{*}e_{0}\,\lambda^{-n} \,d|\lambda|\] \[=(\int_{\mathbb{T}}W_{\lambda}SW_{\lambda}^{*}\lambda^{-n}\,d| \lambda|)e_{0}\] \[=(\int_{\mathbb{T}}\gamma_{\lambda}(S)\,\lambda^{-n}\,d|\lambda| )e_{0}\] \[=\hat{S}(n)T^{n}e_{0}\] where we have used that \(W_{\lambda}^{*}e_{0}=e_{0}.\) Thus, \(c_{n}e_{n}=\hat{S}(n)T^{n}e_{0},\) so that \(c_{n}=\hat{S}(n)(a_{0}a_{1}\cdots a_{n-1}).\) _Remark 3_.: Let \(T\) be as above, and \(\tilde{\mathcal{A}}_{T}\) the unitization of \(\mathcal{A}_{T}.\) Then the vector \(e_{0}\) is a cyclic and separating vector for \(\tilde{\mathcal{A}}_{T}.\) That it is cyclic is clear, for if \(v\) is any finite linear combination of basis vectors \(v=\sum_{n=0}^{N}c_{n}e_{n},\) let \(p\) be the polynomial \(p(z)=\sum_{n=0}^{N}\frac{c_{n}}{a_{0}\cdots a_{n-1}}z^{n}\) (where the empty product is defined to be \(1\)), then \(p(T)e_{0}=v.\) That \(e_{0}\) is separating is also straightforward. First note that if \(S\in\tilde{\mathcal{A}}_{T},\) there is a sequence of polynomials \(\{p_{n}\}\) with \(\{p_{n}(T)\}\) converging to \(S\) in the norm of \(\tilde{\mathcal{A}}_{T},\) so that by definition of the norm, \(p_{n}(T)v\to Sv\) for every \(v\in H\) and in particular for \(v=e_{0}.\) By Proposition 1 these polynomials can be taken to be convex combinations of Fejer polynomials, so that \(\hat{p}_{n}(k)\to\hat{S}(k)\) for every \(k=0,1,\ldots.\) So if \(S\neq 0,\) there is some \(k\) with \(\hat{S}(k)\neq 0,\) and so \(Se_{0}=\sum_{j=0}^{\infty}\hat{S}(j)a_{0}\cdots a_{j-1}e_{j}\neq 0.\) It is natural to ask for a description of the extreme points of the unit ball of \(\mathcal{A}_{T}.\) While that seems out of reach in our context, a sufficient condition is at hand. **Proposition 5**.: _Let the weighteds of \(T\) satisfy \(|a_{1}|\geq|a_{2}|\geq\cdots\) and let \(\mathcal{B}=\{S\in\mathcal{A}_{T}:||S||\leq 1\}\) be the closed unit ball. 
Then \(S\in\mathcal{B}\) is an extreme point of \(\mathcal{B}\) if \(||S||=||Se_{0}||_{2}=1.\) In particular, the elements \(T_{n}:=\frac{1}{||T^{n}||}T^{n}\) are extreme points of \(\mathcal{B}.\)_ Proof.: As noted in Remark 3, the map \(S\in\mathcal{A}_{T}\mapsto||Se_{0}||_{2}\) is a norm on \(\mathcal{A}_{T},\) satisfying \(||Se_{0}||_{2}\leq||S||,\) and thus the map \(S\mapsto Se_{0}\) maps the unit ball of \(\mathcal{A}_{T}\) into the unit ball of the Hilbert space \(H.\) Since every unit vector in Hilbert space is an extreme point of the unit ball in \(H,\) it follows that if \(||S||=||Se_{0}||_{2}=1,\) then \(Se_{0}\) is extreme in the unit ball of \(H,\) and _a fortori_\(S\) is extreme in the unit ball of \(\mathcal{A}_{T}.\) In particular, the condition on the weights \(|a_{1}|\geq|a_{2}|\geq\cdots\) guarantees that the monomials \(T^{n}\) assume their norm at \(e_{0},\) hence the normalized monomials \(T_{n}\) are extreme points of the unit ball of \(\mathcal{A}_{T}.\) _Remark 4_.: If the weights satisfy \(|a_{0}|\geq|a_{1}|\geq|a_{2}|\geq\cdots\) then, as noted in the proof of Proposition 5, the norms \(||T^{n}||,\ ||T^{n}e_{0}||_{2}\) coincide. However, it need not be the case that the two norms coincide, or even are equivalent, on the operator algebra \(\mathcal{A}_{T}.\) To see this, let the weights satisfy \(a_{0}=a_{1}=\cdots=1,\) and take \(p_{n}(T)=T+T^{2}+\cdots T^{n}\ (n\in\mathbb{N}),\) and let \(v_{n}=\frac{1}{\sqrt{n}}(e_{0}+e_{1}+\cdots e_{n-1}).\) One calculates that \[||p_{n}(T)v_{n}||_{2}=\frac{\sqrt{2n^{2}+1}}{\sqrt{3}}\ \text{while}\ ||p_{n}(T)e_{0}||_{2}=\sqrt{n}.\] Thus, \[\frac{||p_{n}(T)||}{||p_{n}(T)e_{0}||_{2}}\geq\frac{\sqrt{2}}{\sqrt{3}}\sqrt{n}\] so the norms are inequivalent on \(\mathcal{A}_{T}.\) While the operator norm is not in general equivalent to the norm \(S\mapsto||Se_{0}||_{2}\) on \(\mathcal{A}_{T},\) under certain restrictions the two norms are equivalent. 
**Proposition 6**.: _Let \(\{a_{n}\}_{n\geq 0}\) be a sequence satisfying \(|a_{0}|\geq|a_{1}|\geq|a_{2}|\geq\cdots\) with \(\sum_{n=0}^{\infty}|a_{n}|^{2}:=M^{2}<\infty,\) and \(a_{n}\neq 0\) for all \(n.\) Then the operator norm on \(\mathcal{A}_{T}\) is equivalent to the norm \(S\mapsto||Se_{0}||_{2}.\)_ Proof.: Let \(p(z)=\sum_{j=1}^{r}c_{j}z^{j}.\) Then \[||p(T)e_{k}||_{2}^{2} =\sum_{j=1}^{r}|c_{j}|^{2}\,|a_{k}a_{k+1}\cdots a_{k+j-1}|^{2}\] \[\leq\sum_{j=1}^{r}|c_{j}|^{2}\,(\frac{|a_{k}|}{|a_{0}|})^{2}\,|a_{ 0}a_{1}\cdots a_{j-1}|^{2}\] \[\leq(\frac{|a_{k}|}{|a_{0}|})^{2}||p(T)e_{0}||_{2}^{2}\] Now let \(v\) be a unit vector which is a finite linear combination of basis vectors, so \(v=\sum_{\ell=0}^{N}\beta_{\ell}e_{\ell}\) with \(\sum_{\ell=0}^{N}|\beta_{\ell}|^{2}=1.\) Thus \[||p(T)v||_{2} \leq\sum_{\ell=0}^{N}|\beta_{\ell}|||p(T)e_{\ell}||_{2}\] \[\leq\sum_{\ell=0}^{N}|\beta_{\ell}|\frac{|a_{\ell}|}{|a_{0}|}||p (T)e_{0}||_{2}\] \[\leq\frac{1}{|a_{0}|}||p(T)e_{0}||_{2}(\sum_{\ell=0}^{N}|\beta_{ \ell}|\,|a_{\ell}|)\] \[\leq\frac{1}{|a_{0}|}||p(T)e_{0}||_{2}(\sum_{\ell=0}^{N}|\beta_{ \ell}|^{2})^{\frac{1}{2}})(\sum_{k=0}^{N}|a_{k}|^{2})^{\frac{1}{2}}\] \[\leq\frac{M}{|a_{0}|}||p(T)e_{0}||_{2}\] Now since \(||p(T)v||_{2}\leq\frac{M}{|a_{0}|}||p(T)e_{0}||_{2}\) for a dense set of unit vectors \(v,\) it follows that \(||p(T)||\leq\frac{M}{|a_{0}|}||p(T)e_{0}||_{2}.\) Finally, since we can approximate an arbitrary \(S\in\mathcal{A}_{T}\) by polynomials in \(T\), so we conclude that \(||S||\leq\frac{M}{|a_{0}|}||Se_{0}||_{2}\) for all \(S\in\mathcal{A}_{T}.\) As a result of the equivalence of the two norms, several results follow immediately. **Corollary 3**.: _Let the weighted shift \(T\) be as in Proposition 6. Then for \(S\in\mathcal{A}_{T},\) the partial sums of the Fourier series,_ \[\sum_{k=1}^{n}\hat{S}(k)T^{k}\] converge in norm to \(S.\)_ Corollary 4: _Let the weighted shift \(T\) be as in Proposition 6. Then the sequence_ \[p_{n}(T):=\sum_{k=1}^{n}c_{k}T^{k}\] _converges in norm to an element \(S\in\mathcal{A}_{T}\) if and only if_ \[\sum_{k=1}^{\infty}|c_{k}|^{2}\left|a_{0}\cdots a_{k-1}\right|^{2}<\infty\] Proposition 6 not only tells us that the operator norm is equivalent to a Hilbert space norm, but gives a mapping \[\mathcal{F}:\mathcal{A}_{T}\to H,\ S\mapsto Se_{0}\] which maps \(\mathcal{A}_{T}\) onto the closed subspace \(H_{1}\) spanned by the basis vectors \(e_{n}:\ n\geq 1.\) One can also define \(\tilde{\mathcal{F}}:\tilde{\mathcal{A}}_{T}\to H\) by \(S\mapsto Se_{0}.\) It is easy to see how to adapt Proposition 6 to the unital algebra \(\tilde{\mathcal{A}}_{T}.\) Note that the unital algebra \(\tilde{\mathcal{A}}_{T}\) maps onto \(H.\) Since \(\mathcal{F}\) is a Banach space isomorphism, it gives a one-to-one map of closed subspaces of \(\mathcal{A}_{T}\) to closed subspaces of \(H_{1},\) and similarly \(\tilde{\mathcal{F}}\) maps closed subspaces of \(\tilde{\mathcal{A}}_{T}\) onto closed subspaces of \(H.\) Furthermore Proposition 7: _Suppose the weights of \(T\) satisfy the conditions of Proposition 6._ 1. _The map_ \(\mathcal{F}\) _is an isomorphism of the lattice of closed ideals of_ \(\mathcal{A}_{T}\) _onto the lattice of closed_ \(T\)_-invariant subspaces of_ \(H_{1}.\)__ 2. _The map_ \(\tilde{\mathcal{F}}\) _is an isomorphism of the lattice of closed ideals of_ \(\tilde{\mathcal{A}}_{T}\) _onto the lattice of closed_ \(T\)_-invariant subspaces of_ \(H.\)__ Proof: We prove only the second statement. 
Let us first observe that the map \(\tilde{\mathcal{F}}\) is \(T\)-equivariant. That is, \(T\tilde{\mathcal{F}}(S)=\tilde{\mathcal{F}}(TS),\ S\in\tilde{\mathcal{A}}_{T}.\) Indeed, this follows immediately from the definition of \(\tilde{\mathcal{F}}.\) We claim that a closed subspace \(\mathcal{I}\subset\tilde{\mathcal{A}}_{T}\) is a closed ideal if and only if it is \(T\)-invariant. Clearly, if \(\mathcal{I}\) is an ideal in \(\tilde{\mathcal{A}}_{T},\) then it is \(T\)-invariant. On the other hand, if a closed subspace \(\mathcal{I}\subset\tilde{\mathcal{A}}_{T}\) is \(T\)-invariant, then it is invariant under multiplication by any polynomial in \(T.\) Let \(S\in\mathcal{I}\) and \(R\in\tilde{\mathcal{A}}_{T}.\) If \(\{p_{n}\}\) is a sequence of polynomials such that \(\{p_{n}(T)\}\) converges in norm to \(R,\) then \(p_{n}(T)S\) converges in norm to \(RS.\) Thus, \(\mathcal{I}\) is a closed ideal. Now clearly the map \(\tilde{\mathcal{F}}\) maps closed subspaces of \(\tilde{\mathcal{A}}_{T}\) to closed subspaces of \(H,\) and since \(\tilde{\mathcal{F}}\) is \(T\)-equivariant, it is an isomorphism of the lattice of closed \(T\)-invariant subspaces of \(\tilde{\mathcal{A}}_{T}\) onto the lattice of closed \(T\)-invariant subspaces of \(H.\) But as shown above, the closed \(T\)-invariant subspaces of \(\tilde{\mathcal{A}}_{T}\) are exactly the closed ideals. We know from Proposition 4 that the gauge invariant ideals in the algebra \(\mathcal{A}_{T}\) generated by a unilateral weighted shift \(T\) are all of the form \(<T^{k}>.\) Given an element \(S\in\mathcal{A}_{T},\) one can ask when the ideal \(<S>\) is of the form \(<T^{k}>\) for some \(k\in\mathbb{N}.\) **Corollary 5**.: _Let the weighted shift \(T\) be as in Proposition 6, and let \(S\in\mathcal{A}_{T}.\) Suppose \(\hat{S}(j)=0,\ j=1,\dots,k-1\) and \(\hat{S}(k)\neq 0\) for some \(k>1.\) Then the closed ideal \(<\)\(S\)\(>\)\(=\)\(<\)\(T^{k}\)\(>\) if and only if the closed subspace generated by the vectors \(Se_{0},\ TSe_{0},\ T^{2}Se_{0},\dots\) contains the basis vector \(e_{k}.\)_ Proof.: The condition \(\hat{S}(j)=0\) for \(j=1,\dots,k-1\) implies that \(<\)\(S\)\(>\)\(\subset\)\(<\)\(T^{k}\)\(>.\) Indeed, there is a sequence of polynomials \(\{p_{n}\}\subset\)\(<\)\(T^{k}\)\(>\) converging in norm to \(S,\) hence \(S\in\)\(<\)\(T^{k}\)\(>,\) and so the closed ideal \(<\)\(S\)\(>\)\(\subset\)\(<\)\(T^{k}\)\(>.\) Applying the map \(\mathcal{F}:\mathcal{A}_{T}\to H,\) it follows that \(Se_{0}\) is contained in the closed invariant subspace generated by the vector \(e_{k}.\) By Proposition 7, in order for the two ideals to coincide, the corresponding subspaces under the map \(\mathcal{F}\) must coincide. Thus it is necessary and sufficient that the closed subspace generated by the vectors \(Se_{0},\ TSe_{0},\ T^{2}Se_{0},\dots\) equal the closed subspace generated by \(e_{k},\ Te_{k},\ T^{2}e_{k},\dots,\) which is the subspace generated by the basis vectors \(e_{k},\ e_{k+1},\ e_{k+2},\dots.\) Thus, if the closed subspace generated by the vectors \(Se_{0},\ TSe_{0},\ T^{2}Se_{0},\dots\) contains the vector \(e_{k},\) by invariance it contains the vectors \(e_{k+1},\ e_{k+2},\ \dots,\) and hence the two closed subspaces coincide. The weights \(\{a_{n}\}\) satisfying the conditions of Proposition 6 satisfy \(\lim_{n}a_{n}=0,\) so that the weighted shift \(T\) is quasinilpotent, and hence the algebra \(\mathcal{A}_{T}\) is radical.
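As a quick numerical sanity check of the estimates above (it plays no role in the proofs), the following sketch uses a finite truncation of a weighted shift with the decreasing, square-summable weights \(a_{n}=2^{-n}\): it verifies the bound \(||S||\leq\frac{M}{|a_{0}|}||Se_{0}||_{2}\) of Proposition 6 on a sample polynomial in \(T\), and recovers that polynomial's coefficients from \(Se_{0}\) as in Lemma 3. The weights, the truncation size, and the test polynomial are arbitrary illustrative choices.

```python
import numpy as np

N = 60
a = 2.0 ** (-np.arange(N))                  # weights a_n = 2^{-n}: decreasing, square-summable
T = np.zeros((N, N))
for n in range(N - 1):
    T[n + 1, n] = a[n]                      # T e_n = a_n e_{n+1} on the truncated space

c = np.array([0.7, -1.3, 0.4, 2.0])         # p(z) = 0.7 z - 1.3 z^2 + 0.4 z^3 + 2 z^4
P = sum(cj * np.linalg.matrix_power(T, j + 1) for j, cj in enumerate(c))

e0 = np.zeros(N)
e0[0] = 1.0
op_norm = np.linalg.norm(P, 2)              # operator norm of the truncated p(T)
vec_norm = np.linalg.norm(P @ e0)           # ||p(T) e_0||_2
M = np.sqrt(np.sum(a ** 2))

print(op_norm <= M / a[0] * vec_norm + 1e-12)     # the bound of Proposition 6 holds

# Lemma 3: the coefficient of T^k in p(T) equals <p(T)e_0, e_k> / (a_0 ... a_{k-1})
recovered = [(P @ e0)[j + 1] / np.prod(a[: j + 1]) for j in range(len(c))]
print(np.allclose(recovered, c))                  # coefficients recovered exactly
```

Because the truncation is a compression of \(p(T)\), its norm is a lower bound for \(||p(T)||\), while \(p(T)e_{0}\) is supported on finitely many basis vectors and is computed exactly here, so the printed inequality is consistent with Proposition 6.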
The consequences of the Lemma mentioned so far did not make direct use of the fact that the elements of the algebra are all quasinilpotent. The following Proposition gives a different sort of criterion as to when an element \(S\in\mathcal{A}_{T}\) generates an ideal of the form \(<\)\(T^{k}\)\(>.\) Here we do not need to assume the equivalence of the operator norm to the norm \(S\mapsto||Se_{0}||_{2},\) rather we need only assume that the weighted shift \(T\) is quasinilpotent. Recall ([13]) that a necessary and sufficient condition for a weighted shift operator to be quasinilpotent is that \(\lim_{n}\sup_{k}|a_{k+1}\cdots a_{k+n}|^{\frac{1}{n}}=0.\) **Proposition 8**.: _Let \(\{a_{n}\}_{n\geq 0}\) be a sequence of nonzero weights such that the unilateral weighted shift operator \(Te_{n}=a_{n}e_{n+1}\) is quasinilpotent, _and hence the algebra \(\mathcal{A}_{T}\) is radical. Let \(S\in\mathcal{A}_{T}\) be a nonzero element such that \(\hat{S}(j)=0,\ j<k\) and \(\hat{S}(k)\neq 0.\) If in the unital algebra \(\tilde{\mathcal{A}}_{T},\ S\) factors as \(S=T^{k}Q\) for some \(Q\in\tilde{\mathcal{A}}_{T},\) then \(<\!\!S\!\!>=<\!\!T^{k}\!\!>.\) In particular, if \(S\) is a polynomial \(\hat{S}(k)T^{k}+\cdots\hat{S}(n)T^{n},\) then \(<\!\!S\!\!>=<\!\!T^{k}\!\!>.\)_ Proof.: First observe that \(<\!\!S>\subset<\!\!T^{k}\!\!>.\) By Proposition 1 there is a sequence of polynomials \(\{p_{n}(T)\}\subset\!\!\!<\!\!T^{k}\!\!>\) converging to \(<\!\!S\!\!>.\) Thus \(S\) belongs to the closed ideal \(<\!\!T^{k}\!\!>\), and hence \(<\!\!S\!\!>\subset<\!\!T^{k}\!\!>\). Now we prove the reverse containment. Since multiplying \(S\) by a nonzero constant does not change the ideal \(<\!\!S\!\!>\), we may assume that \(\hat{S}(k)=1,\) so that if \(S=T^{k}Q,\) then \(\hat{Q}(0)=1.\) Writing \(Q=I-R,\) we have that \(I-R\) is invertible in \(\tilde{\mathcal{A}}_{T}\) with inverse \(I+R+R^{2}+\cdots.\) Indeed, since \(R\in\mathcal{A}_{T}\) which is radical, given any \(r>0,\ ||T^{n}||\leq r^{n}\) for \(n\geq N_{r}.\) Thus, \(S(I-R)^{-1}=S+SR+SR^{2}+\cdots.\) Note that while \(S(I-R)^{-1}\) is a product in the unital algebra, the sum \(S+SR+SR^{2}+\cdots.\) is computed in \(\mathcal{A}_{T},\) and equals \(T^{k}.\) It follows that \(T^{k}\in<\!\!S\!\!>\), and hence \(<\!\!T^{k}\!\!>\subset<\!\!S\!\!>\). Finally observe that if \(S\) is a polynomial in \(T,\) then the factorization \(S=T^{k}Q\) is realizable in \(\tilde{\mathcal{A}}_{T}.\) In [14], Theorem 3, Shields characterizes the commutant of a weighted shift in terms of formal power series. In particular, that implies that the commutant is an integral domain. It follows that the smaller algebra \(\mathcal{A}_{T}\) is also an integral domain, though in our case a non-unital integral domain. The existence of the gauge action on \(\mathcal{A}_{T}\) allows us to deduce the same result. **Proposition 9**.: _Let \(T\) be a weighted shift with nonzero weight sequence as in Lemma 2. Then \(\mathcal{A}_{T}\) is a non-unital integral domain._ _In particular, if \(T\) is a quasinilpotent weighted shift, then the nonzero elements of \(\mathcal{A}_{T}\) are quasinilpotent and not nilpotent._ Proof.: Let \(R,S\in\mathcal{A}_{T},\) be nonzero elements. 
From Lemma 1 we know that the Fourier coefficients of \(R\) are not all zero, and similarly for \(S.\) By Proposition 1 there is a sequence of polynomials \(\{p_{n}\}\) (resp., \(\{q_{n}\}\)) which are convex combinations of Fejer polynomials, so that \(\{p_{n}(T)\}\) converges to \(R\) (resp., \(\{q_{n}(T)\}\) converges to \(S\)). In particular, it follows from the Fejer property that \(\widehat{p_{n}}(k)\neq 0\) implies \(\hat{R}(k)\neq 0\) (resp., \(\widehat{q_{n}}(k)\neq 0\) implies \(\hat{S}(k)\neq 0).\) Furthermore, for all \(k,\ \widehat{p_{n}}(k)\rightarrow\hat{R}(k)\) (resp., \(\widehat{q_{n}}(k)\rightarrow\hat{S}(k)\)) as \(n\rightarrow\infty.\) Let \[j_{0}=\min\{j:\hat{R}(j)\neq 0\}\ \text{and}\ k_{0}=\min\{k:\hat{S}(k)\neq 0\}.\] Now \(\{p_{n}(T)q_{n}(T)\}\) converges in norm to \(RS\), and so \(\{\widehat{p_{n}q_{n}}(\ell)\}\) converges to \(\widehat{RS}(\ell)\) for all \(\ell\in\mathbb{N}.\) If \(\ell_{0}=\min\{\ell:\widehat{p_{n}q_{n}}(\ell)\neq 0\text{ for }n\text{ sufficiently large}\}\) then \(\ell_{0}=j_{0}+k_{0}\) and \(\widehat{p_{n}q_{n}}(\ell_{0})=\widehat{p_{n}}(j_{0})\widehat{q_{n}}(k_{0}).\) Hence \[\widehat{RS}(\ell_{0})=\lim_{n}\widehat{p_{n}}(j_{0})\widehat{q_{n}}(k_{0})\neq 0\] so that \(RS\neq 0.\) For the second statement, if \(T\) is quasinilpotent, then the elements of \(\mathcal{A}_{T},\) are quasinilpotnt, and the nonzero elements are not nilpotent as \(\mathcal{A}_{T}\) is an integral domain. ## 3. The Volterra operator algebra Let \(V\) be the Volterra operator on \(L^{2}[0,1],\) given by \(V\xi(x)=\int_{0}^{x}\xi(t)\,dt.\) Then we know ([8]) that for \(n\geq 0,\) \[V^{n+1}\xi(x)=\frac{1}{n!}\int_{0}^{x}(x-t)^{n}\xi(t)\,dt \tag{2}\] Let \(f\) be any \(L^{2}[0,1]\) function and let \(V_{f}\) denote the operator on \(L^{2}[0,1]\) given by \[V_{f}\xi(x)=\int_{0}^{x}f(x-t)\xi(t)\,dt\] Observe this is bounded, since \[|V_{f}\xi(x)| =|\int_{0}^{x}f(x-t)\xi(t)\,dt|\] \[\leq\int_{0}^{x}|f(x-t)|\,|\xi(t)|\,dt\] \[\leq[\int_{0}^{x}|f(x-t)|^{2}\,dt]^{\frac{1}{2}}[\int_{0}^{x}| \xi(t)^{2}|\,dt]^{\frac{1}{2}}\] \[\leq||f||_{2}||\xi||_{2}\] Now, since any \(L^{2}[0,1]\) function \(f\) is the \(L^{2}\) limit of a sequence of polynomials, \(\{p_{n}\},\) we have that \[|(V_{f}-V_{p_{n}})\xi(x)|\leq||f-p_{n}||_{2}||\xi||_{2}\] so that \[||V_{f}-V_{p_{n}}||\to 0 \tag{3}\] Let \(\mathcal{A}_{V}\) denote the Volterra operator algebra, by which we mean the operator norm closure of the polynomials \(p\) in \(V\) with \(p(0)=0.\) We have just shown that \(V_{f}\in\mathcal{A}_{V}\) if \(f\in L^{2}[0,1].\) We would like to characterize arbitrary \(T\in\mathcal{A}_{V}.\) **Theorem 1**.: 1. _Let_ \(f\in L^{1}[0,1].\) _Then for_ \(\rho\in L^{2}[0,1],\) _the function_ \((V_{f})\rho(x):=\int_{0}^{x}f(x-t)\rho(t)\,dt\in L^{2}[0,1],\) _and_ \[||(V_{f})\rho||_{2}\leq||f||_{1}\,||\rho||_{2}\text{ and hence }||V_{f}||\leq||f||_{1}\] 2. \(f\in L^{1}[0,1]\mapsto||V_{f}||\) _is a norm on_ \(L^{1}[0,1].\)__ 3. _Let_ \(T\in\mathcal{A}_{V}\) _and_ \(\{f_{n}\}\) _a sequence of functions in_ \(L^{1}[0,1]\) _such that_ \(V_{f_{n}}\to T\) _in_ \(\mathcal{A}_{V}.\) _If_ \(0<x_{0}<1,\) _then the sequence_ \(\{\int_{0}^{x_{0}}|f_{n}(x)|\,dx\}\) _is bounded._ 4. _Let_ \(0<x_{0}<1\) _and let_ \(S_{x_{0}}=\{f:f\in L^{1}[0,1],\)__\(f(x)=0\text{ for }x_{0}<x\leq 1\}.\) _Then on_ \(S_{x_{0}}\) _the operator norm_ \(f\mapsto||V_{f}||\) _and the_ \(L^{1}\)_-norm_ \(f\mapsto\int_{0}^{1}|f|\) _are equivalent._ 5. 
_Let_ \(T\in\mathcal{A}_{V}.\) _Then there is a measurable function_ \(f\) _on_ \([0,1],\) _integrable over compact subsets of_ \([0,1),\) _such that_ \(T=V_{f}.\)__ 6. _If_ \(f,g\) _are measurable functions on_ \([0,1]\) _such that_ \(V_{f},V_{g}\in\mathcal{A}_{V}\) _and_ \(\alpha\in\mathbb{C},\) _then_ \(V_{\alpha f}=\alpha V_{f}\) _and_ \(V_{f+g}=V_{f}+V_{g}.\)__ Proof.: Let \(\rho\in L^{2}[0,1]\) and \(f\in L^{1}[0,1].\) For \(x\in[0,1]\) define the function \(f_{x}\) by \[f_{x}(t)=\begin{cases}f(x-t)\text{ if }0\leq t\leq x\\ 0\text{ if }x<t\leq 1\end{cases}\] \[|(V_{f})\rho(x)| =|\int_{0}^{x}f(x-t)\rho(t)\,dt|\] \[\leq\int_{0}^{x}|f(x-t)|\,|\rho(t)|\,dt\] \[\leq\int_{0}^{1}(\sqrt{|f_{x}(t)|})\,(\sqrt{|f_{x}(t)|}|\rho(t)| )\,dt\] \[\leq(\int_{0}^{1}|f_{x}(t)|\,dt)^{\frac{1}{2}}\,(\int_{0}^{1}|f_{ x}(t)|\,|\rho(t)|^{2}\,dt)^{\frac{1}{2}}\] \[\leq||f_{x}||_{1}^{\frac{1}{2}}\,(\int_{0}^{1}|f_{x}(t)|\,|\rho(t )|^{2}\,dt)^{\frac{1}{2}}\] Thus \[\int_{0}^{1}|(V_{f})\rho(x)|^{2}\,dx \leq||f||_{1}\int_{0}^{1}\int_{0}^{1}|f_{x}(t)|\,|\rho(t)|^{2}\,dt\,dx\] \[\leq||f||_{1}\int_{0}^{1}(\int_{0}^{1}|f_{x}(t)|\,dx)|\rho(t)|^{2}\,dt\] \[\leq||f||_{1}||f||_{1}||||\rho|^{2}||_{1}\] \[\leq(||f||_{1}\,||\rho||_{2})^{2}\] Thus, \((V_{f})\rho\in L^{2}[0,1]\) and \(||(V_{f})\rho||_{2}\leq||f||_{1}\,||\rho||_{2}.\) Thus \(||V_{f}||\leq||f||_{1}.\) To prove (2), note that by definition, \[||V_{f}||=\sup\{|<(V_{f})\rho,\xi>|\ :||\rho||_{2}\leq 1,\ ||\xi||_{2}\leq 1\}.\] For \(f\in L^{1}[0,1],\) define \[\rho(t)=\overline{\operatorname{sgn}(f(1-t))}:=\begin{cases}\overline{f(1-t) }\ \text{if}\ f(1-t)\neq 0\\ 0\ \text{otherwise}\end{cases}\] and let \(F(x)=\int_{0}^{x}f(x-t)\rho(t)\,dt.\) Then \(F\) is continuous, and \(F(1)=||f||_{1}.\) Now let \(\xi(x)=\operatorname{sgn}(F(x)).\) Since \(||\rho||_{2},\ ||\xi||_{2}\leq 1,\) we have \[||V_{f}||\geq|<(V_{f})\rho,\xi>|=\int_{0}^{1}|F(x)|\,dx>0.\] For (3), if the sequence of integrals is not bounded, then for all \(x_{0}\leq x\leq 1,\) the values of \(T(1)(x)\) are either \(\pm\infty\) or undefined. But that contradicts that \(T(1)\in L^{2}[0,1].\) For (4), if the space \(S_{x_{0}}\) is complete in the operator norm \(f\mapsto||V_{f}||,\) then since by part (1) \(||V_{f}||\leq||f||_{1},\) it follows from a Corollary of the Open Mapping Theorem that the two norms are equivalent. Suppose that \(S_{x_{0}}\) is not complete in the operator norm. Then there is an element \(T\in\mathcal{A}_{V},\ ||T||=1,\) and a sequence \(\{f_{n}\}\subset S_{x_{0}}\) so that \(\{V_{f_{n}}\}\) converges to \(T\) with \(\int_{0}^{x_{0}}|f_{n}|\) unbounded. But that contradicts (3). For (5), let \(0<x_{0}<1\) and define the projection \(P_{x_{0}}\) on \(L^{2}[0,1]\) by \[P_{x_{0}}\rho(t)=\begin{cases}\rho(t)\ \text{if}\ 0\leq t\leq x_{0}\\ 0\ \text{otherwise}\end{cases}\] For \(f\in L^{1}[0,1],\) let \(f_{x_{0}}\in S_{x_{0}}\) be defined by \(f_{x_{0}}(t)=\begin{cases}f(t)\ \text{if}\ 0\leq t\leq x_{0}\\ 0\ \text{otherwise}\end{cases}\) Observe that \[P_{x_{0}}V_{f}P_{x_{0}}=P_{x_{0}}V_{f_{x_{0}}}P_{x_{0}}\] Also note that \(f\in S_{x_{0}}\mapsto||P_{x_{0}}V_{f}P_{x_{0}}||\) is a norm on \(S_{x_{0}},\) weaker than the operator norm. 
That it is a norm follows from (2), with the interval \([0,1]\) replaced by \([0,x_{0}].\) Let \(T\in\mathcal{A}_{V}\) and \(\{f_{n}\}\) a sequence of functions in \(L^{1}[0,1]\) such that \(\{V_{f_{n}}\}\) converges to \(T.\) Then \[\{P_{x_{0}}V_{f_{n}}P_{x_{0}}=P_{x_{0}}V_{f_{n,x_{0}}}P_{x_{0}}\}\text{ converges to }P_{x_{0}}TP_{x_{0}}\] In other words, the sequence \(\{V_{f_{n},x_{0}}\}\) converges in the norm defined in the previous paragraph. Since by (3) the integrals \(\int_{0}^{x_{0}}|f_{n}|\) are bounded, the limit of the sequence \(\{f_{n,x_{0}}\}\) with respect to this norm belongs to \(S_{x_{0}}.\) Thus, the limit of \(\{V_{f_{n},x_{0}}\}\) has the form \(V_{g}\) for some \(g\in S_{x_{0}}.\) Now if \(x_{0}<y<1,\) then \(\{f_{n,y}\}\) converges with respect to the norm \(q\in S_{y}\mapsto||P_{y}V_{q}P_{y}||\) to a function \(h.\) In other words, \(\{V_{f_{n,y}}\}\) converges in this norm to an operator \(V_{h}\) for some \(h\in S_{y},\) and furthermore, the restriction of \(h\) to the interval \([0,x_{0}]\) coincides with \(g.\) Thus, if \(x_{0}<x_{1}<x_{2}<\cdots<1\) with \(\lim_{n}x_{n}=1,\) then we obtain a sequence of functions \(f_{x_{n}}\) such that \(f_{x_{n+1}}\) restricted to \([0,x_{n}]\) equals \(f_{x_{n}}\) on that interval. Thus we obtain a function \(f,\) measurable on \([0,1],\) whose restriction to \([0,x_{n}]\) is equals the restriction of \(f_{x_{n}}\) to \([0,x_{n}].\) Finally, the fact that \(f\) is integrable over compact subsets of \([0,1)\) follows from the fact that all of the \(f_{x_{n}}\) are integrable. To verify (6), let \(\{p_{n}\},\ \{q_{n}\}\) be sequences of polynomials such that \(\{V_{p_{n}}\},\{\ V_{q_{n}}\}\) converge to \(V_{f},\ V_{g}\) respectively. Since \(V_{p_{n}+q_{n}}=V_{p_{n}}+V_{q_{n}},\) and \(V_{\alpha p_{n}}=\alpha V_{p_{n}},\) the conclusion follows. In [9] G. Little and J. B. Reade prove an asympotic estimate for the norm of powers of the Volterra operator \(V:\) \[\lim_{n}n!||V^{n}||=\frac{1}{2}.\] This result can be interpreted as an asymptotic estimate of the ratio of the operator norm to the \(L^{1}\)-norm on the set of functions \(f_{n}(x)=x^{n}.\) Indeed, since \(n!V^{n+1}=V_{f_{n}},\) we have \[\frac{||V_{f_{n}}||}{||f_{n}||_{1}}=\frac{n!||V^{n+1}||}{\frac{1}{n+1}}=(n+1)!||V^{n+1}||\to\frac{1}{2}\] as \(n\to\infty.\) Since the set of functions \(\{f_{n}:\ n=0,1,\dots\}\) spans a dense subspace of the operator algebra \(\mathcal{A}_{V},\) this result seems to suggest that the two norms may be equivalent. However, it turns out that the two norms are not equivalent, as the following example shows. _Example 11_.: Let \(\mathcal{S}\) be the space of Lebesgue measurable functions \(f\) on \([0,1]\) such that, for every \(0<x<1,\ f|_{[0,x]}\in L^{2}[0,x].\) For \(f\in\mathcal{S},\) define \[\rho_{x}(t)=\begin{cases}\frac{1}{c(x)}\overline{f(x-t)},\text{ if }c(x)\neq 0 \text{ and }t\leq x\\ 0,\text{ otherwise}\end{cases}\] where \(c(x)=[\int_{0}^{x}|f(x-t)|^{2}\,dt]^{\frac{1}{2}}.\) Then, clearly, for any function \(\rho\in L^{2}[0,1]\) of unit norm, \(|\int_{0}^{x}f(x-t)\rho(t)\,dt|\leq\int_{0}^{x}f(x-t)\rho_{x}(t)\,dt.\) Thus, if \(G(x)=(\int_{0}^{x}f(x-t)\rho_{x}(t)\,dt)^{2},\) and if \(f\) is not zero a.e. 
in the interval \([0,x],\) we have \[G(x) =\frac{1}{c(x)^{2}}(\int_{0}^{x}|f(x-t)|^{2}\,dt)^{2}\] \[=\int_{0}^{x}|f(x-t)|^{2}\,dt\] Hence \(||V_{f}||\leq[\int_{0}^{1}G(x)\,dx]^{\frac{1}{2}}.\) Let \(\mathcal{S}_{1}=\{f\in\mathcal{S}:\int_{0}^{1}\int_{0}^{x}|f|^{2}<\infty\},\) and for \[f\in\mathcal{S}_{1},\text{ set }||f||_{\sharp}=[\int_{0}^{1}\int_{0}^{x}|f|^{2}]^{ \frac{1}{2}}.\] If the kernel \(k_{f}\) is defined by \[k_{f}(x,t)=\begin{cases}f(x-t)\text{ if }t\leq x\\ 0\text{ if }t>x\end{cases}\] then the condition \(f\in\mathcal{S}_{1}\) is equivalent to \(k_{f}\in L^{2}([0,1]\times[0,1])\) and in that case \(||f||_{\sharp}=||k_{f}||_{2}.\) Thus \(||f||_{\sharp}\) is the Hilbert-Schmidt norm of the operator \(V_{f}.\) To show that \(\mathcal{A}_{V}\) properly contains \(L^{1}[0,1]\) it suffices to exhibit a function \(f\) with \(||f||_{\sharp}<\infty\) but \(f\notin L^{1}[0,1].\) Now let \(f=\sum_{n=1}^{\infty}\frac{2^{n}}{n}\,\chi_{[1-2^{-(n-1)},1-2^{-n})}.\) Then \(\int_{0}^{1}f=\sum_{n=1}^{\infty}\frac{1}{n}\) diverges, so \(f\notin L^{1}[0,1].\) However, \(||f||_{\sharp}^{2}=\int_{0}^{1}G\) is finite. To see this, view the area under the graph of \(G\) as divided into horizontal strips. The portion of the area between \(G(1-2^{-(n-1)})\) and \(G(1-2^{-n})\) is \(\frac{3}{2}\cdot\frac{1}{n^{2}}.\) Thus, \[\int_{0}^{1}G=\sum_{n=1}^{\infty}\frac{3}{2}\cdot\frac{1}{n^{2}}=\pi^{2}/4.\] It follows that the operator norm \(||V_{f}||\) is finite, and in fact \(||V_{f}||\leq\pi/2.\) If \(f_{n}:=f\chi_{[0,1-2^{-n})},\) then \(f_{n}\in L^{1}[0,1]\) and a calculation similar to the above shows that \(||V_{f}-V_{f_{n}}||\to 0\) as \(n\to\infty.\) Since \(V_{f_{n}}\in\mathcal{A}_{V}\) and \(\mathcal{A}_{V}\) is by definition complete, \(V_{f}\in\mathcal{A}_{V}.\) Thus, \(V_{f}\in\mathcal{A}_{V},\) but \(f\notin L^{1}[0,1].\) _Remark 5_.: There is no simple relationship between the Hilbert Schmidt norm of \(V_{f}\) and \(||f||_{1}.\) We have just seen that the Hilbert-Schmidt norm of \(V_{f}\) can be finite and \(f\) not in \(L^{1}[0,1].\) On the other hand, if \(f\in L^{1}[0,1]\) is such that, for some \(0<x<1,\ f|_{[0,x]}\notin L^{2}[0,x],\) then the \(L^{1}\) norm of \(f\) is finite but the Hilbert Schmidt norm of \(V_{f}\) is not. _Example 12_.: If \(f\) is measurable on \([0,1],\) the condition \(k_{f}\in L^{1}([0,1]^{2})\) does not imply \(V_{f}\in\mathcal{A}_{V}.\) Let \(f(x)=\frac{1}{(1-x)^{\frac{3}{2}}}.\) Then \[||k_{f}||_{1}=\int_{0}^{1}(\int_{0}^{x}f(t)\,dt)\,dx<\infty\] To show \(V_{f}\) is unbounded, it is enough to find \(\rho\in L^{2}[0,1]\) such that \((V_{f})\rho\notin L^{2}[0,1].\) Take \(\rho\) to be the constant \(1.\) Then \[||(V_{f})\rho||_{2}^{2}=\int_{0}^{1}|\int_{0}^{x}f(t)\,dt|^{2}\,dx\text{ diverges.}\] This shows that that the conditions of Theorem 1 part (5) on a measurable function \(f\) to satisfy \(V_{f}\in\mathcal{A}_{V}\) are necessary but not sufficient. ### Operator Algebraic properties of \(\mathcal{A}_{V}\) We turn now from a discussion of the norm of operators in \(\mathcal{A}_{V}\) to its properties as an algebra. Since the Volterra operator is quasinilpotent, the algebra \(\mathcal{A}_{V}\) is a commutative radical operator algebra. From the example of the weighted shift, we know that radical operator algebras \(\mathcal{A}_{T}\) can admit a gauge action. Does the same hold for \(\mathcal{A}_{V}?\) **Proposition 10**.: \(\mathcal{A}_{V}\) _does not admit a gauge action._ Proof.: We will make use of the formula for \(V^{n+1}\) (cf equation 2). 
Now by the Muntz-Szasz Theorem, the function \(g(x)=x\) can be uniformly approximated in \([0,1]\) by polynomials in \(\{x^{2},x^{3},\dots\}.\) Thus given \(\epsilon>0,\) we can find a polynomial \(p(x)=a_{2}x^{2}+a_{3}x^{3}+\dots+a_{n}x^{n}\) satisfying \(||g-p||_{\infty}=\sup_{0\leq x\leq 1}|x-p(x)|<\epsilon.\) Let \(\rho\in L^{2}[0,1]\) with \(||\rho||_{2}=1.\) Then \[||V^{2}(\rho)-\sum_{j=2}^{n}j!a_{j}V^{j+1}(\rho)||_{2} =||\int_{0}^{x}[(x-t)-p(x-t)]\rho(t)\,dt||_{2}\] \[\leq||\int_{0}^{x}|(x-t)-p(x-t)||\rho(t)|\,dt||_{2}\] \[\leq||\int_{0}^{1}\epsilon|\rho(t)|\,dt||_{2}\] \[\leq\epsilon\] Since this holds for all \(\rho\in L^{2}[0,1]\) of norm \(1,\) it follows that \[||V^{2}-\sum_{j=2}^{n}j!a_{j}V^{j+1}||\leq\epsilon.\] Now we assume that the operator algebra \(\mathcal{A}_{V}\) admits a gauge action, \(\gamma.\) Since by definition the Fourier coefficient \(\hat{V^{2}}(2)=1,\) and \((\sum_{j=2}^{n}\widehat{j!a_{j}V^{j+1}})(2)=0,\) and since the operation \(S\in\mathcal{A}_{V}\mapsto\hat{S}(2)=\int_{\mathbb{T}}\gamma_{\lambda}(S) \lambda^{-2}\,d|\lambda|\) is norm-decreasing, it follows that \[||V^{2}||=||\int_{\mathbb{T}}(\gamma_{\lambda}(V^{2}-\sum_{j=2}^{n}j!a_{j}V^{j +1})\lambda^{-2}\,d|\lambda|\,||\leq\epsilon\] which is absurd, since \(\epsilon>0\) was arbitrary. Thus \(\mathcal{A}_{V}\) does not admit a gauge action. If \(T\) is a quasinilpotent weighted shift, then the fact that the lattice \(\operatorname{Lat}T\) is discrete and \(\operatorname{Lat}V\) is continuous tells us that the two operators are not unitarily equivalent. But to show that the operator algebras \(\mathcal{A}_{T},\ \mathcal{A}_{V}\) are not isomorphic requires another argument. **Corollary 6**.: _Let \(T\) be a quasinilpotent weighted shift. Then the radical operator algebras \(\mathcal{A}_{T},\ \mathcal{A}_{V}\) are not completely isometrically isomorphic._ Proof.: Since \(\mathcal{A}_{T}\) admits a gauge action, and \(\mathcal{A}_{V}\) does not, they cannot be completely isometrically isomorphic. Define a 'convolution' on elements of \(\mathcal{A}_{V}\) as follows: if \(f,\ g\) are measurable functions on \([0,1]\) such that \(V_{f},\ V_{g}\in\mathcal{A}_{V},\) set \[f*g(x)=\int_{0}^{x}f(x-t)g(t)\,dt \tag{4}\] First observe that since the restrictions of \(f,g\) to the interval \([0,x]\) are integrable if \(x<1\), it follows that \(f*g\) is well defined. Furthermore, a calculation shows that, for \(\rho\in L^{2}[0,1]\), \[V_{f*g}(\rho)=V_{f}(V_{g}(\rho))\text{ and hence }V_{f*g}=V_{f}V_{g}. \tag{5}\] It is well known that the closed invariant subspaces of the Volterra operator \(V\) have the form \(\{\xi\in L^{2}[0,1]:\xi(t)=0\text{ for }0\leq t\leq x_{0}\}\), for \(0<x_{0}<1.\) ([2], Theorem 5.5. Note that their notation differs from our: their \(V\) is \(V^{*}\) here.) The same holds for the operator algebra, \(\mathcal{A}_{V}.\) This yields some information about the closed ideals of \(\mathcal{A}_{V}.\) **Proposition 11**.: _Let \(0<x_{0}<1.\) The subspace \(\mathcal{I}_{x_{0}}=\{V_{f}:\ f(t)=0,\text{ for }0\leq t\leq x_{0}\}\) is a closed ideal of \(\mathcal{A}_{V}.\)_ Proof.: Let \(V_{g}\in\mathcal{I}_{x_{0}}\) and \(V_{f}\in\mathcal{A}_{V}.\) Since \(V_{f}V_{g}=V_{f*g},\) and since we have \(f*g(x)=\int_{0}^{x}f(x-t)g(t),\) it is clear that if \(x\leq x_{0}\) then \(f*g(x)=0.\) Thus, \(\mathcal{I}_{x_{0}}\) is an ideal. The proof that \(\mathcal{I}_{x_{0}}\) is a closed ideal is similar to the proof of statement (2) of Theorem 1. 
Suppose \(V_{f}\in\mathcal{A}_{V},\ V_{f}\notin\mathcal{I}_{x_{0}}.\) This implies that the restriction of \(f\) to the interval \([0,x_{0}]\) is not zero. Let \(F(x)=\int_{0}^{x}f(x-t)\rho(t)\,dt,\) where \[\rho(t)=\begin{cases}sgn(f(x_{0}-t)),\text{ if }t\leq x_{0}\\ 0,\text{ if }t>x_{0}\end{cases}\] Then \(F\) is continuous, and \(F(x_{0})=\int_{0}^{x_{0}}|f(x_{0}-t)|\,dt>0.\) Let \[\xi(t)=\begin{cases}sgn(F(x)),\text{ if }x\leq x_{0}\\ 0,\text{ if }x>x_{0}\end{cases}\] Now let \(V_{g}\in\mathcal{I}_{x_{0}}\) be arbitrary. Then \[<(V_{f}-V_{g})(\rho),\xi>=\int_{0}^{x_{0}}|F(x)|\,dx\text{ is a positive constant.}\] It follows that no sequence \(\{V_{g_{n}}\}\subset\mathcal{I}_{x_{0}}\) converges to \(V_{f}.\) Hence \(\mathcal{I}_{x_{0}}\) is a closed ideal. _Remark 6_.: In Proposition 7 we established a one-to-one correspondence between closed ideals of the operator algebra \(\mathcal{A}_{T}\) of the weighted shift \(T,\) and invariant subspaces. If such a relationship were to exist for the Volterra operator algebra, then we would have a complete description of the closed ideals of \(\mathcal{A}_{V}.\) Note that if an operator algebra \(\mathcal{A}\) is completely isometrically represented on a Hilbert space \(H\) with cyclic vector \(\xi_{0},\) then given an invariant subspace \(H_{1}\) there is a closed ideal \(\mathcal{I}\) by \(\mathcal{I}=\{a\in\mathcal{A}:a\xi_{0}\in H_{1}\}.\) In the other direction, there is no assurance that if \(\mathcal{I}\) is a closed ideal of \(\mathcal{A},\) that the subspace \(\mathcal{I}\cdot\xi_{0}\) is closed in \(H.\) **Lemma 4**.: _Let \(\alpha,\beta\in(0,1)\) and suppose \(f,g\) are measurable functions on \([0,1]\) such that \(V_{f},V_{g}\in\mathcal{A}_{V},\) and_ 1. _If_ \(f\) _is supported on_ \([\alpha,1]\) _and_ \(g\) _is supported on_ \([\beta,1],\) _then_ \(f*g\) _is supported on_ \([(\alpha+\beta),1]\) _if_ \(\alpha+\beta<1,\) _and_ \(f*g=0\) _if_ \(\alpha+\beta\geq 1.\)__ 2. _In particular, if_ \(\beta=1-\alpha,\) _then_ \(f*g=0.\)__ Proof.: The proof is a straightforward computation, making use of the convolution formula, equation 4. **Corollary 7**.: _If \(f\) is a measurable function on \([0,1]\) such that \(V_{f}\in\mathcal{A}_{V}\) and \(f\) is supported on \([\alpha,1],\) then \((V_{f})^{n}=0\) if \(n\alpha\geq 1.\)_ Proof.: This follows from repeated application of Lemma 4 The following result appears as Proposition 2.13 of [11]. It is proved there using methods entirely different from those in this paper. 
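Before turning to that result, Lemma 4 and Corollary 7 can be checked numerically with a crude discretization of the operators \(V_{f}\); this sketch is not from the paper, and the grid size and the supports \([\alpha,1]\), \([\beta,1]\) are arbitrary illustrative choices. The same discretization also lets one watch the Little–Reade quantity \(n!\,||V^{n}||\) discussed earlier drift toward \(1/2\).

```python
import numpy as np
from math import factorial

N = 400                                  # grid size (illustrative)
h = 1.0 / N
t = (np.arange(N) + 0.5) * h             # midpoints of [0, 1]

def volterra_matrix(f):
    """Discretize (V_f rho)(x) = int_0^x f(x - s) rho(s) ds on the midpoint grid."""
    X, S = np.meshgrid(t, t, indexing="ij")
    return np.where(S <= X, f(X - S), 0.0) * h

# Little-Reade: n! ||V^n|| -> 1/2 (here V = V_f with f = 1).
V = volterra_matrix(lambda x: np.ones_like(x))
for n in (2, 4, 6):
    print(n, factorial(n) * np.linalg.norm(np.linalg.matrix_power(V, n), 2))

# Lemma 4: if supp f = [alpha, 1] and supp g = [beta, 1], then f*g is supported on [alpha + beta, 1].
alpha, beta = 0.4, 0.3
f = lambda x: np.where(x >= alpha, 1.0, 0.0)
g = lambda x: np.where(x >= beta, np.cos(x), 0.0)
prod = volterra_matrix(f) @ volterra_matrix(g)       # discretization of V_{f*g} = V_f V_g
print(np.abs(prod[t < alpha + beta]).max())          # rows below alpha + beta vanish

# Corollary 7: (V_f)^n = 0 once n * alpha >= 1 (n = 3 suffices for alpha = 0.4).
print(np.linalg.norm(np.linalg.matrix_power(volterra_matrix(f), 3), 2))
```

The last two prints return exactly zero, since the discretized kernels inherit the support constraints of \(f\) and \(g\).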
**Corollary 8**.: _The nilpotent elements in the Volterra operator algebra \(\mathcal{A}_{V}\) are dense._ Proof.: Let \(V_{f}\in\mathcal{A}_{V}.\) Since the elements of the form \(V_{g},\ g\in L^{1}[0,1]\) are dense in \(\mathcal{A}_{V},\) given \(\epsilon>0,\) let \(g\in L^{1}[0,1]\) satisfy \(||V_{f}-V_{g}||<\epsilon.\) There is \(\delta>0\) such that if \(E\subset[0,1]\) is measurable with \(\mu(E)\leq\delta,\) then \(\int_{E}|g|\leq\epsilon.\) Define \(h\in L^{1}[0,1]\) by \[h(x)=\begin{cases}0,\ \text{if}\ 0\leq x<\delta\\ g(x),\ \text{if}\ \delta\leq x\leq 1\end{cases}\] Then \[||V_{f}-V_{h}||\leq||V_{f}-V_{g}||+||V_{g}-V_{h}||<\epsilon+||g-h||_{1}<2\epsilon.\] Observe that the second inequality above makes use of part (1) of Theorem 1, since \[||V_{g}-V_{h}||=||V_{g-h}||\leq||g-h||_{1}.\] Let \(n\in\mathbb{N}\) satisfy \(n\delta\geq 1.\) It follows from Corollary 7 that \(V_{h}\) is nilpotent in \(\mathcal{A}_{V}.\) _Remark 7_.: \(L^{1}[0,1]\) is a Banach algebra under the convolution \(f*g(x)=\int_{0}^{x}f(x-t)g(t)\,dt.\) Hence this Banach algebra is dense in \(\mathcal{A}_{V},\) and the convolution is the restriction of that in \(\mathcal{A}_{V}\) to \(L^{1}.\) One version of the Titchmarsh convolution theorem is: if \(f,g\in L^{1}[0,1]\) and \(f*g=0,\) then there is an \(\alpha\in[0,1]\) such that \(\mathrm{supp}f\subset[\alpha,1]\) and \(\mathrm{supp}g\subset[1-\alpha,1]\) ([2] Problem 5.4 and [8] Theorem 10, Sec. 38.3). The next result shows that the Titchmarsh convolution theorem also holds in the larger algebra \(\mathcal{A}_{V}.\) In particular, the converse of statement (2) of Lemma 4 holds. **Corollary 9**.: _Let \(f,g\) be real-valued measurable functions on \([0,1]\) such that \(V_{f},\ V_{g}\in\mathcal{A}_{V}.\) Then \(V_{f*g}=0\) if and only if there exists \(\alpha\in[0,1]\) such that \(\mathrm{supp}f\subset[\alpha,1]\) and \(\mathrm{supp}g\subset[1-\alpha,1].\)_ Proof.: One direction follows from Lemma 4. If \(V_{f*g}=V_{f}V_{g}=0,\) then \(V(V_{f}V_{g})=V_{F}V_{g}=0\) where \(F(x)=\int_{0}^{x}f(t)\,dt.\) It follows that \(V_{F}V_{g}\xi=0\) for all \(\xi\in L^{2}[0,1].\) Thus \(\int_{0}^{x}F(x-t)(V_{g}\xi)(t)\,dt=0\) for a.a. \(x\in[0,1].\) As \(F=V_{f}(1)\) and \(V_{g}\xi\) are both in \(L^{2}[0,1]\subset L^{1}[0,1],\) it follows from the classical Titchmarsh theorem that there exists \(\alpha\in[0,1]\) such that \(F=0\) in \([0,\alpha]\) and \(\int_{0}^{x}g(x-t)\xi(t)\,dt=0\) for \(x\in[0,1-\alpha].\) Thus \(f=0\) in \([0,\alpha].\) Suppose \(g\neq 0\) in \([0,1-\alpha].\) Then for some \(x_{0}\in[0,1-\alpha],\ \int_{0}^{x_{0}}|g|\,dt\neq 0.\) Define \(\xi(t)=sgn(g(x_{0}-t)).\) Then \(\int_{0}^{x}g(x-t)\xi(t)\,dt\) is a nonzero continuous function in the interval \([0,1-\alpha],\) a contradiction. The author would like to thank Chris Phillips for his comments on an earlier version of this paper.
2309.05352
Neural Discovery of Permutation Subgroups
We consider the problem of discovering subgroup $H$ of permutation group $S_{n}$. Unlike the traditional $H$-invariant networks wherein $H$ is assumed to be known, we present a method to discover the underlying subgroup, given that it satisfies certain conditions. Our results show that one could discover any subgroup of type $S_{k} (k \leq n)$ by learning an $S_{n}$-invariant function and a linear transformation. We also prove similar results for cyclic and dihedral subgroups. Finally, we provide a general theorem that can be extended to discover other subgroups of $S_{n}$. We also demonstrate the applicability of our results through numerical experiments on image-digit sum and symmetric polynomial regression tasks.
Pavan Karjol, Rohan Kashyap, Prathosh A P
2023-09-11T09:53:28Z
http://arxiv.org/abs/2309.05352v1
# Neural Discovery of Permutation Subgroups ###### Abstract We consider the problem of discovering subgroup \(H\) of permutation group \(S_{n}\). Unlike the traditional \(H\)-invariant networks wherein \(H\) is assumed to be known, we present a method to discover the underlying subgroup, given that it satisfies certain conditions. Our results show that one could discover any subgroup of type \(S_{k}(k\leq n)\) by learning an \(S_{n}\)-invariant function and a linear transformation. We also prove similar results for cyclic and dihedral subgroups. Finally, we provide a general theorem that can be extended to discover other subgroups of \(S_{n}\). We also demonstrate the applicability of our results through numerical experiments on image-digit sum and symmetric polynomial regression tasks. ## 1 Introduction ### Background Deep Learning has proven to be a successful paradigm for learning the underlying regularities of sensory data such as images, text, and audio (Brown et al., 2020; He et al., 2016; Ramesh et al., 2022). The data in the physical world possess a predefined structure with a low-dimensional manifold approximation within a higher dimensional euclidean space (Cayton, 2005; Scholkopf et al., 1998). However, the task of supervised learning in such a high-dimensional data space demands a large number of data points to counter the curse of dimensionality. Thus, universal function approximations using neural networks in such a setting can be prohibitively expensive to curate large datasets for diverse applications such as medical imaging. This calls for the need for inductive bias to be incorporated into our networks such that they can utilize these priors for learning valuable representations in the feature space. Convolutional Neural Networks proposed by (LeCun et al., 1995) incorporate translation equivariance and thus preserve translation symmetry. This is highly effective for perception tasks since it enables the model with a notion of locality and symmetry, i.e., the input and label are both invariant to shifts (preserves this property across layers), and has likewise shown substantial gains in image recognition tasks as demonstrated in (Szegedy et al., 2017; He et al., 2016). However, from a group-theoretic perspective, CNN happens to represent a particular case of invariance under the action of a specific group. This leads to studying and understanding its usability when extended to a more general setting, i.e., equivariance or invariance to any generic group action. Thus, learning such representations across neural nets ensures preserving symmetry across the network and efficiently discovering the underlying factors of data variations by utilizing these priors. ### Group Invariance and Equivariance Learning symmetries from data has been studied extensively in (Senior et al., 2020; Raviv et al., 2007; Monti et al., 2017; Rossi et al., 2022). Invariant and equivariant classes of functions impose a powerful inductive prior to our models in a statistically efficient manner which aids in learning useful representations on a wide range of data (Bogatskiy et al., 2020; Esteves, 2020). Group equivariant or invariant networks (Cohen et al., 2018; Esteves et al., 2018) exploit the inherent symmetrical structure in the data, i.e., equivariance or invariance to a certain set of group operations (geometric priors) and can thus result in a significant reduction in the sample complexity and lead to better generalization. 
This has ubiquitous applications in various tasks such as predicting protein interactions (Gainza et al., 2020) and estimating population statistics (Zaheer et al., 2017). One of the important classes of group invariance networks corresponds to the permutation group \((S_{n})\), i.e., the group of all permutations of a set of cardinality \(n\). Zaheer et al. (2017) have focused extensively on the applicability of permutation equivariant and invariant functions on arbitrary objects such as sets. Kicki et al. (2020), on the other hand, propose a \(G\)-invariant network to approximate functions that are invariant under the action of any given permutation subgroup of \(S_{n}\). Moreover, it is crucial to consider subgroups of \(S_{n}\), since any finite group is isomorphic to a subgroup of \(S_{n}\) _(Cayley's theorem)_ for some \(n\). For example, the Quaternion group \(Q_{8}\) is isomorphic to a subgroup of \(S_{8}\). In addition, other interesting applications involve functions that are invariant under subgroups of \(S_{n}\). For instance, the area of an \(n\)-polygon is a \(\mathbb{Z}_{n}\)-invariant function of the polygon's vertices (Kicki et al., 2020).

### Contributions

In most of the works mentioned earlier, the group (or subgroup) is assumed to be known a priori. This restricted modeling choice leads to reduced flexibility and makes incorporating symmetries into our networks highly infeasible for real-world applications where the underlying structure is unknown. Motivated by this, we demonstrate a general framework, i.e., a \(G\)-invariant network and a linear transformation, for discovering the underlying subgroup of \(S_{n}\) under certain conditions. In this work, we propose a general framework to discover the underlying subgroup of \(S_{n}\) under a broad set of conditions. Our main contributions can be summarized as follows: * We prove that we could learn any conjugate group (with respect to \(G\)) via a linear transformation and a \(G\)-invariant network. * We extend this approach, i.e., a linear transformation and a \(G\)-invariant network, to different classes of subgroups such as the permutation group of \(k\) (out of \(n\)) elements \(S_{k}\), cyclic subgroups \(\mathbb{Z}_{k}\) and dihedral subgroups \(D_{2k}\). The \(G\)-invariant networks for the above families are \(S_{n}\), \(\mathbb{Z}_{n}\) and \(D_{2n}\), respectively. In the latter two cases, \(k\) should divide \(n\). * We prove a general theorem that can guide us to discover other classes of subgroups. * We substantiate the above results through experiments on image-digit sum and symmetric polynomial regression tasks.

## 2 Prior Work

### Group Invariant and Equivariant Networks

Significant progress has been made in incorporating invariances into deep neural nets in the last decade (Cohen et al., 2019; Cohen and Welling, 2016; Ravanbakhsh et al., 2017; Ravanbakhsh, 2020; Wang et al., 2020). We observe that most of the invariant neural networks proposed in the literature assume the knowledge of the underlying symmetry group. Various generalizations, i.e., group equivariant or invariant neural networks, are presented in (Cohen et al., 2019; Kondor et al., 2018). Cohen and Welling (2016) introduce _Group Equivariant Convolutional Neural Networks (G-CNNs)_ as a natural extension of the Convolutional Neural Network to construct a representation with the structure of a linear G-space. Further, Cohen et al. (2019) present a general theory for studying G-CNNs on homogeneous spaces and illustrate a one-to-one correspondence between linear equivariant maps of feature spaces and convolution kernels. Cohen and Welling (2016) provide a theoretical framework to study steerable representations in convolutional neural networks and establish mathematical connections between representation learning and representation theory. Ravanbakhsh (2020) presents the universality of invariant and equivariant MLPs with a single hidden layer. Additionally, they show the unconditional universality result for Abelian groups. Kondor and Trivedi (2018) utilize both representation theory and noncommutative harmonic analysis to establish the convolution formulae in a more general setting, i.e., invariance under the action of any compact group.

### Permutation Invariant and Equivariant Networks

Zaheer et al. (2017) demonstrate the applicability of equivariant and invariant networks on various set-like objects. Further, they show that any permutation invariant function can be expressed in a standard form, i.e., \(\rho\left(\sum_{i}\phi\left(x_{i}\right)\right)\), which corresponds to an elegant deep neural network architecture. Janossy pooling (Murphy et al., 2018) extends the same to build permutation invariant functions using a generic class of functions. The works, as mentioned earlier, focus mainly on the permutation group \(S_{n}\). Recent works by Kicki et al. (2020) and Maron et al. (2019) provide a general architecture invariant to any given subgroup of \(S_{n}\). Kicki et al. (2020) design a \(G\)-invariant neural network for approximating functions \(f:X\to\mathbb{R}\) (it can specifically approximate any _G-invariant function_) using a \(G\)-equivariant network and a sum-product formulation, where \(X\) is a compact subset of \(\mathbb{R}^{n\times m}\) for some \(n,m>0\), for any given permutation subgroup \(G\) of \(S_{n}\). They extend this work to study the invariance properties of hierarchical groups \(G<H\leq S_{n}\). However, in most cases, the underlying subgroup is generally unknown.

### Automatic Symmetry Discovery

Dehmamy et al. (2021) introduce the _Lie algebra convolutional network (L-Conv)_, an infinitesimal version of G-Conv, for automatic symmetry discovery. Their framework for continuous symmetries relies on _Lie algebras_ rather than _Lie groups_ and can thus encode an infinite group without discretizing (Cohen and Welling, 2016) or summing over irreps. They show that the \(L\)-Conv network can serve as a building block for constructing any group equivariant feedforward architecture. They also unveil interesting connections between the equivariance loss and Lagrangians in field theory, and between robustness and the Euler-Lagrange equations. However, these apply only to _Lie groups_ and are not specific to subgroups of the permutation groups. Anselmi et al. (2019) propose to learn symmetry-adapted representations and also deduce a regularization scheme for learning these representations without assuming the knowledge of the underlying subgroup (of \(S_{n}\)). However, their proposed solution is implemented in an unsupervised way. Benton et al. (2020) and Zhou et al. (2020) also propose different methods for learning symmetries when the group \(G\) is unknown.

## 3 Preliminaries

This section gives a brief overview of various mathematical concepts used in our work. Let \(G\) be a group. 1.
**Group action** :- The action of \(G\) on a set \(X\) is defined using the following map (written as \(g\cdot x,\ \forall g\in G\) and \(x\in X\)) : \[\theta:G\times X\to X,\] (1) satisfying the following properties : * \(g_{1}\cdot(g_{2}\cdot x)=(g_{1}g_{2})\cdot x\quad\forall g_{1},g_{2}\in G\) and \(x\in X\), * \(1\cdot x=x,\quad\forall x\in X\) where \(1\) is the identity element of \(G\). 2. **Group invariant function** :- A function \(f:X\to Y\) is said to be group invariant with respect to \(G\), if, \[f(x)=f(g\cdot x),\quad\forall g\in G\text{ and }x\in X\] (2) We call \(f\) a \(G\)-invariant function. 3. **Group equivariant function** :- A function \(f:X\to Y\) is said to be group equivariant with respect to \(G\), if for any \(g\in G\), \(\exists\,\tilde{g}\in G\), such that \[f(g\cdot x)=\tilde{g}\cdot f(x),\forall x\in X\] (3) We call \(f\) a \(G\)-equivariant function. 4. **Conjugate subgroups** :- Two subgroups \(G_{1}\) and \(G_{2}\) of \(G\) are said to be conjugates, if \(\exists g\in G\) such that, \[G_{2}=gG_{1}g^{-1}:=\{gkg^{-1}:k\in G_{1}\}\] (4) 5. **Normal subgroup** :- A subgroup \(N\) is said to be normal in \(G\), if \(\forall g\in G\) \[gNg^{-1}=N\] (5) i.e., there are no subgroups that are conjugate to \(N\). We describe the notations used for various subgroups of \(S_{n}\) in Table (1). Henceforth, unless explicitly mentioned, we follow the notations mentioned in Table (1). ## 4 Proposed Work ### Problem statement We consider the problem of learning an \(H\)-invariant function \(f:X\to\mathbb{R}\), where \(X=[0,1]^{n}\subset R^{n}\) and \(H\) is the unknown subgroup of \(S_{n}\). In general, learning such a function is intractable. However, we show that it is possible to learn such a function, i.e., discover the underlying subgroup \(H\), where \(H\) belongs to a certain class of subgroups (we explicitly state our conditions in Theorem 4.3, 4.4 and 4.5). The general consequence of our analysis is that learning a \(H\)-invariant function is thus equivalent to learning a \(G\)-invariant function along with a linear transformation, given that \(G\) and \(H\) satisfy certain conditions. Since any given \(G\) can have several such subgroups, we propose to learn the underlying subgroup \(H\) by exploiting the existing structures using a family of \(G\)-invariant functions (such as the one mentioned in Zaheer et al. (2017) for the permutation group \(S_{n}\)) and a learnable linear transformation. We formalize these ideas in the coming subsections. To prove our results, we employ the following theorem regarding \(S_{n}\)-invariant functions (Zaheer et al., 2017), which shows that any such function can be expressed in a canonical form. **Theorem 4.1** (Deep sets).: \(f:X=[0,1]^{n}\to\mathbb{R}\) _is a permutation invariant (\(S_{n}\)-invariant) continuous function iff if has the representation,_ \[f(x)=\rho\left(\sum_{i=1}^{n}\gamma(x_{i})\right),\ x=[x_{1},x_{2},\ldots x_{ n}]^{T} \tag{6}\] _for some continuous outer and inner functions \(\rho:\mathbb{R}^{n+1}\to\mathbb{R}\), \(\gamma:[0,1]\to\mathbb{R}^{n+1}\)._ We get the following result if we consider the permutations of the first \(k\) elements. 
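Before turning to that corollary, the canonical form in Theorem 4.1 can be illustrated with a minimal numerical sketch. The polynomial feature map \(\gamma(t)=(1,t,\dots,t^{n})\) matches the construction used in the proof, while the outer function \(\rho\) below is an arbitrary continuous choice for illustration, not the one built in the proof.

```python
import numpy as np

n = 6
gamma = lambda x: np.array([x ** p for p in range(n + 1)])   # gamma: [0,1] -> R^(n+1)
rho = lambda v: float(np.tanh(v).sum())                      # any continuous outer function

def f(x):
    # canonical form of Theorem 4.1: rho applied to the permutation-invariant sum of gamma(x_i)
    return rho(sum(gamma(xi) for xi in x))

rng = np.random.default_rng(0)
x = rng.random(n)
print(f(x), f(x[rng.permutation(n)]))   # equal up to floating point: f is S_n-invariant by construction
```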
Table 1: Descriptions of notations

| Symbol | Description |
| --- | --- |
| \(S_{n}\) | Permutation group of \(n\) elements |
| \(S_{k}^{(0)}\) | Permutation subgroup of first \(k\) elements |
| \(S_{k}\) | Permutation subgroup of random \(k\) elements |
| \(\mathbb{Z}_{n}\) | Cyclic subgroup of \(n\) elements |
| \(\mathbb{Z}_{k}^{(0)}\) | Cyclic subgroup of first \(k\) elements |
| \(\mathbb{Z}_{k}\) | Cyclic subgroup of random \(k\) elements |
| \(D_{2n}\) | Dihedral subgroup of \(n\) elements |
| \(D_{2k}^{(0)}\) | Dihedral subgroup of first \(k\) elements |
| \(D_{k}\) | Dihedral subgroup of random \(k\) elements |
| \(A_{n}\) | Alternating subgroup of \(n\) elements |
| \(A_{k}\) | Alternating subgroup of random \(k\) elements |

**Corollary 4.1.1**.: _\(f:[0,1]^{n}\to\mathbb{R}\) is an \(S_{k}^{(0)}\)-invariant continuous function iff it has the representation_ \[f(x)=\rho\left(\sum_{i=1}^{k}\gamma(x_{i}),\ x_{k+1},\ldots,x_{n}\right) \tag{7}\] Proof.: To prove Theorem 4.1, it has been shown (Zaheer et al., 2017) that \(\mathcal{X}^{(n)}=\{x_{1},x_{2},\ldots,x_{n}\subset[0,1]^{n}:x_{1}\leq x_{2}\leq x_{3}\cdots\leq x_{n}\}\) is homeomorphic to \(\sum_{i=1}^{n}\gamma(x_{i})\), where \[\gamma(t)=\left[1,t,t^{2},\ldots,t^{n}\right]^{T} \tag{8}\] Hence, \(\mathcal{X}^{(n:k)}=\{x_{1},x_{2},\ldots,x_{n}\subset[0,1]^{n}:x_{1}\leq x_{2}\leq x_{3}\cdots\leq x_{k}\}\) is homeomorphic to \(\sum_{i=1}^{k}\gamma(x_{i})\times[0,1]^{n-k}\). Let \(E(x)=\left[\,\sum_{i=1}^{k}\gamma(x_{i}),\ x_{k+1},\ldots,x_{n}\right]^{T}\). Then \(E\) is a homeomorphism from \(\mathcal{X}^{(n:k)}\) to \(Im(E)\) (the image of \(E\)). If we set \(\rho=fE^{-1}\), we get \(\rho\left(E(x)\right)=f(x)\). We use the same definition of \(\gamma\) (Zaheer et al., 2017) provided in eq. (8) in the subsequent results as well. Now, we state our first result using the conjugacy relation between subgroups. **Lemma 4.2**.: _Any \(S_{k}\)-invariant function \(\psi\) can be realized through composition of an \(S_{k}^{(0)}\)-invariant function \(\phi\) and a linear transformation \(M\), i.e., \(\psi=\phi\cdot M\). In addition, \(\psi\) can be realised through the following form,_ \[\psi(x)=\rho\left(\sum_{i=1}^{k}\gamma\left(m_{i}^{T}x\right),\ m_{k+1}^{T}x,\ldots,m_{n}^{T}x\right), \tag{9}\] _where \(m_{i}\) is the \(i^{th}\) row of \(M\)._ Proof.: Note that any \(S_{k}\) is conjugate to \(S_{k}^{(0)}\). Thus, \(\exists\ g\in S_{n}\) such that \[S_{k}^{(0)}=gS_{k}g^{-1} \tag{10}\] Let \(\psi:X\to\mathbb{R}\) be an \(S_{k}\)-invariant function, i.e., \[\psi(x)=\psi(h\cdot x),\quad\forall h\in S_{k},\ x\in X\] \[\psi((g^{-1}g)\cdot x)=\psi((g^{-1}ug)\cdot x),\quad\forall u\in S_{k}^{(0)}\] \[(\psi g^{-1})(g\cdot x)=(\psi g^{-1})(u\cdot(g\cdot x))\] \[(\psi g^{-1})(Mx)=(\psi g^{-1})(u\cdot(Mx)) \tag{11}\] From eq. (11), we see that \(\phi=\psi\cdot g^{-1}\) and \(M=g\) are the desired \(S_{k}^{(0)}\)-invariant function and linear transformation, respectively, and \(\psi=\phi\cdot M\). We get the second part of the result by applying Corollary 4.1.1 to \(\phi\). We could also relax the conjugacy condition, i.e., discover subgroups of type \(S_{k}\) when \(k\) itself is unknown. This is formalized in the following result.
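Before stating that result, here is a small sketch of the construction in Lemma 4.2. The chosen coordinate set, the conjugating permutation \(g\), and the outer nonlinearity are illustrative assumptions, not taken from the paper.

```python
import numpy as np

n, k = 6, 3
idx = np.array([1, 3, 4])                 # the k coordinates the unknown S_k permutes (assumed here)
gamma = lambda x: np.array([x ** p for p in range(n + 1)])

# g in S_n sends the chosen coordinates to the first k positions, so g S_k g^{-1} = S_k^(0);
# M is the corresponding permutation matrix: (M x)_i = x_{g[i]}.
g = np.concatenate([idx, np.setdiff1d(np.arange(n), idx)])
M = np.eye(n)[g]

def psi(x):
    # eq. (9): an S_k^(0)-invariant function applied to Mx, with an arbitrary continuous outer rho
    y = M @ x
    return float(np.tanh(np.concatenate([sum(gamma(yi) for yi in y[:k]), y[k:]])).sum())

rng = np.random.default_rng(1)
x = rng.random(n)
x_perm = x.copy()
x_perm[idx] = x[rng.permutation(idx)]     # act with a random element of the unknown S_k
print(psi(x), psi(x_perm))                # equal up to floating point: psi is S_k-invariant
```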
**Theorem 4.3** (Subgroups of type \(S_{k}\)).: _Any \(S_{k}\)-invariant function (\(k\leq n\)) \(\psi\), can be realised using an \(S_{n}\)-invariant function and a linear transformation, in specific, it can be realised through the following form,_ \[\psi(x)=\left(\phi\cdot\hat{M}\right)(x)=\rho\left(\begin{bmatrix}(I-M)x\\ \sum_{i=1}^{n}\gamma\left(m_{i}^{T}x\right)\end{bmatrix}\right) \tag{12}\] _where \(\hat{M}=\begin{bmatrix}I-M\\ M\end{bmatrix}\) and_ \[\phi(y)=\left[y_{1},\ldots,y_{n},\ \sum_{i=1}^{n}\gamma(y_{n+i})\right]^{T}\] Proof.: Since \(S_{k}\) is conjugate to \(S_{k}^{0}\), it is enough to prove the result for \(S_{k}^{0}\)-invariant function. Hence, the goal is to show that \((I-M)X\times\sum_{i=1}^{n}\gamma\left(m_{i}^{T}X\right)\)is homeomorphic to \(\sum_{m=1}^{k}\gamma(X_{m})\times[0,1]^{n-k}\) (from Corollary 4.1.1 and Lemma 4.2) for some linear transformation \(M\). Suppose, \[M=\begin{bmatrix}I_{k\times k}&0\\ 0&0\end{bmatrix}, \tag{13}\] then, \[\begin{bmatrix}(I-M)x\\ \sum_{i=1}^{n}\gamma\left(m_{i}^{T}x\right)\end{bmatrix}=\begin{bmatrix} \textbf{0}^{(k)}\\ x_{k+1}\\ \cdot\\ \cdot\\ x_{n}\\ B+\sum_{i=1}^{k}\gamma\left(x_{i}\right)\end{bmatrix}, \tag{14}\] where \(B=(n-k)\gamma(0)\) and \(\textbf{0}^{(k)}\) is \(k\)-dimensional zero vector. Thus, from RHS of the eq. (14), the above claim follows. (Note that, the function \(Mx\mapsto\sum_{i=1}^{n}\gamma\left(m_{i}^{T}x\right)\) is \(S_{n}\)-invariant and \(\phi\) is \(S_{2n}^{n}\)-invariant function). We now extend our method to cyclic and dihedral subgroups of \(S_{n}\) and state the following result. **Theorem 4.4** (Cyclic and Dihedral subgroups).: _If \(k|n\), any \(\mathbb{Z}_{k}\)-invariant (or \(D_{2k}\)-invariant) function \(\psi\), can be realised using a \(\mathbb{Z}_{n}\)-invariant (or \(D_{2n}\)-invariant) function \(\phi\) and a linear transformation, in specific, it can be realised through the following form,_ \[\psi(x)=\left(\phi\cdot\hat{M}\right)(x) \tag{15}\] _where \(\hat{M}=\begin{bmatrix}M\\ I-L\end{bmatrix}\) for some \(M,L\in\mathbb{R}^{n\times n}\)._ Proof.: In this proof, without loss of generality, we prove the result for \(\mathbb{Z}_{k}^{(0)}\)-invariant function. Suppose, \[M=\begin{bmatrix}I_{k\times k}&0\\ I_{k\times k}&0\\ \vdots&\vdots\\ I_{k\times k}&0\end{bmatrix},\quad L=\begin{bmatrix}I_{k\times k}&0\\ 0&0\end{bmatrix}, \tag{16}\] Since \(k|n\), we can stack the \(I_{k\times k}\) matrices as shown in eq. (16). Then, \(M:X\to X\) is defined as, \[x=[x_{1},x_{2}\ldots x_{n}]^{T}\longmapsto Mx= [x_{1},x_{2},\ldots x_{k},\] \[x_{1},x_{2},\ldots,x_{k},\] \[\vdots\] \[x_{1},x_{2},\ldots,x_{k}]^{T} \tag{17}\] Under the action of \(\mathbb{Z}_{k}\) (\(h\cdot x\), for some \(h\in\mathbb{Z}_{k}\)), we get that, \[x\xrightarrow{h}x^{\prime}=[x_{u},x_{u+1},\ldots,x_{k},x_{1},\ldots,x_{u-1}]^ {T} \tag{18}\] which corresponds to (\(g\cdot(Mx)\), for some \(g\in\mathbb{Z}_{n}\)), \[Mx\xrightarrow{g} Mx^{\prime}=[x_{u},x_{u+1},\ldots,x_{k},x_{1},\ldots,x_{u-1}\] \[x_{u},x_{u+1},\ldots,x_{k},x_{1},\ldots,x_{u-1}\] \[\vdots\] \[x_{u},x_{u+1},\ldots,x_{k},x_{1},\ldots,x_{u-1}]^{T} \tag{19}\] Similarly, the converse is also true, i.e., \(\mathbb{Z}_{n}\)-action on \(Mx\) corresponds to \(\mathbb{Z}_{k}\)-action on \(x\). Hence, the \(\mathbb{Z}_{k}\)-invariant function of \(x\) corresponds to \(\mathbb{Z}_{n}\)-invariance of \(Mx\). Note that, the \(\mathbb{Z}_{n}\)-invariance of the function \(\phi\) is with respect to the first \(n\) elements (out of \(2n\)) of its input vector. 
A similar proof holds for dihedral groups (\(D_{2k}\) and \(D_{2n}\)). The above set of techniques can also be extended to other classes of subgroups. In this regard, we state the following general result. **Theorem 4.5**.: _Any \(H\)-invariant function \(\psi\) can be learnt through composing a \(G\)-invariant function \(\phi\) with a linear transformation \(M\), i.e., \(\psi=\phi\cdot M\), if the following conditions hold:_ 1. _For any_ \(h\in H,\,\exists g\in G\) _such that_ \(M(h\cdot x)=g\cdot(Mx)\,,\,\forall x\in X\)__ 2. _For any_ \(g\in G\) _such that_ \(g\cdot(Mx)\in R(M)\)_,_ \(\exists h\in H\) _such that_ \(M(h\cdot x)=g\cdot(Mx)\,,\,\forall x\in X\)_, where_ \(R(M)\) _is the range of_ \(M\)_._ Proof.: The claim directly follows from the following observations. Condition (1) states that any action \(h\cdot x\) (action of \(H\) on \(X\)) corresponds to an action \(g\cdot(Mx)\) (action of \(G\) on \(R(M)\)). Similarly, condition (2) states that any action \(g\cdot(Mx)\) corresponds to an action \(h\cdot x\).

## 5 Discussion

The underlying theme from the results stated in the previous section is that we could discover any subgroup belonging to a particular class of subgroups by learning a \(G\)-invariant function and a linear transformation. Depending on the class, the chosen \(G\) varies. We further elaborate on these observations in the following subsections.

Figure 1: Generic framework for learning an \(H\)-invariant function. The dotted arrows point towards specific examples of linear and \(G\)-invariant functions. The corresponding \(H\)-invariant functions are \(S_{k}\)-invariant and \(\mathbb{Z}_{k}\)-invariant.

### Conjugate Groups

In Lemma 4.2, the class of subgroups corresponds to those of type \(S_{k}\) (fixed \(k\)) and the corresponding \(G\) can be \(S_{k}^{(0)}\). We observe that, for a fixed \(k\), even if we don't know the exact underlying subgroup \(S_{k}\) (a total of \(\binom{n}{k}\) possibilities), we could learn this unknown subgroup. In addition, we also incorporate the canonical form of permutation invariant functions in the resulting architecture. Moreover, this result can be generalized to any class of conjugate subgroups, and the corresponding \(G\) is one of these conjugate groups. The significance of this result lies in the fact that a variety of subgroups are related through conjugation. For instance, all \(\mathbb{Z}_{k}\) form one conjugacy class for a given \(k\), and so do the \(A_{k}\)'s. This result is not entirely helpful if the underlying subgroup is normal since it is not conjugate to any other subgroup. However, this is not much of a hindrance since the only non-trivial proper normal subgroup of \(S_{n}\) is \(A_{n},\ \forall n\geq 5\).

### \(S_{k}\), \(\mathbb{Z}_{k}\) and \(D_{2k}\) Subgroups

Theorem 4.3 focuses on subgroups of type \(S_{k}\) (varying \(k\), \(k\in\{1,2,\ldots,n\}\)), and the corresponding \(G\) is \(S_{n}\) itself. We incorporate the canonical form of permutation invariant functions here as well. We observe that the number of such subgroups is \(2^{n}-1\) for a given \(n\). Hence, we could learn any of these subgroups with the standard architecture of an \(S_{n}\)-invariant function and a linear transformation. Note that if \(k\) is fixed, either of the architectural forms given by Lemma 4.2 and Theorem 4.3 is applicable. We will discuss the corresponding empirical results in the coming sections. Theorem 4.4 considers subgroups of the cyclic \(\mathbb{Z}_{k}\) and dihedral group \(D_{2k}\).
The corresponding \(G\)-invariant functions are of \(\mathbb{Z}_{n}\) and \(D_{2n}\), respectively. ### Generalization Theorem 4.5 presents a general set of conditions to be satisfied to learn any \(H\)-invariant function using a \(G\) invariant function and a linear transformation. As such, the previous results are specific cases of this Theorem. However, they provide explicit structures of the linear transformation \(M\). These can help design appropriate training techniques to learn the optimum \(M\), while the general result of Theorem 4.5 can guide us towards discovering results for new classes of subgroups. ### Limitations The proposed framework presumes the knowledge of the underlying class of subgroups apriori (but not the exact subgroup) and an appropriate value of \(n\) for \(S_{n}\), \(\mathbb{Z}_{n}\) or \(D_{2n}\) invariant functions. The drawbacks mentioned here are interesting research directions to pursue in the future. ## 6 Experiments We evaluate the accuracy of our proposed method on image-digit sum and symmetric polynomial regression tasks. The problem of image-digit sum can be modified and cast as learning an \(S_{k}\)-invariant function, while the polynomial regression task intrinsically corresponds to learning a \(G\)-invariant function. These are summarized in the following subsections. ### Image-Digit Sum This task aims to find the sum of \(k\) digits using the MNISTm (Loosli et al. (2007)) handwritten digits dataset. It consists of \(8\) million gray scale \(28\times 28\) images of digits \(\{0,1,...,9\}\). We employ a training set of \(150k\) samples and a test set of \(30k\) samples. We consider the following approaches for evaluation. 1. **Deep Sets-\(S_{k}\)**- \(S_{k}\)-invariant neural network proposed by Zaheer et al. (2017). 2. **LSTM**- LSTM network as mentioned in Zaheer et al. (2017). 3. **Proposed method**-. A linear layer followed by an \(S_{n}\)-invariant network. For the LSTM network and the proposed method, the input is a random sample of \(n\) (\(n\) = 10) images, and the target is the sum of \(k\) (\(k\) less than \(n\)) digit labels. We run separate experiments for each of \(k\in\{1,3,5,7,9\}\). Since all \(n\) images are given as input, the two approaches are agnostic of the underlying subgroup. However, we feed only these \(k\) of these images as input for the first approach, while the target output remains the same. As such, this task is equivalent to learning an \(S_{k}\)-invariant function. ### Symmetric Polynomial Regression We evaluate the performance of our method on symmetric polynomial regression tasks as discussed in Kicki et al. (2020), primarily for subgroups of \(\mathbb{Z}_{10}\) and \(\mathbb{Z}_{16}\). For all our experiments, we utilize a \(\mathbb{Z}_{n}\)-invariant neural network with a Sum-Product layer as discussed in Kicki et al. (2020) and a linear layer. First, we run our experiments for subgroups of \(\mathbb{Z}_{10}\), i.e., \(\mathbb{Z}_{5}\) and the group itself (trivial subgroup). We then access the performance for subgroups of \(\mathbb{Z}_{16}\), namely \(\mathbb{Z}_{2}\), \(\mathbb{Z}_{4}\), \(\mathbb{Z}_{8}\), \(\mathbb{Z}_{16}\) using a similar architectural design. We consider the following approaches for evaluation. 1. **G-invariant**-: \(\mathbb{Z}_{k}\)-invariant neural network proposed by Kicki et al. (2020). In this context, \(G=\mathbb{Z}_{k}\). 2. **Simple-FC**-. A stack of fully-connected feedforward layers. 3. **Conv-1D**-. A simple convolutional neural network and feedforward layers. 4. 
**Proposed method**-. A linear layer followed by a \(\mathbb{Z}_{n}\)-invariant network. The architectural details of the models considered in our experiments are discussed in the appendix section. ## 7 Results ### Image-Digit Sum The test mean absolute errors (MAEs) for the image-digit sum task are shown in Table 2. We observe that the proposed method outperforms the LSTM baseline and is competitive with respect to the Deep Sets method (k input images) when the underlying subgroup \(S_{k}\) is known. In addition, our method converges faster when compared to the LSTM network, which is apparent from the plots for the training and validation errors in Figure 2. ### Symmetric Polynomial Regression In the \(\mathbb{Z}_{k}\)-invariant polynomial regression task, we train our models for 2500 epochs for each of the subgroups of \(\mathbb{Z}_{5}\) and \(\mathbb{Z}_{10}\). In Table 3, 4 and 5 we compare the given baselines with our proposed method for the task of discovering unknown subgroups. Our method outperforms the Simple-FC and Conv-1D baseline networks for each of the given subgroups. As expected, it does not match the baseline architecture, the \(\mathbb{Z}_{k}\)-invariant network (the subgroup is known apriori for this baseline) by a significant margin for each of the diverse set of subgroups we have considered in this task. However, in a few cases, we observe large standard deviations and attribute such values to outliers. A detailed version of our results and the mathematical definition of the polynomials is presented in the appendix section. From Figure 3, it is evident that the \(\mathbb{Z}_{5}\)-invariant function outperforms both our method and the baselines by a significant margin. The Simple FC and Conv-1D networks have very similar performances and show no prominent effect, even with an increase in data size. ### Effect of the data size on the performance This section aims to assess the effect of the dataset size in learning \(\mathbb{Z}_{k}\)-invariant functions using our proposed method and hope to gain a better understanding in such a setting. To analyze our model performance with respect to data size, we use 16, 32, and 64 data points for training (as mentioned in Kicki et al. (2020), we randomly sample these values from [0,1]) and use 480 and 4800 as validation and test sets respectively to assess the generalization ability for each of these methods as mentioned above. We report the mean and standard deviation values across 10 randomly initialized iterations. We also examine the Simple-FC and Conv-1D network by increasing its parameter count, i.e., varying the number of neurons in each layer. However, we observe no significant gains in doing so, as mentioned in the appendix section for at least a few subgroups. ### Interpretability #### 7.4.1 Image-Digit Sum The resulting M matrix is interpretable, and we consistently observe the expected pattern for the image-digit sum task. Note that any row-permuted version of the matrix structure, as shown in eq. (13) will work since the transformed space is still homeomorphic. The \(M\) matrices for \(S_{5}\) and \(S_{9}\) (extracted after training) are depicted in Figure 4. The columns with dark green squares match the actual indices. 
Table 3: MAE \([\times 10^{-2}]\) for \(\mathbb{Z}_{5}:\mathbb{Z}_{10}\)

| Method | Train | Validation | Test |
| --- | --- | --- | --- |
| \(\mathbb{Z}_{5}\)-invariant | \(2.65\pm 0.91\) | \(7.32\pm 0.55\) | \(7.53\pm 0.576\) |
| Proposed | \(4.48\pm 1.25\) | \(24.56\pm 6.93\) | \(24.78\pm 6.45\) |
| Conv-1D | \(20.90\pm 4.91\) | \(32.96\pm 1.31\) | \(32.33\pm 1.18\) |
| Simple-FC | \(23.86\pm 3.87\) | \(33.57\pm 2.07\) | \(33.14\pm 2.11\) |

Figure 3: The MAE value comparisons using the test dataset for all the models we have considered for the \(\mathbb{Z}_{5}:\mathbb{Z}_{10}\) task. The \(X\)-axis represents the size of the training set \((16,32,64)\).

Table 5: MAE \([\times 10^{-2}]\) for \(\mathbb{Z}_{4}:\mathbb{Z}_{16}\)

| Method | Train | Validation | Test |
| --- | --- | --- | --- |
| \(\mathbb{Z}_{4}\)-invariant | \(1.21\pm 0.25\) | \(3.41\pm 0.4\) | \(3.54\pm 0.39\) |
| Proposed | \(3.32\pm 1.65\) | \(23.70\pm 4.87\) | \(24.69\pm 5.25\) |
| Conv-1D | \(8.39\pm 3.02\) | \(31.34\pm 0.77\) | \(31.10\pm 0.87\) |
| Simple-FC | \(7.27\pm 5.03\) | \(30.82\pm 1.74\) | \(30.83\pm 1.61\) |

Figure 2: Training and Validation loss (MAE) for Image-Digit Sum using MNIST dataset.

#### 7.4.2 Polynomial Regression

We observe that the \(M\)-matrix extracted after training (Figure 5.a) does not exactly capture the expected pattern, i.e., a stack of identity matrices (Figure 5.b), even though it nearly masks most of the irrelevant columns (\(n-k\)). The former behavior (lack of exact structure) explains the difference in performance with respect to the \(\mathbb{Z}_{k}\)-invariant network, while the latter (masking behavior) explains the superior model performance compared to the other baselines. Also, the masking of irrelevant columns already conveys the underlying subgroup; thus, we use this information to estimate the true indices. We estimate the significant indices using the \(L_{1}\)-norm of the columns of \(M\) and the mean as the threshold. The results (for different numbers of training data points \(N\) and different \(\mathbb{Z}_{k}:\mathbb{Z}_{n}\)'s) of the success rate of the estimation are given in Table 6, where we count the estimation as a success when the estimated indices exactly match the true indices, and otherwise as a failure. We run each experiment for \(10\) trials. We get high estimation accuracy in most of the cases except for \(N=16\). The estimated indices can be used to run a \(\mathbb{Z}_{k}\)-invariant network (or the proposed method with fixed \(M\)) and obtain better performance on regression tasks.

## 8 Conclusion

In this work, we studied the problem of discovering the underlying subgroup of \(S_{n}\), i.e., learning an \(H\)-invariant function where \(H\) is an unknown subgroup of \(S_{n}\).
We proved that we could learn any \(H\)-invariant function using a \(G\)-invariant function and a linear transformation provided \(H\) belongs to a specific class of subgroups. We considered various subgroups, such as conjugate subgroups, permutation subgroups of \(k\) elements, and cyclic and dihedral subgroups, and illustrated unique structures of the corresponding linear transformations. We demonstrated the validity of our theoretical analysis through empirical results. We also discussed the limitations of our method, which may lead to exciting research directions in the future.
2309.05346
Learning Geometric Representations of Objects via Interaction
We address the problem of learning representations from observations of a scene involving an agent and an external object the agent interacts with. To this end, we propose a representation learning framework extracting the location in physical space of both the agent and the object from unstructured observations of arbitrary nature. Our framework relies on the actions performed by the agent as the only source of supervision, while assuming that the object is displaced by the agent via unknown dynamics. We provide a theoretical foundation and formally prove that an ideal learner is guaranteed to infer an isometric representation, disentangling the agent from the object and correctly extracting their locations. We evaluate empirically our framework on a variety of scenarios, showing that it outperforms vision-based approaches such as a state-of-the-art keypoint extractor. We moreover demonstrate how the extracted representations enable the agent to solve downstream tasks via reinforcement learning in an efficient manner.
Alfredo Reichlin, Giovanni Luca Marchetti, Hang Yin, Anastasiia Varava, Danica Kragic
2023-09-11T09:45:22Z
http://arxiv.org/abs/2309.05346v1
# Learning Geometric Representations ###### Abstract We address the problem of learning representations from observations of a scene involving an agent and an external object the agent interacts with. To this end, we propose a representation learning framework extracting the location in physical space of both the agent and the object from unstructured observations of arbitrary nature. Our framework relies on the actions performed by the agent as the only source of supervision, while assuming that the object is displaced by the agent via unknown dynamics. We provide a theoretical foundation and formally prove that an ideal learner is guaranteed to infer an isometric representation, disentangling the agent from the object and correctly extracting their locations. We evaluate empirically our framework on a variety of scenarios, showing that it outperforms vision-based approaches such as a state-of-the-art keypoint extractor. We moreover demonstrate how the extracted representations enable the agent to solve downstream tasks via reinforcement learning in an efficient manner. Keywords:Representation Learning Equivariance Interaction ## 1 Introduction A fundamental aspect of intelligent behavior by part of an agent is building rich and structured _representations_ of the surrounding world [10]. Through structure, in fact, a representation potentially leads to semantic understanding, efficient reasoning and generalization [17]. However, in a realistic scenario an agent perceives observations of the world that are high-dimensional and unstructured e.g., images. Therefore, the ultimate goal of inferring a representation consists of extracting structure from the observed data [3]. This is challenging and in some instances requires supervision or biases. For example, it is known that _disentangling_ factors of variation in data is mathematically impossible in a completely unsupervised way [18]. In order to extract structure, it is therefore necessary to design methods and paradigms relying on additional information and specific assumptions. In the context of an agent interacting with the world, a fruitful source of information is provided by the _actions_ performed and collected together with the observations. Based on this, several recent works have explored the role of actions in representation learning and proposed methods to extract structure from interaction [15; 22; 25]. The common principle underlying this line of research is encouraging the representation to replicate the effect of actions in a structured space - a property referred to as _equivariance_3. In particular, it has been shown in [20] that equivariance enables to extract the location of the agent in physical space, resulting in a lossless and geometric representation. The question of how to represent features of the world which are extrinsic to the agent (e.g., objects) has been left open. Such features are dynamic since they change as a consequence of interaction. They are thus challenging to capture in the representation but are essential for understanding and reasoning by part of the agent. Footnote 3: Alternative terminologies from the literature are _World Model_[15] and _Markov Decision Process Homomorphism_[26]. In this work we consider the problem of learning representations of a scene involving an agent and an external rigid object the agent interacts with (see Figure 1). We aim for a representation disentangling the agent from the object and extracting the locations of both of them in physical space. 
In other words, we aim for representations that are isometric w.r.t. the geometry of the world. To this end, we focus on a scenario where the object displaces only when it comes in contact with the agent, which is realistic and practical. We make no additional assumption on the complexity of the interaction: the object is allowed to displace arbitrarily and its dynamics is unknown. Our assumption around the interaction enables us to separate the problem of representing the agent - whose actions are known and available as a supervisory signal - from the problem of representing the object - whose displacement is unknown.

Figure 1: Our framework enables to learn a representation \(\varphi\) recovering the geometric and disentangled state of both an agent (\(z_{\text{int}}\), white) and an interactable object (\(z_{\text{ext}}\), brown) from unstructured observations \(o\) (e.g., images). The only form of supervision comes from actions \(a,b\) performed by the agent, while the transition of the object (question mark) in case of interaction is unknown. In case of no interaction, the object stays invariant.

Following this principle, we design an optimization objective relying on actions as the only form of supervision. This makes the framework general and in principle applicable to observations of arbitrary nature. We moreover provide a formalization of the problem and theoretical grounding for the method. Our core theoretical result guarantees that the representation inferred by an ideal learner recovers isometric representations as desired. We complement the theoretical analysis with an empirical investigation. Results show that our proposed representations outperform a state-of-the-art keypoint extractor in quality of structure and can be leveraged by the agent in order to solve control tasks efficiently by reinforcement learning. In summary, our contributions include: * A representation learning framework extracting representations from observations of a scene involving an agent interacting with an object. * A theoretical result guaranteeing that the above learning framework, when implemented by an ideal learner, infers an isometric representation for data of arbitrary nature. * An empirical investigation of the framework on a variety of environments with comparisons to computer vision approaches (i.e., keypoint extraction) and applications to a control task. We provide Python code implementing our framework together with all the experiments at the following public repository: [https://github.com/reichlin/GeomRepObj](https://github.com/reichlin/GeomRepObj). The repository additionally includes the Appendix of the present work.

## 2 Related Work

**Equivariant Representation Learning**. Several recent works have explored the idea of incorporating interactions into representation learning. The common principle is to infer a representation which is equivariant, i.e., such that transitions in observations are replicated as transitions in the latent space. One option is to learn the latent transition end-to-end together with the representation [15, 26, 33]. This approach is, however, non-interpretable and the resulting representations are not guaranteed to extract any structure. Alternatively, the latent transition can be designed a priori. Linear and affine latent transitions have been considered in [9], [22] and [25] while transitions defined by (the multiplication of) a Lie group have been discussed in [20], [21].
As shown in [20], for static scenarios (i.e., with no interactive external objects) the resulting representations are structured and completely recover the geometry of the underlying state of the agent. Our framework adheres to this line of research by modelling the latent transitions via the additive Lie group \(\mathbb{R}^{n}\). We however further extend the representation to include external objects. Our framework thus applies to more general scenarios and dynamics while still benefiting from the geometrical guarantees. **Keypoint Extraction**. When observations are images, computer vision offers a spectrum of classical approaches to extract geometric structure. In particular, extracting keypoints enables to identify any object appearing in the observed images. Popular keypoint extractors include classical non-parametric methods [19], [2] as well as modern self-supervised learning approaches [16], [8]. However, keypoints from an image provide a representation based on the geometry of the field of view or, equivalently, of the pixel plane. This means that the intrinsic three-dimensional geometry of states of objects is not preserved since the representation differs from it by an unknown projective transformation. In specific situations such transformation can still be recovered by processing the extracted keypoints. This is the case when images are in first person view w.r.t. the observer: the keypoints can then be converted into three-dimensional landmarks via methods such as bundle adjustment [31], [29]. Differently from computer vision approaches, our framework is data-agnostic and does not rely on specific priors tied to the nature of observations. It instead extracts representations based on the actions performed by the agent, which is possible due to the dynamical assumptions described in Section 3. **Interactive Perception**. The role of interaction in perception has been extensively studied in cognitive sciences and neuroscience [7, 12, 23]. Inspired by those, the field of interactive perception from robotics aims to enhance the understanding of the world by part of an artificial system via interactions [5]. Applications include active control of cameras [1] and manipulators [32] in order to improve the perception of objects [4, 13, 28]. Our work fits into the program of interactive perception since we crucially rely on performed actions as a self-supervisory signal to learn the representation. We show that the location of objects can be extracted from actions alone, albeit in a particular dynamical setting. Without interaction, this would require strong assumptions and knowledge around the data and the environment as discussed in Section 2. ## 3 Formalism and Assumptions In this section we introduce the relevant mathematical formalism together with the assumptions necessary for our framework. We consider the following scenario: an agent navigates in a Euclidean space and interacts in an unknown way with an external object. This means that the space of states \(\mathcal{S}\) is decomposed as \[\mathcal{S}=\mathcal{S}_{\mathrm{int}}\times\mathcal{S}_{\mathrm{ext}} \tag{1}\] where \(\mathcal{S}_{\mathrm{int}}\) is the space of states of the agent (_internal_ states) and \(\mathcal{S}_{\mathrm{ext}}\) is the space of states of the object (_external_ states). We identify both the agent and the object with their location in the ambient space, meaning that \(\mathcal{S}_{\mathrm{int}}\subseteq\mathbb{R}^{n}\supseteq\mathcal{S}_{ \mathrm{ext}}\), where \(n\) is the ambient dimension. 
The actions that the agent performs are displacements of its state i.e., the space of actions consists of translations \(\mathcal{A}=\mathbb{R}^{n}\). In our formalism we thus abstract objects as material points for simplicity of the theoretical analysis. The practical extension to volumetric objects together with their orientation is discussed in Section 4.3 while the extension of agent's actions to arbitrary Lie groups is briefly discussed in Section 6. Our first assumption is that the agent can reach any position from any other via a sequence of actions. This translates in the following connectivity condition: **Assumption 1**: (Connectedness) _The space \(\mathcal{S}_{\mathrm{int}}\) is connected and open._ When the agent performs an action \(a\in\mathcal{A}\) the state \(s=(s_{\mathrm{int}},s_{\mathrm{ext}})\) transitions into a novel one denoted by \(a\cdot s=(s^{\prime}_{\mathrm{int}},s^{\prime}_{\mathrm{ext}})\). Since the actions displace the agent, the internal state gets translated as \(s^{\prime}_{\mathrm{int}}=s_{\mathrm{int}}+a\).1 However, the law governing the transition of the object \(s^{\prime}_{\mathrm{ext}}=T(s,a)\) is assumed to be unknown and can be arbitrarily complex and stochastic. We stick to deterministic transitions for simplicity of explanation. Crucially, the agent does not have access to the ground-truth state \(s\). Instead it perceives unstructured and potentially high-dimensional observations \(o\in\mathcal{O}\) (e.g., images) via an unknown emission map \(\omega:\ \mathcal{S}\rightarrow\mathcal{O}\). We assume that \(\omega\) is injective so that actions induce deterministic transitions of observations, which we denote as \(o^{\prime}=a\cdot o\). This assumption is equivalent to total observability of the scenario and again simplifies the forthcoming discussions by avoiding the need to model stochasticity in \(\mathcal{O}\). Footnote 1: Whenever we write \(a\cdot s\) we implicitly assume that the action is valid i.e., that \(s_{\mathrm{int}}+a\in\mathcal{S}_{\mathrm{int}}\). The fundamental assumption of this work is that the dynamics of the external object revolves around _contact_ i.e., the object does not displace unless it is touched by the agent. This is natural and often satisfied in practice. In order to formalize it, note that when the agent in state \(s_{\mathrm{int}}\) performs an action \(a\in\mathcal{A}\) we can imagine it moving along the open segment \(\lfloor s_{\mathrm{int}},\ s_{\mathrm{int}}+a\rfloor=\{s_{\mathrm{int}}+ta\}_{ 0<t<1}\). Our assumption then translates into (see Figure 1 for a graphical depiction): **Assumption 2**: (Interaction Occurs at Contact) _For all agent states \(s_{\mathrm{int}}\in S\) and actions \(a\in\mathcal{A}\) it holds that \(s^{\prime}_{\mathrm{ext}}=s_{\mathrm{ext}}\) if and only if \(s_{\mathrm{ext}}\not\in\lfloor s_{\mathrm{int}},\ s_{\mathrm{int}}+a\rfloor\)._ As such, the dynamics of the external object can be summarized as follows: \[s^{\prime}_{\mathrm{ext}}=\begin{cases}s_{\mathrm{ext}}&\text{if }\ s_{ \mathrm{ext}}\not\in\lfloor s_{\mathrm{int}},\ s_{\mathrm{int}}+a\rfloor,\\ T(s,a)&\text{otherwise}.\end{cases} \tag{2}\] Finally, we need to assume that interaction is possible for every state of the object i.e., the latter has to be always reachable by the agent. 
This is formalized via the following inclusion: **Assumption 3**: (Reachability) _It holds that \(\mathcal{S}_{\mathrm{ext}}\subseteq\mathcal{S}_{\mathrm{int}}\)._ ## 4 Method ### Representations and Equivariance We now outline the inference problem addressed in the present work. Given the setting introduced in Section 3, the overall goal is to infer a _representation_ of observations \(\varphi:\ \mathcal{O}\rightarrow\mathcal{Z}=\mathcal{Z}_{\mathrm{int}}\times \mathcal{Z}_{\mathrm{ext}}\), where \(\mathcal{Z}_{\mathrm{int}}=\mathcal{Z}_{\mathrm{ext}}=\mathbb{R}^{n}\). Ideally \(\varphi\) recovers the underlying inaccessible state in \(\mathcal{S}\subseteq\mathcal{Z}\) and disentangles \(\mathcal{S}_{\mathrm{int}}\) from \(\mathcal{S}_{\mathrm{ext}}\). In order to achieve this, our central idea is to split the problem of representing the agent and the object. Since the actions of the agent are available, \(z_{\mathrm{int}}\in\mathcal{Z}_{\mathrm{int}}\) can be inferred geometrically by existing representation learning methods. The representation of the object \(z_{\mathrm{ext}}\in\mathcal{Z}_{\mathrm{ext}}\) can then be inferred based on the one of the agent by exploiting the relation between the dynamics of the two (Equation 2). In order to represent the agent, we consider the fundamental concept of (translational) _equivariance_: Definition 1: The representation \(\varphi\) is said to be _equivariant_ (on internal states) if for all \(a\in\mathcal{A}\) and \(o\in\mathcal{O}\) it holds that \(z^{\prime}_{\mathrm{int}}=z_{\mathrm{int}}+a\) where \((z_{\mathrm{int}},z_{\mathrm{ext}})=\varphi(o)\) and \((z^{\prime}_{\mathrm{int}},z^{\prime}_{\mathrm{ext}})=\varphi(a\cdot o)\). We remark that Definition 1 refers to internal states only, making our terminology around equivariance unconventional. As observed in previous work [20], equivariance guarantees a faithful representation of internal states. Indeed if \(\varphi\) is equivariant then \(z_{\mathrm{int}}\) differs from \(s_{\mathrm{int}}\) by a constant vector. This means that the representation of internal states is a translation of ground-truth ones and as such is lossless (i.e., bijective) and isometrically recovers the geometry of \(\mathcal{S}_{\mathrm{int}}\). The above principle can be leveraged in order to learn a representation of external states with the same benefits as the representation of internal ones. Since the external object displaces only when it comes in contact with the agent (Assumption 2), the intuition is that \(z_{\mathrm{ext}}\) can be inferred by aligning it with \(z_{\mathrm{int}}\). The following theoretical result formalizes the possibility of learning such representations and traces the foundation of our learning framework. Theorem 4.1: _Suppose that the representation \(\varphi:\ \mathcal{O}\to\mathcal{Z}\) satisfies:_ 1. \(\varphi\) _is equivariant_ _(Definition_ 1_),_ 2. \(\varphi\) _is injective,_ 3. _for all_ \(o\in\mathcal{O}\) _and_ \(a\in\mathcal{A}\) _it holds that either_ \(z^{\prime}_{\mathrm{ext}}=z_{\mathrm{ext}}\) _or_ \(z_{\mathrm{ext}}\in\lfloor z_{\mathrm{int}},z_{\mathrm{int}}+a\rfloor\) _where_ \((z_{\mathrm{int}},z_{\mathrm{ext}})=\varphi(o)\) _and_ \((z^{\prime}_{\mathrm{int}},z^{\prime}_{\mathrm{ext}})=\varphi(a\cdot o)\)_._ _Then \(\varphi\circ\omega\) is a translation i.e., there is a constant vector \(h\in\mathbb{R}^{n}\) such that for all \(s\in\mathcal{S}\) it holds that \(\varphi(\omega(s))=s+h\). In particular, \(\varphi\circ\omega\) is an isometry w.r.t. 
the Euclidean metric on both \(\mathcal{S}\) and \(\mathcal{Z}\)._ We refer to the Appendix for a proof. Theorem 4.1 states that if the conditions \(1.-3.\) are satisfied (together with the assumptions stated in Section 3) then the representation recovers the inaccessible state up to a translation and thus isometrically preserves the geometry of the environment. All the conditions from Theorem 4.1 refer to properties of \(\varphi\) depending on observations and the effect of actions on them, which are accessible in practice. The goal of the forthcoming section is to describe how these conditions can be enforced on \(\varphi\) by optimizing a system of losses. ### Learning the Representation In this section we describe a viable implementation of a representation learning framework adhering to the conditions of Theorem 4.1. We model the representation learner \(\varphi=(\varphi_{\mathrm{int}},\varphi_{\mathrm{ext}})\) as two parameterized functions \(\varphi_{\mathrm{int}}:\ \mathcal{O}\to\mathcal{Z}_{\mathrm{int}}\), \(\varphi_{\mathrm{ext}}:\mathcal{O}\to\mathcal{Z}_{\mathrm{ext}}\) e.g., two deep neural network models. In order to train the models, we assume that the dataset \(\mathcal{D}\) consists of transitions observed by the agent in the form of \(\mathcal{D}=\{(o,a,o^{\prime}=a\cdot o)\}\subseteq\mathcal{O}\times\mathcal{A}\times\mathcal{O}\). Such data can be collected by the agent autonomously exploring its environment and randomly interacting with the external object. This implies that the only form of supervision required consists of the actions performed by the agent together with their effect on the observations. First, we propose to enforce equivariance, condition 1 from Theorem 4.1, by minimizing the loss: \[\mathcal{L}_{\mathrm{int}}(o,a,o^{\prime})=d(z_{\mathrm{int}}^{\prime},z_{\mathrm{int}}+a) \tag{3}\] where \(d\) is a measure of similarity on \(\mathcal{Z}_{\mathrm{int}}=\mathbb{R}^{n}\) and the notation is in accordance with Definition 1. Typically \(d\) is chosen as the squared Euclidean distance as described in previous work [15; 22]. Next, we focus on the representation of the external object. As stated before, the dataset consists of transitions either with or without interaction. When an interaction occurs, \(z_{\mathrm{ext}}\) should belong to the segment \(\lfloor z_{\mathrm{int}},z_{\mathrm{int}}+a\rfloor\). When it does not, the representation should be invariant i.e., \(z_{\mathrm{ext}}=z_{\mathrm{ext}}^{\prime}\). These two cases are outlined in condition 3 of Theorem 4.1 and can be enforced via the following losses: \[\mathcal{L}_{-}(o,a,o^{\prime})=d(z_{\mathrm{ext}},z_{\mathrm{ext}}^{\prime})\qquad\quad\mathcal{L}_{+}(o,a,o^{\prime})=d(z_{\mathrm{ext}},\lfloor z_{\mathrm{int}},z_{\mathrm{int}}+a\rfloor). \tag{4}\] The distance involved in \(\mathcal{L}_{+}\) represents a point-to-set metric and is typically set as \(d(z,E)=\inf_{x\in E}d(z,x)\), which has a simple explicit expression when \(E\) is a segment. However, the data contains no information on whether interaction occurs or not. It is, therefore, necessary to design a procedure determining when to optimize \(\mathcal{L}_{+}\) and when to optimize \(\mathcal{L}_{-}\).
To this end, we propose to train a parallel model \(\varphi_{\mathrm{cont}}:\ \mathcal{O}\to\mathcal{W}\) with latent _contrastive representation_ \(\mathcal{W}\) (potentially different from \(\mathcal{Z}\)). This is trained to attract \(w=\varphi_{\mathrm{cont}}(o)\) to \(w^{\prime}=\varphi_{\mathrm{cont}}(o^{\prime})\) while forcing injectivity of \(\varphi\) (condition 2 from Theorem 4.1). To this end, we stick to the popular _InfoNCE_ loss from contrastive learning literature [6]: \[\mathcal{L}_{\mathrm{cont}}(o,o^{\prime})=d_{\mathcal{W}}(w,w^{\prime})+\log\mathbb{E}_{o^{\prime\prime}}\left[e^{-d_{\mathcal{W}}(w^{\prime},w^{\prime\prime})-d(z^{\prime}_{\mathrm{int}},z^{\prime\prime}_{\mathrm{int}})}\right] \tag{5}\] where \(o^{\prime\prime}\) is marginalized from \(\mathcal{D}\). The second summand of Equation 5 encourages the joint encodings \((z_{\mathrm{int}},w)\) to spread apart and thus encourages \(\varphi\) to be injective. Since subsequent observations where interaction does not occur share the same external state, these will lie closer in \(\mathcal{W}\) than the ones where interaction does occur. This makes it possible to exploit distances in \(\mathcal{W}\) in order to choose whether to optimize \(\mathcal{L}_{-}\) or \(\mathcal{L}_{+}\). We propose to partition (the given batch of) the dataset into two disjoint classes \(\mathcal{D}=C_{-}\sqcup C_{+}\) by applying a natural thresholding algorithm to the quantities \(d_{\mathcal{W}}(w,w^{\prime})\). This can be achieved via one-dimensional 2-means clustering, which is equivalent to Otsu's algorithm [24] (see Figure 2 for an illustration). We then optimize: \[\mathcal{L}_{\mathrm{ext}}(o,a,o^{\prime})=\begin{cases}\mathcal{L}_{-}(o,a,o^{\prime})&\text{if }(o,a,o^{\prime})\in C_{-},\\ \mathcal{L}_{+}(o,a,o^{\prime})&\text{if }(o,a,o^{\prime})\in C_{+}.\end{cases} \tag{6}\] In summary, the total loss minimized by the models \((\varphi_{\mathrm{int}},\varphi_{\mathrm{ext}},\varphi_{\mathrm{cont}})\) w.r.t. the respective parameters is (see the pseudocode included in the Appendix): \[\mathcal{L}=\mathbb{E}_{(o,a,o^{\prime})\sim\mathcal{D}}[\mathcal{L}_{\mathrm{int}}(o,a,o^{\prime})+\mathcal{L}_{\mathrm{ext}}(o,a,o^{\prime})+\mathcal{L}_{\mathrm{cont}}(o,o^{\prime})]. \tag{7}\] ### Incorporating Volumes of Objects So far we have abstracted the external object as a point in Euclidean space. However, the object typically manifests with a body and thus occupies a volume. Interaction and consequent displacement (Assumption 2) occur when the agent comes in contact with the boundary of the object's body. The representation thus needs to take volumetric features into account in order to faithfully extract the geometry of states. In order to incorporate volumetric objects into our framework we propose to rely on _stochastic_ outputs i.e., to design \(z_{\mathrm{ext}}\) as a probability density over \(\mathcal{Z}_{\mathrm{ext}}\) representing (a fuzzy approximation of) the body of the object. More concretely, the output of \(\varphi_{\mathrm{ext}}\) consists of (parameters of) a Gaussian distribution whose covariance matrix represents the inertia ellipsoid of the object i.e., the ellipsoidal approximation of its shape. By diagonalizing the covariance matrix via an orthonormal frame, the orientation of the object can be extracted in the form of a rotation matrix in \(\mathrm{SO}(n)\). The losses of our model are naturally adapted to the stochastic setting as follows. The distance \(d\) appearing in Equation 4 is replaced with Kullback-Leibler divergence.
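To make the loss selection concrete, the following is a minimal Python/NumPy sketch (with hypothetical function names, not the authors' implementation) of the two non-standard ingredients: the point-to-segment distance used in \(\mathcal{L}_{+}\) and the one-dimensional 2-means split of a batch according to the contrastive distances \(d_{\mathcal{W}}(w,w^{\prime})\).

```python
import numpy as np

def dist_point_to_segment(z, start, end):
    """Squared Euclidean distance from point z to the closed segment [start, end]."""
    v = end - start
    t = np.clip(float(np.dot(z - start, v)) / (float(np.dot(v, v)) + 1e-12), 0.0, 1.0)
    closest = start + t * v                     # projection clamped onto the segment
    return float(np.sum((z - closest) ** 2))

def two_means_split(scores, iters=50):
    """One-dimensional 2-means clustering of the contrastive distances d_W(w, w').
    Returns a boolean mask which is True for the high-distance cluster, interpreted
    here as 'interaction occurred' (C_+)."""
    scores = np.asarray(scores, dtype=float)
    lo, hi = scores.min(), scores.max()         # initialize the two centroids
    for _ in range(iters):
        in_high = np.abs(scores - hi) < np.abs(scores - lo)
        if in_high.all() or (~in_high).all():   # degenerate batch: no split possible
            break
        lo, hi = scores[~in_high].mean(), scores[in_high].mean()
    return in_high

def external_loss(z_int, z_ext, z_ext_next, a, interacted):
    """Per-transition L_ext of Equation 6."""
    if interacted:                              # L_+: attract z_ext to the swept segment
        return dist_point_to_segment(z_ext, z_int, z_int + a)
    return float(np.sum((z_ext - z_ext_next) ** 2))   # L_-: enforce invariance

# Toy batch of contrastive distances: two transitions stand out as interactions.
d_w = np.array([0.01, 0.02, 0.90, 0.03, 1.10, 0.02])
print(two_means_split(d_w))                     # -> [False False  True False  True False]

z_int, z_ext, a = np.zeros(2), np.array([0.3, 0.3]), np.array([0.6, 0.6])
print(round(external_loss(z_int, z_ext, z_ext_next=z_ext, a=a, interacted=True), 4))  # 0.0
```

In the stochastic variant of Section 4.3, the squared Euclidean distance above is replaced by the Kullback-Leibler divergence between the Gaussian outputs.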
The latter has an explicit simple expression for Gaussian densities which allows to compute \(\mathcal{L}_{-}\) directly. In order to compute \(\mathcal{L}_{+}\) we rely on a Monte Carlo approximation, meaning that we sample a point uniformly from the interval and set \(\mathcal{L}^{+}\) as the negative log-likelihood of the point w.r.t. the density defining \(z_{\mathrm{ext}}\). ## 5 Experiments We empirically investigate the performance of our framework in correctly identifying the position of an agent and of an interactive object. The overall goal of the experimental evaluation is to show that our representation is capable of extracting the geometry of states without relying on any prior knowledge of observations e.g., depth information. All the scenarios are normalized so that states lie in the unit cube. Observations are RGB images of resolution \(100\times 100\) in all the cases considered. We implement each of \(\varphi_{\text{int}}\), \(\varphi_{\text{ext}}\) and \(\varphi_{\text{cont}}\) as a ResNet-18 [11] and train them for 100 epochs via the Adam optimizer with learning rate 0.001 and batch-size 128. We compare our framework with two baselines: * _Transporter Network_[16]: a vision-based state-of-the-art unsupervised keypoint extractor. The approach heavily relies on image manipulation in order to infer regions of the pixel plane that are persistent between pairs of images. We train the model in order to extract two (normalized) keypoints representing \(z_{\text{int}}\) and \(z_{\text{ext}}\) respectively. * _Variational AutoEncoder_ (VAE) [14, 27]: a popular representation learner with a standard Gaussian prior on its latent space. We impose the prior on \(\mathcal{Z}_{\text{ext}}\) only, while \(\varphi_{\text{int}}\) is still trained via the equivariance loss (Equation 3). The decoder takes the joint latent space \(\mathcal{Z}\) in input. We set \(\dim(\mathcal{Z}_{\text{ext}})=32\). This makes the representations disentangled, so that \(z_{\text{int}}\) and \(z_{\text{ext}}\) are well-defined. The resulting representation of the object is generic and is not designed to extract any specific structure from observations. In order to evaluate the preservation of geometry we rely on the following evaluation metric \(\mathcal{L}_{\text{test}}\). Given a trained representation \(\varphi:\mathcal{O}\rightarrow\mathcal{Z}\) and a test set \(\mathcal{D}_{\text{test}}\) of observations with known ground-truth states, we define: \[\mathcal{L}_{\text{test}}=\mathbb{E}_{o\sim\mathcal{D}_{\text{test}}}\left[ \ d(z_{\text{int}}-z_{\text{ext}},s_{\text{int}}-s_{\text{ext}})\ \right] \tag{8}\] where \(d\) is the squared Euclidean distance. Since both our framework and (the encoder of) VAE have stochastic outputs (see Section 4.3), we set \(z_{\text{ext}}\) as the mean of the corresponding Gaussian distribution. Equation 8 measures the quality of preservation of the relative position between the agent and the object by part of the representation. When \(\mathcal{L}_{\text{test}}=0\), \(\varphi\) is an isometry (w.r.t. the Euclidean metric) and thus recovers the geometry of states. The translational invariance of \(\mathcal{L}_{\text{test}}\) makes the comparison agnostic to any reference frame eventually inferred by the given learner. ### Sprites For the first experiment we procedurally generate images of two sprites (the agent and the object) moving on a black background (see Figure 3, top-left). Between images, the agent (red figure) moves according to a known action. 
If the agent comes in contact with the object (green diamond) during the execution of the action (see Assumption 2) the object is randomly displaced on the next image. In other words, the object's transition function \(T(s,a)\) is stochastic with a uniform distribution. Such a completely stochastic dynamics highlights the independence of the displacement of the agent w.r.t. the one of the object. We generate the following two additional versions of the dataset: * A version with _dynamic background_. Images are now overlaid on top of a nine-times larger second image (blue squares in Figure 3, top-right). The field of view and thus the background moves together with the agent. The background behaves as a visual distractor and makes it challenging to extract structure (e.g., keypoints) via computer vision. * A version with _anisotropic object_. The latter is now a rectangle with one significantly longer side. Besides translating, the object rotates as well when interaction occurs. The goal here is showcasing the ability of our model in inferring the orientation of the object as described in Section 4.3. Figure 3: **Top:** Visualization of the dataset from the Sprites experiment. On the left, an example of a datapoint \((o,a,o^{\prime})\in\mathcal{D}\). On the right, an example of an observation from the second version of the dataset where a dynamic background is added as a visual distractor. **Bottom:** Comparison of \(z_{\text{int}}\), \(z_{\text{ext}}\) (gray dots, with the ellipse representing the learned std) extracted via our model and the Transporter network on the three versions of the Sprites dataset: vanilla version (left), with dynamic background (middle) and with anisotropic object (right). Figure 4 displays the analytic comparison of the performances between our model and the baselines in terms of the evaluation metric (Equation 8). The plot is in log-scale for visualization purposes. Moreover, Figure 3 (bottom) reports a qualitative comparison between our model and the Transporter network. As can be seen, for the simpler version of the experiment (plot on the left) both our model and the Transporter network successfully achieve low error and recover the geometry of both the agent and the object. Note that the Transporter network converges slowly and with high variance (Figure 4, left). This is probably due to the presence of a decoder in its architecture. Our framework instead involves losses designed directly in the latent space, avoiding an additional model to decode observations. As expected, VAE achieves significantly worse performances because of the lack of structure in its representation. As can be seen from Figure 3 (bottom-right), when the object is anisotropic our model correctly infers its orientation by encoding it into the covariance of the learned Gaussian distribution. The Transporter network instead places a keypoint on the barycenter of the object and is therefore unable to recover the orientation. For the more challenging version of the experiment with dynamic background, the transporter is not able to extract the expected keypoints. As can be seen from Figure 3 (bottom-middle), the distracting background causes the model to focus on regions of the image not corresponding to the agent and the object. This is reflected by a significantly higher error (and variance) w.r.t. our framework (Figure 4, right). The latter still infers the correct representation and preserves geometry.
This empirically confirms that our model is robust to visual distractors since it does not rely on any data-specific feature or structure. Figure 4: Log-scale plots of the evaluation metric (Equation 8) as the training progresses for the Sprite experiment. The curves display mean and std (for 10 experimental runs). **Left**: vanilla version of the dataset. **Right:** version with a dynamic background. ### Soccer For the second experiment we test our framework on an environment consisting of an agent on a soccer field colliding with a ball (see Figure 5, left). The scene is generated and rendered via the Unity engine. The physics of the ball is simulated realistically: in case of contact, rolling takes gravity and friction into account. Note that even though the scene is generated via three-dimensional rendering, the (inaccessible) state space is still two-dimensional since the agent navigates on the field. We generate two datasets of 10000 triples \((o,a,o^{\prime}=a\cdot o)\) with observations of different nature. The first one consists of views in third-person perspective from a fixed external camera. In the second one, observations are four views in first-person perspective from four cameras attached on top of the agent and pointing in the 4 cardinal directions. We refer to Figure 5 (left) for a visualization of the two types of observations. In Figure 5 (right), we report visualizations of the learned representations. The extracted representation of our proposed method depends solely on the geometry of the problem at hand rather than the nature of the observation. The learned representation is thus identical when learned from the third-person dataset or the first-person one, as shown in 5 (right). Figure 6 (left) displays the comparison of the performances between our model and the baselines in terms of the evaluation metric (Equation 8). The Transporter network is trained on observations in third person and as can be seen, correctly extracts the keypoints on the _pixel plane_. As discussed in Section 2, such a plane differs from \(\mathcal{S}_{\text{int}}\) by an unknown projective (and thus non-isometric) transformation. This means that despite the successful keypoint extraction, the geometry of the state space is not preserved, which is reflected by the high error on the plot. This is a general limitation of vision-based approaches: they are unable to recover the intrinsic geometry due to perspective in the case of a three-dimensional scene. Differently from that, our framework extracts an isometric representation and achieves low error independently from the type of observations. ### Control Task In our last experiment we showcase the benefits of our representations in solving downstream control tasks. The motivation is that a geometric and low-dimensional representation improves efficiency and generalization compared to solving the task directly from observations. To this end we design a control task for the Soccer environment consisting in kicking the ball _into the goal_. The reward is given by the negative distance between the (barycenter of the) ball and the (barycenter of the) goal. Observations are views in third person perspective. In each episode the agent and the ball are initially placed in a random location while the ball is placed in the center. The maximum episode length is 20 steps. We train a number of models via the popular reinforcement learning method _Proximal Policy Optimization_ (PPO; [30]). One model (_End-to-End_) receives raw observations as inputs. 
The others operate on pre-trained representations \(\mathcal{Z}\) given by the Transporter network, the VAE and our method respectively. All the models implement a comparable architecture for a fair comparison. Figure 6 (right) displays the reward gained on test episodic runs as the training by reinforcement learning progresses. As can be seen, our geometric representation enables to solve the task more efficiently than both the competing representations (Transporter and VAE) and the end-to-end model. Note that the Transporter not only does not preserve the geometry of the state space, but has the additional disadvantage that the keypoint corresponding to the agent and the object can get swapped in the output of \(\varphi\). This causes indeterminacy in the representation and has a negative impact on solving the task. Due to this, the Transporter performs similarly to the end-to-end model and is outperformed by the generic and non-geometric representation given by the VAE. In conclusion, the results show that a downstream learner can significantly benefit from geometric representations of observations in order to solve downstream control tasks. Figure 5: **Left**: an example of the two types of observations (third and first person respectively) from the Soccer experiment. **Right**: visual comparison of \(z_{\text{int}}\), \(z_{\text{ext}}\) (red dots) extracted via our model (from third-person view and first-person view) and the Transporter network. For our model, we overlap the representation to a view of the scene from the top instead of the original observation. ## 6 Conclusions and Future Work In this work we proposed a novel framework for learning representations of both an agent and an object the agent interacts with. We designed a system of losses based on a theoretical principle that guarantees isometric representations independently from the nature of observations and relying on supervision from performed actions alone. We empirically investigated our framework on multiple scenarios showcasing advantages over computer vision approaches. Throughout the work we assumed that the agent interacts with a single object. An interesting line of future investigation is extending the framework to take multiple objects into account. In the stochastic context (see Section 4.3) an option is to model \(z_{\text{ext}}\) via multi-modal densities, with each mode corresponding to an object. As an additional line for future investigation, our framework can be extended to actions beyond translations in Euclidean space. Lie groups other than \(\mathbb{R}^{n}\) often arise in practice. For example, if the agent is able to rotate its body then (a factor of) the space of actions has to contain the group of rotations \(\text{SO}(n)\), \(n=2,3\). Thus, a framework where actions (and consequently states) are represented in general Lie groups defines a useful and interesting extension. ## Acknowledgements This work was supported by the Swedish Research Council, the Knut and Alice Wallenberg Foundation, the European Research Council (ERC-BIRD-884807) and the European Horizon 2020 CANOPIES project. Hang Yin would like to acknowledge the support by the Pioneer Centre for AI, DNRF grant number P1. Figure 6: **Left**: log-scale plot of the evaluation metric as the training progresses for the Soccer experiment. Observations are in third person. **Right**: plot of the reward gained via reinforcement learning on top of different representations. 
## Ethical Statement We believe that the present work does not raise specific ethical concerns. Generally speaking, however, any system endowing artificial agents with intelligent behavior may be misused e.g., for military applications. Since we propose a representation learning method enabling an agent to locate objects in an environment, this can be potentially embedded into intelligent harmful systems and deployed for unethical applications.
2309.11333
You can have your ensemble and run it too -- Deep Ensembles Spread Over Time
Ensembles of independently trained deep neural networks yield uncertainty estimates that rival Bayesian networks in performance. They also offer sizable improvements in terms of predictive performance over single models. However, deep ensembles are not commonly used in environments with limited computational budget -- such as autonomous driving -- since the complexity grows linearly with the number of ensemble members. An important observation that can be made for robotics applications, such as autonomous driving, is that data is typically sequential. For instance, when an object is to be recognized, an autonomous vehicle typically observes a sequence of images, rather than a single image. This raises the question, could the deep ensemble be spread over time? In this work, we propose and analyze Deep Ensembles Spread Over Time (DESOT). The idea is to apply only a single ensemble member to each data point in the sequence, and fuse the predictions over a sequence of data points. We implement and experiment with DESOT for traffic sign classification, where sequences of tracked image patches are to be classified. We find that DESOT obtains the benefits of deep ensembles, in terms of predictive and uncertainty estimation performance, while avoiding the added computational cost. Moreover, DESOT is simple to implement and does not require sequences during training. Finally, we find that DESOT, like deep ensembles, outperform single models for out-of-distribution detection.
Isak Meding, Alexander Bodin, Adam Tonderski, Joakim Johnander, Christoffer Petersson, Lennart Svensson
2023-09-20T14:09:38Z
http://arxiv.org/abs/2309.11333v1
# You can have your ensemble and run it too - Deep Ensembles Spread Over Time ###### Abstract Ensembles of independently trained deep neural networks yield uncertainty estimates that rival Bayesian networks in performance. They also offer sizable improvements in terms of predictive performance over single models. However, deep ensembles are not commonly used in environments with limited computational budget - such as autonomous driving - since the complexity grows linearly with the number of ensemble members. An important observation that can be made for robotics applications, such as autonomous driving, is that data is typically sequential. For instance, when an object is to be recognized, an autonomous vehicle typically observes a sequence of images, rather than a single image. This raises the question, could the deep ensemble be spread over time? In this work, we propose and analyze Deep Ensembles Spread Over Time (DESOT). The idea is to apply only a single ensemble member to each data point in the sequence, and fuse the predictions over a sequence of data points. We implement and experiment with DESOT for traffic sign classification, where sequences of tracked image patches are to be classified. We find that DESOT obtains the benefits of deep ensembles, in terms of predictive and uncertainty estimation performance, while avoiding the added computational cost. Moreover, DESOT is simple to implement and does not require sequences during training. Finally, we find that DESOT, like deep ensembles, outperform single models for out-of-distribution detection. ## 1 Introduction In safety-critical applications, such as autonomous driving (AD), both the predictive performance and the uncertainty quantification performance of neural network models are essential [1, 2, 3, 4, 5]. For example, the posterior probabilities produced by a perception model are required to contain all the information needed for the downstream decision-making systems to make safe and efficient driving decisions. Even for a seemingly mundane task such as traffic sign classification one can imagine the problem that could result from a model misidentifying a speed limit sign for a stop sign on the highway, while nevertheless outputting a high confidence. The general problem of overconfidence for certain types of neural networks [3] is particularly troublesome in safety-critical applications. When it comes to modeling uncertainty, it is often divided into two types of uncertainty - aleatoric uncertainty and epistemic uncertainty [6]. Aleatoric uncertainty is due to inherent randomness in the process that generates the data. It is therefore not possible to decrease this type of uncertainty by improving the model. On the other hand, epistemic uncertainty is that which is caused by a lack of data or knowledge about the underlying process [7]. One example is to predict the outcome of a biased dice throw. It is impossible to predict what side the dice will land on for certain, no matter how accurate the model is. This constitutes irreducible aleatoric uncertainty in the prediction. However, more data can help us identify the bias of the dice and improve the model, thereby decreasing the epistemic uncertainty. A larger number of models reduces epistemic uncertainty, while a larger number of frames reduces aleatoric uncertainty. Non-Bayesian neural networks often display great predictive performance, but struggle with generating high-quality epistemic uncertainty estimates [3, 8, 9]. 
Bayesian neural networks on the other hand often yield high-quality epistemic uncertainty estimates but are difficult to train [10]. One way to improve on the epistemic uncertainty estimation of regular neural networks while improving predictive performance is to use ensembles [8, 9]. It has long been known that ensembles of neural networks can quantify uncertainty in their predictions [11] and increase the predictive performance compared to single models [12, 13]. Lakshminarayanan _et al._[10] show that neural network ensembles can produce uncertainty estimates that outperform Bayesian models, while also achieving high predictive performance. This allows for a practical and high-performance alternative to Bayesian methods. This type of ensemble is commonly referred to as Deep Ensembles (DEs) [9, 14, 15], and has been verified by other authors to produce high-quality uncertainty estimates [8, 9]. As previously mentioned, ensembles typically also improve predictive performance over single models. This is mathematically proven in the classification setting by Hansen and Salamon [12] under the assumption of independent classification errors between ensemble members. They show that if each model achieves an accuracy greater than 50%, adding more models results in perfect classification performance in the limit. The assumption of independent classification errors does not typically hold in the real world, but the intuition is valid. There are also other potential advantages of using ensembles. Wasay and Idreos [16] show empirically that for a set model parameter budget, using ensembles results in shorter training times and higher accuracy compared to single models. Because of all the benefits of DEs outlined above, applications of them in safety-critical systems, such as AD, would clearly be desirable. However, an important limiting factor for AD applications is the requirement for low latency, real-time processing on resource-limited embedded hardware - a fact that is typically incompatible with the linear compute increase of DEs in the number of members. In this paper, we leverage the fact that the sensor data in AD systems are in general sequential in nature and propose Deep Ensembles Spread Over Time (DESOTs). Instead of applying all ensemble members to the sensor data at each time step, as would be the procedure for a conventional DE, _DESOT only applies one member at each time step, but different members at different consecutive time steps_. Hence, DESOT uses the same number of computations as a single model, but the same amount of memory as a DE with an equal number of members. We illustrate the DESOT method by extensively studying the task of traffic sign classification with sequences of tracked patches containing the traffic signs. We choose this task due to its importance in AD applications and its simple formulation, but the method is applicable to a wide range of settings and tasks that use sequence data. We show that DESOTs are competitive with traditional DEs in our setting, both in predictive performance and uncertainty quantification performance. We also show that they outperform both single models and MC-dropout models. In summary, our contributions are the following: **(i)** We propose Deep Ensembles Spread Over Time (DESOT), an approach applicable to sequences that brings the benefits of Deep Ensembles (DE) without the additional computational cost.
**(ii)** We thoroughly analyze the proposed approach on traffic sign classification, where a sequence of tracked patches are fed as input to the model. DESOT obtains the benefit of DEs at the computational cost of a single model. **(iii)** We thoroughly analyze the out-of-distribution detection performance, based on the entropy of the predictions, and find that DESOT, like deep ensembles, substantially outperforms a single model. Figure 1: Visualization of the different strategies in an example with three time steps and three ensemble members. The temporal fusion blocks combine predictions produced at each time step using some combination rule. At time step \(t\), \(t\in\{1,2,...,T\}\) with \(T=3\) in this example, the temporal fusion block produces a prediction based on the predictions at time steps \(1\) through \(t\). The ensemble fusion block combines individual ensemble member predictions into a final ensemble prediction at each time step. ## 2 Related Work **Deep ensembles:** Deep ensembles have been shown to be superior to any other ensemble method in uncertainty quantification given a fixed computational budget [9]. Ovadia _et al._[8] also showed that DEs are some of the best-performing models in uncertainty estimation. The intuition behind why this might be is that ensembling is a variance reduction technique, and therefore useful for increasing the quality of epistemic uncertainty estimates [17]. Furthermore, they have additional advantages in their simple implementation and high predictive performance. Though deep ensembles have been used for decades, Lakshminarayanan _et al._[10] show that they produce state-of-the-art uncertainty estimates. DEs are simple to train, with three basic steps involved: (i) ensure that a proper scoring rule is used as loss function; (ii) optionally use adversarial training to increase robustness; and (iii) train the ensemble using randomized initialization of model parameters to increase variety in the ensemble [10]. Many common loss functions, such as cross-entropy loss, are strictly proper scoring rules and can therefore be used in the deep ensemble framework. In practice, adversarial training is often omitted if improved robustness is not strictly necessary. Lakshminarayanan _et al._[10] also demonstrate that the model has the attractive property of decreasing its certainty of prediction in out-of-distribution examples, which was demonstrated using an ensemble trained on the MNIST dataset on examples from the NotMNIST dataset which contains letters instead of digits. It has later been verified that DEs are the SotA for UQ on OOD data [8, 9]. For these reasons, and that training time is not critically limited in the same way as computational capacity upon test-time inference, we chose DEs as the framework for generating our ensemble. It should be noted that we are not the first to attempt leveraging ensembles while keeping the test-time inference time down. Wortsman _et al._[18] propose to average the weights of ensemble members into a single model. Havasi _et al._[19] propose to divide a given neural network into sub-networks, which essentially constitute an ensemble. **Monte Carlo dropout:** Dropout was first introduced by Srivastava _et al._[20] as a regularization measure during training to limit overfitting and increase the generalizability of the learned representation. With dropout, each neuron is turned off at random during training according to a pre-specified probability, or dropout rate, \(p\).
This helps the network not to overfit, and therefore generalize better, as it has to create a more robust representation when any neuron can be dropped at any time. Recognizing that using an ensemble of a set of models is usually beneficial for model performance, Srivastava _et al._[20] show that using dropout during inference is equivalent to sampling from an exponential set of possible smaller models, which yields higher overall performance. Gal and Ghahramani [21] later showed that performing a number of forward passes through a model with dropout enabled and averaging the results can be seen as a Bayesian approximation. They chose to call this MC-dropout and claimed that it enables superior uncertainty estimation performance in both regression and classification tasks compared to vanilla models. Of note is that since the introduction of MC-dropout, Ovadia _et al._[8], Ashukha _et al._[9], and Lakshminarayanan _et al._[10], have all claimed that deep ensembles are superior in uncertainty quantification. However, MC-dropout remains widely used due to its simple implementation and general improvement of performance compared to vanilla single models. This makes it a relevant mode of comparison for our proposed model. **Traffic sign recognition:** We evaluate the proposed method on Traffic Sign Recognition (TSR), a field with a decades-long history of development [22]. Lately, TSR has become an important part of AD systems, and high and reliable performance is important for safety. This makes it an interesting application for DESOT. There are two main subproblems of TSR, traffic sign detection and traffic sign classification [23]. This research concerns itself with the latter and assumes that regions of interest in the image have already been identified by another system (in this specific case, human annotators) earlier in the ML pipeline. The domain is characterized by a large number of classes with an imbalanced class distribution. Additionally, variations in illumination, perspective, and occlusions are common [22], making the problem distinctly long-tailed. Furthermore, many of the classes are very similar in shape and color, but with important differences in meaning, such as speed limit signs. Deep learning has recently started revolutionizing this domain, with many models achieving accuracies of over 95% in research settings [24]. This means that any differences in predictive performance between the models tested are likely to be small in absolute terms, and performance benefits might instead lie in the performance of the models on difficult examples such as short sequences or obscured scenes. ## 3 Deep Ensembles Spread over Time In this section, we introduce Deep Ensembles Spread Over Time (DESOT), a method to obtain the benefits of a Deep Ensemble (DE) without additional computational cost. In many real-world scenarios, the input data is a sequence of frames. In the literature, it is then common to apply a single model and average the predictions [25]. This strategy is illustrated in Fig. 1a. It is well-known that replacing the single model with a DE improves both predictive- and uncertainty estimation performance (Fig. 1b). Unfortunately, this multiplies the computational cost with a factor \(M\), where \(M\) is the number of ensemble members. DESOT (Fig. 1c) instead divides the ensemble over time, running one ensemble member per frame, and thus has the same computational cost as a single model. 
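Returning to the Monte Carlo dropout procedure described above, its test-time behaviour can be summarized in a short PyTorch sketch; the helper name `mc_dropout_predict`, the toy classifier and the dropout rate are illustrative assumptions and not tied to the implementation used later in this paper.

```python
import torch
import torch.nn as nn

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 10) -> torch.Tensor:
    """Monte Carlo dropout at test time: keep dropout layers active, run several
    stochastic forward passes and average the resulting softmax outputs."""
    model.eval()                                   # freeze e.g. batch-norm statistics...
    for m in model.modules():
        if isinstance(m, nn.Dropout):
            m.train()                              # ...but keep dropout stochastic
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(dim=0)                       # averaged categorical distribution

# Toy usage with a small classifier that has dropout after its non-linearity.
toy_model = nn.Sequential(
    nn.Linear(16, 32), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(32, 4),
)
x = torch.randn(8, 16)
p = mc_dropout_predict(toy_model, x, n_samples=20)
print(p.shape)                                     # torch.Size([8, 4]); rows sum to one
```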
We hypothesize that DESOT still provides, despite the cheap computational cost, the benefits of DEs. ### Image Classification on Sequences We will now introduce the application of a single model or DE to _image classification on sequences_1. First, both the ensemble fusion block and the time fusion block are simple averaging operations. This is how Lakshminarayanan _et al._[10] implement ensemble fusion for deep ensembles in the classification setting. Now, define a sequence \(\mathbf{x}\in\mathbb{R}^{T\times H\times W\times 3}\) as \(T\) distinct still images, each with a height of \(H\) pixels, a width of \(W\) pixels, and three separate color channels. We will consider classification problems where a model should produce a categorical output distribution across \(C\) classes for such a sequence \(\mathbf{x}\). Assume there is a set of \(M\) different neural networks, each of which can conduct this classification. A single model \(m\in\{1,...,M\}\) produces a categorical output distribution \(\mathbf{p}_{m}(x^{t})\) for each single image \(x^{t}\) at time step \(t\in\{1,...,T\}\) in the sequence \(\mathbf{x}\). Then, the final output distribution for model \(m\) is defined as Footnote 1: Note that image classification on sequences is different from video classification, as the former does not need to model the dynamics of the scene. \[\mathbf{p}_{m}(\mathbf{x})=\frac{1}{T}\sum_{t=1}^{T}\mathbf{p}_{m}(x^{t})\enspace, \tag{1}\] which is the element-wise (class-wise) average across the output distributions for each image in the sequence at different time steps. This setup is what we will refer to as a single model (SM). Now imagine that all \(M\) models are used for classifying each image in the sequence, such that the final output distribution for the sequence is \[\mathbf{p}_{\text{DE}}(\mathbf{x})=\frac{1}{M}\sum_{m=1}^{M}\mathbf{p}_{m}( \mathbf{x})=\frac{1}{MT}\sum_{m=1}^{M}\sum_{t=1}^{T}\mathbf{p}_{m}(x^{t})\enspace, \tag{2}\] which is how we choose to apply deep ensembles (DEs) [10] to sequences - averaged across the images of the sequence \(\mathbf{x}\). \(\text{DE}_{M}\) will be used to denote an \(M\)-member deep ensemble. ### Desot Our proposed method, which we call DESOT, instead uses a single model \(m\in\{1,...,M\}\) for each image \(x^{t}\), \(t\in\{1,...,T\}\), to produce a categorical output distribution, but the models are alternated such that any given model \(m\) is used on average \(T/M\) times for a certain sequence of \(T\) images. Analogously to the notation used for DEs, DESOT\({}_{M}\) will denote an \(M\)-member deep ensemble spread over time. We let \(\sigma(t)\) denote the (repeated) mapping between each time step \(t\) and one of the ensemble members \(m\in\{1,...,M\}\), and write the final output distribution produced by DESOT across time steps, ensemble members, and classes as \[\mathbf{p}_{\text{DESOT}}(\mathbf{x})=\frac{1}{T}\sum_{t=1}^{T}\mathbf{p}_{ \sigma(t)}(x^{t})\enspace, \tag{3}\] As is clear from this definition, let us again stress that the DESOT method can only be applied to sequences since it fundamentally relies on alternating the ensemble member in use between neighboring frames. If DESOT was to be used on a sequence length of one, the model would be equivalent to a standard single model. In the real world, one would have a continuous stream of frames from the cameras of the car. In such a case, a set window size might be used such that equations 1-3 each constitute a moving average across some previous images. 
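A minimal NumPy sketch of the three fusion rules in Equations 1-3 is given below. It assumes that per-frame class distributions are already available and uses a simple round-robin schedule \(\sigma(t)=t \bmod M\); the function names are illustrative. In deployment DESOT would only evaluate member \(\sigma(t)\) at frame \(t\); the toy example precomputes all members' outputs solely so that the three rules can be compared on identical data.

```python
import numpy as np

def single_model_fuse(probs_m):
    """Equation (1): average one model's per-frame distributions; probs_m has shape [T, C]."""
    return probs_m.mean(axis=0)

def deep_ensemble_fuse(probs):
    """Equation (2): average over members and frames; probs has shape [M, T, C]."""
    return probs.mean(axis=(0, 1))

def desot_fuse(probs):
    """Equation (3): at frame t use only member sigma(t) = t mod M, then average over frames."""
    M, T, _ = probs.shape
    schedule = np.arange(T) % M                    # the round-robin mapping sigma(t)
    return probs[schedule, np.arange(T)].mean(axis=0)

# Toy example: M = 3 members, T = 11 frames (as in the tracked sequences used here), C = 5 classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 11, 5))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)   # per-frame softmax
print(single_model_fuse(probs[0]).round(3))
print(deep_ensemble_fuse(probs).round(3))
print(desot_fuse(probs).round(3))                  # each output is a distribution over C classes
```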
Due to the shortness of the sequences we use in this paper, which are only 11 frames, we have chosen to use the average across the entire sequence for a model. Figure 2: Examples of training and validation data. Training data is cropped from ZOD, and sequences are created by additional tracking across time using adjacent frames. ## 4 DESOT for Traffic Sign Recognition We apply DESOT to traffic sign classification, a challenging image classification problem that is a cornerstone to autonomous driving. While traffic sign classification is oftentimes studied as a single-frame problem, autonomous vehicles will in practice obtain a sequence of frames. ### Data We use the Zenseact Open Dataset (ZOD) [26], a multi-modal dataset collected by the autonomous driving software company Zenseact. The data has been collected across a number of countries in Europe. For this work, we use the single-frame data in ZOD, which has high-quality annotations for 446k unique signs. Using the annotations, the traffic signs are cropped and saved separately. The majority of these signs are withheld for single-frame training and validation. Due to the lack of temporal sign annotations, 30k randomly selected signs are extended into sequences of crops using an off-the-shelf tracker on the 10 preceding frames, sampled at 15 Hz. These sequences are only used for testing. See Figure 2 for samples from the datasets. ### Architecture and training We use Resnet18 as the base for all our models tested. Note however that, just like DEs, our approach is architecture agnostic. In addition to the single model and traditional DE, we compare DESOT to an MC-dropout implementation. We use a dropout layer after each non-linearity with a dropout rate of 0.2. The dropout-version is run once on each frame in a sequence and then aggregated by averaging in the same way as for the other models. All models are trained for 30 epochs using the AdamW optimizer [27] with a learning rate of 0.0005. Cosine annealing is used for more stable convergence. A batch size of 256 is used. Classes with fewer than 10 occurrences in the training- and validation datasets are omitted from training and evaluation, as well as crops smaller than 16 pixels along any dimension. In line with the definition of deep ensembles by Lakshminarayanan _et al._[10], all models are trained independently using a proper scoring rule. We use cross-entropy loss. For comparisons on uncertainty quantification (UQ), all models are temperature scaled. We use the procedure first introduced by Guo _et al._[3] where a separate validation set is used to find the temperature that optimizes negative log-likelihood. The validation set in question is a separate subset of single frames from ZOD. For the deep ensemble, a joint temperature scaling scheme is employed where the ensemble is treated as a single model with one temperature parameter. This was found to be superior to creating an ensemble of single models of perfect temperature. This finding agrees with research by Rahaman and Thiery [28]. We adopt the same temperature scaling for DESOT. ## 5 Results Our study comprises an analysis of predictive performance (Sec. 5.1) and out-of-distribution uncertainty quantification (Sec. 5.2). The latter is divided into two parts, focusing firstly on real out-of-distribution examples and secondly on systematically increasing the severity of augmentations for in-distribution examples. ### Predictive performance We assessed the predictive performance of various models on the sequence dataset. 
Throughout the later epochs, all models achieved high accuracy, yet DESOT\({}_{5}\) and DE\({}_{5}\) particularly outperformed their counterparts in early epochs. Importantly, DESOT matched the performance of traditional DEs, with both models slightly surpassing a single model with temporal fusion (SM). Comprehensive performance details are documented in Table 1. When assessed via F1-score, DESOT maintained parity with DEs, creating a wider gap to SM looking at averages across runs. However, do note that variance also increases compared to accuracy. The inclusion of MC-dropout in the single model significantly diminished its performance. Notably, all models, including SM, significantly outperformed our baseline - a single-frame model operating on a randomly selected sequence frame without temporal fusion. We also evaluated the calibration of each model using Brier reliability and Expected Calibration Error (ECE). Notably, a considerable gap emerged between the single model operating on a single frame and the one fused over the entire sequence. This disparity could be linked to the single-frame temperature scaling procedure used, and the tendency of averaging predictions over multiple images to reduce overall confidence, thereby negatively impacting calibration. However, both DESOT and DE substantially improved sequence calibration, even though they still fell significantly short of the single-frame calibration. These findings suggest the potential value of a sequence-aware fine-tuning step to create better-calibrated models for sequences. An essential aspect of traffic sign recognition, and by extension, classification, is the long-tailed distribution of classes. The fact that high performance on all classes is important accentuates the significance of performance on rare classes within the aggregated performance metrics. To evaluate the models' performance on these rare classes, we removed classes with more than 500 samples in the training dataset, leaving us with 625 sequences to evaluate performance on. The predictive performance results for this filtered dataset can be found in Table 2. Due to the smaller dataset (both in training and validation) the results are significantly more noisy. Nevertheless, the results indicate that both ensembling approaches improve performance for these difficult and rare cases. Overall, our method notably outperforms a single model with a similar computational footprint, while matching the performance of DE\({}_{5}\), which has a five-fold larger computational footprint. This superior performance could be attributed to the ensemble members collectively offering a more expressive representation of potential traffic signs than a single model. The distinct advantage of DESOTs lies in their capacity to leverage this richer representation while minimizing computational demands. ### Out-of-distribution uncertainty quantification performance In the real world, ML systems often encounter data absent from the training set, known as Out-of-Distribution (OOD) data. Thus, it's critical to evaluate how a system performs under such circumstances. Particularly, since machine learning systems can fail silently, confidently misclassifying OOD examples [1], we aim to characterize the behavior of various systems when confronted with OOD data. This is achieved by testing gradual OODness via augmentations, akin to Ovadia _et al._[8], and by assessing performance on completely unseen data, as per Lakshminarayanan _et al._[10]. 
This enables us to simulate common scenarios (e.g., rotated signs or changing lighting) and identify changes in output distribution for OOD detection. #### 5.2.1 Complete out-of-distribution data The Zenseact Open Dataset includes a class named NotListed, representing traffic signs not included in any other class, such as destination signs and rare special signs. This class serves as an unseen set of mixed traffic signs for OOD data testing. Since the models were not trained on this OOD data, previously used metrics like accuracy, F1-score, Brier score, and ECE aren't applicable. Instead, we use entropy as a metric to quantify if the models 'know what they don't know'. Higher entropy of the output distribution suggests higher model uncertainty. The results are illustrated via graphs showcasing the entropy scores of different models. The effect of varying ensemble sizes can be seen in Figure 3, whereas different approaches are more easily compared in Figure 4. As expected, all models exhibit a drastic increase in entropy for OOD data compared to in-distribution data. The single-frame model shows limited adjustment to its uncertainty on OOD data. On the contrary, single-frame DE\({}_{5}\) and DE\({}_{10}\) demonstrate a substantial entropy increase on OOD data, which is consistent with previous research [9, 10]. Interestingly, the increase from 5 to 10 ensemble members is relatively minor, suggesting a saturation effect. When evaluated over a sequence, the entropies of DEs and DESOTs exhibit remarkable similarity, with a slight advantage towards DEs. Surprisingly, the entropy distribution of single models, when aggregated over a sequence, nearly mirrors that of single-frame ensembles. Further, MC-dropout also escalates its entropy in a manner consistent with DEs and DESOTs, contradicting the findings of Lakshminarayanan _et al._[10]. However, we must emphasize that the adoption of MC-dropout results in degraded predictive performance, as highlighted in our prior results. Next, we used a simple thresholding strategy on the output entropy to test each model's OOD detection potential, rewarding methods that clearly separate in- and out-of-distribution entropy distributions. We fitted a threshold to each model using half of the available sequences to detect in- or out-of-distribution samples. The other half of the sequences dataset was then used to evaluate OOD detection performance for each model, with results presented in Table 3, including the entropy threshold value and metrics such as accuracy, precision, recall, and F1-score. While the optimal threshold value varied widely between models, the OOD detection performance of all models was relatively high. 
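The entropy-thresholding detector described above can be sketched as follows; the threshold-selection rule in this sketch (maximizing detection accuracy on the fitting split) is an illustrative choice rather than necessarily the exact procedure behind Table 3, and all names are hypothetical.

```python
import numpy as np

def predictive_entropy(probs, eps=1e-12):
    """Entropy of each fused categorical prediction; probs has shape [N, C]."""
    p = np.clip(probs, eps, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

def fit_entropy_threshold(entropy, is_ood):
    """Pick the entropy threshold maximizing OOD-detection accuracy on the fitting split."""
    candidates = np.unique(entropy)
    accuracies = [((entropy > t) == is_ood).mean() for t in candidates]
    return candidates[int(np.argmax(accuracies))]

# Toy usage: confident in-distribution predictions vs. near-uniform OOD predictions.
probs_id = np.full((100, 10), 0.01); probs_id[:, 0] = 0.91
probs_ood = np.full((100, 10), 0.10)
entropy = np.concatenate([predictive_entropy(probs_id), predictive_entropy(probs_ood)])
is_ood = np.concatenate([np.zeros(100, bool), np.ones(100, bool)])

threshold = fit_entropy_threshold(entropy[::2], is_ood[::2])   # fit on half of the sequences
flagged = entropy[1::2] > threshold                            # evaluate on the other half
print(round(float(threshold), 3), float((flagged == is_ood[1::2]).mean()))
```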
In a standard setting, DESOT and DE per \begin{table} \begin{tabular}{c c c c} \hline \hline **Model** & **Accuracy \(\uparrow\)** & **F1-score \(\uparrow\)** & **Brier reliability \(\downarrow\)** & **ECE \(\downarrow\)** \\ \hline SM (single frame) & 94.85 \(\pm\) 0.06 & 70.42 \(\pm\) 1.27 & **0.0069 \(\pm\) 0.0007** & **0.21 \(\pm\) 0.06** \\ SM & 97.34 \(\pm\) 0.06 & 81.12 \(\pm\) 1.75 & 0.0124 \(\pm\) 0.0003 & 2.88 \(\pm\) 0.06 \\ DESOT\({}_{5}\) & 97.60 \(\pm\) 0.03 & **83.26 \(\pm\) 0.93** & 0.0108 \(\pm\) 0.0002 & 2.25 \(\pm\) 0.05 \\ DE\({}_{5}\) & **97.64 \(\pm\) 0.01** & 82.73 \(\pm\) 1.01 & 0.0108 \(\pm\) 0.0002 & 2.25 \(\pm\) 0.05 \\ MC-dropout & 97.10 \(\pm\) 0.09 & 76.79 \(\pm\) 2.71 & 0.0165 \(\pm\) 0.0004 & 4.03 \(\pm\) 0.07 \\ \hline \hline \end{tabular} \end{table} Table 1: Performance of each strategy on the ZOD sequence dataset. The results include \(\pm\) one standard deviation of performance over 5 runs. All models are temperature scaled. Deep Ensemble (DE) improves over the single model (SM) in terms of predictive- and uncertainty estimation performance, but by multiplying the computational cost with the number of ensemble members. Our DESOT, in contrast, obtains the DE benefits while avoiding the added computational cost. \begin{table} \begin{tabular}{c c c} \hline \hline **Model** & **Accuracy \(\uparrow\)** & **F1-score \(\uparrow\)** \\ \hline SM (single frame) & 87.42 \(\pm\) 0.97 & 46.49 \(\pm\) 3.03 \\ SM & 87.70 \(\pm\) 0.58 & 43.65 \(\pm\) 2.84 \\ DESOT\({}_{5}\) & **89.53 \(\pm\) 0.71** & **47.53 \(\pm\) 1.58** \\ DE\({}_{5}\) & 89.41 \(\pm\) 0.38 & 46.57 \(\pm\) 2.19 \\ MC-dropout & 85.52 \(\pm\) 0.87 & 36.50 \(\pm\) 0.71 \\ \hline \hline \end{tabular} \end{table} Table 2: Predictive performance on a minority class version of the ZOD sequence dataset. Without extra computational cost beyond the single-model baseline, DESOT substantially outperform the single model. In contrast to the full dataset, the single-frame-single-model strategy yields competitive performance. form closely and clearly outperform the single model across all metrics. The introduction of temperature scaling, typically used to enhance the calibration of classification models, alters this picture. It marginally impacts the ensembles but dramatically improves the single model's performance, almost entirely closing the gap with the ensembles. Surprisingly ensembles usually outdo temperature scaling in OOD detection. As our focus is the comparison between DESOTs and DEs, we do not further explore temperature scaling. We also acknowledge insights from other research [29], highlighting potential pitfalls of this simple experiment, such as the risk of misidentifying ambiguous in-distribution samples as OOD - a problem we also encountered. Despite this, these tests offer a fundamental benchmark for more sophisticated OOD detection strategies, thereby illustrating the effectiveness of even such a straightforward method. #### 5.2.2 Shifted out-of-distribution data We generate increasingly out-of-distribution data by escalating the severity of augmentations, inspired by the works of Hendrycks and Dietterich [30] and Ovadia _et al._[8]. This experiment involved six distinct augmentations, each with a specific intensity range. As we increase the augmentation's intensity, we anticipate a decrease in model accuracy. Ideally, the models should adjust their certainty to correspond with this accuracy drop, thereby maintaining calibration. 
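The shifted-OOD evaluation loop can be sketched as below: a corruption is applied at increasing severity and, for each severity, accuracy and mean predictive entropy are recorded (Brier reliability is computed analogously from the same predictions). The Gaussian-noise corruption and the toy classifier are illustrative stand-ins; the actual six augmentations and their intensity ranges are the ones described in the text.

```python
import numpy as np

def evaluate_under_shift(predict_fn, images, labels, corrupt_fn, severities):
    """Apply a corruption at increasing severity and record accuracy and mean entropy.
    `predict_fn` maps a batch of images to class probabilities of shape [N, C]."""
    results = []
    for s in severities:
        probs = predict_fn(corrupt_fn(images, s))
        p = np.clip(probs, 1e-12, 1.0)
        results.append({
            "severity": s,
            "accuracy": float((probs.argmax(-1) == labels).mean()),
            "mean_entropy": float(-(p * np.log(p)).sum(-1).mean()),
        })
    return results

def add_gaussian_noise(images, severity, rng=np.random.default_rng(0)):
    """One corruption type: pixel-wise Gaussian noise whose scale is the severity."""
    return np.clip(images + rng.normal(scale=severity, size=images.shape), 0.0, 1.0)

# Toy stand-in classifier: decides the class from the mean image intensity.
def dummy_predict(batch):
    score = batch.mean(axis=(1, 2, 3))
    logits = np.stack([score, 1.0 - score], axis=-1) * 10.0
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

images = np.random.default_rng(1).uniform(size=(32, 3, 8, 8))
labels = (images.mean(axis=(1, 2, 3)) <= 0.5).astype(int)      # consistent with dummy_predict
for row in evaluate_under_shift(dummy_predict, images, labels, add_gaussian_noise, [0.0, 0.25, 0.5]):
    print(row)
```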
Reliable model uncertainty estimates offer actionable insights to downstream users. However, when augmentation intensities are high, calibration may not provide a meaningful measure of uncertainty quantification. Since accurate sequence classification often becomes challenging under intense augmentations, it's more crucial to identify such samples as out-of-distribution rather than obtaining a well-calibrated class distribution, as the ground truth class loses its relevance. Hence, we have also recorded the mean entropy across the dataset for each model and augmentation severity. Figure 5 illustrates how various augmentations affect accuracy, Brier reliability [31, 32], and mean entropy of models in sequence settings at different intensity levels. Lower Brier reliability indicates better model calibration, while higher mean entropy suggests higher model uncertainty. Figure 4: Histogram of the entropy for OOD data for the single-frame test dataset (left) and the sequences dataset (right). The \(+T\) suffix denotes temperature scaling. Similar to deep ensembles, DESOT increases the entropy for OOD data, facilitating OOD detection. \begin{table} \begin{tabular}{l c c c c c} \hline \hline & Threshold & Accuracy & Precision & Recall & F1 \\ \hline SM & 0.709 & 0.895 & 0.811 & 0.905 & 0.855 \\ DESOT\({}_{5}\) & 1.048 & **0.919** & 0.852 & **0.922** & **0.886** \\ DE\({}_{5}\) & 1.071 & **0.919** & **0.856** & 0.919 & **0.886** \\ MC-dropout & 1.428 & 0.910 & 0.845 & 0.900 & 0.872 \\ \hline SM + T & 1.463 & 0.916 & **0.856** & 0.906 & 0.880 \\ DESOT\({}_{5}\) + T & 1.143 & 0.919 & 0.850 & **0.928** & 0.887 \\ DE\({}_{5}\) + T & 1.178 & **0.920** & 0.853 & 0.925 & **0.888** \\ MC-dropout + T & 1.610 & 0.914 & 0.842 & 0.921 & 0.880 \\ \hline \hline \end{tabular} \end{table} Table 3: Results from applying an entropy threshold for OOD detection on the sequences dataset. Note that DEs and DESOTs perform the best out of all models. When subject to temperature scaling (rows with \(+T\) suffix), the difference in performance decreases. Figure 3: Histogram of the entropy for OOD data (red) and in-distribution data (blue) for the ZOD sequence dataset. We run various ensemble sizes \(M\in\{1,5,10\}\), which are differentiated by color shade. Again, note that DE\({}_{1}\) and DESOT\({}_{1}\) are special cases that are equivalent to a single model. The vertical dashed lines are the mean entropy for the model of the same color. Like standard deep ensembles, DESOT increases its entropy on OOD data as additional ensemble members are added. For brevity, we omit results for the single-frame setting. Observing the graphs, we notice a trend where a decrease in model accuracy corresponds with an increase in the Brier reliability score. This suggests that models become less calibrated as the data turns more out-of-distribution. Furthermore, entropy appears to rise with augmentation intensity, inversely correlating with accuracy. The strange trends in the rotation augmentation graphs are attributed to many signs being rotationally symmetric under rotations of less than \(360^{\circ}\). Examining the results from augmented OOD data, it appears that all models show comparable robustness to the augmentations in terms of predictive performance, deteriorating at similar rates for increased intensities. However, there's a greater variation in model calibration upon augmentation. 
Single models struggle to maintain calibration at higher augmentation intensities, particularly for rotations, hue, motion blur, and Gaussian noise, a phenomenon also observed by Ovadia _et al._[8]. As in the earlier OOD study, MC-dropout delivers convincing robustness, albeit at the expense of predictive performance. It is important to note that our study and that of Ovadia _et al._[8] are performed on distinct datasets - our more complex sequence data versus their simpler MNIST dataset. In general, DEs and DESOTs seem to offer similar calibration performances, measured in Brier reliability, for a given augmentation intensity, outperforming single models. Post temperature scaling, the disparity between the ensembling methods and single models is not as pronounced. An exception is Gaussian noise corruption, where single models distinctly fail to maintain calibration compared to others. ## 6 Conclusions We have introduced Deep Ensembles Spread Over Time (DESOT), a novel approach that distributes ensemble members across a sequence, achieving the predictive power and out-of-distribution robustness of running a full deep ensemble at each time step, with the computational cost of a single model. We extensively demonstrate the viability of this approach on the task of traffic sign classification, a highly relevant task where misinterpretations can lead to catastrophic outcomes and out-of-distribution signs are prevalent. Looking ahead, we see exciting potential for DESOTs in more complex scenarios, such as 3D object detection where one could use a Kalman Filter for temporal fusion. We believe that our work opens up an avenue for new research on high-performing resource-efficient models, by demonstrating that ensembles can indeed be spread over time. **Limitations:** The proposed strategy, DESOT, is applicable only for sequence processing problems. Although, in theory, the computational resources should match those required for a single model, it's practical to anticipate that at least two members of the ensemble might need concurrent loading into memory. This scenario could potentially double the memory requirements. Moreover, while the improvements of DESOT (and deep ensembles) over SM are significant, we were surprised to see the large benefits of SM over the single-frame-single-model. A similar observation can be made for the out-of-distribution detection, where the effect of temperature scaling almost equals that of DESOT or deep ensembles. That said, combining DESOT or DEs with temperature scaling provides the best performance. We also note that the model calibration is actually better on a single frame than it is for multiple frames. One potential future research direction is thus to investigate how temperature scaling is best applied to the image classification on sequences problem. **Acknowledgements:** This work was partially supported by the Wallenberg AI, Autonomous Systems, and Software Programme (WASP). Figure 5: Uncertainty quantification performance for each model on augmented data of increasing intensity. The performance is measured in accuracy, Brier reliability, and mean entropy. Tested on the sequences dataset. All approaches tend to increase their entropy as augmentations become stronger. For some augmentations, the single model without temperature scaling and the MC-dropout increase entropy much more or much less than the other approaches.
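For readers who want the DESOT mechanism summarized above in concrete form, a minimal sketch follows. The per-frame member selection (member \(t \bmod M\) at frame \(t\)) is as described; the plain averaging of class probabilities over the sequence is our illustrative fusion choice, not necessarily the exact rule used in the paper.

```python
import numpy as np

def desot_predict(members, frames):
    """Deep Ensembles Spread Over Time, sketched.

    members: list of M single-frame models, each mapping a frame to class probabilities.
    frames:  the T frames of one traffic-sign sequence.
    At frame t only member t % M is evaluated, so the per-frame cost equals a single
    model; per-frame probabilities are then fused over the sequence (simple averaging
    here, an illustrative choice).
    """
    probs = [members[t % len(members)](frame) for t, frame in enumerate(frames)]
    return np.mean(probs, axis=0)

def deep_ensemble_predict(members, frames):
    """Standard DE baseline: every member on every frame (M times the per-frame cost)."""
    probs = [m(frame) for frame in frames for m in members]
    return np.mean(probs, axis=0)
```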
2309.08662
Modeling the Past Hypothesis: A Mechanical Cosmology
There is a paradox in the standard model of cosmology. How can matter in the early universe have been in thermal equilibrium, indicating maximum entropy, but the initial state also have been low entropy (the "past hypothesis"), so as to underpin the second law of thermodynamics? The problem has been highly contested, with the only consensus being that gravity plays a role in the story, but with the exact mechanism undecided. In this paper, we construct a well-defined mechanical model to study this paradox. We show how it reproduces the salient features of standard big-bang cosmology with surprising success, and we use it to produce novel results on the statistical mechanics of a gas in an expanding universe. We conclude with a discussion of potential uses of the model, including the explicit computation of the time-dependent coarse-grained entropies needed to investigate the past hypothesis.
Jordan Scharnhorst, Anthony Aguirre
2023-09-15T18:00:02Z
http://arxiv.org/abs/2309.08662v2
# Modeling the Past Hypothesis: A Mechanical Cosmology ###### Abstract There is a paradox in the standard model of cosmology. How can matter in the early universe have been in thermal equilibrium, indicating maximum entropy, but the initial state also have been low entropy (the "past hypothesis"), so as to underpin the second law of thermodynamics? The problem has been highly contested, with the only consensus being that gravity plays a role in the story, but with the exact mechanism undecided. In this paper, we construct a well-defined mechanical model to study this paradox. We show how it reproduces the salient features of standard big-bang cosmology with surprising success, and we use it to produce novel results on the statistical mechanics of a gas in an expanding universe. We conclude with a discussion of potential uses of the model, including the explicit computation of the time-dependent coarse-grained entropies needed to investigate the past hypothesis. Past Hypothesis, Second Law of Thermodynamics, Cosmology, Relativistic Gas, Observational Entropy ## I Introduction In contrast to the existence of ubiquitous time-asymmetric phenomena, known microscopic physical laws are all time-reversal (or CPT) invariant. It has been argued that the boundary conditions of the theory can then explain the asymmetry. This is the foundation of the past hypothesis - the hypothesis that the early universe, or initial state of the universe, had a very low entropy compared to the entropy today. The past hypothesis is widely presumed to account for the arrow of time and the irreversible phenomena we observe in thermodynamic systems [1; 2]. It is also known from measurements of the cosmic microwave background and basic theoretical consistency that the constituents of the early universe were in thermal equilibrium. However, this combines with the past hypothesis to produce a paradox, as equilibrium states have maximum entropy by definition. Competing resolutions have been proposed in the literature, with two main ideas emerging [3; 4]: 1. The clustering explanation; 2. The expansion explanation. These resolutions argue that the early universe was actually quite _out_ of equilibrium when taking gravity into account. The first argument holds that a uniform matter distribution is actually _lower entropy_ than a clumped distribution, since matter clumps under the influence of Newtonian gravity. The second argument holds that low entropy, in the non-equilibrium sense, was due to cosmological expansion. This argument makes the point that expansion changes the equilibrium state and does so faster than the matter can attain its equilibrium. The early smallness of the scale factor acts as a constraint that leaves the matter degrees of freedom stuck in a state that has lower entropy than ones in which the constraint is removed. Rovelli [4] argues that the cosmology explanation can be seen explicitly in a suitable model calculation. Earman [5] has critiqued this competition and argues that "a resolution of the controversy is not to be obtained by means of intuition pumps but rather through precise model calculations." How might Earman's vision be realized? Such a model would first and foremost require a well-defined state space since one needs such a state space to rigorously discuss entropy and entropy increase. This space is usually a symplectic manifold of coordinates and momenta or a Hilbert space. 
With this space, a coarse-graining can be defined, which partitions the space into macrostates that are collections of microstates. 1 Beyond this, a model should truly capture the salient features of cosmology: expansion rates, equations of state, freeze-out, global geometry, dark energy, and structure formation. Footnote 1: An alternative approach [6; 7] to entropy increase in a system involves tracing or marginalizing over degrees of freedom outside the system, but in the case of our model cosmology there are none. Work has been done on aspects of calculating the entropy of matter during gravitational clustering [8] and in an expanding universe [9; 10; 11]. Many argue that the expansion of the universe is isentropic (or nearly so), meaning that there is no net entropy change in matter due to expansion. Some argue that entropy continues to increase throughout expansion, but at a rate that is too slow to keep up with the growth of the maximum entropy [12; 13]. Approaches to kinetic theory in an expanding universe based on the Boltzmann equation have been explored [14; 15; 16; 17; 18], but these treat spacetime and gravity separately and as background. The extent to which self-contained models have been constructed is minimal. We seek to provide an analog model for cosmology, in which the main modalities of entropy change can be studied explicitly and in a self-contained way. In this paper, we will define the model, show how it reproduces the key aspects of big-bang cosmology, interpret standard cosmological results in this lens, show an application to non-equilibrium statistical mechanics, and end with a discussion of how it can be used to compute coarse-grained entropy, which is left for later work. ## II Statistical mechanics and gravity The intersection of thermodynamics and gravity is a rich subject. (For a general review, see [19].) Since the development of general relativity, we have learned that black holes radiate, that particles can be created in a gravitational field, and that there is a correspondence between Anti-de Sitter space and conformal field theories, and we have discovered the gravitational path integral. While these are all quantum effects, classical studies of cosmology and gravity are still well-motivated [20]. Some of gravity's weird features (potentially even including the relation between area and entropy [21]) are manifest in a classical setting due to gravity's long-range and unshielded nature. The thermodynamic nature of gravitational systems, even classical, is surely strange. It is well-known that gravitational systems have negative heat-capacities - which is why black holes get _hotter_ rather than _colder_ as they evaporate and give off heat. Similarly, a gas with an attractive Newtonian gravitational potential has no equilibrium - it will form a 'core' that becomes increasingly hot in a runaway process, while emitting heat in the form of particles that have escaped the potential energy barrier, called a 'halo' [8]. In the absence of cutoffs, the entropy will diverge as a function of time. "Non-equilibrium" is the general description for the statistical physics of gravitational systems [20]. Our model will not fully tame this strangeness; but it can perhaps segregate types of strangeness from each other, in particular those that stem from quantum rather than classical gravity, and those that relate to clustering rather than those that would persist even in the absence of gravitational structure growth. ## III The model Our model, defined by Eq. 
(1) below, is closed, well-behaved, and consists only of a single, classical Hamiltonian with \(6N+1\) dynamical degrees of freedom, where \(N\) is the number of particles. The \(6N\) degrees come from the particle positions and momenta, and \(2\) degrees come from \(a\) and \(p_{a}\), with one degree removed through the "Hamiltonian constraint" that \(H=0\), which should be satisfied in theories with dynamical degrees of freedom for gravity. (See [22] for a comprehensive and [23] for a modern review of the ADM formalism, in which the same point arises.) In reducing the metric degrees of freedom to just the scale factor, our model is like a minisuperspace model. These models describe isotropic and homogeneous gravitational systems in which fields couple to the FLRW scale factor \(a(t)\). They then take the scale factor as a single degree of freedom and attempt canonical or path-integral quantization; this is the foundation of quantum cosmology [24]. The dynamics for \(a\) are induced via a dependence of the Lagrangian on \(\dot{a}\). Unlike a minisuperspace model, our model does not assume homogeneity of the matter, but it does treat the matter and scale factor degrees of freedom on the same footing.2 Footnote 2: Although the interpretation of the scale factor becomes unclear if the particle distribution is highly nonuniform, there is nothing formally wrong with the model in this regime. All particle coordinates \(x_{i}\) and momenta \(p_{i}\) refer to the comoving coordinates and comoving momenta, compared to the physical coordinates \(ax_{i}\) and physical momenta \(p_{i}/a\). For convenience we set \(a(0)=1\), else the physical momenta would read \(p_{i}/(a/a(0))\). The Hamiltonian reads: \[H=\mathcal{N}\Big{[}\frac{1}{8\pi G}\left(-\left(8\pi G\right)^{2}\frac{{p_{a}}^{2}}{12a}-3ka-\Lambda a^{3}\right)\] \[+\sum_{i}\!\sqrt{m_{i}^{2}+\frac{p_{r_{i}}^{2}(1-kr_{i}^{2})}{a^{2}}+\frac{p_{\theta_{i}}^{2}}{a^{2}r_{i}^{2}}+\frac{p_{\varphi_{i}}^{2}}{a^{2}r_{i}^{2}\sin^{2}(\theta_{i})}} \tag{1}\] \[+\sum_{i,j}V_{ij}(a|\mathbf{r}_{i}-\mathbf{r}_{j}|)\Big{]},\] where \(k\) is either \(\pm 1\) or \(0\) and defines the global geometry of the spacetime, and \(\Lambda\) is the cosmological constant. Put simply, \[H \sim \text{Kinetic Term for }a\,+\text{ Potential terms for }a\] \[+\text{ Relativistic Particles }+\text{ Interactions.}\] From the corresponding Lagrangian (derived in the Appendix), we can compute the conjugate momenta \(p_{a}=-3a\dot{a}/(4\pi G\mathcal{N})\) and \(p_{x_{i}}\sim(mv_{i}/a^{2})/\sqrt{1-a^{2}v^{2}}\). The \(p_{x_{i}}\) have a dependence on \(k\), and the \(\sim\) becomes an equality when \(k=0\). We immediately notice a few things. First, the kinetic term for \(a\) is both negative and non-separable (there is a coupling between a coordinate and momentum). In minisuperspace models (which have the same Hamiltonian structure), gravitational energy has the opposite sign of the energy in matter - which is necessary so that the Hamiltonian constraint \(H=0\) can hold. There is an extraneous variable \(\mathcal{N}\), called the _lapse function_, used as a Lagrange multiplier to enforce this constraint, and the lack of time dependence indicates that energy is conserved. The inter-particle interactions depend on the physical distance between particles, \(a|\mathbf{r}_{i}-\mathbf{r}_{j}|\), rather than the comoving distance \(|\mathbf{r}_{i}-\mathbf{r}_{j}|\). The model is a good approximation for cosmology as \(N\to\infty\), assuming the spatial distributions become uniform. 
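A small numerical sketch of Eq. (1) in the flat, non-interacting case (\(k=\Lambda=0\), \(V_{ij}=0\), gauge \(\mathcal{N}=1\), units \(G=c=1\)) is given below; the particle number, masses, momenta and integrator are our own illustrative choices. The constraint \(H=0\) fixes \(p_{a}(0)\), and Hamilton's equations for \(a\) and \(p_{a}\) are integrated directly.

```python
import numpy as np
from scipy.integrate import solve_ivp

G = 1.0                       # units with G = c = 1
m = 1.0                       # particle mass
p_com = np.full(250, 1e-3)    # fixed comoving |p_i|; small => non-relativistic gas

def particle_energy(a):
    """sqrt(m^2 + |p|^2 / a^2) for each particle (comoving momenta are constants here)."""
    return np.sqrt(m**2 + (p_com / a)**2)

def rhs(t, y):
    a, p_a = y
    E = particle_energy(a)
    a_dot = -(4 * np.pi * G / 3) * p_a / a                 # da/dt = dH/dp_a
    pa_dot = (-(2 * np.pi * G / 3) * p_a**2 / a**2         # dp_a/dt = -dH/da,
              + np.sum(p_com**2 / (a**3 * E)))             # k = Lambda = V_ij = 0
    return [a_dot, pa_dot]

# Hamiltonian constraint H = 0 fixes p_a(0); the negative root gives expansion.
a0 = 1.0
p_a0 = -np.sqrt(12 * a0 * particle_energy(a0).sum() / (8 * np.pi * G))
sol = solve_ivp(rhs, (0.0, 50.0), [a0, p_a0], rtol=1e-8, atol=1e-10, dense_output=True)

# Fit a(t) ~ t^b at late times; expect b ~ 2/3 here (matter-like),
# or b ~ 1/2 if p_com >> m (radiation-like).
t = np.linspace(5.0, 50.0, 200)
b = np.polyfit(np.log(t), np.log(sol.sol(t)[0]), 1)[0]
print(f"scaling exponent b ~ {b:.3f}")
```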
Inflationary physics is, in principle, simple to include as the homogenous inflaton is a single degree of freedom that can be treated mechanically. However, at the time and energy scales of inflation, classical mechanics does not apply. The model does not capture chemical effects, e.g. particle conversion, but it does capture kinetic equilibrium and decoupling, which underpin the chemical effects. The thermal and energetic effects of massless particles 3 are simple to include by taking the limit as \(m\to 0\). Footnote 3: Although the model treats relativistic energies correctly, it is manifestly _not_ a special- or general-relativistic model. ## IV Equations of motion Here and in the following, unless otherwise stated, \(|\mathbf{p}_{i}|^{2}\) is understood to depend on \(k\) and reduces to the "flat" definition \(|\mathbf{p}_{i}|^{2}=p_{x_{i}}^{2}+p_{y_{i}}^{2}+p_{z_{i}}^{2}\) upon setting \(k=0\). \(\mathcal{N}\) is non-dynamical but treated as a coordinate for the purpose of enforcing the constraint \(H=0\). Its conjugate momentum \(p_{N}\) is 0 via \(p_{\mathcal{N}}=\frac{\partial L}{\partial\mathcal{N}}\), where \(L\) is the corresponding model Lagrangian, derived in the appendix. Trivially, \(\dot{p}_{\mathcal{N}}=0\) and \(\dot{\mathcal{N}}=0\). ### Hamilton's Equations Hamilton's equations read: \[\begin{split}\dot{p}_{\mathcal{N}}&=-\frac{ \partial H}{\partial\mathcal{N}}=\frac{1}{8\pi G}\left(-\left(8\pi G\right)^{2 }\frac{{p_{a}}^{2}}{12a}-3ka-\Lambda a^{3}\right)\\ &\quad\quad+\sum_{i}\sqrt{m_{i}^{2}+\frac{|\mathbf{p}_{i}|^{2}}{a^{2 }}}+\sum_{i,j}V_{ij}(a|\mathbf{r}_{i}-\mathbf{r}_{j}|)=0,\end{split} \tag{2}\] where the last equality follows from the fact that \(p_{\mathcal{N}}\) is a constant. Figure 1: Left: Paths of \(N=27\) non-interacting particles in physical coordinates. Partial trails are joined by _comoving_ periodic boundary conditions. Right: Paths of \(N=27\) non-interacting particles in comoving coordinates, in which the slowing down of the comoving velocities can be seen. Bottom: The scale factor \(a(t)\) with the color-mapped time dependence. \[\dot{a}=\frac{\partial H}{\partial p_{a}}=\mathcal{N}\Big{[}-\frac{4}{3}\pi G \frac{p_{a}}{a}\Big{]} \tag{3}\] \[\dot{p}_{a}=-\frac{\partial H}{\partial a}=\mathcal{N}\Big{[}-\frac {2}{3}\pi G\frac{p_{a}^{2}}{a^{2}}+\frac{3k}{8\pi G}+3\Lambda a^{2}+ \tag{4}\] \[\frac{1}{a^{3}}\sum_{i}\frac{|\mathbf{p}_{i}|^{2}}{\sqrt{m^{2}+\frac{ |\mathbf{p}_{i}|^{2}}{a^{2}}}}-\sum_{i,j}\frac{\partial V_{ij}}{\partial(a|\mathbf{r}_ {i}-\mathbf{r}_{j}|)}|\mathbf{r}_{i}-\mathbf{r}_{j}|\Big{]}\] Using Eq. (3), we can write the Hubble parameter as \(h=\dot{a}/a=-(4\pi G\mathcal{N}/3a^{2})p_{a}\). ### Friedmann Equations The well-known Friedmann equations are a set of two equations governing the dynamics of spacetime and matter for the FLRW metric. Any reasonable model of cosmology should reproduce the Friedman equations, or something equivalent. We will see that this is the case. Combining Hamilton's equations for \(a\) upon gauging \(\mathcal{N}=1\), we have \[2\frac{\ddot{a}}{a}+\frac{\dot{a}^{2}}{a^{2}}+\frac{k}{a^{2}}=-8\pi GP+\Lambda, \tag{5}\] where \[P=\frac{1}{3}\Bigg{(}\frac{1}{a^{3}}\sum_{i}\frac{|\mathbf{p}_{i}|^{ 2}/a^{2}}{\sqrt{m_{i}^{2}+\frac{|\mathbf{p}_{i}|^{2}}{a^{2}}}}- \tag{6}\] \[\sum_{i,j}\frac{\partial V_{ij}}{\partial(a|\mathbf{r}_{i}-\mathbf{r}_{j }|)}|\mathbf{r}_{i}-\mathbf{r}_{j}|\Bigg{)},\] and \(|\mathbf{r}_{i}-\mathbf{r}_{j}|\) is the comoving interparticle distance. 
This is a linear combination of the standard two Friedmann equations. The definition of \(P\) in the non-interacting case agrees with that derived for a relativistic gas thermodynamically [25]. Rearranging Eq. (2) and using the relation \(p_{a}=-3a\dot{a}/(4\pi G)\), we derive \[\frac{k}{a^{2}}+\frac{\dot{a}^{2}}{a^{2}}=\frac{8\pi G\rho+\Lambda}{3}, \tag{7}\] where \[\rho=\frac{1}{a^{3}}\left(\sum_{i}\sqrt{m_{i}^{2}+\frac{|\mathbf{p}_{i}|^{2}}{a^{2}}}+\sum_{i,j}V_{ij}(a|\mathbf{r}_{i}-\mathbf{r}_{j}|)\right). \tag{8}\] The interpretation is quite clear, \(\rho=E/V\), where \(V\) is the physical volume and \(E\) is the total energy of the particles. Together, these let us derive \[\frac{\ddot{a}}{a}=-\frac{4\pi G}{3}\left(\rho+3P\right)+\frac{\Lambda}{3}, \tag{9}\] the other Friedmann equation. Eq. (5) is exactly the Euler-Lagrange equation of the model Lagrangian, and needs to be combined with the 00 component equation of the Einstein Equations (conservation of energy), Eq. (7), to obtain Eq. (9). ### Particles In the absence of interactions, particles follow the standard FLRW geodesics. For \(k=0\) and with interactions, they have the following Hamilton's equations: \[\dot{\mathbf{x}}_{i}=\mathcal{N}\frac{\mathbf{p}_{i}/a^{2}}{\sqrt{m^{2}+\frac{|\mathbf{p}_{i}|^{2}}{a^{2}}}} \tag{10}\] and \[\dot{\mathbf{p}}_{i}=-\mathcal{N}a\sum_{j}\frac{\partial V_{ij}}{\partial(a|\mathbf{r}_{i}-\mathbf{r}_{j}|)}. \tag{11}\] The peculiar velocity of a particle, which enters directly into the Lagrangian (see Appendix), is \(\dot{\mathbf{x}}_{i}a\), and the recessional velocity is \(\dot{a}\mathbf{x}_{i}=(\dot{a}/a)(a\mathbf{x}_{i})\). The total velocity then is the sum of the two, equal to the time derivative of the physical distance: \(v_{tot}=\frac{d}{dt}(a\mathbf{x}_{i})=\dot{a}\mathbf{x}_{i}+\dot{\mathbf{x}}_{i}a\). In FIG. 1, it is evident that the particles slow down in comoving coordinates; for non-relativistic (NR) particles, the peculiar velocity has a \(1/a\) scaling, which acts as friction. For highly-relativistic (HR) particles, the peculiar velocity is approximately constant. So far, we have seen that the equations of motion are the Friedmann equations and the particles follow their (interacting) geodesics. The particles have built-in redshift/blueshift, as the comoving momenta \(p_{i}\) couple directly to \(a\), and thus energy is able to be transferred between the gravitational field and matter. Figure 2: Example time dependence of the terms in Eq. (1) for \(k=\Lambda=0\). \(K\) is the particle kinetic energy, \(V\) is particle potential energy, \(T\) is total gravitational energy. In red is the total energy, which is 0. \(V(r)\) is an inverse power law. ## V Reproducing Cosmology Numerically Although many results can be derived directly from the model, it is useful to explore (and verify) aspects of the model numerically. For our purposes, \(N<1000\) suffices. We primarily explore repulsive potentials; with an attractive potential, the calculation produces clustering, but to study this in detail would require more sophisticated (but well known) \(N\)-body methods.4 Footnote 4: The caveat is that the momenta also need to be treated, as they couple to \(a\). The numerical price to pay is small, with a number of “momentum” computations of order \(N\), compared to the number of particle-particle interaction computations, which is of order \(N^{2}\). ### Simulation Methods To simulate an example model, we evolve Hamilton's equations Eqs. 
(2) - (4) and (10) - (11) with \(2N\) particles on a flat space with no cosmological constant. The constraint that \(H=0\), equivalent to Eq. (2), is imposed via an initial condition on \(p_{a}\). We impose periodic boundary conditions on the comoving coordinates, making the space topologically \(\mathbb{T}^{3}\). Units are chosen such that \(a(0)=1\) and \(c=1.\) The positions are initialized on a uniform lattice and the momenta are initialized with random directions and sampled from a Maxwell-Jüttner distribution, the equilibrium distribution of a relativistic gas. (See Sec. VII) To study thermalization, a suitable short-ranged potential must be chosen. In FLRW universes, particles interact via the _physical distance_ between them, not the comoving distance, so the interaction potential should depend on \(a\) in addition to the comoving coordinates. The potential must be periodic, with comoving period equal to the box size, and the forces should be continuous across the boundary. Beyond these requirements, we are free to choose any potential as the thermodynamic details depend weakly on the interactions. In particular we choose \(V\sim 1/(ar)^{\alpha}\), with \(\delta x_{i}\to\sin\left(\frac{\pi\delta x_{i}}{2}\right)\) to satisfy periodicity, so \[V_{ij}=\frac{q}{\left(a\sqrt{\sin(\frac{\pi\delta x}{2})^{2}+\sin(\frac{\pi\delta y}{2})^{2}+\sin(\frac{\pi\delta z}{2})^{2}}\right)^{\alpha}}, \tag{12}\] where \(\delta x=x_{i}-x_{j}.\) This potential is periodic in the three comoving coordinates, depends correctly on \(a\), is time independent, approaches the corresponding central potential \(V\sim 1/(ar)^{\alpha}\) rapidly near \(r=0\), and is a good approximation across the entire region. See the appendix for a comparison with the corresponding central potential for \(\alpha=4\). ### Numerical Cosmology We can see how a time slice of the simulation looks in FIG. 1. It shows a universe with particle tracks in both physical and comoving coordinates, color-coded according to time. The color-coding is also seen in the data for \(a(t)\) below, so the value of \(a\) at any instance in a particle's trajectory can be read off. The data for \(a\) are fitted to a power law, \(a(t)\sim t^{b}\), from which \(b\) is extracted. As seen in FIG. 3, the system displays the expected scaling behavior of \(a(t)\) for radiation and matter domination, \(a(t)\sim t^{2/(3(1+w))}\). The eras are created by initializing the matter as either HR or NR. Additionally, we see the correct behavior for closed and de Sitter universes, corresponding to \(k=1\) and \(\Lambda>0\) respectively. FIG. 4 shows the scaling exponent \(2/(3(1+w))\) for a universe with \(k=\Lambda=0\) as a function of the initial inverse temperature and mass \(\beta m\) of the matter. In this scenario, the momentum distribution of the particles of mass \(m\) is initialized as a random sample of the relativistic equilibrium distribution, Eq. (22). We will see in the following sections that \(w\) runs with time outside of the \(w=1/3\) and \(w=0\) fixed points - \(a(t)\sim t^{2/(3(1+w))}\) is only a solution of the Friedmann equations when \(w\) is a constant, or slowly varying compared to the Hubble rate. FIG. 4 retains validity since the time scale over which \(a\) is fitted is small compared to the time scale over which \(w\) runs. \(w\) decreases with expansion, corresponding to the matter losing both kinetic energy to redshift and potential energy as it spreads out. FIG. 2 shows how the terms in Eq. 
(1) behave with time for a growing universe with \(k=\Lambda=0\). Generically, all the terms decrease in magnitude with time as long as the interparticle potential decreases with increasing \(r\). \(K\) tends to the sum of the rest masses and \(T\) tends to \(-K\), both constant, while \(V\) always goes to \(0\) and crosses \(K\) if \(V(0)>K(0)\). ## VI A Cosmological Gas The overall goal of this study is to construct theoretical models and numerics to study competing resolutions to the past hypothesis paradox. How, then, would entropy computations look? We argue in Sec. VIII that a definition of entropy suitable for non-equilibrium systems is required. However, it will be useful to study equilibrium entropy to the extent that we can, in order to connect with arguments discussed in the introduction [8; 9; 10; 11; 12; 13]. In this section, we consider the equilibrium thermodynamics of a relativistic ideal gas in cosmology - a "cosmological gas." For the moment, we will treat \(a\) as a parameter and attempt to calculate the canonical partition function for the particle terms of Eq. (1). The true partition function should be microcanonical, as the energy of the universe is fixed to be \(0\). The thermodynamics of a relativistic gas are known historically [26], with a small resurgence in interest in the cosmological context [25]. For a fully covariant treatment, see [27]. de Berredo-Peixoto et al. [25] compute results for the "reduced relativistic gas" in cosmology, assuming all particles have equal kinetic energies. We will relax this assumption and see that this simplified (non-covariant and collisionless) framework allows us to derive well-known freeze-out results and an explicit formula for the equation of state parameter \(w\) as a function of \(\beta m\), where \(m\) is the mass of the gas particles. Specializing to \(k=0\), we have the following Hamiltonian for a relativistic free gas in cosmology: \[H=\sum_{i=1}^{N}\sqrt{m_{i}^{2}+\frac{p_{r_{i}}^{2}}{a^{2}}+\frac{p_{\theta_{i}}^{2}}{a^{2}r_{i}^{2}}+\frac{p_{\varphi_{i}}^{2}}{a^{2}r_{i}^{2}\sin^{2}(\theta_{i})}}. \tag{13}\] We compute the one-particle canonical partition function after setting \(k_{B}=\hbar=1\), \[\begin{split} Z_{1}&=\int d^{3}x\,d^{3}p\,e^{-\beta\sqrt{m^{2}+\frac{|\mathbf{p}|^{2}}{a^{2}}}}\\ &=4\pi Va^{3}m^{2}\frac{K_{2}(\beta m)}{\beta},\end{split} \tag{14}\] where \(K_{2}\) is a modified Bessel function of the 2nd kind and the integral is over comoving quantities (V is the comoving volume). This has the following HR (\(\beta m\to 0\)) and NR (\(\beta m\to\infty\)) limits, computed via asymptotic expansion: \[Z_{1}^{HR}=8\pi V\frac{a^{3}}{\beta^{3}} \tag{15}\] and \[Z_{1}^{NR}=Va^{3}\left(2\pi m\frac{1}{\beta}\right)^{3/2}e^{-\beta m}, \tag{16}\] which are the correct partition functions for HR and NR gases. The latter has a factor of \(e^{-\beta m}\) due to contributions of the rest mass, which does not affect the entropy. It only affects the free and internal energies by the addition of a factor of \(m\). The equation of state of the gas (with \(Z=Z_{1}^{N}/N!\)) is \(PVa^{3}=NT\). Then, \(\overline{E}=-\partial\log Z/\partial\beta\), so \[\overline{E}=N\left(m\frac{K_{3}(\beta m)}{K_{2}(\beta m)}-\frac{1}{\beta}\right)=N\overline{E}_{1}, \tag{17}\] where \(K_{3}\) is also a modified Bessel function of the 2nd kind. 
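A quick numerical check of Eqs. (14)-(17) is straightforward with SciPy's modified Bessel functions; the parameter values below are arbitrary illustrations.

```python
import numpy as np
from scipy.special import kv   # modified Bessel function of the 2nd kind, K_nu(x)

def Z1(beta, m, a, V=1.0):
    """One-particle partition function, Eq. (14), with k = 0 and hbar = k_B = 1."""
    return 4 * np.pi * V * a**3 * m**2 * kv(2, beta * m) / beta

def mean_energy(beta, m):
    """Single-particle mean energy E_1, Eq. (17) divided by N."""
    return m * kv(3, beta * m) / kv(2, beta * m) - 1.0 / beta

print("Z1 example:", Z1(beta=1.0, m=1.0, a=1.0))

# HR limit (beta*m -> 0): E_1 -> 3/beta.  NR limit (beta*m -> infinity): E_1 -> m + 3/(2*beta).
for m in (1e-2, 1e2):
    beta = 1.0
    print(m, mean_energy(beta, m), 3 / beta, m + 1.5 / beta)
```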
We write the equation of state in the form of a perfect fluid \(P=w\rho\), where \(\rho=N\overline{E}_{1}/(a^{3}V)\), and equate \(w\rho=NT/(Va^{3})\) to solve for the temperature dependence of the equation of state parameter \(w\). This implies \(\overline{E}_{1}=T/w\). \(T(\overline{E}_{1})\) exists, but it is not possible to write explicitly. It is, however, exactly correct to write the equation of state as \(PVa^{3}=NT(\overline{E}_{1})\), or equivalently \(PVa^{3}=Nw\overline{E}_{1}\). We then have \[w=\frac{1}{\beta m\frac{K_{3}(\beta m)}{K_{2}(\beta m)}-1}, \tag{18}\] with the limits \(w\to\frac{1}{3}\) as \(\beta m\to 0\), and \(w\to 0\) as \(\beta m\to\infty\). This function nicely fits the data in FIG. 4, which was simulated for short enough times such that \(w\) is approximately constant. Once in thermal equilibrium, this gas is isotropic and homogeneous. It can be trivially shown that, for the collisionless case, the viscosity vanishes and the thermal conductivity vanishes to first order [28]. Figure 3: Representative simulated \(a(t)\) for \(k=1\), de Sitter, radiation dominated, and matter dominated universes. \(k=-1\) omitted for neatness. Figure 4: The simulated scaling exponent of \(a(t)\) vs. the initial \(\beta m\) of an \(N=250\) particle non-interacting relativistic gas in equilibrium in units of \(a(0)=c=G=1\). The simulation time for each is small such that \(w\) is approximately constant. The horizontal lines denote the asymptotic values \(1/2\) and \(2/3\), corresponding to radiation and matter domination. The gas can therefore be considered a perfect fluid, and the \(w\) in the gas equation of state corresponds to the \(w\) in the perfect fluid equation of state. The equilibrium entropy is \(S=\beta(\overline{E}-F)\), which we can compute as \(S=\log Z+\frac{N}{w}-\log N!\), which is \[\begin{split}& S=N\log\,\left(4\pi Va^{3}m^{2}\frac{K_{2}(\beta m)}{\beta}\right)\\ &+N\left(\beta m\frac{K_{3}(\beta m)}{K_{2}(\beta m)}-1\right)-\log N!.\end{split} \tag{19}\] It has been proven that a relativistic gas cannot undergo adiabatic expansion, in which the entropy is conserved [29]. This non-conservation can also be shown numerically using the relationship Eq. (25) between \(\beta\) and \(a\) derived in the following section. However, it has also been argued that this effect is small and not of critical importance in cosmology [30; 31], and thus likely not a large contributor to the story of entropy in an expanding universe. ## VII Studying non-equilibrium statistical mechanics The most critical feature in both of the arguments in the introduction is the non-equilibrium nature of thermodynamics in gravitational systems. In this section, we will show how this behavior emerges in the cosmological context via our model and how some known thermodynamic results arise from the cosmological gas. The interplay of how fast \(a\) changes, versus how fast matter interacts, is the basis for the cosmological phenomenon of _freeze-out_, which describes the process of an equilibrium system becoming pseudo-equilibrium or unable to equilibrate due to expansion. FIGS. 6, 9 and 10 depict this process. The condition defining freeze-out is \[\frac{\dot{a}}{a}\approx\frac{N}{Va^{3}}\langle\sigma v\rangle, \tag{20}\] where \(N/(Va^{3})\) is the number density, \(\sigma\) is the interaction cross-section, and \(v\) is the comoving velocity distribution. 
The classical scattering cross section is defined as \[\int\sin\theta\,d\theta\,d\phi\,\frac{b}{\sin\theta}\left|\frac{db}{d\theta}\right|, \tag{21}\] where \(b\) is the impact parameter. The term \(\dot{a}/a\) scales as \(t^{-1}\) and the right hand side decays faster than \(t^{-3/2}\). At early times, the Hubble rate is smaller than the interaction rate, and the freeze-out happens when these rates intersect. After this, the matter is increasingly unable to interact effectively due to the distance between the particles increasing and the speeds decreasing. In the following we consider _kinetic freeze-out_, in which the elastic scattering that transfers particle momenta ceases. This is in contrast to _chemical freeze-out_, which occurs at a higher temperature, when the inelastic processes that change particle species cease [32]. A cosmological gas at equilibrium has the following 1-particle momentum distribution: \[\begin{split} f\,d^{3}p&=\frac{e^{-\beta\sqrt{m^{2}+\frac{|\mathbf{p}|^{2}}{a^{2}}}}}{Z_{1}/V}d^{3}p\\ &=\frac{4\pi|\mathbf{p}|^{2}e^{-\beta\sqrt{m^{2}+\frac{|\mathbf{p}|^{2}}{a^{2}}}}}{Z_{1}/V}d|\mathbf{p}|,\end{split} \tag{22}\] and for a given \(a\), there is only 1 parameter: \(\beta\). For times when the particle scattering rate is much greater than the expansion rate, this is the momentum distribution and the equilibrium distribution. The corresponding HR and NR distributions are \[f^{NR}\,d^{3}p=\left(\frac{\beta}{a^{2}}\right)^{3/2}\frac{4\pi|\mathbf{p}|^{2}e^{-\beta\frac{|\mathbf{p}|^{2}}{2ma^{2}}}}{(2\pi m)^{3/2}}d|\mathbf{p}| \tag{23}\] and \[f^{HR}\,d^{3}p=\left(\frac{\beta}{a}\right)^{3}\frac{4\pi|\mathbf{p}|^{2}e^{-\beta\frac{|\mathbf{p}|}{a}}}{8\pi}d|\mathbf{p}|. \tag{24}\] After freeze-out, particles keep the same comoving momentum distribution, Eq. (22), if in equilibrium. The condition for maintaining the distribution is \(\frac{df}{dt}=0.\) A particle species that started in equilibrium will retain an equilibrium distribution, but will be out of equilibrium with other matter. We are now able to complete the common lore regarding temperature scaling during expansion. We see that Eqs. (23) and (24) can be written in terms of a new variable \(\beta/a^{\alpha}\), where \(\alpha\) is 1 or 2. After freeze-out, the distributions do not change, so \(\beta(t)/a(t)^{\alpha}=\beta_{*}/a_{*}^{\alpha}\), where the star denotes the values at freeze-out. This means that \(T=T_{*}a_{*}^{\alpha}/a^{\alpha}\) (here, \(T\) is really a sort of "effective" temperature: that at which the distribution matches the thermal equilibrium distribution.) In the intermediate regime, the temperature has a more complicated relationship with \(a\), via \[\frac{d}{dt}\left(\frac{\beta\,e^{-\beta\sqrt{m^{2}+\frac{1}{a^{2}}}}}{a^{3}m^{2}K_{2}(\beta m)}\right)=0. \tag{25}\] This intermediate regime is exactly where \(w\) runs. As we saw, the \(w\) in the HR limit \(\beta m\to 0\) is exactly \(1/3\) and _loses_ all of its \(\beta\) dependence, so any change of \(\beta\), like the one derived above, is ignored. The same is true for the NR limit, as \(w=0\) without any \(\beta\) dependence. The details of this running can be computed with Eq. (18) and Eq. (25). As seen in FIG. 5, \(w(\beta m)\) has fixed points at \(\beta m=0\) and \(\infty\), corresponding to \(w=\frac{1}{3}\) and \(w=0\), respectively. In between, \(\frac{dw}{d(\beta m)}\) is negative, so as \(\beta m\) increases, as it does monotonically with \(a\), \(w\) decreases to \(0\). 
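The fixed points of Eq. (18) are easy to verify numerically; a minimal check follows (using the exponentially scaled Bessel functions to avoid underflow at large \(\beta m\)).

```python
from scipy.special import kve   # exponentially scaled K_nu: kve(v, x) = kv(v, x) * exp(x)

def w_of_beta_m(bm):
    """Equation-of-state parameter w(beta*m), Eq. (18).

    The scaled Bessel ratio kve(3, x)/kve(2, x) equals K_3(x)/K_2(x) but stays finite
    even when K_nu itself underflows at large x.
    """
    ratio = kve(3, bm) / kve(2, bm)
    return 1.0 / (bm * ratio - 1.0)

# Limits: w -> 1/3 as beta*m -> 0 (radiation) and w -> 0 as beta*m -> infinity (matter).
# After kinetic freeze-out the effective temperature then scales as T ~ 1/a (HR) or 1/a^2 (NR).
for bm in (1e-3, 1e-1, 1.0, 10.0, 1e3):
    print(f"beta*m = {bm:8.3g}   w = {w_of_beta_m(bm):.4f}")
```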
The above arguments can easily be applied to a collapsing universe, in which NR matter becomes relativistic. We have come to the conclusion, then, that when a cosmology includes matter that goes from relativistic to non-relativistic, it is correct and possibly necessary to include a source for the Friedmann equations that scales with \(a\) as \((1/a^{3})\sqrt{m^{2}+\left|\mathbf{p}\right|^{2}/a^{2}}\). Following Eq. (8), we can see the full source for a given particle species with mass \(m\) would require knowing the comoving momentum distribution of those particles, as in \[\begin{split} H^{2}\sim\frac{1}{a^{3}}\int dp\,f(p,a)\sqrt{m^{2}+\frac{\left|\mathbf{p}\right|^{2}}{a^{2}}}\\ +\,\Omega_{r,0}a^{-4}+\Omega_{m,0}a^{-3}+...\end{split} \tag{26}\] where we have written the \(a\) dependence of \(f\) explicitly; here \(a\) acts as a time variable. As pure radiation and pure matter are fixed points of the flow of \(w\), \(\Omega_{r}\) and \(\Omega_{m}\) will be time independent in the usual writing of the equation, up to particle conversion effects. However, there is no account for the flow of \(w\) into the \(w=0\) regime from matter with \(1/3>w>0\). This is accounted for with the suggested term, and both the radiation and matter terms can be written as the HR and NR limits of the new term. The standard way of writing the Friedmann equations works when the sources are close to the HR and NR limits, meaning that the variation in \(w\) is small, as can be seen from FIG. 5. ### Numerical Results To study freeze-out, the Hamiltonian is modified to include an additional species of particle. The momenta are histogrammed at regular time steps and fitted with the equilibrium distribution, Eq. (22), from which \(\beta\) is extracted. The form of the potential affects the thermalization time, but qualitatively the results are independent of the interaction Hamiltonian, due to the periodicity of the simulated system. Periodicity implies that the position dependent integrals that enter the partition function will cancel explicitly with those in the equilibrium distribution, since each is finite. We consider a scenario analogous to the early universe: the interaction of two species, initially out of equilibrium, but each in self-equilibrium. FIGS. 7 and 8 show two species converge (to a temperature \(T\sim(T_{A}+T_{B})/2\)) on a much shorter timescale than that of the expansion, and FIGS. 9 and 10 show the converse scenario - that of freeze-out. The temperatures are drawn to each other, but we can see they start to diverge around the freeze-out time, \(t_{*}\approx 0.4\). After this, the temperatures scale according to the relation Eq. (25). It should be stressed that freeze-out is a _dynamical_ process, with no strict cut-off between equilibrium and non-equilibrium states. Namely, there is a smooth transition between the regimes of the left-hand and right-hand sides dominating the relation Eq. (20). The process is highly-nonlinear, and a complete computation to capture non-equilibrium effects would require the Boltzmann equation. Figure 6: Momentum histogram showing the freeze-out of a single species as it starts out of equilibrium and tries to thermalize but fails. The distribution starts as two Gaussians and attempts to reach the equilibrium distribution, which is boxed below. It can be seen in FIGS. 7 - 10 that the freeze-out and equilibrium times are not strictly well-defined, following the reasoning above. 
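As an order-of-magnitude illustration of the freeze-out condition, Eq. (20), one can simply intersect a Hubble rate with an interaction rate; the background scaling, cross-section and velocity below are toy choices on our part, not the simulation parameters used above.

```python
import numpy as np
from scipy.optimize import brentq

# Toy background and interaction model: radiation-dominated expansion a(t) = sqrt(t / t0),
# relativistic particles (v ~ c = 1), constant cross-section sigma, comoving number density n0.
t0, n0, sigma = 1.0, 50.0, 0.05

def hubble(t):
    return 0.5 / t                       # H = adot / a for a ~ t^(1/2)

def interaction_rate(t):
    a = np.sqrt(t / t0)
    return n0 * sigma / a**3             # (N / V a^3) <sigma v> with v ~ 1

# Freeze-out: the expansion rate overtakes the interaction rate.
t_star = brentq(lambda t: hubble(t) - interaction_rate(t), 1e-3, 1e6)
print(f"freeze-out at t* ~ {t_star:.2f}, a(t*) ~ {np.sqrt(t_star / t0):.2f}")
```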
## VIII Discussion The end goal of this work is a complete framework with which to compute the time-dependent entropies needed to provide numerical support for proposed solutions to the past hypothesis paradox. Due to the non-equilibrium nature of both cosmology and gravitational clustering, a suitable definition of entropy is needed. Coarse-grained entropies are a general class of such non-equilibrium entropies and a proper candidate would be useful for studying classical or quantum gravitational models like the one proposed. A coarse-graining is a partitioning of the state space into sub-regions called macrostates, such that the union of all macrostates is the entirety of the state space. The exact state of the system is called the microstate, and is generally not known. Figure 8: Time-dependent inverse temperature plot of the system in FIG. 7. Equilibrium appears to be achieved around \(t\sim 0.03\), after which fluctuations around the equilibrium occur. Figure 7: Time-dependent momentum histogram showing the equilibration of two species, which appears to occur around \(t\sim 0.045\), after which fluctuations around the equilibrium occur. The corresponding inverse temperatures can be seen in FIG. 8. The macrostates are defined according to whichever external observables are measurable. In classical mechanics, microstates are in a unique macrostate, but this is no longer true in quantum mechanics. It is critical to note that entropy can mean many things, and it generally depends on the coarse-graining of the system. For a given state of a system, the entropy will be high with respect to some coarse-grainings and low with respect to others. It is thus required to coarse-grain in physically relevant observables if one wants to obtain physically relevant results. In recent years, observational entropy has emerged as a promising framework to unify and understand various entropies in both classical and quantum contexts [33; 34; 35]. We propose that observational entropy would be helpful for the proposed calculation and, in general, helpful for understanding other aspects of gravitational entropy and entropies in gravitational systems, which is currently limited to Bekenstein-Hawking and holographic entropies. Figure 10: Time-dependent inverse temperature plot of the system in FIG. 9. Freeze-out can be seen near \(t\sim 0.40\), near when a rate of change of \(\beta\) can be seen for both species. Figure 9: Time-dependent momentum histogram showing the freeze-out of two species as they start out of equilibrium and try to thermalize but fail. Freeze-out can be seen near \(t\sim 0.55\), after which the histograms cease to change. The corresponding inverse temperatures can be seen in FIG. 10. ### Phase Space Volumes and Measure Ambiguities In some cases, observational and other coarse-grained entropies require integrating over \(a\) in order to construct the necessary volume contributions. However, in most cosmological models, such integration is known to cause regularization issues. Famous examples of these include [36; 37; 38]. It has been shown that, for a homogeneous scalar field in a FLRW universe, \(a\) explicitly decouples from the rest of the Hamiltonian, and thus is a "gauge direction" in the phase space [39]. Several solutions have been proposed, but few consider the problem fully solved. For recent literature, see [40; 41]. The integral over \(a\) can be avoided, however, by reducing the measure to a surface of constant Hubble rate, as in [41]. 
This would produce an entropy as a function of the Hubble rate, which is monotonic for an expanding universe, and thus can be rewritten in terms of the time. This approach could be useful for future computations. This is in accord with [4], in which Rovelli argues that the scale factor is "macroscopic" in the thermodynamic sense, and gives a definition for a macroscopic observable in terms of external interactions. We propose a similar understanding - as the number of particle degrees of freedom increases, the solutions of particle motion stay chaotic, while \(a(t)\) becomes _less_ so. This is a consequence of \(\dot{a}(t)\) being only dependent on \(a\) and the _thermodynamic_ variable \(\rho\) in this limit. Similarly, \(\ddot{a}(t)\) only depends on \(a\) and the _thermodynamic_ variables \(\rho\) and \(P\). ### Other applications The proposed model could be used to more accurately probe the matter-radiation transition using Eq. (26). Similarly, it is possible to compute freeze-out times much more accurately using this equation. All of the above work and arguments can be applied to contracting universes, or to other symmetry-reduced solutions of general relativity, like Bianchi models. ## IX Conclusion In this work, we have derived and shown properties of a cosmological model in the context of classical mechanics without general relativity. This model reproduces all features of cosmology relevant for the study of statistical mechanics, in addition to providing clarity to known thermodynamic results. New results on the thermodynamics of the equation of state were derived, in addition to how the equation of state evolves with time. The model has a phase space, on which the scale factor and its conjugate momentum are on equal footing with the matter, so that computations of entropy may be done. As cosmology is a non-equilibrium system, the phase space approach to computing entropies is necessary as non-equilibrium entropies are necessary. The model provides an opportunity to rigorously compare the entropy changes in cosmology associated with different gravitational effects, in order to make progress on the long-standing debate regarding what happens to entropy in the post-inflationary universe, once "gravity is taken into account." Our future work includes: * Constructing the volume form; * Constructing a coarse-graining; * Compute the observational or coarse-grained entropy as a function of time/Hubble rate; * Do the above steps with and without clustering, implemented through the Newtonian gravitational potential \(V\sim-Gm_{1}m_{2}/r\). It has been known since early days of cosmology that "Newtonian" cosmology can be surprisingly accurate and provide an alternative perspective on well-known cosmological phenomena. [42] Here we have shown that a slightly more sophisticated mechanical model can do even better, reproducing cosmological basics in a surprisingly accurate and elegant way. We hope that this can be a useful tool for shedding light on the past hypothesis paradox and, moreover, that observational entropy can be useful for understanding gravitational systems. ## X Appendix ### Model Lagrangian The proposed model (\(N=1\) for simplicity; \(N\neq 1\) follows trivially) is derived from the Einstein-Hilbert and geodesic actions as follows: \[\begin{split} S&=S_{EH}+S_{Geo}\\ &=\int d^{4}x\sqrt{-g}R-m\int d\tau\sqrt{g_{\mu\nu}\dot{x}^{\mu }\dot{x}^{\nu}},\end{split} \tag{27}\] where the dot denotes \(\frac{d}{d\tau}\). 
Choose coordinates such that \(\tau=t\) and insert the FLRW metric \(ds^{2}=\mathcal{N}^{2}dt^{2}-a^{2}(t)\Big{(}(1-kr^{2})^{-1}dr^{2}+r^{2}(d\theta^{2}+\sin^{2}\theta\;d\varphi^{2})\Big{)}\). Then, the action reduces to a one-dimensional integral over time of the form \(S=V\!\int dt\,L\), where \(V\) is the comoving volume; dropping a total time derivative, which is a boundary term on the time integration, we arrive at \[\begin{split}& L=V\frac{1}{8\pi G}\left(-\frac{3\dot{a}^{2}a}{\mathcal{N}}+3ka\mathcal{N}+\Lambda a^{3}\mathcal{N}\right)\\ &-m\sqrt{\mathcal{N}^{2}-a^{2}\left(\frac{|\dot{\boldsymbol{r}}|^{2}}{1-k|\boldsymbol{r}|^{2}}+r^{2}\dot{\theta}^{2}+r^{2}\sin^{2}\theta\dot{\phi}^{2}\right)}.\end{split} \tag{30}\] The Hamiltonian is obtained via a Legendre transformation, and in the discussion of it and its equations of motion, we have set the comoving volume \(V\) to be 1 for simplicity. ### Mechanical Equation of State To compare the thermodynamic equation of state with the mechanical equations of motion, consider the case with negligible interactions: \[\rho=\frac{1}{a^{3}}\left(\sum_{i}\sqrt{m_{i}^{2}+\frac{|\boldsymbol{p}_{i}|^{2}}{a^{2}}}\right) \tag{31}\] \[P=\frac{1}{3}\left(\frac{1}{a^{3}}\sum_{i}\frac{|\boldsymbol{p}_{i}|^{2}/a^{2}}{\sqrt{m_{i}^{2}+\frac{|\boldsymbol{p}_{i}|^{2}}{a^{2}}}}\right). \tag{32}\] In the HR limit, \(\frac{|\boldsymbol{p}|/a}{m}\gg 1\), so \[\rho\sim\sum\frac{1}{a^{3}}\frac{|\boldsymbol{p}|}{a}\sim\sum\frac{|\boldsymbol{p}|}{a^{4}} \tag{33}\] \[P\sim\sum\frac{1}{3}\left(\frac{|\boldsymbol{p}|^{2}}{a^{5}}\frac{a}{|\boldsymbol{p}|}\right)\sim\sum\frac{1}{3}\frac{|\boldsymbol{p}|}{a^{4}}. \tag{34}\] The equation of state is \(P=w\rho\), indicating correctly that \(w=1/3\). In the NR limit, \(\frac{|\boldsymbol{p}|/a}{m}\ll 1\), so to first order, \[\rho\sim\sum\frac{m}{a^{3}} \tag{35}\] and \(P\ll 1\), indicating that \(w\sim 0\). ###### Acknowledgements. We express deep thanks to David Sloan, Joshua Deutsch, Joey Schindler, and Marcell Howard for the useful discussions.
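The HR and NR limits of the mechanical equation of state above are simple to confirm numerically; a minimal check with arbitrary sampled comoving momenta (the distribution and particle number are our own choices) is the following.

```python
import numpy as np

rng = np.random.default_rng(0)

def rho_and_P(masses, p_com, a):
    """Mechanical energy density and pressure, Eqs. (31)-(32), no interactions, comoving V = 1."""
    E = np.sqrt(masses**2 + (p_com / a)**2)
    rho = np.sum(E) / a**3
    P = np.sum((p_com / a)**2 / E) / (3 * a**3)
    return rho, P

a, N = 1.0, 10_000
p = rng.rayleigh(1.0, N)                      # arbitrary comoving momentum magnitudes

for m, label in [(1e-3, "highly relativistic"), (1e3, "non-relativistic")]:
    rho, P = rho_and_P(np.full(N, m), p, a)
    print(f"{label:>20s}:  w = P / rho = {P / rho:.6f}")   # ~1/3 and ~0, respectively
```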
2309.07551
Achieving 45% efficiency of CIGS/CdS Solar Cell by adding GaAs using optimization techniques
This paper proposes an efficient three-layered p-GaAs/p-CIGS/n-CdS (PPN), a unique solar cell architecture. Copper indium gallium selenide (CIGS)-based solar cells exhibit substantial performance than the ones utilizing cadmium sulfide (CdS). On the contrary, CIGS-based devices are more efficient, considering their device performance, environmentally benign nature, and reduced cost. Therefore, our paper proposes a numerical analysis of the homojunction PPN-junction GaAs solar cell structure along with n-ZnO front contact that was simulated using the Solar Cells Capacitance Simulator (SCAPS-1D) software. Moreover, we investigated optimization techniques for evaluating the effect of the thickness and the carrier density on the performance of the PPN layer on solar cell architecture. Subsequently, the paper discusses the electronic characteristics of adding GaAs material on the top of the conventional (PN) junction, further leading to improved values of the parameters, such as the power conversion efficiency (PCE), open-circuit voltage (VOC), fill factor (FF) and short-circuit current density (JSC) of the solar cell. The most promising results of our study show that adding the GaAs layer using the optimised values of thickness as 5 ({\mu}m) and carrier density as 1*1020 (1/cm) will result in the maximum PCE, VOC, FF, and JSC of 45.7%, 1.16V, 89.52% and 43.88 (mA/m2), respectively, for the proposed solar cell architecture.
Satyam Bhatti, Habib Ullah Manzoor, Ahmed Zoha, Rami Ghannam
2023-09-14T09:27:40Z
http://arxiv.org/abs/2309.07551v1
# Achieving 45% efficiency of CIGS/CdS Solar Cell by adding GaAs using optimization techniques ###### Abstract This paper proposes an efficient three-layered p-GaAs/p-CIGS/n-CdS (PPN) structure, a unique solar cell architecture. Copper indium gallium selenide (CIGS)-based solar cells exhibit substantially better performance than the ones utilizing cadmium sulfide (CdS). Moreover, CIGS-based devices are more efficient, considering their device performance, environmentally benign nature, and reduced cost. Therefore, our paper proposes a numerical analysis of the homojunction PPN-junction GaAs solar cell structure along with an n-ZnO front contact that was simulated using the Solar Cells Capacitance Simulator (SCAPS-1D) software. Moreover, we investigated optimization techniques for evaluating the effect of the thickness and the carrier density of the PPN layers on the performance of the solar cell architecture. Subsequently, the paper discusses the electronic characteristics of adding GaAs material on the top of the conventional (PN) junction, further leading to improved values of the parameters, such as the power conversion efficiency (PCE), open-circuit voltage (VOC), fill factor (FF) and short-circuit current density (JSC) of the solar cell. The most promising results of our study show that adding the GaAs layer using the optimised values of thickness as 5 (\(\mu\)m) and carrier density as \(1\times 10^{20}\) (1/\(cm^{3}\)) will result in the maximum PCE, VOC, FF, and JSC of 45.7%, 1.16 V, 89.52% and 43.88 (\(mA/m^{2}\)), respectively, for the proposed solar cell architecture. GaAs, CIGS, CdS, ZnO, SCAPS-1D, Optimization, Simulation, Efficiency, Homojunction, Thin film solar cells, Semiconducting GaAs compounds. ## I Introduction With the rapid increase in electricity demand, solar cells are playing a vital role in producing green, reliable, and efficient energy sources to meet the United Nations' Sustainable Development Goal 7 [1]. One of the promising solar cell architectures involves the thin film CIGS (Copper Indium Gallium Selenide) and CdS (Cadmium Sulfide) solar cell, which is well known for yielding higher Power Conversion Efficiencies (PCE) and a favourable Levelized Cost of Electricity (LCOE), with minimal manufacturing costs compared to other solar cell architectures [2, 3]. This paper proposes simulation-based modelling of thin film layer CIGS/CdS solar cells followed by the optimization of the nanowire GaAs (Gallium Arsenide) layer, which was added to the top layer of the baseline multi-junction solar cell architecture. The simulations presented in this paper were performed using the Solar Cells Capacitance Simulator (SCAPS-1D), a one-dimensional solar simulation tool developed for an in-depth investigation of electronic and information systems [4, 5, 6]. Moreover, SCAPS-1D evaluates vital electrical characteristics such as the \(PCE\), Fill factor \((FF)\), Current Density curves \((J_{SC})\) and the open circuit voltage \((V_{OC})\) of the solar cells [7]. Interest in CIGS and CdS solar cells has increased rapidly in recent years owing to their attractive properties and applications [8]. For the CIGS semiconductor material, the direct bandgap lies between 1.0 and 1.7 eV, whereas that of CdS lies within the 2.2 to 2.4 eV range; the bandgap is an important parameter in determining the range of wavelengths of light that can be absorbed by the solar cell and helps in predicting the PCE of converting the sunlight into electricity [9]. 
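As a quick illustration of how these bandgap ranges translate into absorption-edge wavelengths (\(\lambda \approx hc/E_{g}\)), the snippet below simply restates the ranges quoted above; the conversion constant is standard.

```python
def cutoff_wavelength_nm(eg_ev):
    """Longest wavelength a material with bandgap eg_ev (eV) can absorb, lambda = hc / E_g."""
    return 1239.84 / eg_ev   # hc ~ 1239.84 eV*nm

for name, (eg_min, eg_max) in {"CIGS": (1.0, 1.7), "CdS": (2.2, 2.4)}.items():
    print(f"{name}: absorption edge between ~{cutoff_wavelength_nm(eg_max):.0f} nm "
          f"and ~{cutoff_wavelength_nm(eg_min):.0f} nm")
```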
These bandgap-related factors make the CIGS and CdS semiconductor materials highly favourable for photovoltaic energy conversion applications [10, 11]. Furthermore, CIGS and CdS materials contribute towards high efficiency, low manufacturing cost, thin-film fabrication, environmental friendliness, a varied spectral response, high radiation tolerance and a high absorption coefficient [12]. Therefore, research in these materials has led to advanced development and fabrication of semiconductor devices, optoelectronics, and nanotechnology, from photovoltaics to biosensors, and they are widely used as catalysts in chemical reactions involving hydrogen production from water [13]. On the contrary, one limiting factor in developing more efficient solar cells using these semiconductor materials lies in their inability to absorb the full solar spectrum [14]. Following this, CdS has also led to concerns about toxicity and stability because of the presence of cadmium in the solar cells [15]. However, mechanical stacking of multiple semiconductor junctions can help to mitigate these concerns whilst enhancing the overall solar cell efficiency [16]. In our paper, we run a number of simulations involving the traditional single-junction CIGS/CdS solar cell layers and then add the GaAs layer to study the properties of the multijunction solar cell. From the literature, several attempts to optimize the overall efficiency of the solar cell have been conducted by [17, 18, 19]. The main idea behind this analysis is the improvement of the device efficiency using materials cheaper than conventional CIGS [20]. A new 5 \(\mu\)m p-GaAs layer has been added for that purpose. Various thicknesses of the CIGS absorber layer ranging from 0.5 to 5 \(\mu\)m have been used. Accordingly, the findings of our study show that an increase in the absorber layer thickness improves the performance and overall power conversion efficiency of the new CIGS solar cells. Initially, the study incorporated a window layer of ZnO, a buffer layer (CdS), an absorber layer (CIGS) and a GaAs layer with thickness values of 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5 and 5 \(\mu\)m. Also, the optimisation of the GaAs material is performed with the help of the heatmap confusion matrix in order to determine the most optimized thickness and carrier density of the different layers of the solar cell. In addition, a comparison of results including the performance and efficiency of the PN-junction (optimized CIGS and CdS) solar cell was carried out. The most promising results of the paper revealed that a thin top layer of p-GaAs on the conventional solar cell (p-CIGS and n-CdS), with an optimized layer thickness and high carrier density, had a considerable influence on the performance of the solar cell architecture. Adding a p-GaAs layer as thin as 5 \(\mu\)m on top of the PN-junction solar cell substantially improved the conversion efficiency of the solar cell from 20.07% (unoptimized PN) to 45.47% (optimized PPN). The results showed that the new ultra-thin CIGS solar cell structure has performance parameters that are comparable to those of the conventional ones at reduced cost. Therefore, our paper proposes a simple three-layered p-GaAs/p-CIGS/n-CdS (PPN) solar cell with a practical thickness and a high conversion efficiency. The PPN solar cell comprises a high p-GaAs composition on top of p-CIGS and n-CdS multi-junction solar cells.
In addition, incorporating a thin p-GaAs layer creates a graded energy bandgap, and the solar PN junction can only absorb photons having an energy equal to or greater than the energy bandgap. Herein, it is worth mentioning that a graded energy bandgap provides a wider effective absorption window, allowing more photons to be absorbed. Moreover, adding an extra thin layer increased the number of holes in the solar cells, ultimately increasing power conversion efficiency. The behaviour and performance of the solar cells were evaluated by performing a comprehensive set of simulations under different configurations (i.e., thickness and carrier density of the layers) using the SCAPS-1D software. ### _Organisation of the Paper_ Our paper is divided into 8 Sections. Section 2 of the paper discusses the significance, need and limitations of the CIGS/CdS multijunction solar cell architecture. Section 3 then describes the process of setting up our simulation environment in SCAPS-1D. Section 4 includes the efficiency optimization of the CIGS/CdS solar cell with the help of thickness and carrier density optimization using the heatmap confusion matrix. Section 5 presents the results of the electrical parameters after adding a GaAs layer on top of the CIGS/CdS solar cell. Next, Section 6 discusses a critical comparison of IV, PV, and QE characteristics and the high-temperature effect. Lastly, Section 7 discusses the obtained results in detail, and concluding remarks are presented in Section 8. ### _Contributions to the Research Paper_ The key contributions of the research article are presented as follows: * Discussed the significance, need and limitations along with a thorough literature review of the CIGS and CdS multijunction solar cell. * Implemented simulations on the SCAPS-1D tool for designing the most optimized, high-efficiency and robust solar cell architecture. * Performed efficiency optimization for the CIGS/CdS multi-junction solar cell architecture by adjusting the thickness and carrier density using the heatmap confusion matrix. * Whilst performing simulations, critical optimization techniques are applied to the CIGS/CdS solar cell as a first step and then to CIGS/CdS/GaAs as a second step with the help of the heatmap confusion matrix, with values ranging from 0.5 \(\mu\)m to 5 \(\mu\)m for the thickness and from \(1\times 10^{10}\) to \(1\times 10^{20}\) (1/\(cm^{3}\)) for the carrier density, respectively. * In the second step, introduced a novel multi-junction solar cell architecture by adding a p-GaAs layer on top of the pre-existing CIGS/CdS multi-junction solar cell and carried out the thickness and carrier density optimization to achieve the highest PCE value. * Further, investigated the electronic and electrical characteristics such as PCE, fill factor, current density and open circuit voltage of the p-GaAs/p-CIGS/n-CdS multi-junction solar cell architecture. * Lastly, a thorough comparative analysis is presented, showing the IV, PV and QE characteristics graphs for Fig. 1: Figure represents the baseline structure of the proposed multi-junction solar cell architecture, with three simple layers, p-CIGS/n-CdS/n-ZnO, where the ZnO layer acts as a window/buffer layer. The solar light in the structure is incident from the right contact (front) in the simulation tool SCAPS-1D. The baseline values for the given solar cell architecture are set as mentioned in the simulation setup, whilst the thickness and current density (doping) values for the baseline are set as 0.5 \(\mu\)m and \(1\times 10^{10}\) (1/\(cm^{3}\)), respectively.
the most optimized solar cell architecture with their respective optimized thickness and current density values. ## II CIGS and CdS Multijunction Solar Cell Initially, a multijunction solar cell architecture using the CIGS and CdS semiconductor materials is designed. CIGS and CdS materials are widely used for manufacturing purposes, mainly because of key parameters such as low production cost and high solar energy yield. The literature suggests that these materials have achieved higher PCE values than their counterpart silicon solar cells installed on household rooftops. Moreover, CIGS solar cells use a thin layer of CIGS as an absorbing layer and have high absorption coefficient values, which eventually results in high-efficiency solar cells. The flexible nature of the CIGS semiconductor materials broadens their application across research and industry, for example in photovoltaics, biosensors, portable electronic devices, off-grid power systems, utility-scale solar power plants, and in powering remote sensing and communication devices [21]. Subsequently, the semiconductor CdS is used as the window layer, also known as the buffer layer, in the solar cells. The window layer is a thin, transparent layer that allows the sunlight to pass through to the absorber layer and also restricts the unwanted recombination of the holes and electrons generated by the absorbed photons in the p-CIGS layer. Moreover, the window layer is typically made of a transparent conducting oxide (TCO), such as indium tin oxide (ITO), fluorine-doped tin oxide (FTO), or zinc oxide (ZnO). These materials have high transparency to visible light and low electrical resistance, which allows them to efficiently collect and transport the electrons generated by the absorbed photons. Furthermore, the choice of window layer material depends on several factors, including the specific absorber material used in the solar cell, the desired efficiency, and the cost of the materials. Different TCOs have different properties that can affect the performance of the solar cell, such as their work function, surface roughness, and doping level. Therefore, the window layer plays a critical role in the performance of a solar cell by controlling the flow of charge carriers and minimizing energy losses due to recombination, while also allowing sunlight to pass through to the absorber layer. Accordingly, in the p-CIGS/n-CdS multijunction solar cell, CdS is often used as the window layer in the respective solar cell architecture. Additionally, the transparent window layer sits on top of the absorbing layer and allows light to pass through to reach the CIGS layer. CdS has a high optical transmission, which allows it to efficiently transmit light to the CIGS layer, while also helping to protect the CIGS layer from degradation due to exposure to air and moisture. However, it's important to note that cadmium is a toxic substance, raising concerns about CdS solar cells' environmental impact. Subsequently, efforts are being made to develop alternative materials for window layers, such as zinc oxide (ZnO), a non-toxic alternative to CdS. ### _CIGS/CdS: Significance and Need_ Previous research studies indicate that the combination of CIGS and CdS in multijunction solar cells provides promising electrical and electronic characteristics even in harsh environments, along with achieving considerably higher efficiencies.
CIGS and CdS solar cell layers are used because they offer numerous advantages over materials widely used in the solar industry. CIGS not only has high efficiency in converting sunlight into electricity but also has the capacity to generate more power per unit of surface area in comparison with other materials. Accordingly, this makes CIGS and CdS materials ideal for applications in solar cells where either the space is limited or reduced size is targeted, for example, rooftops or portable devices. In addition, CIGS materials are known to be flexible, which allows them to be used in various applications, including curved or irregular surfaces. Moreover, this property also makes the design of solar cells suitable for building-integrated photovoltaics (BIPV), where they are integrated into the design of buildings and other complex structures. Lastly, CdS materials proved to be an excellent window or buffer layer material due to their high optical transmission, which allows them to efficiently transmit light to the CIGS layer while also providing protection from environmental factors that can degrade the solar cell [22]. Overall, the combination of CIGS and CdS offers a high-efficiency, flexible, and durable solution for solar cells that can be used in a wide range of applications. ### _CIGS/CdS: Limitations_ While CIGS and CdS solar cells offer many advantages, there are also some challenges associated with their use. One challenge is the cost of producing CIGS solar cells. The materials used to make CIGS solar cells are relatively expensive, making the manufacturing process more costly than other types of solar cells. This has limited the widespread adoption of CIGS solar cells, especially in utility-scale applications. Another challenge is the potential environmental impact of using CdS as a window layer material. Cadmium is a toxic substance, and concerns have been raised about the possibility of environmental contamination from CdS solar cells during their production, use, and disposal. Efforts are being made to develop non-toxic alternatives to CdS for use in window layers, such as zinc oxide (ZnO), but these materials are not yet widely used in commercial applications. Finally, CIGS solar cells are susceptible to degradation over time due to exposure to moisture and air. This can reduce the efficiency of the solar cell and limit its lifespan. However, research is ongoing to develop new encapsulation methods and materials that can protect the CIGS layer from degradation and extend the lifespan of CIGS solar cells. Overall, while there are some challenges associated with using CIGS and CdS solar cells, ongoing research and development efforts are aimed at addressing these challenges and improving the efficiency and durability of these solar cell technologies [23]. The overall efficiency of CIGS and CdS solar cells has improved significantly over the past few decades, making them competitive with other types of solar cells in terms of efficiency. The current record for CIGS solar cell efficiency is around 23.35%, which is close to the efficiency of traditional silicon-based solar cells. The high efficiency of CIGS solar cells is due to their ability to absorb a broad range of wavelengths of light, including the blue and green parts of the spectrum, which traditional silicon solar cells cannot absorb efficiently. The CdS window layer in CIGS solar cells also plays a critical role in their overall efficiency.
Fig. 2: Figure represents the optimization technique used to evaluate the most efficient value of the thickness for the p-CIGS and n-CdS semiconductor materials using the heatmap confusion matrix. In addition, (a) Outputs the Efficiency, \(\eta\) (%), (b) Outputs the Fill Factor, FF (%), (c) Current Density (\(mA/cm^{2}\)), (d) Open Circuit Voltage (\(V_{OC}\)), (e) Comparison of the IV and PV characteristics after the thickness optimization, (f) Quantum Efficiency (%) Curve. CdS has high optical transmission, which allows it to efficiently transmit light to the CIGS layer, resulting in higher conversion efficiencies. However, as mentioned earlier, the use of CdS raises concerns about its potential environmental impact. Overall, the efficiency of CIGS and CdS solar cells is highly competitive with other types of solar cells, and ongoing research and development efforts are focused on improving their efficiency and reducing their cost. The efficiency of CIGS and CdS solar cells can be evaluated in both simulation-based environments and real-world environments. In simulation-based environments, the efficiency of solar cells can be modeled using computer simulations that take into account various factors, such as material properties, cell design, and environmental conditions. These simulations can provide insights into the theoretical efficiency of solar cells and can help guide the design and optimization of solar cell materials and structures. In real-world environments, the efficiency of solar cells can be evaluated by measuring their performance under actual operating conditions. Factors such as temperature, humidity, and shading can affect the performance of solar cells in real-world environments, leading to variations in efficiency compared to simulation-based results. Generally, the efficiency of CIGS and CdS solar cells in real-world environments is lower than in simulation-based environments due to various factors such as partial shading, thermal losses, and other environmental factors. However, ongoing research and development efforts are focused on improving the real-world performance of CIGS and CdS solar cells, including developing new encapsulation techniques, optimizing the cell design, and developing new materials for use in the window layer. Overall, the efficiency of CIGS and CdS solar cells is evaluated in both simulation-based and real-world environments, and ongoing research is focused on improving their efficiency and performance in both environments. ## III Simulation Setup For the analysis of the different solar cell architectures, a simulation environment was developed in the SCAPS-1D tool for examining the electronic and electrical parameters along with measuring the PCE, FF, VOC and JSC values. Initially, a solar cell was designed using three semiconductor layers in a P-N-N configuration: with light incident from the right contact (front), a layer of n-ZnO, followed by n-CdS and p-CIGS towards the left contact (back), formed the multi-junction solar cell architecture. The incident light plays a pivotal role in determining the efficiency of the solar cell and thus, in our simulation setup, we introduced the incident light from the right contact (front) throughout the simulation settings. Furthermore, the initial working point of the simulations was set as follows: Temperature - 300 K, Voltage - 0 V, Frequency - \(10^{6}\) Hz and the number of points as 5.
In addition to this, for plotting the electrical characteristics, the sweep settings at each step were applied as V1 (0 V to 0.8 V), V2 (-0.8 V to 0.8 V), frequency (f1: \(10^{2}\) Hz to f2: \(10^{6}\) Hz) and the wavelength (WL1: 300 nm to WL2: 900 nm). Moreover, at each step, the number of points was set as 41, 81, 21, and 61 with an increment of 0.02 V, 0.02 V, 5 points per decade, and 10 nm, respectively. It is worth mentioning that the authors later in the study also introduced a layer of GaAs with the aim of achieving the highest efficiency and, accordingly, all the settings of the working point remain the same for the different multi-junction solar cell architectures. Subsequently, figure 1 represents the direction of the incident light along with Fig. 3: Figure represents the optimization technique used to evaluate the most efficient value of the thickness for the p-CIGS and n-CdS semiconductor materials using the heatmap confusion matrix. In addition, (a) Outputs the Efficiency, \(\eta\) (%), (b) Outputs the Fill Factor, FF (%), (c) Current Density (\(mA/cm^{2}\)), (d) Open Circuit Voltage (\(V_{OC}\)), (e) Comparison of the IV and PV characteristics after the thickness optimization, (f) Quantum Efficiency (%) Curve. the p-CIGS/n-CdS/n-ZnO multi-junction solar cell architecture, and Table 1 depicts the input electrical parameters such as the bandgap, electron affinity (eV), dielectric permittivity (relative), conduction band effective density of states, valence band effective density of states, electron thermal velocity, hole thermal velocity, and electron and hole mobility of the respective 4-layered solar cell architecture. ## IV Efficiency Optimization After incorporating the necessary input electrical values and setting up the simulation environment, in the next step, the authors conducted the efficiency optimization consisting of three important steps of the study. Accordingly, to meet the objective of this paper, a critical optimization of different solar cell architectures is performed. The first step of optimization involves varying the thickness of the CIGS and CdS layers in the multi-junction solar cell of p-CIGS/n-CdS/n-ZnO. The second step included the same solar cell architecture; however, the efficiency optimization is evaluated with the help of changing the doping values. Lastly, the third step introduces a GaAs layer to the existing solar cell architecture and, accordingly, both the thickness and doping optimization are performed simultaneously. ### _STEP 1: Optimization of Thickness for CIGS and CdS layer_ Initially, a baseline solar cell architecture is proposed consisting of p-CIGS/n-CdS/n-ZnO and, as a first step, the thickness optimization is performed for estimating the overall efficiency of the solar cell at different values of thickness that range from 0.5 to 5.0 \(\mu\)m, at an increment of 0.5 \(\mu\)m. Accordingly, the thickness is incremented in steps of 0.5 \(\mu\)m for the CIGS and CdS semiconductor materials of the solar cell independently. Therefore, to present Fig. 4: Figure represents an optimization of the carrier charge density, also known as doping concentration (1/\(cm^{3}\)), for the p-CIGS/n-CdS/n-ZnO multi-junction solar cell architecture incorporating the Heatmap confusion matrix.
Likewise to the thickness optimization, the doping optimisation outputs are as follows: (a) Outputs the Efficiency, \(\eta\) (%), (b) Outputs the Fill Factor, FF (%), (c) Current Density (\(mA/cm^{2}\)), (d) Open Circuit Voltage (\(V_{OC}\)), (e) Comparison of the IV and PV characteristics after the optimization, (f) Quantum Efficiency (%) Curve. a relation between the increments of the two thickness parameters, the authors used the heatmap correlation matrix to provide a clear indication of the overall performance of the solar cell at varied values of thickness of both materials. The estimation of thickness is essential for designing the most optimized solar cell, since it saves material cost and, subsequently, the solar cell manufacturers can use the materials optimally. The results from the heatmap confusion matrix showcase four parameters: efficiency, fill factor, current density and open circuit voltage. For the thickness optimization of the materials p-CIGS/n-CdS, the heatmap confusion matrix indicates a thickness value of 0.5 \(\mu\)m for CdS and 5.0 \(\mu\)m for CIGS, giving a maximum efficiency of the solar cell design of 20.07%. A thickness of 1 \(\mu\)m for CdS and 5.0 \(\mu\)m for CIGS yields a maximum fill factor of 80.88%, whereas the maximum values of the current density and open circuit voltage are 43.59 \(mA/cm^{2}\) and 0.57 V, respectively, for thickness values of 4.5 \(\mu\)m for CdS and 5.0 \(\mu\)m for CIGS. In addition, for the electrical performance of the proposed solar cell, PV, IV and QE characteristics are plotted to measure the maximum power point tracking of the solar cell. Accordingly, the maximum value as observed from the IV curve is 45 \(mA/cm^{2}\) and the maximum power density as analysed from the PV curve is 32 \(mW/cm^{2}\). Subsequently, the maximum quantum efficiency of the baseline proposed solar cell architecture is 95%. ### _STEP 2: Optimization of Carrier Density for CIGS and CdS layer_ After optimizing the thickness values of the p-CIGS/n-CdS/n-ZnO multi-junction solar cell architecture, the authors conducted the optimization of the carrier density parameter of the solar cell using the same input values as set in the working simulation environment. However, the authors used the most optimized values of the thickness of the solar cell as calculated in the above subsection, i.e., 0.5 \(\mu\)m for the CdS semiconductor material and 5.0 \(\mu\)m for the CIGS material. It is worth mentioning that the authors used these values of thickness for the remaining simulations of the study. Furthermore, the optimization of the acceptor density of the CIGS, \(1.0\times 10^{n}\) (1/\(cm^{3}\)), and the donor density of the CdS, \(1.0\times 10^{n}\) (1/\(cm^{3}\)), is evaluated using the heatmap confusion matrix, which gives the values of the doping concentration for both the CdS and CIGS. Likewise to the previous subsection, herein also the authors measured the electrical characteristics such as Efficiency (%), Fill Factor (%), Current Density (\(mA/cm^{2}\)) and the open circuit voltage (V) for the critical optimization of the doping concentration. Accordingly, figure 4 (a), (b) and (d) indicate that the optimization of the doping concentration at \(1\times 10^{20}\) (1/\(cm^{3}\)) for both CIGS and CdS materials yields maximum values of efficiency, fill factor and open circuit voltage of 32.12%, 86.93% and 0.87 V, respectively.
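Both optimization steps described above amount to a two-parameter grid scan whose figure of merit (efficiency, fill factor, current density or open-circuit voltage) is then rendered as a heatmap. A minimal sketch of how such a scan could be assembled outside SCAPS-1D is given below; `run_scaps_simulation` is a hypothetical placeholder (SCAPS-1D is driven through its own batch interface, and in practice each grid point would be read back from its exported results file), and the toy expression inside it only mimics the trend reported in the text so that the plotting code runs end to end.

```python
import numpy as np
import matplotlib.pyplot as plt

def run_scaps_simulation(cigs_thickness_um, cds_thickness_um):
    """Hypothetical stand-in for a SCAPS-1D batch run.

    The toy expression below only reproduces the qualitative trend reported in
    the text (efficiency growing with absorber thickness, decreasing with a
    thicker buffer) so that the heatmap example is runnable end to end.
    """
    return 10.0 + 2.0 * np.log1p(cigs_thickness_um) - 0.5 * cds_thickness_um

thicknesses_um = np.arange(0.5, 5.01, 0.5)  # 0.5 ... 5.0 um in 0.5 um steps
efficiency = np.array([[run_scaps_simulation(t_cigs, t_cds)
                        for t_cigs in thicknesses_um]
                       for t_cds in thicknesses_um])

best = np.unravel_index(efficiency.argmax(), efficiency.shape)
print(f"best point: CdS = {thicknesses_um[best[0]]:.1f} um, "
      f"CIGS = {thicknesses_um[best[1]]:.1f} um, eta = {efficiency[best]:.2f} %")

plt.imshow(efficiency, origin="lower", cmap="viridis",
           extent=[thicknesses_um[0], thicknesses_um[-1],
                   thicknesses_um[0], thicknesses_um[-1]])
plt.xlabel("CIGS thickness (um)")
plt.ylabel("CdS thickness (um)")
plt.colorbar(label="Efficiency (%)")
plt.title("Thickness scan rendered as a heatmap")
plt.show()
```

The same loop structure applies to the doping scan, with the thicknesses fixed at their optimized values and the two axes replaced by the acceptor and donor densities.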
On the contrary, when the optimized value of the doping concentration for the donor density of CdS lies in the range of \(1\times 10^{10}\) (1/\(cm^{3}\)) to \(1\times 10^{20}\) (1/\(cm^{3}\)), at an acceptor density of CIGS of \(1\times 10^{10}\) (1/\(cm^{3}\)), it resulted in a maximum value of current density equivalent to 43.89 \(mA/cm^{2}\). In addition to this, the electrical characteristics are calculated from the SCAPS-1D simulation tool. Figure 5 showcases the IV, PV and quantum efficiency for the most optimized values of the doping concentration at \(1\times 10^{10}\) (1/\(cm^{3}\)) Fig. 5: Figure represents an optimization of the carrier charge density, also known as doping concentration (1/\(cm^{3}\)), for the p-CIGS/n-CdS/n-ZnO multi-junction solar cell architecture incorporating the Heatmap confusion matrix. Likewise to the thickness optimization, the doping optimisation outputs are as follows: (a) Outputs the Efficiency, \(\eta\) (%), (b) Outputs the Fill Factor, FF (%), (c) Current Density (\(mA/cm^{2}\)), (d) Open Circuit Voltage (\(V_{OC}\)), (e) Comparison of the IV and PV characteristics after the optimization, (f) Quantum Efficiency (%) Curve. for the donor density of CdS and \(1\times 10^{20}\) (1/\(cm^{3}\)) for the CIGS semiconductor materials. Accordingly, the curves indicate that the maximum value of the current density is 43.2 \(mA/cm^{2}\) and the maximum power density is 32.91 \(mW/cm^{2}\). Subsequently, the maximum value of the quantum efficiency at the optimized inputs of doping concentration is 96%. ## V Introducing GaAs Layer One of the most common semiconductor materials used in multi-junction solar cells is Gallium Arsenide (GaAs), which is primarily stacked either on top of each layer or on top of all layers proposed in a solar cell architecture. Each layer is designed to absorb a specific portion of the solar spectrum, allowing the cell to capture a broader range of sunlight and convert it into electricity more efficiently. GaAs is a semiconductor material with unique properties that make it well-suited for solar cell applications. Here is why GaAs is used as a semiconductor layer in multi-junction solar cells: * **High energy conversion efficiency:** GaAs has a relatively high energy conversion efficiency compared to other semiconductor materials used in solar cells. It has a direct bandgap, which means it can efficiently convert sunlight into electricity without losing much energy as heat. * **Wide bandgap:** GaAs has a wide bandgap, which allows it to absorb higher-energy photons from the solar spectrum. By incorporating GaAs in the solar cell stack, it can absorb photons from the blue and green regions of the spectrum, which are not efficiently absorbed by other materials such as silicon (commonly used in single-junction solar cells). * **Tandem cell configuration:** In a multi-junction solar cell, the semiconductor layers are arranged in a tandem configuration, with each layer tuned to absorb a specific part of the solar spectrum. GaAs is often used as the top layer in the stack because it has a higher bandgap than other materials, making it suitable for capturing the higher-energy photons. The layers beneath the GaAs layer can be designed to absorb lower-energy photons, ensuring efficient use of the entire solar spectrum. * **Temperature stability:** GaAs has excellent temperature stability, allowing it to maintain its high performance even at elevated temperatures.
This characteristic is crucial for solar cells, as they can heat up under intense sunlight. * **Mature technology:** GaAs has been extensively researched and developed for various applications, including solar cells. It benefits from a well-established manufacturing process and has a proven track record in high-performance photovoltaic devices [24]. * **High electron mobility:** GaAs has a higher electron mobility compared to other common semiconductor materials like silicon. This property makes GaAs suitable for high-speed electronic devices, such as field-effect transistors (FETs) and integrated circuits, where fast switching and high-frequency operation are required. * **Low noise characteristics:** GaAs exhibits low noise characteristics, making it ideal for applications in low-noise amplifiers and microwave devices. This property is particularly advantageous in high-frequency communication systems and radar technology. * **Wide frequency range:** GaAs exhibits excellent performance across a wide frequency range, including microwave and millimetre-wave frequencies. It enables the development of devices and circuits for wireless communications, satellite communications, radar systems, and high-frequency electronics. * **High power handling capability:** GaAs materials can handle high power levels without significant degradation in performance. This property makes GaAs suitable for power amplifiers and other high-power electronic devices, including those used in telecommunications and defence applications. * **Optoelectronic applications:** GaAs is widely used in optoelectronic devices such as light-emitting diodes (LEDs), laser diodes, and photodetectors. GaAs-based LEDs and laser diodes have superior performance in terms of efficiency, brightness, and wavelength range, making them valuable for applications in lighting, optical communications, and optical sensing. * **Compatibility with complementary metal-oxide-semiconductor (CMOS) technology:** GaAs can be integrated with CMOS technology, allowing for the development of hybrid circuits and systems that leverage the advantages of both GaAs and CMOS. This integration enables the fabrication of high-performance, mixed-signal devices and integrated circuits with diverse functionality. Fig. 6: Figure represents the introduction of the p-GaAs layer to the proposed baseline multi-junction solar cell architecture. The GaAs layer is added on top of the p-CIGS layer next to the left contact (back) of the proposed solar cell architecture for analysing the electrical and electronic performance in the real-world environment with the help of the SCAPS-1D simulation tool. Moreover, the light in this scenario is incident from the right contact (front) to achieve the maximum possible efficiency of the proposed solar cell architecture. * **Radiation hardness:** GaAs exhibits inherent radiation hardness, meaning it can withstand the effects of ionizing radiation without significant degradation in performance. This characteristic makes GaAs suitable for applications in space technology, nuclear power plants, and high-energy physics experiments. By incorporating a GaAs semiconductor layer in a multi-junction solar cell, the overall efficiency of the cell can be significantly increased. GaAs helps capture a broader range of sunlight, including higher-energy photons, and convert it into electricity more effectively, leading to improved solar cell performance.
While GaAs materials offer numerous advantages, there are also some limitations associated with their use. One limitation is the higher cost of GaAs compared to other semiconductor materials, primarily due to the complex manufacturing processes involved. This cost factor restricts the widespread adoption of GaAs in certain applications where cost-effectiveness is a critical consideration. Additionally, GaAs is a brittle material, making it more prone to cracking and breakage during handling and fabrication. Moreover, GaAs-based devices may face challenges in scaling down to smaller dimensions due to material properties and technological constraints. Despite these limitations, ongoing research and advancements aim to address these issues and further enhance the capabilities and cost-effectiveness of GaAs materials for broader application domains. Fig. 7: Figure represents the output results for the simulations performed by introducing the GaAs layer to the proposed baseline solar cell architecture. Herein, the optimization for the thickness and the carrier density (doping concentration) is evaluated simultaneously using the same heatmap confusion matrix. Accordingly, the electrical characteristics are presented as: (a) Outputs the Efficiency, \(\eta\) (%), (b) Outputs the Fill Factor, FF (%), (c) Current Density (\(mA/cm^{2}\)), (d) Open Circuit Voltage (\(V_{OC}\)), (e) Comparison of the IV and PV characteristics after the optimization, (f) Quantum Efficiency (%) Curve, for the architecture that includes the GaAs layer in the proposed multi-junction solar cell. ### _STEP 3: Optimization of Thickness and Carrier Density for GaAs layer_ Furthermore, figure 6 represents the proposed solar cell architecture, which adds another p-GaAs layer to the baseline multi-junction solar cell. Subsequently, the GaAs layer is added on top of the p-CIGS layer next to the left contact (back) of the proposed solar cell architecture for analysing the electrical and electronic performance in the real-world environment with the help of the SCAPS-1D simulation tool. Moreover, the light in this scenario is incident from the right contact (front) to achieve the maximum possible efficiency of the proposed solar cell architecture. In addition to this, all working environments of the simulation setup were kept the same so as to avoid any discrepancy in the analysis of the output results and thus the overall performance of the multi-junction solar cell. Consecutively, the optimization of the GaAs semiconductor material was performed in terms of the thickness and doping concentration, with values ranging from 0.5 \(\mu\)m to 5.0 \(\mu\)m and from \(1\times 10^{11}\) (1/\(cm^{3}\)) to \(1\times 10^{20}\) (1/\(cm^{3}\)), respectively. Accordingly, figure 7 showcases that the maximum efficiency and open circuit voltage of the proposed solar cell architecture are achieved at 45.47% and 1.16 V, respectively, at a thickness of 5.0 \(\mu\)m and an acceptor density of the GaAs material of \(1\times 10^{20}\) (1/\(cm^{3}\)). One of the most promising results in the heatmap confusion matrix indicates that the current density of the solar cell remains unaffected, i.e., 43.88 \(mA/cm^{2}\), even when changing the thickness and doping concentration values of the solar cell. On the contrary, the maximum fill factor is 89.52% at thickness and doping concentration values of 5.0 \(\mu\)m and \(1\times 10^{16}\) (1/\(cm^{3}\)), respectively.
Additionally, the current density (blue curve) as obtained from the IV curve outputs a peak value of 44.8 \(mA/cm^{2}\), as shown in Figure 8, and thus the maximum power density observed is equivalent to 46 \(mW/cm^{2}\) (the orange curve) at thickness and doping concentration values of 5.0 \(\mu\)m and \(1\times 10^{20}\) (1/\(cm^{3}\)), respectively. Moreover, the maximum quantum efficiency achieved for the proposed solar cell with GaAs, with the most optimized values of thickness and doping concentration, is equal to 99.2%, as shown in Figure 8. It is worth mentioning that the quantum efficiency curve shows the quantum efficiency (%) versus the wavelength (nm) over a range from 300 nm to 1130 nm. ## VI Discussions In this section of the manuscript, we discuss the results of the comparison of the various IV, PV and QE characteristics of the proposed baseline solar cell, the thickness optimization curve and the doping concentration curve, and a thorough comparison is made of the electrical properties of the solar cell after adding the p-GaAs layer on top of the proposed baseline solar cell architecture. The comparison gives a clear indication of the most optimized thickness and doping concentration values that need to be set whilst manufacturing the different solar cell architectures at a mass level. ### _Comparison of IV Characteristics_ Figure 9 represents the IV characteristics in terms of the current density, JSC (\(mA/cm^{2}\)), of the baseline after thickness optimization (blue), the optimized curve for doping concentration (orange) and the optimization results after introducing the GaAs layer (green) on the proposed solar cell, with varied values of the open circuit voltage, i.e., 0.5 V, 0.83 V and 1.17 V, respectively. Fig. 8: Figure represents the output results for the simulations performed by introducing the GaAs layer to the proposed baseline solar cell architecture. Herein, the optimization for the thickness and the carrier density (doping concentration) is evaluated simultaneously using the same heatmap confusion matrix. Accordingly, the electrical characteristics are presented as: (a) Outputs the Efficiency, \(\eta\) (%), (b) Outputs the Fill Factor, FF (%), (c) Current Density (\(mA/cm^{2}\)), (d) Open Circuit Voltage (\(V_{OC}\)), (e) Comparison of the IV and PV characteristics after the optimization, (f) Quantum Efficiency (%) Curve, for the architecture that includes the GaAs layer in the proposed multi-junction solar cell. The curve indicates that the optimization of the solar cell architecture along with the thickness and current density led to an increase in the open circuit voltage of the solar cell architecture. In addition, an enhancement in a solar cell's performance or a change in its operating circumstances is often indicated by an increase in the VOC of the solar cell on the IV curve. This improvement of VOC is attributable to circumstances like better material quality, less charge carrier recombination, lower series resistance, or a different operating environment, such as temperature. However, to appropriately evaluate the effectiveness and overall performance of the solar cell, it is crucial to take into account the entire IV curve, including elements like the short circuit current (Isc), fill factor (FF), and power output, which is discussed in the subsequent subsection of the manuscript.
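Before turning to the PV comparison, it is worth noting that the quantities discussed in these subsections (JSC, VOC, FF and the maximum power point) can all be extracted from a single simulated J-V curve. The sketch below, assuming the curve is available as two arrays (voltage in V, current density in mA/cm² in the photovoltaic sign convention), shows one straightforward way to do so; it is illustrative only and not the exact post-processing used by the authors, and the toy diode-like curve at the end stands in for a SCAPS-1D export.

```python
import numpy as np

def extract_iv_parameters(voltage_v, current_ma_cm2, p_in_mw_cm2=100.0):
    """Extract Jsc, Voc, the maximum power point, fill factor and efficiency
    from a J-V curve (photovoltaic convention: J > 0 under illumination,
    J assumed to decrease monotonically with V).
    p_in_mw_cm2 is the incident power density (100 mW/cm^2 for AM1.5G)."""
    v = np.asarray(voltage_v, dtype=float)
    j = np.asarray(current_ma_cm2, dtype=float)

    jsc = np.interp(0.0, v, j)        # current density at V = 0
    voc = np.interp(0.0, -j, v)       # voltage where J crosses zero (-j is increasing)
    p = v * j                         # power density in mW/cm^2
    i_mp = np.argmax(p)
    p_max, v_mp, j_mp = p[i_mp], v[i_mp], j[i_mp]

    ff = p_max / (voc * jsc)          # fill factor
    pce = p_max / p_in_mw_cm2         # power conversion efficiency
    return {"Jsc_mA_cm2": jsc, "Voc_V": voc, "Pmax_mW_cm2": p_max,
            "Vmp_V": v_mp, "Jmp_mA_cm2": j_mp, "FF": ff, "PCE": pce}

# Toy example with an idealised diode-like curve (arbitrary shape parameters,
# not SCAPS-1D output):
v = np.linspace(0.0, 1.2, 200)
j = 43.9 * (1.0 - (np.exp(v / 0.12) - 1.0) / (np.exp(1.16 / 0.12) - 1.0))
print(extract_iv_parameters(v, j))
```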
### _Comparison of PV Characteristics_ Another comparison of the power output is presented in Figure 10, showing the power density (\(mW/cm^{2}\)) vs voltage (V) characteristics, where, like the previous step, the thickness optimization (blue), doping concentration (orange) and the p-GaAs layer optimization (green) are evaluated. The maximum of the power curve, represented as \(P_{MP}\), is the point at which the solar cell should be operated to produce the most electricity. It occurs at a voltage of \(V_{MP}\) and a current of \(I_{MP}\) and is also referred to as \(P_{MAX}\) or the maximum power point (MPP). Figure 10 showcases that the power output for the thickness optimization of the proposed baseline solar cell architecture has a peak of 25 \(mW/cm^{2}\), whereas that of the doping concentration optimization is measured as 32 \(mW/cm^{2}\). On the contrary, introducing the p-GaAs layer on top of the proposed solar cell yields a peak power value of 45.47 \(mW/cm^{2}\). Therefore, adding a p-GaAs layer strongly affects the output power of the solar cell and, accordingly, it also increases the highest efficiency of the solar cell to 45.47%, which is thus so far the maximum efficiency recorded for a multi-junction solar cell consisting of p-GaAs/p-CIGS/n-CdS semiconductor materials. Moreover, the solar cell shows an overall improvement in functionality and effectiveness. The increased power output is due to the improved material quality of the solar cell, allowing for better light absorption and electrical conversion with the optimized thickness and doping concentration values obtained with the SCAPS-1D simulation tool. Additionally, improved operating conditions, such as temperature and illumination levels, decreased recombination losses and lower series resistance are the other miscellaneous reasons that contribute to an increase in the overall power output. ### _Comparison of QE Characteristics_ Subsequently, an improvement in the proposed solar cell's overall capacity to convert photons of various wavelengths into electrical current is evaluated using the quantum efficiency vs. lambda (wavelength) curve. The solar cell grows more effective at absorbing a larger variety of photons across the electromagnetic spectrum as the QE rises. Improved light-trapping techniques, improved material characteristics, decreased recombination losses, and improved device topologies are among the major contributing factors that lead to a marked rise in the overall quantum efficiency. Fig. 10: Figure represents the comparison of PV Characteristics of the proposed baseline multi-junction solar cell (p-CIGS/n-CdS/n-ZnO) architecture after the thickness optimization (blue). The orange curve shows the results after the optimization of the doping concentration of the proposed solar cell and the grey curve shows the results of the optimization of both thickness and doping concentration after introducing the GaAs layer to the solar cell. Fig. 9: Figure represents a critical comparison of IV Characteristics of the proposed baseline multi-junction solar cell (p-CIGS/n-CdS/n-ZnO) architecture after the thickness optimization (blue). The orange curve shows the results after the optimization of the doping concentration of the proposed solar cell. Lastly, the grey curve shows the results of the optimization of both thickness and doping concentration after introducing the GaAs layer to the proposed solar cell architecture.
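The qualitative link between the QE(\(\lambda\)) curve and the short-circuit current (and hence the PCE) can be made quantitative: \(J_{SC}=q\int QE(\lambda)\,\Phi_{ph}(\lambda)\,d\lambda\), where \(\Phi_{ph}\) is the incident photon flux per unit wavelength. The sketch below illustrates this folding with a toy flat photon flux chosen to be of the same order as the AM1.5G spectrum; a real estimate would use the tabulated AM1.5G data and the QE curve exported from SCAPS-1D, so the numbers here are only illustrative.

```python
import numpy as np

Q_E = 1.602176634e-19      # elementary charge in C
wavelength_nm = np.linspace(300.0, 1130.0, 300)   # range used for the QE scans

# Toy inputs: a QE curve flat at 95 % inside the absorption window, and a flat
# photon flux standing in for the tabulated AM1.5G spectrum (order of magnitude only).
qe = np.where(wavelength_nm < 1100.0, 0.95, 0.0)
photon_flux = np.full_like(wavelength_nm, 3.5e14)  # photons / (cm^2 s nm), toy value

# Jsc = q * integral of QE(lambda) * photon_flux(lambda) d(lambda), here as a simple
# Riemann sum on the uniform wavelength grid.
d_lambda = wavelength_nm[1] - wavelength_nm[0]
jsc_a_cm2 = Q_E * np.sum(qe * photon_flux) * d_lambda
print(f"toy Jsc ~ {1e3 * jsc_a_cm2:.1f} mA/cm^2")
```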
An increase in the quantum efficiency (%) in the quantum efficiency vs Lambda (wavelength in nm) curve therefore indicates that the solar cell captures a wide range of the wavelength spectrum efficiently, which further results in a marked rise in the total power conversion efficiency of the proposed solar cell. Accordingly, the authors presented a comparative analysis of the quantum efficiency (%) for the baseline thickness optimization (blue), the doping concentration optimization (orange) and the optimization after introducing the GaAs layer. As analysed from Figure 11, the quantum efficiency for the baseline thickness, doping concentration and the GaAs layer is 82%, 94% and 99.48% at a wavelength of 1020 nm. ## VII Conclusions This paper proposes an efficient three-layered p-GaAs/p-CIGS/n-CdS (PPN) solar cell, a unique solar cell architecture. Copper indium gallium selenide (CIGS)-based solar cells exhibit substantially better performance than the ones utilizing cadmium sulfide (CdS) alone. Moreover, CIGS-based devices are more efficient, considering their device performance, environmentally benign nature, and reduced cost. Therefore, our paper presents a numerical analysis of the homojunction PPN-junction GaAs solar cell structure along with an n-ZnO front contact, simulated using the Solar Cells Capacitance Simulator (SCAPS-1D) software. Moreover, we investigated optimization techniques for evaluating the effect of the thickness and the carrier density of the PPN layers on the performance of the solar cell architecture. Subsequently, the paper discusses the electronic characteristics of adding GaAs material on top of the conventional (PN) junction, which leads to improved values of the parameters, such as the power conversion efficiency (PCE), open-circuit voltage (VOC), fill factor (FF) and short-circuit current density (JSC) of the solar cell. The most promising results of our study show that adding the GaAs layer with the optimised values of thickness of 5 \(\mu\)m and carrier density of \(1\times 10^{20}\) (1/\(cm^{3}\)) results in a maximum PCE, VOC, FF, and JSC of 45.7%, 1.16 V, 89.52% and 43.88 \(mA/cm^{2}\), respectively, for the proposed solar cell structure.
2308.00117
Probing non-perturbative QED and new physics with a LUXE-type experiment at the ILC
The proposed LUXE experiment (LASER Und XFEL Experiment) at DESY, Hamburg, using the 16.5 GeV electron beam from the European XFEL, aims to probe QED in the non-perturbative regime created in collisions between high-intensity laser pulses and high-energy electron or photon beams. In this strong-field regime, where the electromagnetic field of the laser is above the Schwinger limit, physical electron-positron pairs will be created from the QED vacuum, similar to Hawking radiation from black holes. LUXE intends to measure the positron production rate in an unprecedented intensity regime, in and beyond the regime expected in the beam-beam interaction of future electron-positron colliders. This setup also provides a unique opportunity to probe physics beyond the standard model by leveraging the large photon flux generated at LUXE, probing axion-like particles (ALPs) at a reach comparable to FASER2 and NA62. In this contribution, we will give an overview of the LUXE experimental setup and its challenges and explore the sensitivity of a LUXE-type experiment using the ILC's or another future Higgs factory's electron beam instead of the EU.XFEL one.
A. Irles
2023-07-31T19:36:25Z
http://arxiv.org/abs/2308.00117v1
# Probing non-perturbative QED and new physics with a LUXE-type experiment at the ILC. ###### Abstract The proposed LUXE experiment (LASER Und XFEL Experiment) at DESY, Hamburg, using the 16.5 GeV electron beam from the European XFEL, aims to probe QED in the non-perturbative regime created in collisions between high-intensity laser pulses and high-energy electron or photon beams. In this strong-field regime, where the electromagnetic field of the laser is above the Schwinger limit, physical electron-positron pairs will be created from the QED vacuum, similar to Hawking radiation from black holes. LUXE intends to measure the positron production rate in an unprecedented intensity regime, in and beyond the regime expected in the beam-beam interaction of future electron-positron colliders. This setup also provides a unique opportunity to probe physics beyond the standard model by leveraging the large photon flux generated at LUXE, probing axion-like particles (ALPs) at a reach comparable to FASER2 and NA62. In this contribution, we will give an overview of the LUXE experimental setup and its challenges and explore the sensitivity of a LUXE-type experiment using the ILC's or another future Higgs factory's electron beam instead of the EU.XFEL one. 2 August 2023 LUXE-PROC-2023-003 _Talk presented at the International Workshop on Future Linear Colliders (LCWS 2023), 15-19 May 2023. C23-05-15.3._ ## 1 LUXE and the study of strong-field QED The LUXE (Laser Und XFEL Experiment) aims to study Quantum Electrodynamics, QED, in uncharted regimes with very strong fields above the critical QED field strength, also known as the _Schwinger limit_ (in the case of an electric field, \(E_{\rm cr}=m_{e}^{2}c^{3}/(e\hbar)\approx 1.32\times 10^{18}\,{\rm V/m}\))\({}^{1}\) [1]. For a recent review of strong-field QED, SFQED, processes and effects, see [2]. Two key parameters for LUXE and the study of SFQED are the classical non-linearity parameter or laser intensity parameter, \(\xi\), and the quantum non-linearity parameter, \(\chi\). The former measures the work done by the EM field over an electron Compton wavelength (\(\bar{\lambda}=\hbar/(m_{e}c)\)) in units of the laser photon energy \(\hbar\omega\). Whenever it is larger than unity, calculating processes at any given order in the QED coupling, \(\alpha\), requires a resummation at all orders in \(\xi\). The quantum non-linearity parameter characterises the field strength experienced by an electron in its rest frame and the recoil experienced by the electron emitting a photon. Both parameters\({}^{2}\) are dependent on the beam and laser parameters. Footnote 1: Here, \(m_{e}\) denotes the electron mass, \(c\) the speed of light in vacuum, \(e\) the electron charge and \(\hbar\) the reduced Planck constant. Footnote 2: The field intensity parameter is defined as \(\xi=\frac{m_{e}\mathcal{E}_{L}}{\omega_{L}\mathcal{E}_{crit}}\), where \(\omega_{L}\) is the laser frequency and \(\mathcal{E}_{L}\) is the laser electromagnetic field strength. The quantum non-linearity parameter is defined as \(\chi=\frac{e\mathcal{E}_{L}\bar{\lambda}}{m_{e}c^{2}}\). LUXE's main goal is the study of SFQED, particularly the study of non-linear Compton scattering, non-linear Breit-Wheeler and non-linear trident pair production (diagrams shown in Fig. 1). Non-linear Compton scattering refers to the absorption of multiple laser photons by an electron, which results in the emission of a single energetic photon. This process is examined by measuring the displacement of the Compton edge as the laser intensity parameter changes. Figure 1: LUXE main Strong Field QED candidate processes.
On the other hand, the Breit-Wheeler process involves a high-energy photon absorbing multiple laser photons and producing an electron-positron pair. The scaling of this process with laser intensity is direct evidence of the transition from perturbative to non-perturbative QED and has no classical equivalent. ### Experimental setup SFQED fields will be reached at LUXE by creating electron-laser and photon-laser interactions with the 16.5 GeV electron beam of the European XFEL and a laser beam with a power of up to 350 TW. A staged approach is planned, using an upgradable laser system, which will deliver a power of 40 TW in phase-0 (\(\xi_{\rm max}=7.9\), \(\chi_{\rm max}=1.5\)) and subsequently will be upgraded to 350 TW for phase-1 (\(\xi_{\rm max}=23.6\), \(\chi_{\rm max}=4.45\)). During the data-taking in LUXE, \(\xi\) and \(\chi\) are varied by de-focusing and re-focusing the laser pulse at the interaction point. The reach of LUXE at different stages of its operation is compared with other present, past or future experiments in Fig. 2. LUXE will run in two modes of operation: the \(e\)-laser mode, in which the intense laser collides directly with the electron beam, and the \(\gamma\)-laser mode, in which the laser interacts with secondary photons generated by the electron beam in a high-Z target upstream of the interaction point. These two operation modes are shown in Fig. 3. In these interactions, a broad range of fluxes of electrons, positrons and photons will be produced: the expected ranges are \(10^{-3}\) to \(10^{9}\) per 1 Hz bunch crossing, depending on the laser power and focus. In addition, low-energy, high-radiation backgrounds will be present at LUXE. To overcome such challenges and study SFQED in uncharted regimes with precision, LUXE foresees the use of dedicated physics-driven detectors at specific locations downstream of the interaction point. Providing a detailed description of these systems is out of the scope of this contribution. For a more comprehensive picture, we refer the reader to the Conceptual Design report of LUXE [3] and the Technical Design report of LUXE, which will soon appear. Figure 2: Quantum parameter \(\chi\) as a function of the intensity parameter \(\xi\) for LUXE and a selection of experiments and facilities. Figure extracted from the LUXE Conceptual Design Report [3]. ## 2 LUXE-NPOD: new physics searches with an optical dump at LUXE The LUXE experiment will provide an intense secondary beam of hard photons through the interaction between the high-energy electron beam and the laser pulses acting as an optical dump. This dump behaves as a thick target for the incoming electrons through non-linear Compton scattering, but with negligible interaction with the photon beam. Fig. 4 shows a schematic illustration of the optical dump. Therefore, the intense and collimated hard photon beam flux can be efficiently used for specific new physics searches beyond the Standard Model, BSM. LUXE plans to use this beam to search for weakly interacting new particles that couple to photons. In particular, LUXE will provide access to direct searches of new spin-0 (scalar or pseudo-scalar) particles with coupling to photons, such as axion-like particles (ALPs). This proposal is denoted as LUXE-NPOD: New Physics at Optical Dump. Figure 4: Schematic illustration of the optical dump. Figure 3: Sketch of the LUXE experimental setups.
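The \(\xi_{\rm max}\) and \(\chi_{\rm max}\) values quoted above for the two laser phases, and the gain expected from a higher-energy electron beam (discussed in Section 3), can be cross-checked with the approximate head-on relation \(\chi\approx 2\gamma\xi\,\hbar\omega_{L}/(m_{e}c^{2})\), which follows from the definitions in footnote 2 but neglects the finite crossing angle of the real setup, so it slightly overestimates the quoted numbers. The short sketch below assumes a Ti:Sapphire-like laser photon energy of 1.55 eV; both the formula's head-on approximation and that photon energy are assumptions made here for illustration.

```python
# Rough estimate of the quantum non-linearity parameter chi for an electron
# colliding (approximately head-on) with an intense laser pulse:
#   chi ~ 2 * gamma * xi * (hbar * omega_L) / (m_e c^2)
# The finite crossing angle of the real setup reduces chi slightly, so this
# gives an upper estimate rather than the exact LUXE values.
M_E_C2_EV = 0.511e6          # electron rest energy in eV
PHOTON_ENERGY_EV = 1.55      # ~800 nm laser photon energy (assumed here)

def chi_head_on(beam_energy_gev: float, xi: float) -> float:
    gamma = beam_energy_gev * 1e9 / M_E_C2_EV
    return 2.0 * gamma * xi * PHOTON_ENERGY_EV / M_E_C2_EV

for label, e_beam_gev, xi in [("LUXE phase-0", 16.5, 7.9),
                              ("LUXE phase-1", 16.5, 23.6),
                              ("ILC-type beam, phase-0 laser", 125.0, 7.9)]:
    print(f"{label}: xi = {xi}, chi ~ {chi_head_on(e_beam_gev, xi):.2f}")
```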
The most relevant production mechanism for these new physics particles, NP, is the so-called _secondary_ NP production. In this case, the photons produced in the electron-beam and laser-pulse collisions freely propagate through the beam pipe until they reach a sizeable thick dump made of heavy nuclei. These NP are produced via the Primakoff production mechanism during the photon-nuclei interaction. The other production mode is the _primary_ NP production, in which the NP is produced directly in the electron-laser interaction region via electron coupling. This latter mode of production is considerably suppressed compared with the _secondary_ NP production, and it allows the inspection of only low-mass NP, \(\mathcal{O}(keV)\), instead of \(\mathcal{O}(GeV)\). These production mechanisms and their kinematics are schematically represented in Fig. 5. The number of NP produced and decayed in front of the calorimeter, \(N_{X}\), depends directly on the effective luminosity (\(\mathcal{L}_{eff}\)), the Primakoff production cross section (\(\sigma_{X}\)), the energy spectrum of the photon beam \(\frac{dN_{\gamma}}{dE_{\gamma}}\) and the acceptance \(\mathcal{A}\) and geometrical characteristics of the setup and detector: \[N_{X}\simeq\mathcal{L}_{eff}\int dE_{\gamma}\frac{dN_{\gamma}}{dE_{\gamma}}\,\sigma_{X}(E_{\gamma})\left(e^{-\frac{L_{D}}{L_{X}}}-e^{-\frac{L_{V}+L_{D}}{L_{X}}}\right)\mathcal{A} \tag{1}\] The expected \(\frac{dN_{\gamma}}{dE_{\gamma}}\) distribution for LUXE operating in different modes is shown in Fig. 6. The physical dump (or photon dump) at the end of the beamline is made of a block of tungsten with a length of \(L_{D}=1\) m in the baseline design, and it is positioned 13 m downstream of the interaction point. Figure 5: An illustration of the LUXE-NPOD concept and the different search modes. Shown are schematics of the _secondary_ (top) and _primary_ (middle) production mechanisms realisation in the experimental setup. The relevant background topologies are also shown (bottom). The charged particles are deflected by a magnet placed right after the interaction chamber.
This figure shows that already in the phase-0 LUXE will probe a never reached parameter space in the mass range of 50 MeV \(\lesssim m_{X}\lesssim 250\) MeV and \(1/\Lambda_{X}>4\times 10^{-6}\) GeV\({}^{-1}\). For the phase-1, the parameter space is increased up to 40 MeV \(\lesssim m_{X}\lesssim 350\) MeV and \(1/\Lambda_{X}>2\times 10^{-6}\) reaching the naturalness limit for the scalar model. Figure 6: The emitted photon spectrum for phase-0 (1) in blue (black) compared to the perturbative Bremsstrahlung spectrum with \(E_{e^{-}}=16.5\) GeV and target length of 0.01 \(X_{0}\) in red. ## 3 LUXE-type experiment at Higgs Factories In this contribution, we focus on the prospects of having a LUXE-like experiment at the International Linear Collider (ILC), as studied in [5]. The ILC and Eu.XFEL beams feature similar characteristics, being both based on 1.3 GHz superconducting radio-frequency cavities producing pulsed electron and positron beams. However, the ILC and all other Higgs Factories proposals will operate electron and positron beams of the order of 120 GeV in their baseline operation modes. This is one order of magnitude larger than what is foreseen at LUXE with the EU.XFEL electron beam. With this energy, the Lorentz boost of the photon in the reference system of the accelerated electron will be more than ten times larger than at the EU.XFEL, resulting in a reach for \(\chi\) of about 40 times more than LUXE. The tentative timeline of the ILC expects collisions in the mid-2030s. It is realistic to assume that at that time, 100 PW lasers at wavelengths of 1\(\mathrm{\SIUnitSymbolMicro}\)will be on reach. Assuming a pulses size of 1\(\mathrm{\SIUnitSymbolMicro}\)diameter, and a pulses length of 120\(\mathrm{\SIUnitSymbolMicro}\), we would expect values of the SFQED quantum non-linearity parameter of \(\chi\sim 250\), which is 40 times larger than at LUXE in EU.XFEL. Moreover, the combined usage of such powerful laser Figure 7: The projected reach of LUXE-NPOD phase-0 (1) in a solid blue (black) compared to the currently existing bounds (gray regions) and projections (dotted) on ALPs-couplings from other experiments. Details on this plot and references are to be found in [4]. and energetic electron beams will allow us to study SFQED phenomena unreachable until now as can be the creation and dynamics of coherent and incoherent \(e^{+}e^{-}\) plasma. The direct searches of new scalar or pseudo-scalar particles will also benefit from the higher beam energy and higher luminosity foreseen at Higgs Factories. The most critical beam parameters for a LUXE experiment at Higgs Factories are summarised in Table 13, including prospects for linear and circular Higgs factories proposals. The table assumes the baseline designs of four different accelerator scenarios and \(10^{7}\) seconds of data taking per year. Of course, for ILC or FCC-ee, the beams can only be used for a LUXE-type experiment once they have been used for their main purpose (high energy \(e^{+}e^{-}\) collisions). For the ILC, this study assumes the usage of all spent beams, which have a larger energy spectrum than the primary ones. For FCC-ee, two cases have been considered: the usage of the dump beams (3 times per day) or the usage of dedicated FCC-ee booster cycles for a beam dump every 10 seconds. In all cases, the same laser as in phase-0 for LUXE is assumed. The last row shows the enhancement of the signal yield for ALPs production. Footnote 3: Compiled by F. Meloni, J. List and F. 
Zimmerman for the \(1^{st}\)_ECFA Workshop on \(e^{+}e^{-}\) Higgs/EW/Top Factories_ A very important aspect, granted by the higher beam energy, is the harder photon beam spectrum obtained in the laser-beam interaction, with an average \(E_{\gamma}\) of 40 GeV, in contrast with the expected at LUXE. This is shown in Fig. 8 and is to be compared with Fig. 6. This alone, without an upgrade of the laser setup, will allow access to the production of larger masses for the ALPs. The study presented in Section 2 has been extended to the ILC case. Similarly to the LUXE case, we assumed a background-free scenario but doubled the depth of the physical dump. The same laser setup as in LUXE phase-0 is assumed, and only secondary production by the Primakoff process. This leads to an enhancement of the reach in direct searches of NP, up to masses of 0.5 GeV. The result of this study is summarised in Fig. 9 Another exciting possibility is to use the photon beam generated by the ILC positron source to produce and detect the non-standard scalar or pseudo-scalar particles through the concept of "light-shining-through-the-wall". This idea and that described above are under investigation by the LUXE and ILC collaborations [5]. \begin{table} \begin{tabular}{c|c c c c} \hline & \multicolumn{5}{c}{**Accelerators**} \\ \hline **Beam parameters** & **Eu.XFEL** & **ILC250** & **FCC-ee** & **FCC-ee (booster)** \\ \hline Electron beam energy [GeV] & 16.5 & 125 & 120 & 120 \\ Number of electrons per bunch & \(1.5\times 10^{9}\) & \(2\times 10^{10}\) & \(1.8\times 10^{11}\) & \(0.5\times 10^{10}\) \\ Number of bunches in one year & \(10^{7}\) & \(6.6\times 10^{10}\) & \(1.1\times 10^{5}\) & \(3.3\times 10^{8}\) \\ Signal Yield (w.r.t. Eu.XFEL) & 1 & \(8.8\times 10^{4}\) & 1.3 & \(3.3\times 10^{8}\) \\ \hline \end{tabular} \end{table} Table 1: Beam parameters for four accelerator scenarios. ## 4 Conclusion The main objective of the LUXE experiment is to explore the realm of Strong Field Quantum Electrodynamics. This will be achieved by analyzing the collisions between a high-energy photon beam or electron beam and a high-intensity optical laser. The experiment will be conducted in a continuous data-taking mode, enabling the measurement of strong-field QED processes such as non-linear Compton scattering and Breit-Wheeler pair creation with high precision. The laser system and particle detectors in LUXE are explicitly designed to cater to the physics requirements of the experiment. LUXE is poised to become the first experiment to venture into the uncharted territory of QED under conditions free from external interference, making it a significant milestone for the scientific community. Additionally, with the \(\gamma\)-laser setup LUXE will be the first experiment to investigate collisions between high-intensity laser and real high-energy gamma photons. The LUXE-NPOD extension is a novel approach to detecting feebly interacting spin-0 scalar or pseudoscalar particles. This proposal has the potential to explore a challenging region of parameter space. It will use an intense GeV photon beam generated through the interactions between high-energy electrons, and a highly intense laser pulse, which can be directed towards a target dump to create new BSM states through the Primakoff process. The produced non-standard particles would have a lifetime long enough to traverse the target dump; they can decay into photon pairs that a calorimeter system can detect in an experiment that is effectively free of background noise. 
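Referring back to Table 1, a quick cross-check of the signal-yield row is the naive assumption that the ALP yield scales with the number of electrons delivered on target per year, i.e. with (electrons per bunch) × (bunches per year) normalised to the Eu.XFEL case. This reproduces the ILC250 and FCC-ee entries; the booster figure quoted in the table does not follow from this product alone, so the sketch below should be read only as rough bookkeeping.

```python
# Electrons-on-target bookkeeping for the accelerator scenarios of Table 1.
beams = {
    "Eu.XFEL":          (1.5e9,  1.0e7),    # (electrons per bunch, bunches per year)
    "ILC250":           (2.0e10, 6.6e10),
    "FCC-ee":           (1.8e11, 1.1e5),
    "FCC-ee (booster)": (0.5e10, 3.3e8),
}

ref = beams["Eu.XFEL"][0] * beams["Eu.XFEL"][1]
for name, (n_e, n_bunch) in beams.items():
    print(f"{name:18s} relative electrons on target: {n_e * n_bunch / ref:10.3g}")
# ILC250 comes out at ~8.8e4 and FCC-ee at ~1.3, matching the signal-yield row.
```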
This experiment could help exclude new scalar or pseudoscalar states with masses ranging between 40 MeV and 350 MeV for an effective coupling of \(1/\Lambda_{X}>2\times 10^{-6}\) GeV\({}^{-1}\). Finally, we presented first studies of the promising prospects of LUXE and LUXE-NPOD types of experiments using \(\mathcal{O}(100~\text{GeV})\) electron beams from a future Higgs Factory such as the International Linear Collider.

Figure 8: The emitted photon spectrum for a LUXE-like experiment at the ILC. For comparison, in blue, we show the LUXE phase-0 spectrum. In red is the perturbative Bremsstrahlung spectrum at the ILC. In orange, the expected spectrum with \(E_{e^{-}}=125\) GeV.

## Acknowledgements

We thank the DESY technical staff for continuous assistance and the DESY directorate for their strong support and the hospitality they extend to the non-DESY members of the collaboration. This work has benefited from computing services provided by the German National Analysis Facility (NAF) and the Swedish National Infrastructure for Computing (SNIC). AI is funded by the Generalitat Valenciana (Spain) under the grant number CIDEGENT/2020/21. AI also acknowledges the financial support from the MCIN with funding from the European Union NextGenerationEU and Generalitat Valenciana in the call Programa de Planes Complementarios de I+D+i (PRTR 2022), reference ASFAE/2022/015.

Figure 9: The projected reach of a LUXE-NPOD proposal operating with ILC beams. This plot is an update of Fig. 7 with the addition of the study performed in [5].
2309.17024
HoloAssist: an Egocentric Human Interaction Dataset for Interactive AI Assistants in the Real World
Building an interactive AI assistant that can perceive, reason, and collaborate with humans in the real world has been a long-standing pursuit in the AI community. This work is part of a broader research effort to develop intelligent agents that can interactively guide humans through performing tasks in the physical world. As a first step in this direction, we introduce HoloAssist, a large-scale egocentric human interaction dataset, where two people collaboratively complete physical manipulation tasks. The task performer executes the task while wearing a mixed-reality headset that captures seven synchronized data streams. The task instructor watches the performer's egocentric video in real time and guides them verbally. By augmenting the data with action and conversational annotations and observing the rich behaviors of various participants, we present key insights into how human assistants correct mistakes, intervene in the task completion procedure, and ground their instructions to the environment. HoloAssist spans 166 hours of data captured by 350 unique instructor-performer pairs. Furthermore, we construct and present benchmarks on mistake detection, intervention type prediction, and hand forecasting, along with detailed analysis. We expect HoloAssist will provide an important resource for building AI assistants that can fluidly collaborate with humans in the real world. Data can be downloaded at https://holoassist.github.io/.
Xin Wang, Taein Kwon, Mahdi Rad, Bowen Pan, Ishani Chakraborty, Sean Andrist, Dan Bohus, Ashley Feniello, Bugra Tekin, Felipe Vieira Frujeri, Neel Joshi, Marc Pollefeys
2023-09-29T07:17:43Z
http://arxiv.org/abs/2309.17024v1
# HoloAssist: an Egocentric Human Interaction Dataset ###### Abstract Building an interactive AI assistant that can perceive, reason, and collaborate with humans in the real world has been a long-standing pursuit in the AI community. This work is part of a broader research effort to develop intelligent agents that can interactively guide humans through performing tasks in the physical world. As a first step in this direction, we introduce HoloAssist, a large-scale egocentric human interaction dataset, where two people collaboratively complete physical manipulation tasks. The task performer executes the task while wearing a mixed-reality headset that captures seven synchronized data streams. The task instructor watches the performer's egocentric video in real time and guides them verbally. By augmenting the data with action and conversational annotations and observing the rich behaviors of various participants, we present key insights into how human assistants correct mistakes, intervene in the task completion procedure, and ground their instructions to the environment. HoloAssist spans 166 hours of data captured by 350 unique instructor-performer pairs. Furthermore, we construct and present benchmarks on mistake detection, intervention type prediction, and hand forecasting, along with detailed analysis. We expect HoloAssist will provide an important resource for building AI assistants that can fluidly collaborate with humans in the real world. Data can be downloaded at [https://holoassist.github.io/](https://holoassist.github.io/). ## 1 Introduction Recent years have witnessed incredible progress in general-purpose AI agents that assist humans with various open-world tasks, especially in the digital world. AI systems powered by large language models (LLMs) like Chat-GPT [29] can answer users' questions and assist them with various text-based tasks. However, these AI assistants do not have sufficient first-hand experience in the physical world and thus cannot perceive world states and actively intervene in the task completion procedure. Building an AI assistant that can perceive, reason and interact in the physical world has attracted attention from researchers across different fields in computer vision [7, 25, 39, 40], human-computer interaction [6, 8, 16, 31], robotics [5, 34], and industrial practitioners. For example, AR Guides [1], which aims to guide users to complete complex tasks, has become popular with the development of augmented reality (AR) devices. However, existing systems often rely on pre-defined instructions or formulate the virtual assistant as a question answering [39, 40] or video under Figure 1: HoloAssist features a two-person interactive assistive task completion setting. The task performer wears an AR device and completes the tasks while the captured data is streamed over the network to a remote instructor watching it on the laptop. The instructor provides verbal guidance to the student. HoloAssist includes seven modalities captured live and human annotated text descriptions as the 8th modality. standing problem [7, 25] without real-world interaction. In another line of work, researchers have developed simulation environments like Habitat [23, 38], VirtualHome [28], and AI2-Thor [19] to build AI agents that can interact with the physical world and collaboratively achieve new tasks [44]. Still, a large gap remains in transferring these agents to the real world, and the interaction between agents is largely simplified compared to real-world human interaction. 
In this work, we focus on the challenges of developing intelligent agents that share perspectives with humans and interactively guide human users through performing tasks in the physical world. As a first step, we introduce HoloAssist, a large-scale egocentric human interaction dataset to explore and identify the open problems in this direction. As shown in Figure 1, the task _performer_ wears an AR headset+ to capture data while completing the tasks. An _instructor_ watches the real-time egocentric video feed remotely and verbally guides the performer. We have developed and open-sourced a data capture tool [3] using a distributed server-client setup to enable data streaming and multimodal data capture. Footnote †: We use HoloLens 2 [2] for data capture in this work. HoloAssist contains 166 hours of data captured by 222 diverse participants forming 350 unique instructor-performer pairs and carrying out 20 object-centric manipulation tasks. The objects range from common electronic devices to rare objects in factories and specialized labs. The tasks are generally challenging for first-time participants, requiring instructor assistance for successful completion. Seven raw sensor modalities are captured, including RGB, depth, head pose, 3D hand pose, eye gaze, audio, and IMU, to aid in the understanding of human intentions, estimating world states, predicting future actions, and so on. Finally, the dataset is augmented with third-person manual annotations consisting of a text summary, intervention types, mistake annotation, and action segments of the videos as illustrated in Figure 2. We have observed several characteristics demonstrated by human instructors from HoloAssist. First, instructors are often proactive with precisely timed interventions. Instead of waiting until mistakes happen, instructors provide follow-up instructions when the task performer appears confused. Second, the verbal guidance from the instructors tends to be concise and grounded in the task performer's environment. The instructions are often framed as spatial deictics to aid the task performer in spatial directions and distances in the 3D world. Moreover, instructors often have a good world model estimation and can detect whether mistakes disrupt task completion and then adjust the guidance. We take a step further and introduce new tasks and benchmarks on mistake detection, intervention type prediction, and 3D hand pose forecasting, which we conjecture are essential modules for an intelligent assistant. Additionally, we benchmark the dataset on action classification and anticipation tasks and provide empirical results to understand the role of different modalities in various tasks. We hope our dataset, findings, and tooling can inspire and provide rich resources for future work on designing interactive AI assistants and situated AI assistance applications in the real world. ## 2 Related Work Our work closely connects with several lines of work in computer vision, especially egocentric vision, embodied AI, and human-computer interaction. **Interactive AI assistants.** Building interactive agents that can assist humans to carry out tasks in the world--real or virtual--has been a long-standing problem in different areas of AI and HCI [6, 11, 16, 24, 27, 31, 32]. As far back as 1997, Johnson and Rickel introduced "Steve" [16], an early pedagogical agent that aims to help students learn procedural tasks in VR. 
Recent efforts have focused on new modeling approaches and data collection techniques for training conversational task guidance assistants, such as model-in-the-loop wizard-of-oz [24] and human-human interaction to mimic robot actions in simulated environments [27]. In this work, we revisit this problem and provide a systematic study of real-world human interaction, and we also provide rich sensor information to push the frontiers of the research. **Egocentric video datasets.** Egocentric perspectives often convey rich information about the users' intentions. A shared perspective between the users and the human or AI assistants is useful for the assistants to provide more timely and grounded guidance. In computer vision, several egocentric video datasets [12, 14, 20, 22, 30, 33, 41] have emerged in Figure 2: HoloAssist includes action and conversational annotations, in addition to text summaries of the videos, to indicate the mistakes and interventions in task completion. _mistake_ or _correct_ attributes are associated with each fine-grained action. A purpose label is associated with every utterance to indicate the type of verbal intervention. the community. EPIC-KICHENS [12] is a widely adopted egocentric video dataset capturing kitchen activities. The recent Ego4D dataset [14] is the largest egocentric video dataset in the wild that provides a comprehensive database for egocentric perception in the 3D world. In contrast to earlier egocentric video datasets, HoloAssist features a multi-person interactive task completion setting, where human interaction during the procedure provides a rich source for designing AI assistants to be more proactive and grounded in the environment. Yet our work can benefit from the rich knowledge and representation learned from existing datasets like Ego4D and is complementary in nature. **Mistake detection.** One of the key observations in human interaction is that human assistants tend to correct mistakes and proactively intervene in the task completion procedure. While there has been a large body of work for video-based anomaly detection [26, 42, 45, 46], mistake detection in procedural settings has been under-explored. The Assembly101 dataset [33] proposes a mistake detection task to predict if a coarse-grained action segment is a mistake or correction. By contrast, HoloAssist emphasizes fine-grained actions since instructors may intervene when they spot a student's mistake in an active intervention setting rather than wait until the whole step (_i.e_., coarse-grained action) is completed. In addition, we propose a new intervention prediction task and, in combination, enable a more comprehensive understanding of interactions in an assistive task completion setting. **Multimodality and interaction.** Human interaction with the world is multimodal as we see, speak, and touch objects in the environment. In HoloAssist, we collect seven raw sensor modalities that might help understand humans' intentions, estimate the world states, predict future actions, etc. Previous datasets [12, 14, 20, 41] often provide a limited subset of modalities. Although not every sensor may currently be relevant for the downstream tasks, the seven synchronized sensor modalities provided in HoloAssist will give practitioners more potential for designing multimodal agents and models even beyond the scope of this work. **Embodied simulation platforms.** There is an emerging interest in embodied agents that can perceive, reason, and act in the 3D world. 
Researchers [19, 23, 28, 36, 38, 44] build various simulation environments to learn such embodied agents. IGLU [44] aims to build interactive agents that learn to solve a task while being provided with grounded natural language instructions in a collaborative environment based on Minecraft, a popular video game. HoloAssist complements this line of work by providing more realistic human interaction and real-world sensor perception. ## 3 HoloAssist: Human Assistance Dataset In this work, we introduce HoloAssist which features a two-person collaboration scenario and can be used to situate AI assistance in the physical world. We will start by describing the data collection and statistics in Section 3.1 and annotations in Section 3.2, before diving into the observations and benchmarks in the following sections. ### Data Collection and Statistics **Tasks and objects.** We consider multi-step goal-oriented tasks involving 16 objects ranging from familiar objects often used in daily life to rare objects sometimes used in labs and factories as summarized in Table 1. We consider small electronics like a GoPro, DSLR camera, and Nintendo, office appliances like a Nespresso machine and printer, IKEA furniture, and objects in labs such as a laser scanner, motor cycle, and circuit breaker. We have designed 20 tasks involving physical manipulation of these objects, _e.g_., changing batteries, changing belts, furniture assembly, machine setup, etc. There is one task per object except for the IKEA furniture, which has assembly and disassembly tasks. Detailed task instructions are in supplementary materials. **Participants and collection procedure.** We recruited 222 participants to form 350 unique pairs of instructors and performers for data collection. Figure 3 shows the demograph \begin{table} \begin{tabular}{p{142.3pt} p{284.5pt}} \hline \hline Object Scales & Object Categories \\ \hline Small & GoPro, Nintendo Switch, DSLR \\ \cline{2-3} Medium & Portable printer, Computer, Nespresso machine \\ \cline{2-3} Big & Standalone printer, big coffee machine, IKEA furniture (stool, utility cart, tray Table, nightstand) \\ \cline{2-3} Rare & NavVis laser scanner, ATV motor cycle, wheel belt, circuit breaker \\ \hline \hline \end{tabular} \end{table} Table 1: HoloAssist includes 16 objects with diverse scales. Apart from common objects used in daily life, HoloAssist includes rare equipment from mechanical labs. 20 tasks are object-centric manipulation tasks for each object and the 4 IKEA furniture has both assembly and disassembly tasks. Figure 3: HoloAssist was collected by participants diverse in ages, occupations, genders, and geography. This helps us to study a diverse set of users with different backgrounds. ics of the participants. Before data collection, the participants review the IRB forms to acknowledge the privacy and ethics standards (more details in supplementary materials). The instructors are informed about the task in detail. The participants playing the role of performers are only given a rough description of the tasks and scenarios beforehand and interacted with the objects based on their understanding. The instructors provide verbal guidance as the performers set out to complete the tasks. In Figure 4 (top), we show the distribution of the performers' familiarity with the tasks measured by a self-reported score (0-10) by the participants. 
We show the average length and the outliers of the recorded sessions in Figure 4 (bottom) to give a rough idea of how the participant's skill levels may lead to increased session variance. The participants' diverse skill levels and backgrounds provide rich information about the user behaviors and diverse interaction between the instructors and performers. **Data capture tool.** We leveraged the Platform for Situated Intelligence framework [10] to develop and open-source a distributed application for data capture using HoloLens 2 [3]. A client process running on the device captured the sensor data while displaying a rectangular hologram frame around the user's visual field of view to guide their attention downward and keep the task actions in view of the sensors. Sensor data was streamed live over the network to a server application that ran on a PC and persisted the data to disk. This distributed setup allows for collecting longer uninterrupted sessions without reaching the device storage capacity limits. **Comparison with other datasets.** While there is no direct comparison of datasets with the same setup with HoloAssist, we list out different aspects of our dataset and compare it with related datasets in Table 2. HoloAssist is among the largest egocentric video datasets and features a multi-person collaboration setting, which is a unique addition to the field. In addition, HoloAssist relates to work on multi-agent collaborative simulation environments with a distinct characteristic of real-world sensor data and real-world human interaction. ### Annotations To better understand the actions and interactions in the dataset, we provide several sets of third-person manual annotations for text summaries, action segments, mistake attributes, and intervention attributes. **Language annotations.** We asked the annotators to watch the video and write a paragraph to describe the activities in the videos. The description focuses on describing the hand actions in the procedure. The third-person post hoc summary provides insights into the key moments during the interactive task completion. These could be used to build a comprehensive set of instructions for task completion. We \begin{table} \begin{tabular}{l|l|c c c} \hline \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Settings} & Collaborative & Instructional & \# real \\ & & \(\&\)Interactive & \(\&\)Procedural & video hours \\ \hline Epic-Kitchen-100 [12] & Cooking & ✗ & ✗ & 100 \\ Assembly101 [33] & Toy assembly & ✗ & ✓ & 167 \\ Ego4D [14] & Daily-life task & + & + & 3,670 \\ \hline VirtualHome [28] & Household task & ✗ & ✓ & § \\ ALFRED [35] & Household task & ✗ & ✓ & § \\ Habitat [38] & Home assistance & ✗ & ✓ & § \\ BEHAVIOR [36] & Daily-life task & ✓ & ✓ & § \\ IGLU [18] & Collaborative & ✓ & ✓ & § \\ & building* & ✓ & ✓ & § \\ TEACh [27] & Household task & ✓ & ✓ & § \\ \hline HoloAssist (ours) & Assistive task & ✓ & ✓ & 166 \\ \hline \hline \end{tabular} \end{table} Table 2: **Comparison to related datasets and simulation platforms.** HoloAssist features a multi-person collaborative setting which is a unique addition to existing egocentric datasets in the real world. HoloAssist provides a set of instructional and procedure videos with multi-turn dialogues. Procedure tasks are defined as following a set of defined steps or procedures to achieve a specific goal, deviation from the procedure can be construed to be a mistake. HoloAssist spans 166 hours and 2,221 sessions. §: simulation, *: Minecraft-like, +: partially included. 
Figure 4: The skill level of participants (0-10) for the tasks is self-reported by the participants. The skill levels roughly reflect the length of the sessions though they might be noisy. Figure 5: Data distribution of 166 hours captured by HoloAssist. **(left)** number of sessions per activity, and **(right)** total length of sessions in minutes. also provide the transcriptions of the conversations in the video. With this set of annotations, we can understand the difference between third-person post hoc summaries and real-time conversations during the activities. More examples are in supplementary materials. **Coarse-grained action annotations.** The coarse actions usually describe a high-level step in the task (_e.g._, _change battery of a GoPro_ in the GoPro set up task) and can be divided into multiple fine-grained actions. To deal with the open-world setting, we ask the annotators to write a sequence (_e.g._, _man changes the battery of GoPro_) to describe the coarse actions and also identify the active verb-noun pair and optionally an adjective for the noun for benchmarking purposes (_e.g._, _change battery_). The dataset includes 414 coarse-grained actions with 90 nouns and 39 verbs. The distribution of the actions follows a long-tail distribution shown in Figure 6, where 185 actions are considered head classes while the rest are considered tail classes according to the action frequency for evaluation purposes. **Fine-grained action annotations.** Fine-grained actions are the low-level atomic actions (_e.g._, _press button_, _grab screw_, etc.) for completing a step in the task, usually lasting for 1-2 seconds. The fine-grained actions are presented in a verb-(adj.)-noun pair format. There are 1887 fine-grained actions with 165 nouns and 49 verbs. For a more comprehensive evaluation, we create a split of head actions with 1082 top actions and 805 tail actions. Distributions of fine-grained actions are shown in Figure 7. As mentioned earlier, noun and verb vocabularies are not pre-defined but gradually built through annotation. We ask the annotators to enter a new verb and noun if they cannot find it in the vocabulary. After the data is annotated, we ask the annotators to revisit and check the tail classes and see if they are repetitive to head classes. Due to the open-world nature, some verb and noun combinations might be interchangeable with others in the list. We show some examples in the supplementary materials. **Mistake annotation.** Each fine-grained action is labeled as either _correct_ or _mistake_, as indicated in Figure 2. Mistakes include the ones that are "self-corrected by the task performers", are "verbally corrected by the instructors", and "are not corrected labeled". Our human annotators annotate all three mistake types separately, but for benchmark evaluation, we will consolidate them into one mistake class. Figure 6: Data distribution of the coarse-grained actions. **(left)** duration of the actions in seconds, **(middle)** 30 most frequently occurring verbs, and **(right)** 30 most frequently occurring nouns. Figure 7: Data distribution of the fine-grained actions. **(left)** duration of the actions in seconds, **(middle)** 30 most frequently occurring verbs, and **(right)** 30 most frequently occurring nouns. We can see that most fine-grained actions last less than 2 seconds, and there is a long tail distribution in actions, verbs, and nouns. We defer the detailed study of differentiating whether and how the mistakes are corrected to future work. 
To ensure the annotation quality, we additionally ask the third-person annotators to explain why the action is a mistake and also assign a mapping to every mistake that is corrected by an instructor verbally to the conversation sentence whose type is "instructor correcting mistakes". **Intervention annotation.** Since instructors assist the task performers verbally, we annotate the conversation between the instructors and performers to reflect the interventions in task completion. We annotate each conversation sentence with two attributes to indicate the conversation types and the conversation initiator. The conversation initiators can either be the "task performer" or the "instructor". And the sentence purpose types can be the instructor "correcting mistakes", "answering questions", "following up with more instructions", "confirming previous actions". "describing the high-level task", "opening/closing remarks", or the task performer starting the conversation to ask questions. The human annotators watch the videos and use their best judgment to annotate the roles of different conversations in the videos. In our benchmarks, we consider 3 intervention types: _correcting mistakes_, _following up with more instructions_, and _confirming the previous action_ as they are more related to physical actions in the task procedure. Figure 2 shows examples of conversational interventions. More examples can be found in the supplementary materials. **Audit process.** Annotations are done by professional annotators based on the following process. The annotators first take a pass on the video to add the fine-grained actions, coarse-grained actions, conversation, and text summary, along with the associated annotation elements for each event. After self-review, the annotated data is passed to an independent reviewer for auditing, and the mistakes are fixed directly or sent back to the original annotator for updates. The annotations finally go through a targeted review to check the open-ended text fields like narration, action sentences, and conversation transcriptions to ensure consistency. Before the annotations were delivered, we applied a list of constraints to systematically check the annotations to further dig out the wrong annotations. ## 4 Observations and Tasks from HoloAssist Here we present observations from HoloAssist in Section 4.1 and identify a few open problems that are necessary components for an interactive AI assistant. In Section 4.2, we define new tasks and benchmarks with HoloAssist. ### Observations from HoloAssist **Correlation between mistakes and intervention.** We find that the response time for the human assistant to intervene in the procedure and correct mistakes depends on the severity level of the mistakes. If mistakes are critical, the instructors proactively interrupt the student immediately (within less than 5 seconds) while other mistakes are either self-corrected by the task performer or corrected later by the instructor. In Table 3, we present the top actions that need immediate intervention and the top actions that are self-corrected by the task performers or corrected later by the instructors. We notice that actions related to linear tasks, where the task progression is stalled if steps are not followed in order, are often intervened immediately by the instructors. For example, "insert joy con controller", and "place tray" etc. 
In contrast, the lazily edited corrections are related to furniture assembly such as the actions "drop allen wrench", "drop hex socket head", etc. These are tasks where mistakes are unclear in every stage, and the user often intuitively adjusts their steps. **Grounded guidance.** We also notice that the instructions from human instructors are often grounded in the 3D environment. An important aspect of grounded guidance is the ability to communicate about the physical world by pointing \begin{table} \begin{tabular}{l l} \hline \hline \multicolumn{1}{c}{**Spatial deixis from the intervention transcripts**} \\ \hline “You should press the button that’s on the body of the camera \\ just at the right of the lens.” \\ “You should leave the bolt, like it was before.” \\ “The button is on the other side of the Switch.” \\ “The SD card comes from the right slot, on the right hand side \\ and it opens by using a knob next to the screen \\ on the right bottom side of the screen.” \\ \hline “Currently the tray is facing you, please rotate so the back is \\ facing you.” \\ “You should put it the other way around. It’s upside down.” \\ “Please start by removing the screw of the top shelf first.” \\ “And now, ehm, it is, it should be the other one that should be \\ on top of the other.” \\ \hline “To the right of that, there is a tiny little square.” \\ \hline \hline \end{tabular} \end{table} Table 4: Examples of deictic phrases that help in grounded guidance by specifying contextual spatial locations. \begin{table} \begin{tabular}{l l} \hline \hline Immediate intervention & Lazy intervention \\ \hline insert screw & drop allen wrench \\ approach button & drop hex socket head \\ insert joy con controller & drop screw \\ place tray & screw hex socket head \\ pull battery door & screw screw \\ \hline \hline \end{tabular} \end{table} Table 3: Top mistakes that are corrected immediately **(left)** or later and sometimes through self-correction **(right)**. to things in context. Spatial deixis refers to phrases that are used to locate things in space and to express direction and distance. The deictic analysis of the transcripts in HoloAssist reveals a wide set of words that indicate specific and relative locations and directions, especially during interventions to correct mistakes. A list of some prominent examples is shown in Table 4. ### Benchmark Tasks Inspired by the observations above, we think it is important for an interactive AI assistant to have a good world state estimation model that can detect mistakes and predict whether to intervene in the task procedure. Besides, augmenting instructions with spatial guidance can be useful for AI agents. To this end, we introduce new mistake detection, intervention prediction and 3D hand pose forecasting tasks for interactive and grounded guidance. Additionally, we benchmark models on action recognition tasks following the convention in [12]. **Mistake detection** is defined following the convention [33] but applied to fine-grained actions in our benchmark. We take the features from the fine-grained action clips from the beginning of the coarse-grained action until the end of the current action clip, and the model predicts a label from {_correct_, _mistake_}. The task is challenging given that the class distribution is highly skewed, with around 6% mistakes among the fine-grained actions. **Intervention type prediction** is to predict the intervention types given an input of a window of 1, 3, or 5 seconds before the intervention. 
This newly proposed benchmark is to test if the model can correctly figure out the correct intervention types during task completion. Currently, HoloAssist includes 3 intervention types, and we report the precision and recall of each intervention type. **3D hand pose forecasting** is another new benchmark introduced by HoloAssist. Existing action forecasting work [12] mostly focuses on providing semantic labels of future actions and does not provide explicit 3D guidance on hand poses. Predicting 3D hand poses can be useful for various applications [4], and it can augment instructions and spatially guide users in different tasks. In this benchmark, we take 3 seconds inputs similar to other 3D body location forecasting literature [43] and forecast the continuous 3D hand poses for the next 0.5, 1.0, and 1.5 seconds. The evaluation metric is the average of mean per joint position error over time in centimeters compared to ground truth. To have a proper evaluation metric that can help 3D action guidance, we remove the mistakes from the action sequences and only forecast 3D hand pose for the correct labels. ## 5 Experiments In this section, we provide the evaluation results of the proposed benchmarks. We will start with the standard action recognition benchmarks, and then we will present the results of the newly proposed benchmarks. We also provide ablations of different sensor modalities to understand the roles of different sensors in various tasks. We hope the baseline results can guide future research in this space. **Implementation details.** We adopt TimeSformer [9], a state-of-the-art vision transformer (ViT) [13] based video model, as the backbone and change the head with a different number of classes for different benchmarks. We modify the original TimeSformers to perform multimodal learning by introducing additional tokens for different modalities and embedding layers to encode the additional sensor modalities. Specifically, we use 26\(\times\)2 tokens for both left and right hands (one token for one hand joint), one token for eye gaze, one token for head poses, and 196 tokens for depth. We can enable and/or disable different modalities during training and evaluation. Detailed configurations are available in supplementary materials. Note that we consider the resulting model a vanilla multimodal model and serve as a baseline for future studies. We randomly split the 2221 sessions into train, validation, and test sets following a ratio of 70%, 10%, and 20% on a per-task basis, which includes 1545 sessions for training, 213 sessions for validation, and 463 sessions for testing. We also synchronize all the modalities according to the video stream and keep the frame rate at 30 fps by sub-sampling other modalities in our experiments. During training, the model is trained for 15 epochs with an initial learning rate of 0.01 and a batch size of 64 using stochastic gradient descent. We divide the learning rate by 10 at the epoch 11 and 14. For each input segment, we randomly sample 8 frames/data points within the segment. We train our models with 4xA6000 GPU machines, and the fine-grained action recognition runs usually take about one day. We also train the model with random initialization and an ImageNet pre-trained ViT backbone. For our hand pose forecasting benchmark, which is a regression task, we adopt Seq2Seq model [37] following [21]. **Action recognition.** We show the fine-grained action recognition results in Table 5 and the coarse-grained action recognition results in Table 6. 
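Before turning to the results, a small sketch of the evaluation metrics used by the new benchmarks; the class names and array conventions below are assumptions for illustration, not the released evaluation code. It covers per-class precision/recall for intervention-type prediction (also usable for the skewed mistake-detection labels) and the mean per-joint position error, averaged over joints and forecast frames and expressed in centimetres, for hand-pose forecasting.

```python
import numpy as np

INTERVENTION_TYPES = ("confirm_action", "correct_mistake", "follow_up")

def per_class_precision_recall(y_true, y_pred, classes=INTERVENTION_TYPES):
    """Precision/recall per class, as reported for intervention-type
    prediction; the same routine works for the highly skewed
    correct/mistake labels of mistake detection (~6% mistakes)."""
    scores = {}
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        scores[c] = (precision, recall)
    return scores

def mpjpe_cm(pred, gt):
    """Mean per-joint position error in cm; pred/gt are (T, J, 3) arrays
    (T forecast frames, J hand joints), assumed to be in metres."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean() * 100.0)

def forecast_errors(pred, gt, fps=30, horizons=(0.5, 1.0, 1.5)):
    """MPJPE over the first 0.5/1.0/1.5 s of the forecast (cf. Table 9)."""
    return {h: mpjpe_cm(pred[: int(h * fps)], gt[: int(h * fps)])
            for h in horizons}
```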
We can see that initialized with a pre-trained ViT model on ImageNet; all models can achieve around 35% top-1 accuracy on fine-grained action recognition and 50% top-1 accuracy on coarse-grained action recognition. This is comparable to baselines of other fine-grained action benchmarks [33] (23% top-1 accuracy), We notice that the pre-trained model on ImageNet may reduce the influence of other modalities that are not pre-trained. If trained from scratch, we can see from Table 5 (bottom) that adding hands can improve the prediction of verbs and lead to better results than the RGB-only model. mistake detection where the features are extracted from the pre-trained action recognition models shown in Table 5. Here we find the hand poses information benefits the task and outperforms the other modalities. **Intervention type prediction.** For intervention prediction, we show the results in Table 8+. We can see adding hands and eye gaze (R+H+E) can significantly boost the overall precision and recall to 48.31% and 37.59%, improving about 35 and 4 percentage points over RGB. This may be because eye gaze is a forecasting signal, as people often look at the regions before the action starts, which can assist the models to attend to important regions for better anticipation. Footnote †: Evaluated on 10% of the entire data **3D hand pose forecasting.** Table 9 shows that given only the 3D poses of hands as input, the model can perform with the accuracy of 9.80, 10.68, and 11.25 centimeters, the average of mean per joint position error, for 0.5, 1, and 1.5 seconds, \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{Mads.} & \multicolumn{3}{c|}{All Classes Accuracy} & \multicolumn{3}{c|}{Head Classes Accuracy} & \multicolumn{3}{c}{Tail Classes Accuracy} \\ & Top1 / 5 Act & Top1 / 5 Verb & Top1 / 5 Noun & Top1 / 5 Act & Top1 / 5 Verb & Top1 / 5 Noun & Top1 / 5 Act & Top1 / 5 Verb & Top1 / 5 Noun \\ \hline \multirow{5}{*}{RGB} & RGB & 50.91/86.89 & 60.51/93.45 & 73.35/95.90 & 53.40/89.53 & 62.78/95.39 & 75.00/96.70 & 0.37/2.24 & 20.00/58.89 & 43.89/81.67 \\ & Hands & 22.20/54.16 & 35.12/72.34 & 37.43/68.33 & 23.44/57.08 & 36.75/75.25 & 38.84/70.45 & 0.00/0.12 & 6.11/20.56 & 12.22/30.56 \\ & R+H & 50.80/86.54 & 59.71/93.36 & 73.20/95.78 & 53.27/89.12 & 61.94/95.20 & 75.09/96.57 & 0.37/2.28 & 20.00/60.56 & 39.44/81.67 \\ & R+H+E & 50.35/85.51 & 58.85/93.21 & 73.67/95.63 & 52.65/88.03 & 60.85/94.89 & 75.28/96.32 & 0.53/2.28 & 23.33/63.33 & 45.00/83.33 \\ & R+H+E+I & 50.18/86.07 & 59.00/93.21 & 73.64/96.25 & 52.49/88.78 & 61.22/95.07 & 75.41/96.91 & 0.50/2.12 & 19.44/60.00 & 42.22/84.44 \\ \hline \multirow{5}{*}{RGB} & RGB & 32.17/70.45 & 41.88/84.77 & 53.81/84.12 & 33.92/73.75 & 43.49/87.34 & 55.42/85.97 & 0.06/0.65 & 13.33/38.89 & 25.00/51.11 \\ & Hands & 27.36/59.71 & 40.17/76.33 & 43.77/73.64 & 28.83/62.78 & 41.74/78.90 & 45.36/75.75 & 0.06/0.28 & 12.22/30.56 & 15.56/36.11 \\ & R+E & 31.32/69.86 & 41.71/84.36 & 53.81/83.68 & 32.92/73.04 & 43.20/86.94 & 55.42/85.41 & 0.16/0.75 & 15.00/38.33 & 25.00/52.78 \\ & R+H & 34.42/73.02 & 45.28/85.09 & 56.14/86.45 & 36.16/76.15 & 47.04/87.44 & 57.76/88.28 & 0.19/0.97 & 13.89/43.33 & 27.22/53.89 \\ & R+H+H & 35.18/73.91 & 45.51/85.80 & 56.85/86.57 & 37.13/77.12 & 47.54/88.37 & 58.70/88.40 & 0.03/0.94 & 9.44/40.00 & 23.89/53.89 \\ \hline \hline \end{tabular} \end{table} Table 6: **Coarse-grained action recognition results.** The overall trend is similar to fine-grained action recognition. 
\begin{table} \begin{tabular}{l|c c c c|c c c|c c c} \hline \hline \multirow{2}{*}{} & \multirow{2}{*}{Mods.} & \multicolumn{3}{c|}{All Classes Accuracy} & \multicolumn{3}{c|}{Head Classes Accuracy} & \multicolumn{3}{c}{Tail Classes Accuracy} \\ & Top1 / 5 Act & Top1 / 5 Verb & Top1 / 5 Noun & Top1 / 5 Act & Top1 / 5 Verb & Top1 / 5 Noun & Top1 / 5 Act & Top1 / 5 Verb & Top1 / 5 Noun \\ \hline \multirow{5}{*}{RGB} & RGB & 34.83/68.60 & 42.14/78.96 & 66.81/90.04 & 35.26/69.34 & 42.56/79.53 & 67.19/90.36 & 0.03/0.17 & 10.86/36.33 & 38.01/66.48 \\ & Hands & 20.86/43.92 & 35.38/65.76 & 37.10/63.42 & 21.13/44.50 & 35.72/66.30 & 37.50/64.06 & 0.00/0.01 & 10.11/25.47 & 7.30/15.54 \\ & R+H & 35.06/68.95 & 42.45/79.42 & 67.05/90.01 & 35.49/69.71 & 42.87/80.03 & 67.43/90.32 & 0.03/0.16 & 11.05/33.71 & 38.39/66.85 \\ & R+H+E & 35.27/68.69 & 42.92/79.11 & 67.03/89.96 & 35.72/69.42 & 43.33/79.67 & 67.45/90.29 & 0.03/0.18 & 11.99/37.64 & 35.96/65.17 \\ & R+H+E+E+I & 34.80/68.26 & 42.24/78.88 & 66.65/89.76 & 35.23/69.00 & 42.63/79.46 & 67.03/90.08 & 0.03/0.17 & 12.92/35.96 & 38.20/65.17 \\ \hline \multirow{5}{*}{RGB} & RGB & 18.78/48.09 & 28.45/65.43 & 43.72/73.69 & 19.03/48.72 & 28.74/66.00 & 44.09/74.21 & 0.00/0.01 & 6.55/22.66 & 15.73/34.27 \\ & Hands & 23.94/47.58 & 39.79/68.76 & 39.34/65.76 & 24.26/48.21 & 40.18/69.30 & 39.79/66.43 & 0.00/0.00 & 10.67/28.65 & 5.62/16.29 \\ \cline{1-1} & R+E & 20.86/50.27 & 30.92/67.59 & 45.28/75.29 & 21.13/50.93 & 31.22/68.20 & 45.65/75.80 & 0.00/0.01 & 8.05/21.72 & 17.60/37.08 \\ \cline{1-1} & R+H & 29.32/59.20 & 41.48/73.92 & 52.54/80.65 & 29.70/59.95 & 41.87/74.51 & 52.99/81.17 & 0.00/0.03 & 12.55/29.78 & 19.29/41.76 \\ \cline{1-1} & R+H+H+E & 29.58/59.14 & 41.58/73.73 & 52.65/80.76 & 29.97/59.89 & 41.99/47.41 & 53.10/81.29 & 0.00/0.04 & 10.67/29.96 & 19.29/40.82 \\ \cline{1-1} & R+H+H+E+I & 26.87/56.13 & 39.29/72.49 & 49.96/78.60 & 27.22/56.82 & 39.69/73.06 & 50.35/79.07 & 0.01/0.05 & 8.99/29.96 & 21.16/43.26 \\ \hline \hline \end{tab respectively. We should note that this task is challenging as hands can move quickly within a window of 0.5 seconds. Compared to static-H, which uses the last 3D hand pose, our baseline (H) already outperforms it. In Figure S3, we show the visualization of the hand pose forecasting. **Importance of hand poses and eye gaze.** As we can already see from the Tables 5, 7, 8, 9, the 3D hand poses and eye gaze can help the model prediction to recognize actions, detect mistakes, and understand users' intentions. These modalities can augment the commonly used RGB images for better performance. Simply concatenating more modalities (e.g., depth, head poses) as inputs may not necessarily lead to a more capable model, as those modalities may need specialized encoders or model architectures to process them simultaneously. We believe HoloAssist will enable and foster further research in multi-modal learning in this direction. ## 6 Conclusion and Future Work In this work, we identified and explored several important problems with building an interactive AI assistant in the physical world. We introduced a large-scale multimodal egocentric video dataset, HoloAssist, containing rich information about human interaction in an assistive task completion setting. The task performer wears a HoloLens 2 headset while completing various object-centric manipulation tasks. The real-time video feed from the headset is sent to a remote instructor who provides verbal guidance to the task performer. 
HoloAssist captures seven raw sensor modalities during the interaction, and among them, we found hand pose and eye gaze are useful information sources for an interactive AI agent. By augmenting the data with additional third-person manual annotations on action segments, mistakes, and intervention types, we constructed new benchmarks on mistake detection, intervention type prediction, and 3D hand pose forecasting, which we believe is a necessary component for an interactive and grounded AI assistant. As a first step in this direction, this work also leaves room for future work to improve upon (_e.g._, annotating object poses in the data, investigating object-centric models of affordance and manipulations in AI assistance, etc.). We believe HoloAssist, coupled with the associated benchmarks and tooling will benefit future research into building competent AI assistants Figure 8: **Qualitative visualization for 3D hand pose forecasting. We visualize the ground-truth hand joint positions (_i.e._, hand pose, visualized in Green) and the prediction of the hand pose for the next 1.5 seconds (visualized in Red). The input to the model is a 3-second long clip ahead of the prediction. The task is challenging as hands often move quickly. As we can see from the figure, the predicted hand pose is more off from the grounded truth in the longer future.** \begin{table} \begin{tabular}{l|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{Mods.} & \multicolumn{2}{c|}{Overall} & \multicolumn{2}{c|}{Confirm Action} & \multicolumn{2}{c|}{Correct Mistake} & \multicolumn{2}{c}{Follow-up} \\ & Prec. & Recall & Prec. & Recall & Prec. & Recall & Prec. & Recall \\ \hline RGB & 47.92 & 46.09 & 44.09 & 27.65 & 46.13 & 41.44 & 53.54 & 69.18 \\ Hands & 15.75 & 33.33 & 0 & 0 & 0 & 0 & 47.25 & 100 \\ R+H & 48.08 & 47.06 & 43.64 & 32.21 & 46.63 & 44.67 & 53.95 & 64.3 \\ R+E & 48.33 & 47.38 & 45.45 & 31.87 & 45.16 & 45.16 & 54.39 & 65.1 \\ R+H+E+I & 48.75 & 47.75 & 45.55 & 33.94 & 46.11 & 44.91 & 54.59 & 64.42 \\ \hline \hline \end{tabular} \end{table} Table 8: **Intervention type prediction results. The classes in the benchmark are highly skewed. We can see that adding hands and eyes improves the intervention-type prediction.** \begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Mods.} & \multicolumn{2}{c}{Mean Error Distance (cm, \(\downarrow\))} \\ & 0.5 sec & 1.0 sec & 1.5 sec \\ \hline Static-H & 9.34 & 13.91 & 16.70 \\ \hline Hands & 9.80 & 10.68 & 11.25 \\ \hline H+E & 9.80 & 10.70 & 11.25 \\ H+E+I & 9.80 & 10.69 & 11.25 \\ R+H+E & 9.73 & 10.65 & 11.22 \\ R+H+E+I & 9.72 & 10.62 & 11.19 \\ \hline \hline \end{tabular} \end{table} Table 9: **3D hand pose forecasting benchmark results. We report the mean per joint position error for 0.5 1.0, 1.5 seconds (lower the better). The static hand baseline (Static-H) refers to using the last frame of the input. The Seq2Seq [37] model trained using hands only H, and using only hand H achieves better results than Static-H.** for everyday tasks in the real world. ## Acknowledgement We thank all the 222 data collectors who participated in the study and acknowledge the hard work of the annotation team led by Megan Yuan and Dan Luo from DataTang Technology Inc. We thank Nick Saw for the help in the data collection software and Yale Song and Vibhav Vineet from the Computer Vision Group at Microsoft Research for the early discussion. We also thank the feedback from colleagues at MSR.
2301.13460
Energy-Efficient Vehicular Edge Computing with One-by-one Access Scheme
With the advent of ever-growing vehicular applications, vehicular edge computing (VEC) has emerged as a promising solution to augment the computing capacity of future smart vehicles. Fulfilling the required quality of service (QoS) becomes increasingly challenging given the constrained computing and communication resources of vehicles. In this paper, we propose an energy-efficient task offloading strategy for a VEC system with a one-by-one scheduling mechanism, where only one vehicle wakes up at a time to offload to a road side unit (RSU). The goal of the system is to minimize the total energy consumption of the vehicles by jointly optimizing user scheduling, offloading ratio, and bit allocation within a given mission time. To this end, the resulting non-convex, mixed-integer optimization problem is formulated and solved by adopting the Lagrange dual problem, and the superior performance of the proposed scheme is verified via numerical results against benchmark schemes.
Youngsu Jang, Seongah Jeong, Joonhyuk Kang
2023-01-31T07:49:28Z
http://arxiv.org/abs/2301.13460v1
# Energy-Efficient Vehicular Edge Computing with One-by-one Access Scheme ###### Abstract With the advent of ever-growing vehicular applications, vehicular edge computing (VEC) has been a promising solution to augment the computing capacity of future smart vehicles. The ultimate challenge to fulfill the quality of service (QoS) is increasingly prominent with constrained computing and communication resources of vehicles. In this paper, we propose an energy-efficient task offloading strategy for VEC system with one-by-one scheduling mechanism, where only one vehicle wakes up at a time to offload with a road side unit (RSU). The goal of system is to minimize the total energy consumption of vehicles by jointly optimizing user scheduling, offloading ratio and bit allocation within a given mission time. To this end, the non-convex and mixed-integer optimization problem is formulated and solved by adopting Lagrange dual problem, whose superior performances are verified via numerical results, as compared to other benchmark schemes. Vehicular edge computing, one-by-one access, offloading, bit allocation, scheduling. ## I Introduction With the rapid development of vehicular technology including autonomous driving, future vehicles are expected to play a role of providing various infotainment services to users as well as a simple means of transportation. Services such as voice recognition, autonomous driving, video streaming, and virtual reality/augmented reality (VR/AR) require significant computing resources and strict delay constraints, which might not be processed in on-board vehicles with limited computing and battery resources. Vehicular edge computing (VEC) [1, 2, 3, 4] has emerged as an economical and scalable alternative to process offloaded data efficiently while providing improved quality of service (QoS) to vehicular users from anywhere and at any time at reduced costs [1]. In VEC systems, an edge server mounted on a road side unit (RSU) located nearest the vehicles can provide additional computational resources for high-complexity applications, which allows to reduce latency as well as save the energy required for offloading procedure. However, in general, the vehicular communication environment rapidly varies due to the high speed and mobility of vehicles, making it difficult to apply the traditional mobile edge computing (MEC)-based offloading method as it is. Therefore, further researches on efficient task offloading strategies suitable for the VEC systems are needed. With the rapid spread of electric vehicles in recent years, efficient use of the vehicle's limited battery capacity has become very important. Due to the constrained energy budget of vehicles, several studies have been conducted on task offloading in VEC systems to minimize the energy consumption [2, 3, 4]. The authors in [2] study the energy-efficient workload offloading problem, and propose a low-complexity distributed solution based on consensus alternating direction method of multipliers, but only a single RSU is considered. In [3], a novel three-layered system, i.e., vehicular edge cloud computing (VECC), is proposed as a solution to energy conservation and computation augmentation for vehicular computing, and a deep learning-assisted energy-efficient task offloading algorithm is developed in [4]. However, [3] does not consider the partial offloading to offload the part of the task, and [4] only consider the user association without considering multiple access scheme, which can be further improved. 
In this paper, we propose an energy-efficient task offloading strategy in VEC system with a one-by-one access [5] that is revealed to provide the better energy efficiency than the orthogonal access in MEC scenario considered in our previous study [6]. We jointly optimize the offloading ratio, bit allocation and offloading scheduling that minimize the total energy consumption of vehicles under a given deadline, whose solutions are verified to significantly reduce the total energy consumption of vehicles compared to the benchmarks via numerical results. ## II System Model In this paper, we consider a VEC system including \(K\) vehicles and \(M\) RSUs as shown in Fig. 1. The RSUs are placed along the unidirectional road with \(J\) lanes, the distance between adjacent RSUs is \(d\), and the coverage radius of each RSU is \(r_{\text{RSU}}\). We define \(\mathcal{M}=\{1,\ldots,M\}\) as the set of RSUs, where the location of RSU \(m\) in the xy-plane is calculated as \(\mathbf{p}_{m}^{r}=(r_{\text{RSU}}+(m-1)d,0)\) for \(m\in\mathcal{M}\), with the height \(H\). We assume that \(K\) vehicles arrive at the first RSU's coverage edge in time \(t_{k}\in\{t_{1},\ldots,t_{K}\}\), the set of which is defined as \(\mathcal{K}=\{1,\ldots,K\}\). Also, the vehicles in the same lane have the same velocity, and the velocity of each lane \(j\) is assumed to be \(v_{j}\in\{v_{1},\ldots,v_{J}\}\)[7]. Here, we develop the optimal offloading procedure with the aim of minimizing the total energy consumption of all vehicles. To enable the offloading of a given application of Fig. 1: Illustration of the task offloading in VEC systems. each vehicle, the following steps need to be performed. First, the vehicle \(k\in\mathcal{K}\) transmits the input data to be computed at the nearest RSU via uplink transmission. Next, the RSU computes the received data. Lastly, the RSU transmits the output of application to vehicle \(k\) via downlink transmission. Frequency division duplex (FDD) is assumed, where equal bandwidth \(B\) is allocated for both uplink and downlink. Accordingly, there is no interference between uplink and downlink communication. For tractability, the time horizon \(T\) is equally divided into \(N\) frames as shown in Fig. 2, and each frame duration is \(\Delta\) with satisfying \(T=N\Delta\). The frame duration \(\Delta\) is supposed to be small enough so that the vehicle's position is approximately constant within each frame [8]. Under these circumstances, the position of the vehicle in the \(j\)th lane on the ground plane at the \(n\)th frame can be represented as \(\mathbf{p}_{j,n}^{v}=(n\Delta v_{j},(j-1)d_{\text{lane}})\), where \(j=1,\ldots,J\), \(n\in\mathcal{N}=\{1,\ldots,N\}\) and \(d_{\text{lane}}\) denotes the lane width. In each frame, the vehicle can communicate with the nearest RSU for offloading. 
Following [9], the channel gain between the vehicle \(k\) and the adjacent RSU at the \(n\)th frame is given by \(\mathbf{h}_{k}[n]=\mathbf{h}_{k}^{s}[n]\sqrt{h_{k}^{l}[n]}\), where \(\mathbf{h}_{k}^{s}[n]\) is the small-scale fading coefficient which follows Rayleigh distribution with unit variance, and the large-scale fading coefficient \(h_{k}^{l}[n]\) to reflect the path-loss is expressed as \(h_{k}^{l}[n]=h_{0}/(\|\mathbf{p}_{j,n}^{v}-\mathbf{p}_{m_{\min}}^{r}\|^{2}+H^{ 2})^{\frac{n}{2}}\) with \(\|\cdot\|\) being the norm-2 function, \(m_{\min}\in\mathcal{M}\) being the index of the closest RSU, \(\alpha\) being the path loss exponent, and \(h_{0}\) being the received power at the reference distance \(d=1\)m for a transmission power of \(1\)W. We assume that the channel noise is an additive white Gaussian with zero mean and power spectral density \(N_{0}\) [dBm/Hz]. In this paper, we adopt an one-by-one access introduced in [5], a simple but powerful scheduling method, where only one vehicle can be served at each frame (c.f., Fig. 2(b)). Compared to a conventional orthogonal multiple access (c.f., Fig. 2(a)), where one frame is equally divided and assigned to all vehicles so that each vehicle can communicate with RSU within a single time slot of duration \(\delta=\Delta/K\) per each frame, the one-by-one access scheme is verified to be superior by numerical results in Sec. V. This is because the remaining vehicles except the closest vehicle to RSU keep "mute", which can save the energy. The further discussions are shown in the later part of this paper. To adopt the one-by-one scheduling mechanism, the time-varying wake-up scheduling variables are defined as \(\{a_{k}^{q}[n]\}_{n=1}^{N-2}\), where \(q=u\) and \(q=d\) stand for uplink and downlink, respectively. If \(a_{k}^{q}[n]=1\), the vehicle \(k\) offloads to the nearest RSU, otherwise \(a_{k}^{q}[n]=0\). The task of the vehicle \(k\in\mathcal{K}\) can be quantified with the number \(L_{k}\) of input bits, the number \(C_{k}\) of CPU cycles per input bit for computation, and the number \(\kappa_{k}\) of output bits produced by computation per input bit. The size of input data is in general much larger than the output data, i.e., \(0<\kappa_{k}<1\). All tasks offloaded by vehicles need to be computed within the deadline \(T\) for the completion. For offloading procedure, the computation energy consumption to execute the application of vehicle \(k\) with \(l\) input bits when the CPU operates at frequency \(f\) is calculated by \[E_{k}^{c}(l,f)=\gamma C_{k}l^{2}, \tag{1}\] where \(f\) [CPU cycles/s] represents the operating frequency of the processor, and \(\gamma\) denotes the effective switched capacitance of the processor related to the chip architecture [10]. According to standard information-theoretic arguments, when the vehicle \(k\) transmits \(l_{k}^{u}[n]\) bits in uplink during the \(n\)th frame of duration \(\Delta\), the communication energy consumption of the vehicle \(k\) in one-by-one access scheme is given as \[E_{k,n}^{\text{one}}(a_{k}^{u}[n],l_{k}^{u}[n])=a_{k}^{u}[n]\frac{N_{0}B \Delta}{\|\mathbf{h}_{k}[n]\|^{2}}\bigg{(}2^{\frac{l_{k}^{u}[n]}{2h\Delta}}-1 \bigg{)}. \tag{2}\] It is noticed in (2) that the communication energy consumption depends on the scheduling variables \(a_{k}^{u}[n]\), the number of transmission bits \(l_{k}^{u}[n]\), and the channel condition \(h_{k}[n]\) affected by the communication distance. 
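As a rough numerical illustration of (1) and (2), and of why letting the scheduled vehicle use the whole frame while the others stay mute can pay off, the sketch below evaluates the uplink energy for the same number of bits sent either over a full frame \(\Delta\) (one-by-one) or over a slot \(\delta=\Delta/K\) (orthogonal), together with the local computation energy. It assumes the standard forms \(E=\gamma C l f^{2}\) for computation and \(E=\frac{N_{0}B\tau}{\|h\|^{2}}\big{(}2^{l/(B\tau)}-1\big{)}\) for transmitting \(l\) bits over a duration \(\tau\), with placeholder parameter values.

```python
import numpy as np

# Placeholder parameters (illustrative only, not the values of Sec. V).
B = 1e6                          # bandwidth per link [Hz]
N0 = 10 ** (-174 / 10) * 1e-3    # noise PSD: -174 dBm/Hz -> W/Hz
Delta, K = 0.1, 5                # frame duration [s], number of vehicles
gamma_v, C_cpu = 1e-28, 500.0    # switched capacitance, CPU cycles per bit

def channel_gain(ground_range, H=10.0, h0=1e-4, alpha=3.0):
    """Large-scale gain h0 / (d^2 + H^2)^(alpha/2); small-scale fading
    is omitted here for clarity."""
    return h0 / (ground_range ** 2 + H ** 2) ** (alpha / 2.0)

def uplink_energy(bits, duration, gain):
    """E = N0*B*duration/|h|^2 * (2^{bits/(B*duration)} - 1)."""
    return N0 * B * duration / gain * (2.0 ** (bits / (B * duration)) - 1.0)

def local_energy(bits, cpu_freq):
    """Computation energy with the dynamic-power model E = gamma*C*l*f^2."""
    return gamma_v * C_cpu * bits * cpu_freq ** 2

bits = 2e5                            # bits to deliver within this frame
g = channel_gain(100.0)               # vehicle 100 m from the serving RSU

print(f"one-by-one (full frame)  : {uplink_energy(bits, Delta, g):.3e} J")
print(f"orthogonal (1/K slot)    : {uplink_energy(bits, Delta / K, g):.3e} J")
print(f"local execution (1 GHz)  : {local_energy(bits, 1e9):.3e} J")
```

Under orthogonal access the same bits can of course also be spread over several frames, so the full comparison is the optimisation carried out in Sec. III and evaluated in Sec. V; the point here is only the exponential penalty of compressing the transmission into a shorter slot.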
As comparison, in orthogonal access [6], all the vehicles transmit the input data during the allocated slot duration \(\delta\) with satisfying \(\delta=\Delta/K\) at each frame, which yields the communication energy consumption of the vehicle \(k\) at time slot \(n\) as \[E_{k,n}^{\text{orth}}(l_{k}^{u}[n])=\frac{N_{0}B\delta}{\|\mathbf{h}_{k}[n]\|^ {2}}\bigg{(}2^{\frac{l_{k}^{u}[n]}{2h\delta}}-1\bigg{)}. \tag{3}\] ## III Energy-efficient Offloading with One-by-one Access In this section, we formulate the problem to minimize the total energy consumption of \(K\) vehicles by jointly optimizing the bit allocation, offloading ratio and scheduling for one-by-one access scheme. For reference, the total energy consumption of vehicles in local execution and offloading case with the orthogonal access [6] are briefly discussed first. ### _Energy Consumption for Local Execution_ In this part, we consider the total energy consumption of overall vehicles when all the applications are processed locally. In order to process \(L_{k}\) bits within \(T\) seconds, the CPU frequency of vehicle \(k\) needs to be selected as \(f_{k}^{v}=C_{k}L_{k}/T\). According to (1), the total energy consumption for local execution is obtained by \(\sum_{k=1}^{K}E_{k}^{\text{local}}(L_{k})=\sum_{k=1}^{K}\gamma_{k}^{v}C_{k}^{3}L_ {k}^{v}/T^{2}\), where \(\gamma_{k}^{v}\) is the effective switched capacitance of the vehicle \(k\)'s processor. ### _Minimal Energy Consumption for Orthogonal Multiple Access (Our Previous Work [6])_ In our previous work [6], an orthogonal access scheme is developed for VEC systems to minimize the total energy consumption of vehicles. To this end, the joint optimization of bit allocation and offloading ratio between local execution and RSU execution is studied, where the vehicle \(k\) offloads Fig. 2: Frame structure of the considered VEC system: (a) orthogonal access [6], (b) one-by-one access. the ratio \(\rho_{k}\) of the input bits to the RSU and locally computes the remaining portion \((1-\rho_{k})\) of the input bits. At the \(n\)th frame, we denote \(l_{k}^{u}[n]\) as the number of uplink bits transmitted from the vehicle \(k\) to the RSU, \(l_{k}^{v}[n]\) as the number of bits computed for the task of the vehicle \(k\) at the RSU, and \(l_{k}^{d}[n]\) as the number of downlink bits transmitted from the RSU to the vehicle \(k\). Since, in the orthogonal access, the offloading process is analyzed in frame-by-frame manner [6, 8], the energy consumption of the vehicle is expressed as \(E_{\text{total}}^{u}[t_{k}^{u}[n],\rho_{k})\!=\!\sum_{k=1}^{K}\sum_{n=1}^{N-2}E_ {k,n}^{\text{orth}}(l_{k}^{u}[n])\!+\!\sum_{k=1}^{K}E_{k}^{\text{local}}((1- \rho_{k})L_{k})\). To minimize the total energy consumption of the vehicle, the joint optimization problem of bit allocation and offloading ratio can be formulated as (8) in [6], whose solutions are detailed in [6]. ### _Minimal Energy Consumption for One-by-one Access_ We now formulate the optimization problem when adopting the one-by-one access scheme [5] for VEC systems, and then propose Algorithm 1 to resolve the formulated problem. Herein, we consider not only the bit allocation and the offloading ratio between local execution and RSU execution, but also the scheduling variables for one-by-one access. 
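For concreteness, the two baseline energy terms reviewed above, namely the local-execution energy \(\gamma_{k}^{v}C_{k}^{3}L_{k}^{3}/T^{2}\) and the per-slot orthogonal-access energy (3), can be evaluated with the short Python sketch below. All parameter values are illustrative placeholders, and the cubic local-energy expression assumes the usual dynamic-power model \(E=\gamma C_{k}lf^{2}\) with the CPU clocked at \(f_{k}^{v}=C_{k}L_{k}/T\).

```python
def local_energy(L, C, gamma, T):
    """Local-execution energy when all L bits are computed on the vehicle.

    Assumes E = gamma * C * l * f^2 with the CPU clocked at the minimum
    frequency f = C * L / T meeting the deadline, giving gamma * C^3 * L^3 / T^2.
    """
    return gamma * (C ** 3) * (L ** 3) / (T ** 2)

def orth_slot_energy(l_bits, h_gain, B, delta, N0):
    """Per-slot uplink energy under orthogonal access, cf. (3), with slot length delta."""
    return (N0 * B * delta / h_gain) * (2.0 ** (l_bits / (B * delta)) - 1.0)

# Illustrative numbers only
gamma, C, T = 1e-28, 1e3, 25.0          # switched capacitance, cycles/bit, deadline [s]
L, K = 75e6, 3                           # input bits, number of vehicles
B, Delta, N0, h = 10e6, 0.1, 3.98e-21, 1e-13
print("local execution:", local_energy(L, C, gamma, T), "J")
print("one orthogonal slot (0.3 Mbit):",
      orth_slot_energy(0.3e6, h, B, Delta / K, N0), "J")
```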
To this end, the total energy consumption of vehicles is given by \(E_{\text{total}}^{\text{one}}(l_{k}^{u}[n],\rho_{k},a_{k}^{u}[n])=\sum_{k=1}^{ K}\sum_{n=1}^{N-2}E_{k,n}^{\text{one}}(a_{k}^{u}[n],l_{k}^{u}[n])+\sum_{k=1}^{ K}E_{k}^{\text{local}}((1-\rho_{k})L_{k})\). Let us denote \(\mathcal{Z}=\{l_{k}^{u}[n],l_{k}^{u}[n],a_{k}^{u}[n],a_{k}^{d}[n],\rho_{k}\}\) as the set of optimization variables, and therefore the optimization problem for one-by-one access is formulated as \[\underset{\mathcal{Z}}{\text{minimize}}\;\;E_{\text{total}}^{u}( \mathcal{Z})\] (4a) s.t. \[\frac{l_{k}^{u}[n]}{B\Delta}\leq a_{k}^{u}[n]\log_{2}\left(1+\frac{P _{\text{max}}\|h_{k}[n]\|^{2}}{N_{0}B}\right),\;\forall k,n\in\tilde{\mathcal{ N}}, \tag{4b}\] \[\frac{l_{k}^{d}[n+2]}{B\Delta}\leq a_{k}^{d}[n+2]\log_{2}\left(1+ \frac{P_{\text{RSU}}\|h_{k}[n+2]\|^{2}}{N_{0}B}\right)\!\!,\] \[\forall k,n\in\tilde{\mathcal{N}},\] (4c) \[\sum_{i=1}^{n}l_{k}^{c}[i+1]\leq\sum_{i=1}^{n}l_{k}^{u}[i],\; \forall k,n\in\tilde{\mathcal{N}},\] (4d) \[\sum_{i=1}^{n}l_{k}^{d}[i+2]\leq\kappa_{k}\sum_{i=1}^{n}l_{k}^{c} [i+1],\;\forall k,n\in\tilde{\mathcal{N}},\] (4e) \[\sum_{k=1}^{K}a_{k}^{u}[n]=1,\;\sum_{k=1}^{K}a_{k}^{d}[n+2]=1,\; \forall k,n\in\tilde{\mathcal{N}},\] (4f) \[\sum_{n=1}^{N-2}l_{k}^{u}[n]=\rho_{k}L_{k},\;\sum_{n=1}^{N-2}l_{k} ^{c}[n+1]=\rho_{k}L_{k},\;\forall k,\] (4g) \[\sum_{n=1}^{N-2}l_{k}^{d}[n+2]=\kappa_{k}\rho_{k}L_{k},\;\forall k,\] (4h) \[0\leq\rho_{k}\leq 1,\;\forall k,\] (4i) \[a_{k}^{u}[n],a_{k}^{d}[n]\in\{0,1\},\;\forall k,n\in\mathcal{N},\] (4j) \[l_{k}^{u}[n],\;l_{k}^{u}[n],\;l_{k}^{d}[n]\geq 0,\;\forall k,n\in \mathcal{N}, \tag{4k}\] where the constraints (4b) and (4c) guarantee that the achievable rates in uplink and downlink are larger than or equal to the number of transmitted bits in the corresponding links. Also, an equality constraint (4f) is for scheduling of one-by-one access to satisfy that the only one vehicle can communicate with RSU in each frame. Since the problem (4) is non-convex and mixed-integer optimization problem which cannot be directly solved by using standard convex optimization techniques. To address the non-convexity, we adopt the corresponding Lagrange dual problem of (4). Let us define \(\mathcal{Y}=\{\lambda_{k}^{u}[n],\lambda_{k}^{d}[n],\mu_{k}^{u}[n],\mu_{k}^{d}[n ],\mu_{k}^{u},u_{k}^{u},u_{k}^{d}\}\) as the set of Lagrange dual variables corresponding to (4b)-(4e), (4g) and (4h), respectively. Then, the Lagrangian of problem (4) is defined as (5), where \(F_{k}^{u}[n]\) and \(F_{k}^{d}[n]\) is expressed as \[F_{k}^{u}[n]\!=\!\frac{N_{0}B\Delta}{\|\mathbf{h}_{k}[n]\|^{2}} \!\left(\!2^{\frac{\nu_{k}^{u}[n]}{B\Delta}}\!-\!1\!\right)\!\!-\!\lambda_{k}^{ u}[n]\log_{2}\!\left(\!1\!+\!\frac{P_{\text{max}}\|h_{k}[n]\|^{2}}{N_{0}B}\! \right)\!\!, \tag{6}\] and \[F_{k}^{d}[n]=-\lambda_{k}^{d}[n]\log_{2}\left(1+\frac{P_{\text{RSU}}\|h_{k}[n+2] \|^{2}}{N_{0}B}\right)\!\!. \tag{7}\] Given \(\mathcal{Y}\), the optimal offloading ratio \(\rho_{k}^{\text{opt}}\) can be obtained by applying Karush-Kuhn-Tucker (KKT) conditions. The Lagrange dual function of problem (4) is given by \[g(\mathcal{Y})=\left\{\begin{array}{l}\min_{\mathcal{Z}}\; \mathcal{L}(\mathcal{Z},\mathcal{Y})\\ \text{s.t.}\;\;\eqref{eq:2},\text{(4i)}-\eqref{eq:2}.\end{array}\right. 
\tag{8}\] In order to minimize dual function \(\mathcal{L}(\mathcal{Z},\mathcal{Y})\), the stationary point \(\rho_{k}^{\text{opt}}\) to make the derivative of \(\mathcal{L}\) with respect to \(\rho_{k}\) equal to zero can be obtained as \[\rho_{k}^{\text{opt}}=1-\sqrt{\left[\frac{(u_{k}^{u}+u_{k}^{c}+\kappa_{k}u_{k} ^{d})T^{2}}{3\gamma_{k}^{c}C_{k}^{3}L_{k}^{2}}\right]_{0}^{1}}, \tag{9}\] where \([c_{k}^{l}]_{n}^{b}=\text{min}\{\text{max}\{a,c\},b\}\). In a similar way, to minimize \(F_{k}^{u}[n]\) and \(F_{k}^{d}[n]\), the optimal scheduling variables \(a_{k}^{u,opt}[n]\) and \(a_{k}^{d,opt}[n]\) for \(k\in K\) and \(n\in N\) are calculated by the following theorem. **Theorem 1**: _Given \(\mathcal{Y}\) and \(l_{k}^{u}[n]\) for all \(k\in\mathcal{K}\) and \(n\in\tilde{\mathcal{N}}\), the optimal scheduling variables for Lagrange dual function are obtained as_ \[a_{k}^{u,\text{opt}}[n]=\left\{\begin{array}{l}1\quad k=\arg \underset{k^{\prime}\in\mathcal{K}}{\min}F_{k^{\prime}}^{u}[n]\\ 0\;\;\text{otherwise},\end{array}\right. \tag{10a}\] \[a_{k}^{d,\text{opt}}[n]=\left\{\begin{array}{l}1\quad k=\arg \underset{k^{\prime}\in\mathcal{K}}{\min}F_{k^{\prime}}^{d}[n]\\ 0\;\;\text{otherwise}.\end{array}\right. \tag{10b}\] _As a result, the scheduling variables \(a_{k}^{u}[n]\) and \(a_{k}^{d}[n]\) should be chosen so as to minimize \(F_{k}^{u}[n]\) and \(F_{k}^{d}[n]\) under the constraints (4f) and (4j). Given \(a_{k}^{u,\text{opt}}[n]\), \(a_{k}^{d,\text{opt}}[n]\) and \(\rho_{k}^{\text{opt}}\), the optimization problem (4) is simplified as_ \[\underset{\{l_{k}^{u}[n]\},\,\{l_{k}^{u}[n]\},\,\{l_{k}^{d}[n]\}} \sum_{k=1}^{K}\sum_{n=1}^{N-2}E_{k,n}^{u}(l_{k}^{u}[n]) \tag{11a}\] \[\text{s.t.}\;\eqref{eq:2}-\eqref{eq:2},\eqref{eq:2}-\eqref{eq:2}, \eqref{eq:2},\eqref{eq:2}. \tag{11b}\] _Since the problem (11) is convex, we can solve this problem using standard convex optimization solver such as CVX [11]. After that, we can solve the dual problem as follows:_ \[\underset{ Since the dual problem (12) is concave with respect to \(\mathcal{Y}\), the subgradient method is adopted so that converging the global point can be guaranteed [12]. Accordingly, the dual variables in each iteration are given by \[\lambda_{k}^{u,z+1}[n] =\bigg{[}\lambda_{k}^{u,z}[n]+\pi_{1}\bigg{(}\frac{l_{k}^{u}[n]}{B \Delta}\] \[\qquad-a_{k}^{u}[n]\log_{2}\Big{(}1+\frac{E_{k}^{u}(L_{k,n}^{u})h_ {k,n}}{N_{0}B\Delta}\Big{)}\bigg{)}\bigg{]}^{+}, \tag{13}\] \[\lambda_{k}^{d,z+1}[n] =\bigg{[}\lambda_{k}^{d,z}[n]+\pi_{2}\bigg{(}\frac{l_{k}^{d}[n]}{ B\Delta}\bigg{)}\bigg{)}\bigg{]}^{+},\] (14) \[\mu_{k}^{u,z+1}[n] =\big{[}\mu_{k}^{u,z}[n]+\pi_{3}(l_{k}^{c}[n+1]-l_{k}^{u}[n]) \big{]}^{+},\] (15) \[\mu_{k}^{d,z+1}[n] =\big{[}\mu_{k}^{d,z}[n]+\pi_{4}(l_{k}^{u}[n+2]-\kappa_{k}l_{k}^ {c}[n+1])\big{]}^{+}, \tag{16}\] \[u_{k}^{u,z+1} =u_{k}^{u,z}+\pi_{5}\bigg{(}\sum_{n=1}^{N}l_{k,n}^{u}-\rho_{k}L_{k }\bigg{)}, \tag{17}\] \[u_{k}^{c,z+1} =u_{k}^{c,z}+\pi_{6}\bigg{(}\sum_{n=1}^{N}l_{k,n}^{c}-\rho_{k}L_{k }\bigg{)},\] (18) \[u_{k}^{d,z+1} =u_{k}^{d,z}+\pi_{7}\bigg{(}\sum_{n=1}^{N}l_{k,n}^{d}-\kappa_{k} \rho_{k}L_{k}\bigg{)}, \tag{19}\] where the superscript 'z' represents the iteration index, \([c]^{+}=\text{max}\{0,c\}\) and \(\{\pi_{i}\}_{i=1}^{7}\) are step sizes. The overall process is shown in Algorithm 1. ``` 1:Initialize\(\mathcal{Y}\) and \(l_{k}^{u}[n]\), \(\forall k\), \(n\in\mathcal{N}\). 2:Repeat 3: Obtain \(\rho_{k}^{\text{opt}}\) using (9). 4: Obtain \(a_{k}^{u,\text{opt}}[n]\) and \(a_{k}^{d,\text{opt}}[n]\) using (10a) and (10b). 
5: Obtain \(l_{k}^{u,\text{opt}}[n]\), \(l_{k}^{c,\text{opt}}[n+1]\) and \(l_{k}^{d,\text{opt}}[n+2]\) for given \(\rho_{k}^{\text{opt}}\), \(a_{k}^{u,\text{opt}}[n]\) and \(a_{k}^{d,\text{opt}}[n]\) using CVX. 6: Update \(\mathcal{Y}\) with (13)-(19). 7:Until convergence. 8:\(\mathcal{Z}^{\text{opt}}\), \(\forall k\), \(n\in\mathcal{N}\). ``` **Algorithm 1** Joint optimization of bit allocation, offloading ratio and scheduling for one-by-one access in VEC systems. ## IV Simulation Results In this section, the simulation results are presented to evaluate the performance of our proposed Algorithm 1 in VEC systems. For simulations, we consider the VEC system including a one-way road with three lanes. Each vehicle randomly arrives at the starting point, and it is assumed that the task deadline of all vehicles is equal to \(T\). Also, Rayleigh fading is considered for small-scale fading, and 3GPP path loss model [13] is used for large-scale fading. The remaining parameters are shown in Table I. As a benchmark, the three schemes are considered such as (i) local execution scheme, where all tasks are computed locally, (ii) orthogonal access scheme [6], where the same time slot is allocated to each vehicle, (iii) one-by-one access scheme with an equal bit allocation that transmits the same number of bits to the uplink and downlink in each frame, without optimizing the bit allocation. Fig. 3 shows the total energy consumption of vehicles as a function of deadline \(T\) on a logarithmic scale, where the number of vehicles is set to 3, and the input bits of each vehicle are set to 75Mbits. It is observed that the proposed one-by-one access scheme consumes the least energy compared to other benchmarks. Furthermore, energy consumption of one-by-one access dramatically decreases around 10s, as it approaches the first RSU. This is because as the distance between the RSU and the vehicle gets closer, the vehicle can offload the larger amount of tasks to the RSU so as to consume the less communication energy. On the other hand, in orthogonal access, where the frame duration is divided and equally allocated for each vehicle, although the vehicle is close Fig. 3: Total energy consumption of vehicles versus deadline \(T\) to the RSU, it cannot fully utilize the entire duration, yielding the higher energy consumption than one-by-one access scheme. Similarly, in the case of one-by-one with equal bit allocation, since the same amount of bits needs to be offloaded in each frame regardless of channel conditions, the vehicular energy consumption decreases when the vehicle approaches the RSU, and increases again when the vehicle moves away from the RSU. Additionally, at the deadline larger than 14s, the vehicular energy consumption is even higher than that of local execution, which shows that the optimal bit allocation plays an important role in the aspect of energy efficiency. In Fig. 4, we compare the total energy consumption as a function of the number of input bits, where \(K=3\) and \(T=25s\). In local execution, since the computation energy consumption is proportional to the cube of the input bits, the energy consumption increases significantly as the number of input bits increases. Both orthogonal and one-by-one access scheme are designed to offload the most of tasks, resulting in the sizable energy reduction compared to local execution. 
In particular, in the case of one-by-one access, where the entire frame duration can be allocated to the vehicle closest to the RSU, it is robust against the large-scale input bits. Fig. 5 compares the total energy consumption as a function of the number of vehicles under \(T=25s\). In local execution, the vehicular energy consumption is highest as the number of vehicles increases, since the total energy consumption of vehicles is obtained by summing all the computational energy consumption of each vehicle. Also, in orthogonal access scheme, the available time duration at each vehicle becomes shorter with the increase on the number of vehicles, which results in the larger energy consumption. On the other hand, the one-by-one access scheme achieves the lowest energy consumption as the number of vehicles increases. ## V Conclusion In this paper, an energy-efficient task offloading scheme for VEC system with one-by-one access is proposed. To minimize the total energy consumption of vehicles, we jointly optimize the offloading ratio, bit allocation, and offloading scheduling under a given deadline. The non-convex and mixed-integer optimization problem is formulated and solved by adopting Lagrange dual problem. Via simulations, we verify that the proposed energy-efficient offloading scheme can significantly reduce the total energy consumption of vehicles compared to the benchmarks. As a future work, a scenario considering traditional non-orthogonal multiple access (NOMA) and rate-splitting multiple access (RSMA) can be studied.
2309.07237
Evaluation of Battery Storage to Provide Virtual Transmission Service
An immediate need in the transmission system is to find alternative solutions that improve system operation and defer the need for new transmission lines. This study comprehensively evaluates the performance and economic benefits of using battery energy storage systems (BESS) as virtual transmission (VT) to promote power transfer cross distant regions. Specifically, this work implements various day-ahead energy scheduling models to analyze the impact of VT on system operation cost, network congestion, model computational time, and market performance. The performance of VT is compared with three alternative network congestion mitigation methods, including building new high-voltage physical transmission lines, cost-driven battery energy storage systems, and network reconfiguration, as well as combinations of two of aforementioned methods. The benchmark day-ahead scheduling model is a traditional security-constrained unit commitment model without system upgrades or other network congestion mitigation. Numerical simulations conducted on the IEEE 24-bus system demonstrate that among all the examined schemes, VT is the only one comparable to physical transmission lines that can provide satisfying congestion relief and operation cost reduction without sacrificing computing time and load payment significantly.
Qiushi Wang, Xingpeng Li
2023-09-13T18:16:25Z
http://arxiv.org/abs/2309.07237v1
# Evaluation of Battery Energy Storage System to Provide Virtual Transmission Service ###### Abstract An immediate need in the transmission system is to find alternative solutions that improve system operation and defer the need for new transmission lines. This study comprehensively evaluates the performance and economic benefits of using battery energy storage systems (BESS) as virtual transmission (VT) to promote power transfer cross distant regions. Specifically, this work implements various day-ahead energy scheduling models to analyze the impact of VT on system operation cost, network congestion, model computational time, and market performance. The performance of VT is compared with three alternative network congestion mitigation methods, including building new high-voltage physical transmission lines, cost-driven battery energy storage systems, and network reconfiguration, as well as combinations of two of aforementioned methods. The benchmark day-ahead scheduling model is a traditional security-constrained unit commitment model without system upgrades or other network congestion mitigation. Numerical simulations conducted on the IEEE 24-bus system demonstrate that among all the examined schemes, VT is the only one comparable to physical transmission lines that can provide satisfying congestion relief and operation cost reduction without sacrificing computing time and load payment significantly. Battery Storage, Congestion Analysis, Market Implications, Power System Operations, Virtual Transmission. ## Nomenclature \begin{tabular}{c l} \(g\) & Transmission element (line or transformer) index. \\ \(k\) & Transmission element (line or transformer) index. \\ \(n\) & Bus index. \\ \(w\) & Solar generation index. \\ \(e\) & Battery index. \\ \(vt\) & Virtual transmission line index. \\ \(t\) & Time period index. \\ \(G(n)\) & Set of generators at bus \(n\). \\ \(K\) & Set of all transmission elements. \\ \(N\) & Set of all buses. \\ \(ES(n)\) & Set of all battery storage systems at bus \(n\). \\ \(ES(vt)\) & Set of all battery-based virtual transmission systems. \\ \(S(n)\) & Set of all solar generators at bus \(n\). \\ \(K(n-)\) & Set of branches with bus n as the to-bus. \\ \(K(n+)\) & Set of branches with bus n as the from-bus. \\ \(X_{k}\) & The reactance of transmission element. \\ \(C_{g}\) & Linear cost for generator \(g\). \\ \(C_{g}^{NL}\) & No-load cost for generator g. \\ \(C_{g}^{SU}\) & The start-up cost for generator g. \\ \(d_{nt}\) & Predicted load demand of bus n in the time period t. \\ \(BigM\) & A big real number. \\ \(p_{min}\) & The minimum capacity of generator g. \\ \(p_{max}^{max}\) & Maximum capacity of generator g. \\ \(p_{k}^{max}\) & Emergency thermal line limit for line \(k\). \\ \(u_{gt}\) & Commitment status of unit \(g\) in the time period \(t\). \\ \(v_{gt}\) & Start-up variable of generator g in the time period \(t\). \\ \(\theta_{kt}\) & Phase angle difference between from-end and to-end of line \\ \(p_{gt}\) & N in the time period \(t\). \\ \(p_{gt}\) & The output of generator g in the time period \(t\). \\ \(P_{kt}\) & Flow in line k in the time period \(t\). \\ \(P_{st}\) & The output of solar generators in the time period \(t\). \\ \(P_{et}^{c}\) & Charging rate of battery e in the time period \(t\). \\ \(P_{et}^{d}\) & Discharging rate of battery e in the time period \(t\). \\ \(E_{et}\) & Energy storage energy level in the time period \(t\). \\ \(u_{et}^{c}\) & 1 indicates charging mode; otherwise, 0. 
\\ \(u_{et}^{d}\) & 1 indicates discharging mode; otherwise, 0. \\ \(E_{e}^{min}\) & Minimum energy storage energy level. \\ \(E_{e}^{max}\) & Maximum energy storage energy level. \\ \(p_{e}^{max}\) & Maximum energy storage charge rate. \\ \(p_{e}^{d,max}\) & Maximum energy storage discharge rate. \\ \(\eta_{e}^{e}\) & Charging efficiency. \\ \(\eta_{d}^{d}\) & Discharging efficiency. \\ \(\Delta T\) & Length of a time interval. \\ \(J_{kt}\) & 1 indicates branch \(k\) is in the network in the time period \(t\); \\ & otherwise, it is 0. \\ \end{tabular} ## I Introduction Although fast growth of solar and wind power substantially decarbonizes the electricity sector, a large portion of clean energy generation is expected to be frequently curtailed and thus wasted due to limited transmission capacity. Even with the current penetration level of renewable generation in many practical power grids, curtailment of clean energy generation is often observed. For example, in California Independent System Operator (ISO) territory, 187,000 MWh of wind and solar generation was curtailed in 2015. The curtailment amount increased by a factor of 8.5 to 1,587,000 MWh in 2020 [1]-[2]. The 2023 National Transmission Needs Study [3] by the United States Department of Energy concluded that an immediate need for updated and/or new transmission infrastructure is required by 2030 to meet the load growth and clean energy penetration. However, the overall transmission investment has decreased over the past decades, while the average timeline for building a new high-voltage transmission line is ten years. To bridge the gap between short-term transmission needs and long-term transmission planning and deployment, non-wire alternatives that can help alleviate transmission network congestion and reduce renewable generation curtailment are investigated and compared to traditional wired solutions in this paper. One of non-wire solutions is the use of large-scale battery energy storage system (BESS). BESS can help reshape the load profile and thus impact the power flow in the adjacent area. Alberto and Steven provided a brief overview of existing energy storage technologies and their applications in [4]. They also illustrated the concept of using battery storage to increase transmission capability by relaxing _N_-1 contingency conditions in thermally constrained networks of high-voltage transmission lines. [5][6] further developed the economic dispatch algorithm to enable merchant storage facilities to compete in an electricity market to provide transmission congestion relief services. A few BESS studies on real power systems in the literature [7] and [8] concurred with the research conclusion that adding BESS can help reduce congestion and offset transmission needs. Pacific Northwest National Laboratory examined the technical and financial feasibility of using a BESS and a combustion turbine generator (CTG) to defer the investment in a third transmission cable for Nantucket Island [9]. The assessment results showed that the benefits of BESS plus CTG operations with minimal low-cost distribution upgrades outweighed constructing a third transmission line. Besides the stationary battery storage system, prior efforts in the literature also evaluated the possibility of mobile energy storage systems. There is a feasible solution of the integrated optimization model for distribution planning problems, which uses temporary, transportable energy storage to reduce or defer the distribution network expansion [10]. 
The research in [11] and [12] further extended the application of the integrated planning strategy to the transmission network. The researchers presented different algorithms to minimize the cost of a combination of transmission lines and battery-based energy storage units. The BESS includes stationary and mobile storage. Most BESS-related research and applications demonstrate the benefit of a single BESS application to improve local system performance [13][14]. With the continuous technology advancement, BESS has become more cost-effective, while its size and duration have increased [15][16]. As a result, BESS may provide more benefits in its existing applications and gain the potential to support new applications to be explored. Nguyen demonstrated the virtual transmission (VT) concept in a two-machine network, which uses BESSs at the two ends of a line to mimic a new parallel line [17]. The objective of VT in [17] is to increase revenue for generators in the region during congested and non-congested times. Another objective of the VT application is to minimize the total relative congestion level. In [18], a research team evaluated the congestion management (CM) performance of grid operator-owned VT lines where there is no interface with the energy market. When the BESS is used in preventive CM mode, it is referred to as VT, while it is referred to as grid booster (GB) when used for curative CM. The researchers found that using GB as a curative CM is more effective than VT as a preventive CM for the battery size and location in the test network. Although ongoing pilot VT projects are happening globally, it is essential to understand how VT schemes would behave in a meshed network and a deregulated market environment. In addition, it is also important to investigate how well VT performs compared to other CM schemes and how well VT coordinates with other CM schemes. Network reconfiguration (NR) has been demonstrated to be a low-cost but very effective CM strategy in both transmission and distribution systems [19][20][21]. NR is able to relieve network congestion as a preventive control scheme in the pre-contingency situation [22] and as a corrective control scheme in the post-contingency situation [23][24]. As a congestion relief strategy, NR is shown to achieve substantial system cost savings and reduce significant renewable generation curtailment [25][26]. To bridge the research gaps and address the aforementioned research questions, this paper will investigate the effectiveness of VT as a non-wire transmission capacity expansion solution in the application of day-ahead operational planning that solves the security-constrained unit commitment (SCUC) problem. Its performance will be evaluated and compared to other options, including new high-voltage physical transmission (PT) lines and NR. In addition, this paper will also combine multiple CM options to achieve better grid performance. Particularly, this paper will implement seven different SCUC optimization models to evaluate various congestion mitigation schemes for day-ahead generation scheduling. 
These seven SCUC models are explained as follows: (1) a traditional benchmark SCUC, (2) an enhanced SCUC with a new physical transmission (SCUC-PT), (3) an enhanced SCUC with BESS (SCUC-BESS), (4) an enhanced SCUC with non-simultaneous charging and discharging constraints on BESS as VT (SCUC-VT), (5) an enhanced SCUC with NR (SCUC-NR), (6) an enhanced SCUC with both VT and NR (SCUC-VT-NR), and (7) an enhanced SCUC with BESS and NR (SCUC-BESS-NR). The remainder of this paper is organized as follows. The formulations for various SCUC models of interest are described in Section II. The test case and simulation results are presented in Section III. Finally, Section IV concludes this paper and presents potential future work. ## II Modeling and Methodology Power system day-ahead energy scheduling is determined by solving SCUC. This section presents the formulations used by a traditional SCUC model as a benchmark, as well as various enhanced SCUC models with congestion mitigation strategies. It also explains the metrics used for analyzing the impacts of different CM strategies on the wholesale power markets. ### _Traditional SCUC_ SCUC minimizes the total cost of generations over multiple time periods while maintaining the solution physically feasible for each period. A widely used formulation for a traditional SCUC model is presented as follows. \[\begin{split}&\text{minimize}\sum_{g\in\mathcal{G}}\sum_{\{e \in\mathcal{G}\}}\bigl{(}c_{g}P_{gt}+c_{g}^{NL}*u_{gt}+c_{g}^{SU}\\ &*v_{gt}\bigr{)}\end{split} \tag{1}\] Constraints: \[u_{gt}\in\{0,1\},\forall g,t \tag{2}\] \[v_{gt}\in\{0,1\},\forall g,t \tag{3}\] \[v_{gt}\geq u_{gt}-u_{g,t-1},\forall g,t \tag{4}\] \[p_{g}^{min}*u_{gt}\leq P_{gt},\forall g,t \tag{5}\] \[P_{gt}\leq p_{g}^{max}*u_{gt},\forall g,t \tag{6}\] \[P_{gt}-P_{g,t}\leq R_{g}^{hr},\forall g,t \tag{7}\] \[P_{g,t-1}-P_{g,t}\leq R_{g}^{hr},\forall g,t \tag{8}\] \[P_{kt}=\theta_{kt}/x_{k},\forall k,t \tag{9}\] \[-P_{k}^{max}\leq P_{kt}\leq P_{k}^{max},\forall k,t \tag{10}\] \[\begin{split}&\sum_{g\in\mathcal{G}(n)}p_{gt}+\sum_{k\in\mathcal{ K}(n-)}P_{kt}-\sum_{k\in\mathcal{K}(n+)}P_{kt}\\ &\qquad=d_{nt}-\sum_{w\in\mathcal{S}(n)}p_{gt},\forall n,t\end{split} \tag{11}\] The objective function (1) minimizes the system's total cost, including generator operation, start-up, and generators' no-load costs. Equations (2)-(11) are the constraints for the traditional SCUC optimization model. Binary variables \(u_{gt}\) for generation commitment status and \(v_{gt}\) for generator startup indicator are defined in (2) and (3), respectively. Constraint (4) defines the relation between \(u_{gt}\) and \(v_{gt}\). Generator output limits are enforced in (5)-(6). Generator ramping rate limits are respected in (7)-(8). The line power flow equation and thermal capacity limit are presented in (9) and (10), respectively. Constraint (11) guarantees the power balance will be met at each node in each time interval. ### _Network Congestion Mitigation Solutions_ This sub-section will first present the formulations for several CM schemes, including BESS, VT, PT, and NR. The corresponding SCUC models are then summarized. When BESS is present in the system to be scheduled along with generators, some existing constraints need to be updated to capture the impact of BESS. At the same time, new constraints are also required to represent BESS's unique characteristics in SCUC. BESS cannot be charged or discharged at the same time. 
Instead, the status of a BESS should be either charging, discharging, or idle, as represented in (12). When the BESS is in charging or discharging mode, the charging and discharging power rates must be within the maximum physical limits enforced in (13)-(14). Constraint (15) sets the bounds on the BESS energy level. In (16), the BESS energy level calculation considers the charging and discharging efficiencies. In a network with BESSs, (11) needs to be replaced by the updated nodal power balance constraint (17) due to BESS charging and discharging activities. \[u_{et}^{c}+u_{et}^{d}\leq 1,\forall e,t \tag{12}\] \[0\leq P_{et}^{c}\leq P_{e}^{c,max}u_{et}^{c},\forall e,t \tag{13}\] \[0\leq P_{et}^{d}\leq P_{e}^{d,max}u_{et}^{d},\forall e,t \tag{14}\] \[E_{e}^{min}\leq E_{et}\leq E_{e}^{max},\forall e,t \tag{15}\] \[E_{et}=E_{e,t-1}+(\eta_{e}^{c}P_{et}^{c}-P_{et}^{d}/\eta_{e}^{d})\Delta T,\forall e,t \tag{16}\] \[\begin{split}&\sum_{g\in\mathcal{G}(n)}p_{gt}+\sum_{k\in\mathcal{K}(n-)}P_{kt}-\sum_{k\in\mathcal{K}(n+)}P_{kt}\\ &=\sum_{e\in ES(n)}(P_{et}^{c}-P_{et}^{d})+d_{nt}-\sum_{w\in\mathcal{S}(n)}P_{st},\forall n,t\end{split} \tag{17}\] A transmission line absorbs power at one end while injecting that power at the other end. Therefore, to ensure the behavior of BESS-based VT is consistent with a parallel PT line, constraints are needed to prevent the BESSs on the two sides of the transmission line from charging simultaneously or discharging simultaneously. Equations (18)-(19) are the constraints that limit the status of BESSs for VT lines, in which \(ES(vt)\) is the set of the two BESSs located at the two ends of a congested physical line to ensure VT behavior for each \(vt\). \[\sum_{e\in ES(vt)}u_{et}^{c}\leq 1,\forall vt,t \tag{18}\] \[\sum_{e\in ES(vt)}u_{et}^{d}\leq 1,\forall vt,t \tag{19}\] Network reconfiguration, which can leverage the flexibility in the transmission network, is another effective method to help relieve line congestion. This study also implements the NR scheme to evaluate the performance of stand-alone BESSs and VTs. The updated constraints when implementing NR in SCUC are listed as (20)-(23), replacing (9)-(10). The model also includes constraint (21) to limit the number of line switching actions to at most one in a single time interval to avoid severe system stability risks. \[J_{kt}\in\{0,1\},\forall k,t \tag{20}\] \[\sum_{k\in\mathcal{K}}(1-J_{kt})\leq 1,\forall t \tag{21}\] \[-BigM(1-J_{kt})\leq P_{kt}-\theta_{kt}/x_{k}\leq BigM(1-J_{kt}),\forall k,t \tag{22}\] \[-P_{k}^{max}J_{kt}\leq P_{kt}\leq P_{k}^{max}J_{kt},\forall k,t \tag{23}\] Seven different SCUC optimization models for day-ahead generation scheduling are then formulated and implemented to evaluate various congestion mitigation schemes. They are explained in Table 1.

### _Market Analysis Metrics_

Congestion management and transmission transfer capacity investment are essential to meet load growth and support clean energy penetration. It is also important to analyze the impact of those schemes on the wholesale power energy markets. In this paper, it is assumed that the power market follows a locational marginal price (LMP)-based market clearing mechanism, which is adopted by most US grid operators. LMP is the marginal cost of supplying one additional MW of power to a given location. It depends on not only the location but also the time.
Mathematically, it is equal to the dual variable of the nodal power balance constraint. In addition to the total generation cost, another metric for evaluating the system efficiency and market performance is the load payment which is defined as follows, \[Load\mathit{Payment}=\sum_{n}\sum_{t}d_{nt}\mathit{LMP}_{nt} \tag{24}\] ## III Case Studies All the aforementioned variations of SCUC models were implemented and tested on the IEEE 24-bus system that is modified to reflect the current trend of transforming coal power into more sustainable generation resources. The optimization problems were solved using the Gurobi solver in the Python-based Pyomo package. The Python scripts were run in the Anaconda Spyder environment with Python version 3.8.6 on Intel(R) Core(TM) i5-3570K CPU @ 3.40GHz 3.40 GHz computer system. ### _Test Power System Case_ The IEEE 24-bus system was first developed in 1979 with a load model, generation system, and transmission network [14]. Since then, it has been widely used as a test system for transmission planning and reliability tests. The system has 24 buses with 38 connected elements at voltage levels of 230 kV and 138 kV. Figure 1 shows the configuration of the IEEE 24-bus system. The branch and conventional generation and load data can be found in [27]. Modifications to the generation are made to the case based on the assumption that the future power system will be free of coal-fired generation. All conventional coal-type generators on buses 2, 15, 16, and 23 are removed from the system. Instead, solar generators totaling 1,110 MW are added to buses 14, 15, and 16. In this paper, the solar deliveries are fixed at their maximum available power, as there will be no curtailment even when transmission lines are congested. It is assumed that the daily peak load is 80% of the maximum load. The load profiled in the test cases uses summer weekday data in [27] as the hourly peak load in percent of daily peak. ### _Traditional SCUC Results_ Traditional SCUC optimization is run to evaluate the congestion level of the modified IEEE 24-bus network. Results show two lines with congestion: line 11 during evening hours and line 19 during peak sun hours. Besides line 11 and line 19, line 29 gets stressed and operates above 70% of the line capacity between 10 a.m. and 4 p.m. Line 11 is a generation tie line that connects generators 9-11 to the system through the point of connection bus 8. Line 11 hits its thermal limit mainly because generators 10 and 11 need to deliver power at their total capacity to meet the load profile when solar resources are unavailable after sunset. Since generation tie lines are usually designed to match the plant's maximum power output limit, upgrading line 11 for more transfer capacity against inter-area congestion is unnecessary. On the other hand, line 19 is connected to one of the corridors between the 230 kV and 138 kV systems. The line gets congested when the system utilizes all the available solar power during peak sun hours. Applying new infrastructures to this line can help us better understand how BESS helps relieve congestion and reduce system costs in a meshed system. Therefore, line 19 is selected as the targeted line to place new \begin{table} \begin{tabular}{|l|l|c|} \hline **Models** & **Descriptions** & **Equations** \\ \hline **SCUC** & The traditional SCUC optimization & (1)-(11) \\ & model is a benchmark. & \\ \hline **SCUC-PT** & A new physical line is added to the & (1)-(11) \\ & system and the SCUC model. 
& (1)-(11) \\ \hline **SCUC-BESS** & BESSs are added to the system and the SCUC model. & (1)-(10), (12)- \\ & the SCUC-PT & (17) \\ \hline **SCUC-VT** & VT operation constraints are added to the SCUC-BESS model. & (1)-(10), (12)- \\ **SCUC-NR** & Network reconfiguration strategy is & (1)-(11), (20)- \\ & applied to the SCUC case. & (23) \\ \hline **SCUC-BESS-NR** & Network reconfiguration strategy is & (1)-(10), (12)- \\ & applied to the SCUC-BESS case. & (17), (20)-(23) \\ \hline **SCUC-VT-NR** & Network reconfiguration strategy is & (1)-(10), (12)- \\ & applied to the SCUC-VT case. & (23) \\ \hline \end{tabular} \end{table} Table 1: Model Descriptions and Formulation Summary Figure 1: Network topology of the IEEE 24-bus system [28]. lines and BESSs. Additionally, adding new infrastructure to line 19 has the potential to help eliminate the congestion observed on line 11. In the SCUC-PT line case, the new line is added in parallel to line 19 between buses 11 and 14 with identical line parameters as line 19. In BESS-related cases, both batteries on each side of line 19 are assumed to have the same size and technical specifications. The size of each BESS is 800 MWh with a maximum charging/discharging rate of 200 MW. ### _Comparison of Simulation Results_ As the objective of the models is to minimize the total cost, this paper compares the system's economic performance under different CM schemes. In addition, the associated system congestion and computing time are analyzed and compared. These comparisons are summarized in Table 2. The "operation cost reduction" column provides an overview of the percentage decrease in the daily total system operation cost compared to the SCUC base case. It shows that all examined CM schemes can achieve cost reduction. The "average No. of congested lines per hour" column describes the overall congestion status of the 24-bus system over the 24 hours. The higher the average number of congested lines per hour in the system, the more line(s) or the more hour(s) the line(s) are congested. Among all the CM schemes, only the PT and VT schemes provide Pareto improvement solutions and can well balance between system congestion relief and cost reduction. Although both BESS and VT schemes are implemented by adding two identical batteries on both sides of the transmission line, the standalone BESSs without VT constraints did not relieve line congestion but led to more network congestion. Subsection III D below provides a detailed analysis of the cause of different battery behaviors under BESS and VT schemes. The NR is the only scheme that does not require additional capital investment costs, while it can substantially reduce the operation cost. However, one serious concern when using BESS/VT with NR in SCUC problems is the computational complexity since such scheme combinations would result in a much longer solving time than other schemes in an order related to the number of branches. ### _BESS Operation Analysis_ The main difference between BESS and VT schemes is the battery charging and discharging status restriction. The BESS scheme allows batteries to charge or discharge freely within their energy level limits. In contrast, the VT scheme prohibits the two batteries on each side of the transmission line from charging simultaneously or discharging simultaneously. The VT operation constraint leads to a different optimal battery operation solution for the testing system. 
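A minimal Pyomo-style sketch of how this VT restriction, i.e., constraints (18)-(19) layered on top of the per-battery mode constraint (12), can be encoded is given below. The sets, horizon and bus names are illustrative, and this fragment is only a sketch of the coupling constraints, not the full SCUC-VT implementation used in this study.

```python
import pyomo.environ as pyo

m = pyo.ConcreteModel()
m.T  = pyo.Set(initialize=range(24))                 # hourly periods (assumed horizon)
m.E  = pyo.Set(initialize=['bus11', 'bus14'])        # the two BESSs forming one VT line
m.VT = pyo.Set(initialize=['line19'])                # virtual transmission line(s)
ES_vt = {'line19': ['bus11', 'bus14']}               # ES(vt): batteries at the two ends

m.u_c = pyo.Var(m.E, m.T, domain=pyo.Binary)         # charging status u_et^c
m.u_d = pyo.Var(m.E, m.T, domain=pyo.Binary)         # discharging status u_et^d

# (12): each battery is either charging, discharging, or idle
m.mode = pyo.Constraint(m.E, m.T, rule=lambda m, e, t: m.u_c[e, t] + m.u_d[e, t] <= 1)

# (18)-(19): the two end batteries of a VT line never charge (or discharge) simultaneously
m.vt_charge = pyo.Constraint(
    m.VT, m.T, rule=lambda m, vt, t: sum(m.u_c[e, t] for e in ES_vt[vt]) <= 1)
m.vt_discharge = pyo.Constraint(
    m.VT, m.T, rule=lambda m, vt, t: sum(m.u_d[e, t] for e in ES_vt[vt]) <= 1)
```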
Figure 2 and Figure 3 show the energy exchange profiles of the batteries on bus 11 and bus 14, respectively. These two figures show results from both SCUC-BESS and SCUC-VT. The green area between the charging/discharging curves of the two CM schemes represents the difference in the energy being stored in the batteries when in the charging mode or the difference in the energy that the batteries inject into the grid in the discharging mode. The battery on bus 11 shows a higher charging/discharging rate in the SCUS-VT case than in the SCUS-BESS case. One possible explanation is that allowing only one battery in the charging mode will force the energy to be stored in a more concentrated manner. As a result, the battery on bus 11 in the VT case stores more energy before dawn, enabling it to provide more power between 11 a.m. and 4 p.m. when line 19 is congested and line 29 gets stressed. In the BESS case, the average number of congested lines per hour rises to 0.42 from the SCUC base case's 0.38, mainly because generator 23 at bus 18 delivers more power during peak sun hours, causing already-stressed line 29 to become congested. No matter with or without the VT operation constraint, the energy usages of the two BESS on each side of the transmission line are not balanced. With the solar and load profile of the test case, the battery on bus 14, close to the solar resources, is used heavier than the battery on bus 11. ### _Market Analysis_ Other important metrics to evaluate VT compared with other CM schemes would be the energy market settlements, including the system operation cost and load payment reflecting social welfare. These results are presented in Figure 4. \begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline \multirow{2}{*}{**Model**} & **Operation cost reduction** & **Average No. of congested lines per hour** & **Computing time (s)** \\ \hline **SCUC** & 0.00\% & 0.38 & 6.5 \\ \hline **SCUC-PT** & 11.71\% & 0.13 & 5.4 \\ \hline **SCUC-BESS** & 14.09\% & 0.42 & 2.3 \\ \hline **SCUC-VT** & 14.04\% & 0.25 & 2.3 \\ \hline **SCUC-NR** & 6.38\% & 0.38 & 268.1 \\ \hline **SCUC-BESS-NR** & 15.26\% & 0.38 & 1446.2 \\ \hline **SCUC-VT-NR** & 15.16\% & 0.42 & 10300.4 \\ \hline \end{tabular} \end{table} Table 2: Transmission Facility Performance Comparison Figure 3: Charging and discharging profile of the battery at bus 14 under different CM strategies. Figure 2: Charging and discharging profile of the battery at bus 11 under different CM strategies. The load payment is not necessarily correlated with system operation costs and congestion status. Although all six enhanced SCUC models with various CM schemes have less system operation cost, only the PT CM strategy leads to lower load payment than the SCUC benchmark. Although the VT CM scheme provides more congestion relief than the NR CM scheme, its load payment is higher than the NR CM scheme. Stronger evidence can be observed from the results of SCUC-BESS-NR and SCUC-VT-NR that lead to much greater cost reduction but much higher load payment. The load payment is related to the load profile because batteries re-shape the load profile through charging and discharging activities. Figure 5 illustrates the difference in LMP between the SCUC-VT model and the SCUC-NR model at each bus during each hour. 
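For reference, the load-payment metric (24) underlying these settlement comparisons is a direct sum of nodal demand times LMP over buses and hours; a short sketch with made-up numbers is given below.

```python
import numpy as np

def load_payment(demand, lmp):
    """Load payment (24): sum over buses n and hours t of d_nt * LMP_nt."""
    return float(np.sum(np.asarray(demand) * np.asarray(lmp)))

# Toy example: 3 buses x 4 hours (values are illustrative only)
d   = [[100, 120, 150, 130],
       [ 80,  90, 110, 100],
       [ 60,  70,  95,  85]]          # MW in each hour
lmp = [[20.0, 22.5, 35.0, 28.0],
       [21.0, 23.0, 35.0, 27.5],
       [19.5, 22.0, 40.0, 30.0]]      # $/MWh
print(f"load payment = ${load_payment(d, lmp):,.0f}")
```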
It is observed that, compared with the SCUC-NR model, the SCUC-VT model significantly increases the LMP when the battery on bus 11 absorbs energy from the grid as an additional load, as shown in Figure 2, without enough low-cost generation online. The occasions are when the system is not congested, especially between 4 and 5 a.m. This explains why the load payment of SCUC-VT is higher even though SCUC-VT can lead to lower total cost, indicating less network congestion with the proposed VT CM scheme as compared to utilizing the flexibility in the transmission network to mitigate the congestion without additional asset investments. ### _Sensitivity Analysis: Battery Size_ The size of the battery significantly impacts the battery charging/discharging decisions that affect the market settlement results. Figure 6 summarizes system operation costs and load payments for different BESS sizes ranging from 100 to 400 MW with an increment of 50 MW, assuming the same duration. It is observed from Figure 6 that as the BESS size increases, the system operation cost reduces accordingly, which is expected. However, it is interesting to observe the total cost remains the same after it drops to $878,636 when the BESS size increases to 250 MW; further increasing BESS size will not provide any further benefits against network congestion. Similarly, the load payment does not change when BESS size reaches the same turning point of 250 MW, precisely, a range of 200 MW - 250 MW. It is also interesting to observe that there is no fixed pattern regarding load payment change concerning BESS size change before BESS size hits this turning point. Figure 7 shows the power flows on line 19 for two models: (i) SCUC benchmark and (ii) SCUC-PT with a new line parallel to line 19 and sharing the same parameters with line 19. For SCUC-PT, the total flow crossing the path of line 19 is slightly over 700 MW for the congestion hours from 12 pm to 3 pm, indicating that entirely relieving the congestion would require slightly over 200 MW additional transfer capacity, which aligns with the optimal BESS size of power capacity per Figure 6. ## IV Conclusions BESS-based virtual transmission, as a new concept of alternative transmission lines, can help relieve network congestion and reduce the total grid operation cost. This study demonstrates that compared with the options of a new physical line or network reconfiguration strategy, BESS as VT can achieve greater cost reduction and shorter computational time, but higher load payment. The battery size on each side of the critical line that may be congested in peak hours would affect the VT performance in terms of congestion relief, cost reduction and market clearing results. Limiting the operation of BESS to mimic physical transmission lines helps relieve system congestion under normal system operating conditions with negligible cost increases. Combining VT with other system congestion-relieving methods such as NR may further reduce the system total operation cost, but it may significantly increase the load payment as well as the optimization calculation time. Figure 4: System operation costs and energy market settlement for different SCUC models under various CM schemes. Figure 5: LMP difference between VT and NR cases. Figure 6: System operation cost and energy market settlement for different BESS sizes. Figure 7: Power flows on line 19 from benchmark SCUC and SCUC-PT. 
Further research is needed to (1) evaluate VT's behavior in power systems with higher renewable energy penetration levels and (2) optimize the size of the BESS on each side of the critical line to achieve the best VT performance at the least cost.
2307.16696
Large Language Models for Education: Grading Open-Ended Questions Using ChatGPT
As a way of addressing increasingly sophisticated problems, software professionals face the constant challenge of seeking improvement. However, for these individuals to enhance their skills, their process of studying and training must involve feedback that is both immediate and accurate. In the context of software companies, where the scale of professionals undergoing training is large, but the number of qualified professionals available for providing corrections is small, delivering effective feedback becomes even more challenging. To circumvent this challenge, this work presents an exploration of using Large Language Models (LLMs) to support the correction process of open-ended questions in technical training. In this study, we utilized ChatGPT to correct open-ended questions answered by 42 industry professionals on two topics. Evaluating the corrections and feedback provided by ChatGPT, we observed that it is capable of identifying semantic details in responses that other metrics cannot observe. Furthermore, we noticed that, in general, subject matter experts tended to agree with the corrections and feedback given by ChatGPT.
Gustavo Pinto, Isadora Cardoso-Pereira, Danilo Monteiro Ribeiro, Danilo Lucena, Alberto de Souza, Kiev Gama
2023-07-31T14:12:06Z
http://arxiv.org/abs/2307.16696v2
# Large Language Models for Education: ###### Abstract. As a way of addressing increasingly sophisticated problems, software professionals face the constant challenge of seeking improvement. However, for these individuals to enhance their skills, their process of studying and training must involve feedback that is both immediate and accurate. In the context of software companies, where the scale of professionals undergoing training is large, but the number of qualified professionals available for providing corrections is small, delivering effective feedback becomes even more challenging. To circumvent this challenge, this work presents an exploration of using Large Language Models (LLMs) to support the correction process of open-ended questions in technical training. In this study, we utilized ChatGPT to correct open-ended questions answered by 42 industry professionals on two topics. Evaluating the corrections and feedback provided by ChatGPT, we observed that it is capable of identifying semantic details in responses that other metrics cannot observe. Furthermore, we noticed that, in general, subject matter experts tended to agree with the corrections and feedback given by ChatGPT. ChatGPT, Open-ended Questions, Automated grading 2023 2023 2023 2023 2023 2023 ChatGPT, Open-ended Questions, Automated grading ## 1. Introduction The software industry regularly presents challenges to development teams. Companies and teams continually strive to improve operational efficiency while reducing costs and maintaining or increasing productivity. In this context, software developers must continuously enhance their skills to remain relevant in their careers [2]. The process of studying and training1 is integral to developers' work routine, supporting their continuous improvement and playing a crucial role in enhancing technical skills. Footnote 1: For the context of this work, we use the terms “training” and “studying” interchangably. During the training process, problem-solving activities or exercises are essential for two main reasons. Firstly, they help solidify and self-assess the understanding of new concepts. Secondly, they indicate whether the individual's current knowledge is sufficient for performing tasks [16, 22, 24]. The feedback that developers receive regarding their exercises is just as important as solving them [23, 26]. Good feedback is crucial for the learning process of software developers, as it provides insights into their progress and areas for improvement. However, offering effective feedback can be challenging, requiring broad subject knowledge, availability, and the ability to identify knowledge gaps [5]. Moreover, providing feedback promptly adds to the complexity of this task [23]. In the context of a software producing organization, it is necessary for the individuals who provide feedback to find time to do that. Thus, they need to allocate hours they usually dedicate for tasks related to software development and allocate part of their schedule for this grading and feedback activity, which reduce their operational efficiency. This can have a significant impact, especially for companies that invest in their employees' learning. This is specifically the scenario experienced at Zup Innovation. Due to Zup's size, with several thousand employees, of which approximately 90% work in engineering teams, theoretical exams predominantly consist of closed-ended questions, either single-choice or multiple-choice. 
However, the usage of closed-ended questions presents a significant limitation in the evaluation process, given their limited _feedback_. Aiming to accelerate the grading and feedback process, this work presents an investigation into the use of ChatGPT as a supplementary evaluation method. Our goal is to understand whether ChatGPT could be considered as a mechanism for grading open-ended questions in the training process employed by Zup. To accomplish this, the study began by creating a set of open-ended questions on two topics of interest to Zup: (1) web application caching and (2) stress and performance testing. We asked two experts in this field to answer three questions each. With the responses from these experts, we conducted a pilot experiment involving six developers from the engineering team at Zup. In this pilot, we administered an online questionnaire and asked the developers to respond to the six open-ended questions. We used ChatGPT to correct the pilot questions and evaluated the quality of the prompts for answer grading. After receiving feedback from the pilot participants, we randomly invited 100 more people from the engineering team, of whom 50 had completed at least one technical training, while the other group had started but not completed any training. Of these, 16 and 24 individuals from each group, respectively, completed the questionnaire (N=40). In this work, we bring the following contributions: * We explored the use of ChatGPT in the domain of open-ended question grading and feedback. * We assessed the responses of experts (2 people) and non-experts (40 people) using ChatGPT. * We compared the grading provided by ChatGPT using a typical metric, identifying and explaining any potential inconsistencies. ## 2. Why ChatGPT? The recent technological advancements, such as the significant improvement in computational power and the enormous amount of data stored in structured and unstructured formats, have greatly benefited the field of Machine Learning (_ML_), especially Deep Learning (_DL_). DL, in particular, has revolutionized various domains of knowledge, such as image and speech recognition (Beng et al., 2017). The early DL models for long sequences (such as texts) processed inputs sequentially (Krizhevsky et al., 2014), which required larger models, more time, and computational power for training. Additionally, these models struggled to relate different parts of the sequences, resulting in limitations in learning (Krizhevsky et al., 2014). However, with the introduction of _Transformers_ models, parallel processing of long-term sequences became possible, enhancing the learning capacity and accelerating the process (Krizhevsky et al., 2014). These improvements culminated in _Large Language Models_ (LLMs), which provided significant advances in text processing and natural language understanding (Zhu et al., 2017). **Transformers Models.** Transformers models are a widely used deep neural network architecture in Natural Language Processing (_NLP_). Unlike previous neural network models that processed elements sequentially, Transformers introduced the attention mechanism (Krizhevsky et al., 2014). The attention mechanism allows the model to assign different weights to specific parts of the input during training. Instead of solely relying on sequential order, the model can focus on parts of the sequence that are more relevant to the task at hand, capturing complex and long-range dependencies. 
For instance, when given a paragraph as input, while previous models "read" the paragraph sequentially, the attention mechanism allows Transformers to assign higher importance to specific words. This way, the model can capture long-range relationships between words, even when there are several words between them. Moreover, the attention mechanism enables Transformers to process data in parallel, dividing the input sequence into multiple parts and performing attention operations independently. This leads to efficient and simultaneous processing of information, resulting in smaller models compared to traditional ones. This characteristic is one of the main reasons why these models can efficiently and scalably process and generate text on a large scale (Krizhevsky et al., 2014; Zhu et al., 2017). **Large Language Models.** LLMs are Transformers models with millions, billions, or even more parameters, extensively trained on large textual datasets, such as libraries of books, web articles, and conversations on social networks. Through this diverse training data, these models acquire a deep understanding of the structure, grammar, and semantic context of human language. Previously, training and utilizing these models for specific tasks required considerable computational resources and advanced technical knowledge to develop, deploy, and enhance the models. However, platforms like ChatGPT and similar ones have democratized access to LLMs, enabling individuals without ML expertise to interact intuitively with these models through chat interfaces, such as virtual assistants. This mode of interaction has brought about an alternative learning paradigm in the field of ML: prompt-based learning. Instead of refining the model traditionally, through providing more data and adjusting parameters, tasks are reformulated as textual prompts. An appropriate prompt can shape the model's behavior, directing the desired output without the need for conventional fine-tuning (Krizhevsky et al., 2014). As a result, LLMs demonstrate impressive ability to perform complex tasks, even when trained with few examples (few-shot learning (Beng et al., 2017)) or no examples (zero-shot learning (Chen et al., 2018)). This qualifies them for a wide range of NLP activities, including automatic translation, text generation, and document summarization (Zhu et al., 2017). **ChatGPT in Education.** ChatGPT is a specific implementation of an LLM based on the GPT (_Generative Pre-trained Transformer_) architecture (Beng et al., 2017). This tool has the potential to promote improvements in learning and teaching experiences at various levels, from school education to university and professional development. One of the advantages of ChatGPT, along with other LLMs, is the ability to offer personalized learning, taking into account the preferences, abilities, and individual needs of each student. This personalization can contribute to making the learning experience more effective and engaging (Chen et al., 2018). ## 3. Research Questions This work aims to provide answers to two research questions: 1. [leftmargin=*,noitemsep,topsep=0pt] 2. Considering the responses of experts, what is the quality of the grading provided by ChatGPT? 3. Considering the responses of non-experts, what is the quality of the grading provided by ChatGPT? To answer **RQ1**, we asked experts to respond to six open-ended questions. These questions were corrected using ChatGPT. 
For comparison purposes, we also asked ChatGPT to answer the same questions; and we also corrected the questions answered by ChatGPT with ChatGPT. After refining the responses to the six questions, to answer **RQ2**, we asked a larger group of developers, but without expertise in that specific domain, to respond to the questions. ## 4 Method: Questionnaire In this section, we will present in detail the process of creating, testing, and implementing the questionnaire. We will discuss the steps involved in formulating the questions, as well as the methods used to ensure the validity and reliability of the results. Additionally, we will describe the process of testing the questionnaire with a pilot group and how the results of this test were used to improve the final experiment. ### Questionnaire design To create the questionnaire, we selected two topics for which we had experts available in the Education team at Zup: (1) caching and (2) stress and performance testing. This decision was made so that we could generate prompts that compared the responses of the experts with the responses of the participants in the experiment. This way, we could reduce confirmation bias in the similarity calculation performed by ChatGPT [27]. For the selection of questions, we chose three questions for each topic. We opted for a small number of open-ended questions to avoid overburdening the research participants, as they were using their work hours to answer the questions. The questions were organized in ascending order of difficulty, starting with the easiest and ending with the most difficult. The chosen questions (Q_n_), along with the responses of the experts (R_n_), were: _Caching_ 1. Explain in your own words what you understand about _caching_ in REST applications. 2. "_Caching_ is a technique that allows storing frequently accessed data in memory to reduce the complexity cost of querying them. Nowadays, there are several ways to apply _caching_ in REST APIs, starting on the server-side through techniques of local and distributed caching. It is also possible to enable caching on the client-side, where through the use of HTTP protocol headers, policies are defined to govern the _caching_ behavior, such as using versions, expiration time, and also specifying which clients can store the data, referring to the end user's browser and/or CDNs." 3. Explain briefly how the two types of _caching_ work: client-side caching and server-side caching. 4. "_Server-side caching can be served locally, causing a portion of the server's memory heap to be used for storing data that has high network or computation cost and is frequently accessed. This strategy should be used in scenarios where the system architecture is monolithic or there is a restriction that only one instance of the system is used. Another way to provide caching at the application layer is through the use of distributed cache providers, favoring a global point of access that is shared among instances, facilitating data synchronization with the source of truth._" 5. Explain cache invalidation in REST applications and present a way to address it. 6. "_Cache invalidation is an operation that aims to keep the caching lean and consistent. To provide these guarantees, invalidation policies must be used. Some examples are Least Recently Used (LRU), which aims to remove from the cache the data that has not been accessed recently. Another example of a policy is Least Frequently Used (LFU), which aims to remove from the cache the data that is least accessed. 
There are also providers that work with expiration policies, where the data enters with a duration time, and upon reaching a certain time, they are automatically removed from the cache._" _Stress and performance testing_ 1. Explain the concept of load and stress testing. 2. "_Load testing means verifying how an application or system behaves under an expected workload, which can be small, moderate, or large. Additionally, this workload is applied for a certain interval of time, such as minutes or hours, to validate the system's stability and detect possible problems in resource usage, such as memory, CPU, disk, or connections to a database, for example. It is important to understand that load testing does not exceed the expected or designed capacity for an application or system. On the other hand, stress testing is related to verifying how an application or system behaves when subjected to a very high and intense workload, usually a workload higher than expected or specified in the requirements. The idea here is to subject the application beyond its designed capacity in order to detect problems or bottlenecks in resource or internal component usage. The goal is to discover how the system behaves under extreme pressure, such as traffic spikes, excessive resource usage, hardware failures, or abnormal conditions._" 3. What are the main metrics used to evaluate the performance of an application during a load test? 4. "_Generally, for a web application, including REST APIs, the main metrics we collect and evaluate are: response time, throughput (number of operations per unit of time), and error rate. There is a well-known method called the 'RED Method,' which basically recommends evaluating these 3 metrics for request-based services. For non-request-based applications, such as batch processing or streaming services, other metrics like CPU, memory, or network are also usually collected and evaluated._" 5. What are the best practices for conducting load tests on applications that expose REST APIs? 6. "_There are several important practices when conducting load tests, such as defining the use cases to be validated and the expectations of the expected workload. It is also important to define which metrics are relevant for the test, as they will help identify performance issues and bottlenecks (here, the 'RED Method' can be adopted). Another point is to run load tests against an application in production or a similar production-like environment, such as a staging environment; this way, we will obtain numbers close to the reality of the system. And last but not least, applying the tests with a realistic data set whenever possible._" The questionnaire used in the study was implemented through the TypeForm platform. It was designed to be anonymous, and all questions were mandatory. However, due to the implementation of privacy policies at Zup, the questionnaire did not include demographic questions. Instead, it contained only six open-ended questions. Before each group of questions, we presented the purpose of the study and introduced the concepts of caching and stress and performance testing. ### Questionnaire Pilot Before making the survey available and collecting responses from a larger population, we first conducted a pilot with two objectives: 1) to evaluate the understanding of the presented questions, and 2) to assess the quality of the prompts used for question grading in ChatGPT. 
To conduct the pilot, we sent an invitation message to a virtual space with developers from Zup who discuss topics related to caching and to stress and performance testing2. In the message, we introduced the objectives of the work along with the link to the online form. In total, six participants responded to the questions. According to TypeForm statistics, 65 people opened the form, while 49 started to answer it. Out of those, only six completed the responses to all questions. The average time for completing the activity was 38 minutes. With the first set of responses, we initiated the process of engineering the prompt for grading (further details in Section 4.4). After the gradings were completed, we shared the responses together with the evaluation from ChatGPT with the same virtual space where we invited the individuals, as a way to request feedback on the evaluation performed by ChatGPT. Two people provided feedback, which helped us improve the provided prompt (e.g., "All my responses followed the same feedback pattern. It presents a correct and general approach on the topic, mentioning some important points. However, **it could provide more details and real-world examples**. Sometimes, it asks for more technical details, and in others, it requests clear explanations for all audiences."). Footnote 2: At Zup, the use of such spaces for forming discussion groups on relevant topics is common. ### Actual Questionnaire After conducting the pilot and refining the prompts based on participants' feedback, we sent the questionnaire to a larger population of developers at Zup. Initially, we selected 100 individuals who had already participated in at least one of the training sessions offered by the Education team. We made this decision to target the questions towards developers who are more accustomed to study activities during their work environment. Out of the 100 participants, we selected 50 who had successfully completed at least one training, meaning they achieved the minimum required average to pass the course; similarly, we selected 50 who had not completed any training. Subsequently, we individually invited each participant through a message sent via the private enterprise chat platform. Similar to the pilot, in the message we expressed our interest and the study's objective, along with the link to the online questionnaire. As in the pilot, the questionnaire was anonymous and no questions regarding demographic information were asked. Over the course of a week, we sent reminder messages to those who had not indicated whether they had responded to the questionnaire or not. According to TypeForm statistics, 90 individuals (53 from the group who completed at leas one training and 37 from the group who did not) opened the form, while 76 (43+33) started to respond to it. In the end, we obtained 40 (16 +24) responses. The average time for completing the activity was 42 minutes. ### Prompt Engineering The term _prompt_ refers to a set of instructions provided to a LLM (Language Model) as a way to customize or refine its capabilities. The prompt defines the context of the conversation, informs the LLM about what information is relevant, and specifies the type of output expected. The quality of the output generated by an LLM is directly related to the quality of the prompt provided by the user. Prompt engineering [14], on the other hand, refers to the process of adjusting or improving the responses generated by LLMs [28]. In other words, prompt engineering involves training LLMs via prompts. 
The process of prompt engineering involves modifying the prompts to obtain more desirable results. This fine-tuning technique optimizes human-computer interaction by refining text generation based on the users' needs and expectations. #### 4.4.1 Prompt Engineering Phases We describe our prompt engineering steps below. **Prompt V1.** In the first prompt version, we used ChatGPT as an oracle for open-ended questions. Additionally, we included technical aspects that, although relevant to the company's context, were not necessary for answering the open-ended questions. The initial prompt is as follows: Suppose you are an expert in creating web applications using REST APIs. In addition to being an expert, you grade exams on this topic. Consider the following question and answer: Q: Explain in your own words what you understand about caching in REST applications? R: {} What score would you give to this answer on a scale from 0 to 10? Return the answer in a JSON format, with a variable'score' for your rating, and another variable 'explanation' for the justification of this score. Your explanation should have at least 20 words. In the explanation, identify any knowledge gaps and explain how to minimize them using real-world examples. If no response is provided, indicate 'No response was given,' and assign a score of zero. In the prompt above, we illustrated a question about _caching_. We used the approach of creating a prompt for each question, only changing the context (first paragraph) and the question itself. The response is provided to the prompt in a way that can be parameterized (indicated by {} in the prompt). However, when evaluating this first prompt on the pilot data, we observed that the scores given by the model were consistently high, even for simple answers, indicating a possible model miscalibration. **Prompt V2.** As an attempt to increase the model's evaluation rigor, we removed the first sentence from the prompt and replaced it with the following instruction: "I need you to grade the exams using the highest grading standard you can." The rest of the prompt remained unchanged. Furthermore, the first prompt was designed to be specific to the context of programming in Java. However, it was observed that the questions were theoretical and did not require knowledge of a specific programming language. Therefore, this information was also removed. With these modifications, it was possible to observe an increase in the evaluation rigor of the questions. **Prompt V3.** Although the V2 version of the prompt increased the rigor of question evaluation, it still has an important limitation: relying solely on the knowledge base and interpretation of ChatGPT, which is known to provide incorrect information (Kumar et al., 2017). In order to minimize this limitation, we designed a new prompt by providing the expert's answer and then we asked to compare the student's answer with the expert's answer. **Prompt V4.** After the first round of pilot evaluation, the scores with the explanations provided by ChatGPT were shared with the pilot participants. Among the feedback received, it was suggested that the explanation would be more illustrative if it included real-world examples. Therefore, we added a final instruction at the end of the prompt, indicating that "Whenever possible, the explanation should include real-world examples." 
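As an illustration of how such a versioned prompt can be applied programmatically, the sketch below fills a per-question template with a student's answer and requests a grade from the GPT-4 API using the settings reported later in Section 4.4.2 (temperature 0.6, at most 100 tokens). The helper names and the pre-1.0 `openai` client style are assumptions for illustration; the paper does not show the authors' actual code.

```python
import json
import openai  # assumes the pre-1.0 openai SDK that was current at the time of the study

openai.api_key = "YOUR_API_KEY"  # placeholder

# FINAL_PROMPT stands for the V4 prompt text shown below (Section 4.4.1),
# with "{}" left as the placeholder for the student's answer; one template
# is kept per question, changing only the question and the expert answer.
FINAL_PROMPT = "..."

def grade_answer(student_answer: str) -> dict:
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": FINAL_PROMPT.format(student_answer)}],
        max_tokens=100,   # settings reported in Section 4.4.2
        temperature=0.6,
    )
    # The prompt instructs the model to return a JSON object with 'grade' and 'explanation'.
    return json.loads(response["choices"][0]["message"]["content"])
```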
The final prompt generated and used for the correction of the remaining open-ended questions can be seen below:

```
I need you to grade the exams using the highest grading standards possible. Consider the following question and the answer provided by an expert:

Q: Explain in your own words what you understand about caching in REST applications?

R Expert: Caching is a technique that allows storing data frequently accessed in memory to reduce the complexity cost of querying it. Nowadays, there are various ways to apply caching in REST APIs, starting on the server-side through local and distributed cache techniques. It is also possible to enable client-side caching, where HTTP protocol headers define policies that govern caching behavior, such as versioning, expiration time, and defining which clients can store the data, referring to the end-user's browser and/or CDNs.

Now, consider the student's response provided below.

R Student: {}

What grade would you give to the student's response, considering the expert's answer, on a scale from 0 to 10? Return the response in a JSON format, with a variable 'grade', containing your grade, and another variable 'explanation' with the explanation for this grade. Your explanation must have at least 20 words. In the explanation, identify knowledge gaps and explain how to minimize these gaps using real-world examples. If no response is provided, inform 'No response provided', and give a grade of zero.
```

#### 4.4.2. Prompts Execution

In this article, we utilized ChatGPT through its API3. We used the GPT-4 model with the following configurations: a maximum of 100 tokens and a temperature of 0.6; the temperature is a hyperparameter that affects the probability of token distributions in the model. It is possible to adjust the temperature to control the diversity and creativity of the generated texts. The temperature value ranges from 0 to 1. While a high temperature (e.g., 0.7+) leads to more diverse results, a low temperature (e.g., 0.2) tends to produce more deterministic results.

Footnote 3: [https://platform.openai.com/docs/api-reference](https://platform.openai.com/docs/api-reference)

The value of 0.6 was chosen to obtain slightly different corrections for similar questions. The maximum token value was set to make the GPT's corrections more concise and direct.

### Quality Metrics

Like other LLMs, ChatGPT has a well-known limitation regarding false or distorted perceptions of information generated by the model itself. These "hallucinations" (a technical term used to describe this limitation) can cause the LLM to generate answers that appear correct when, in fact, they are not. To minimize this limitation and complement the grading provided by ChatGPT, we also calculated the cosine similarity metric between the response provided by the expert and the response provided by the study participant. The cosine similarity metric is widely used to measure similarity between vectors in multidimensional spaces (Kumar et al., 2017; Kumar et al., 2017; Kumar et al., 2017). By calculating the cosine similarity between the participants' responses and a reference standard (expert's response), we can obtain an objective measure of how close the responses are to the desired pattern. To calculate this metric, we used the implementation available in the sbert library4. In general, to calculate cosine similarity, we first convert the responses into numerical vectors. Then, we multiply these vectors with each other and sum the results.
Finally, we divide the multiplication result by the product of the sizes of the vectors. The final value obtained ranges from -1 to 1, where 1 indicates that the responses are very similar, while -1 indicates that the responses are very different. In summary, this metric measures the projection of the participant's response onto the direction of the expert's response, providing a measure of their similarity. Footnote 4: [https://www.sbert.net/](https://www.sbert.net/) ## 5. Results We organize the results in terms of the research questions. First, to answer **RQ1**, we present an analysis of the correctness of experts' responses (Section 5.1), comparing them to ChatGPT's responses. Next, to address **RQ2**, we present an analysis of the correctness of students' responses (Section 5.2). ### Grading experts' responses We started by evaluating the responses provided by experts (as presented in Section 4) using ChatGPT. To conduct this evaluation, we asked ChatGPT to grade the responses given by the experts. To do this, we had to edit the _prompt_ to remove the lines indicating the expert's evaluation. As a result, the expert's response was corrected as if it were a student's answer. The scores assigned by ChatGPT to the experts' responses are presented in the Experts' column of Table 1. After collecting and grading the experts' responses with ChatGPT, we asked ChatGPT to answer the six open questions. To do this, we used the following prompt: "Using the highest grading scale you can, provide an answer of up to 100 words for the following question: QN". The limit of 100 words was established since the average word count in the experts' responses was 114 words. These responses provided by ChatGPT were then also corrected by ChatGPT itself. The result of this correction is available in the "ChatGPT" column of Table 1. **Grading of experts' responses by ChatGPT.** As one could see, the answers to the first group of questions about caching (Q1, Q2, and Q3) consistently received lower scores compared to the second group. As highlighted in Section 4, the responses related to caching were provided by one expert, while the responses concerning stress testing and performance were provided by another expert. Upon analyzing the feedback provided by ChatGPT, we observed comments that might have influenced the received scores. For instance, when considering the first set of answers about caching, ChatGPT indicated that the responses presented basic concepts but did not adequately explain or provide examples. In Q2, which received a score of \(7\) and asked about the difference between client-side and server-side caching, the expert explained server-side caching but did not mention the second approach, client-side caching. ChatGPT identified this gap and highlighted it in its correction. "_The student explained server-side caching well, including monolithic and distributed strategies, but did not mention client-side caching. In client-side caching, data is stored on the user's device, such as browsers and APIs, speeding up page loading. For example, saving images and CSS from the site of the last visit._" Another interesting example to highlight is the explanation provided for Q5, which inquired about the main load testing metrics. In this explanation, ChatGPT indicated that: "_The student's response correctly addressed the response time, throughput, and error rate metrics, mentioned the RED Method, and cited examples of other situations. 
However, it could have explicitly explained the relationship of these metrics with the user experience and mentioned more specific metrics for other applications, such as latency indexes for real-time systems_". In other words, ChatGPT correctly identified the items provided in the responses but did not recognize a potential relationship among these items--even though this relationship was not questioned in the prompt. **Perception of ChatGPT's grading by the experts.** Next, we sent the corrections provided by ChatGPT to the experts who wrote the responses. When we inquired whether the experts agreed with the evaluations, they generally concurred with the assessments. In particular, regarding question Q3, where there was a higher disagreement between the experts' and ChatGPT's responses, the expert who provided the answer commented that: "_[in this question] he was precise and noticed that my response was incomplete. I believe the score was higher than it should have been. I would give myself a 5._" However, for Q4, a point of disagreement was observed as the expert mentioned that "_This explanation doesn't make much sense since what she says is missing from the response is actually there._" referring to the difference between load testing and performance testing. **Corrections of ChatGPT's responses by ChatGPT.** The gradings provided for the expert responses questioned whether ChatGPT could provide even more elaborate answers. The column labeled "ChatGPT" shows the result of this investigation. As evident, in general, the responses provided by ChatGPT received higher scores when compared to the responses provided by the experts. Particularly, the difference was greater in the group of questions about _caching_: 2 points of difference for question Q2 and 1.5 points of difference for question Q3. For example, ChatGPT's response to Q2 addressed both server-side and client-side caching concepts: "_Client-side and server-side caching are strategies to temporarily store data to improve performance and loading speed. In client-side caching, browsers store information locally on the user's device, such as HTML, CSS, and JavaScript files. This reduces the need to repeatedly retrieve resources from the server, saving loading time and bandwidth. On the other hand, server-side caching involves storing information, such as request responses, directly on the server. In these cases, if the information is already cached, server responses are faster, optimizing the user experience._" However, for the second group of questions about stress and performance testing, a smaller variation was observed between the expert's grading and ChatGPT's grading: only 0.5 points for Q4 and also 0.5 points for Q6. **Comparison of expert and ChatGPT gradings.** Although the gradings provided by ChatGPT for the expert responses offer an initial means of comparison, as discussed in Section 4.5, relying solely on ChatGPT's corrections may be insufficient, as it can exhibit hallucinations, meaning it provides false or distorted insights generated by the model itself [7]. To complement ChatGPT's corrections, we employ the Cosine Similarity metric, presented in Table 1, under the column labeled "Cos Sim." When closer to 1, there is a higher likelihood that the responses provided by the experts and ChatGPT are similar. As evident, in general, the responses exhibited a high degree of similarity (\(>\)0.7). 
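For reference, cosine-similarity values like those in the "Cos Sim" column can be reproduced with a few lines of code on top of the sentence-transformers (sbert) implementation mentioned in Section 4.5. The sentence-embedding model name and the example answers below are assumptions for illustration; the paper does not specify which model was used.

```python
from sentence_transformers import SentenceTransformer, util

# Model choice is illustrative; any sentence-embedding model from sbert could be used here.
model = SentenceTransformer("all-MiniLM-L6-v2")

expert_answer = "Caching is a technique that allows storing frequently accessed data in memory..."
student_answer = "A cache keeps popular data close to the application to avoid repeated queries..."

embeddings = model.encode([expert_answer, student_answer])
similarity = util.cos_sim(embeddings[0], embeddings[1]).item()  # value in [-1, 1]
print(f"cosine similarity: {similarity:.4f}")
```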
However, only response Q5 showed a lower similarity value, even though both responses received a score of 9, according to ChatGPT's correction. Upon evaluating these two responses, it was observed that the expert and ChatGPT approached slightly different topics, although both were correct. For instance, in Q5, the expert commented: "_Usually, in a web application, including REST APIs, the main metrics we collect and evaluate are: response time, throughput (number of operations per unit of time), and error rate. In fact, there is a well-known method called the 'RED Method,' which basically recommends evaluating these 3 metrics for request-based services and applications._" On the other hand, ChatGPT's response was more generic and did not mention the existence of the 'RED Method': "_The main metrics used to evaluate the performance of applications during a load test include response time, throughput, the number of simultaneous users, resource utilization (CPU, memory, disk, and network), and system errors or failures. These metrics help identify bottlenecks, determine scalability, and ensure system stability under different load conditions._"

| Question | Experts: # Words | Experts: Grade | ChatGPT: # Words | ChatGPT: Grade | Cos Sim |
|---|---|---|---|---|---|
| Q1 | 107 | 8 | 94 | 9 | 0.8410 |
| Q2 | 109 | 7 | 105 | 9 | 0.7338 |
| Q3 | 94 | 7 | 82 | 8.5 | 0.7492 |
| Q4 | 178 | 8.5 | 93 | 9 | 0.8546 |
| Q5 | 88 | 9 | 61 | 9 | 0.5429 |
| Q6 | 110 | 8.5 | 93 | 9 | 0.8046 |

Table 1. Evaluation of experts' and ChatGPT's responses, graded by ChatGPT. The column "# Words" presents the size of the response provided by the expert (or ChatGPT), measured by the number of words. The column "Cos Sim" presents the result of the cosine similarity metric.

### Grading students' responses

After the individual evaluation of expert responses, we proceeded with the evaluation of the study participants' responses. As highlighted in Section 4.4.1, in the final version of the prompt, we instructed ChatGPT to compare the student's response with the expert's response. This approach aimed to minimize known hallucination biases often observed in LLMs like ChatGPT [7, 19]. Furthermore, based on the feedback from the experts, the initially provided responses were adjusted by the authors of this article to incorporate the insights highlighted by ChatGPT. The following Table 2 provides a summary of the evaluations from the study participants.

**Difference in scores between participant groups.** Although the participants of the survey did not undergo any training on caching or stress and load testing, our initial hypothesis was that participants who had completed at least one technical training would have less difficulty answering the questions (indicated by higher scores) compared to participants who had not completed any technical training. In particular, for questions Q1 and Q2, it was possible to observe that participants who completed at least one training scored at least one point higher than those who did not complete any training (for Q1, median of 6 for the completed group, while median of 5 for the non-completed group; for Q2, the completed group scored 7 at the median, while the non-completed group scored 6). However, for questions Q4 and Q5, the difference between the groups was small (around 0.29 points in Q4 and around 0.1 points in Q5, on average).
On the other hand, for questions Q3 and Q6, participants who did not complete any training received higher scores compared to those who completed a training (for Q3, median of 5 points for the completed group and 7 points for the non-completed group, while for Q6, the average was again 5 points for the completed group and 7 points for the non-completed group). **Reliability in ChatGPT's corrections.** As a way to complement the corrections provided by ChatGPT, we calculated the cosine similarity metric, which, in summary, evaluates the degree of proximity between two sentences in a vector space (further details in Section 4.5). Figure 1 presents a comparison between ChatGPT's correction (normalized to range from 0 to 1) and the result of the cosine similarity metric calculation. In general, it can be observed that, for most questions, the lines follow a similar pattern: when one line tends to go up or down, the other line tends to follow suit. The main difference in this trend can be observed in Q5 (Figure 1-(e)). In this particular question, on average, ChatGPT's scores had a mean of 5.25 points, while the mean of the similarity metric was 0.38. Figure 2 presents the distribution of corrections using ChatGPT and the cosine similarity metric. As observable, although the distribution of some responses has slightly different shapes, the values of mean and median remain close, with Q5 being the exception. **Divergences observed in ChatGPT's corrections.** To better understand the divergence observed in Q5, we investigated which responses had the greatest discrepancy in scores. We found a total of 10 responses that had more than 3 points of divergence. Upon analyzing these responses, we noticed that ChatGPT was able to detect details in the answers that potentially influenced the generated scores. For example, one participant answered Q5 as follows: "_response time, quantity of data transferred, and success rate (throughput) and error rate_". However, ChatGPT noticed the confusion the participant made between success rate and throughput: "_The student's response addresses the main metrics, with some inaccuracies. He mentions response time, throughput, and error rate but confuses the quantity of data transferred with throughput. To clarify, throughput generally refers to the number of operations per unit of time and not the amount of data transferred. [...]_". In other words, although the participant used the correct terms, the semantics of the terms were incorrect. This may have misled the cosine similarity metric, even though ChatGPT was able to detect and correct it. Other cases of divergence occurred when the participant answered the question in a limited or incomplete manner. In these cases, again, the response contained certain terms used in the expert's response but lacked details or elaboration. ## 6. Discussion: LLMs for Education The popularization of AI assistants could significantly impact traditional education. For example, students can use AI assistants to get answers to specific questions, clarify concepts or receive real-time feedback on their work. They are also auxiliary equity tools, considering the individual journeys of students. Neurodivergent people often cannot finish reading long texts, or they can at a high emotional cost and cognitive overload, something many neurotypical people do not imagine exists. ChatGPT and the like can help extract key points from readings, minimizing such effort. 
Teachers can benefit from the aid of these tools in creating teaching materials and personalizing instruction to meet individual student needs. However, AI assistants should not replace the key role of teachers. Education goes beyond the transmission of information and involves social interactions, the development of socio-emotional skills and the ability to apply knowledge in real contexts. These aspects are fundamental and cannot be replaced by technology.

Finally, it is necessary to consider the ethical and privacy issues related to the use of AI assistants in education. It is important to ensure that student data is protected and that decisions related to education are not based solely on algorithms, but rather on a balanced approach that takes into account the expertise of teachers and the well-being of students.

| Question | Group | Avg. | Median | Std. Dev. |
|---|---|---|---|---|
| Q1 | Completed | 5.50 | 6 | 1.71 |
| Q1 | Non-Completed | 4.96 | 5 | 0.92 |
| Q2 | Completed | 7.29 | 7 | 0.47 |
| Q2 | Non-Completed | 5.65 | 6 | 0.69 |
| Q3 | Completed | 5.21 | 5 | 1.86 |
| Q3 | Non-Completed | 6.04 | 7 | 1.94 |
| Q4 | Completed | 5.29 | 6 | 2.26 |
| Q4 | Non-Completed | 5.00 | 5 | 2.72 |
| Q5 | Completed | 5.21 | 6 | 2.82 |
| Q5 | Non-Completed | 5.31 | 6 | 2.66 |
| Q6 | Completed | 5.21 | 5 | 1.86 |
| Q6 | Non-Completed | 6.04 | 7 | 1.94 |

Table 2. Assessments of study participants' responses.

Figure 1. Comparison of responses provided by ChatGPT and those obtained by calculating the cosine similarity metric. The red line indicates the question grade value indicated by ChatGPT, while the blue line indicates the resulting metric value.

Figure 2. Distribution of ChatGPT corrections and the cosine similarity metric.

## 7. Limitations

As any empirical study, this work also has several limitations and threats to validity. First, to conduct the study, we used a population of 40 developers. Although they are professionals in the field, we can hardly generalize their responses to other groups of developers from different countries with different experiences.

Another threat is related to the feedback provided by ChatGPT. In some responses, we noticed that ChatGPT seems to have been too strict. For example, the question "What are the main metrics used to evaluate the performance of an application during a load test?" does not address potential relationships among these metrics. However, we frequently observed responses where ChatGPT provided feedback indicating this absence. Due to its proprietary implementation, the authors are not aware of the reasons why ChatGPT did not stick to answering what was asked.

There is also a limitation related to the metric we used to compare ChatGPT's responses. Although there are other well-known metrics like BLEU (Krishnan et al., 2017) and METEOR (Bahdan et al., 2017), we decided not to report them in this work due to inconsistent results we obtained. For example, a significant number of answers were evaluated as zero, which indicates that the student's answer does not match the reference answer. Moreover, BLEU is a metric designed to evaluate translations of texts, and we understand that the scenario of the article is not necessarily the most suitable for its use. Other studies have observed that using these metrics is not appropriate for educational purposes as they do not correlate with human evaluation (Krishnan et al., 2017).
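The sketch below, with invented example answers, illustrates why BLEU tends to collapse to zero for short free-text responses that share no higher-order n-grams with the reference, which is consistent with the many zero scores observed above.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Invented example: a reference answer and a short student answer about caching.
reference = "caching stores frequently accessed data in memory to reduce query cost".split()
candidate = "the cache keeps popular data close to the application".split()

# With the default 4-gram weights and no smoothing, the lack of shared bigrams,
# trigrams and 4-grams drives the score to (essentially) zero even for a sensible answer.
print(sentence_bleu([reference], candidate))
print(sentence_bleu([reference], candidate,
                    smoothing_function=SmoothingFunction().method1))
```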
## 8. Related Work

Although very recent, there is a growing number of studies using ChatGPT for educational purposes. For instance, the study conducted by Moore _et al._ (Moore et al., 2017) explored the use of GPT-3 to assess whether student-generated questions can be useful in the learning process. The results suggest that the model can be a powerful tool to assist teachers in pedagogical assessments, providing an innovative and effective approach to evaluate students' knowledge.

Zhu, Liu, and Lee (Zhu et al., 2017) investigated the impact of automated assessment technologies on the review of scientific arguments presented by students. The results revealed that automated reviews were positively correlated with an increase in students' grades. Furthermore, when the automatic reviews were contextualized with the students' responses, they proved to be even more effective in aiding learning by providing personalized and specific feedback.

Additionally, Bernius, Krusche, and Bruegge (Bernius et al., 2017) evaluated the use of LLMs to generate feedback for open-ended questions in courses with a large number of students. The results demonstrated a decrease of up to 85% in the effort required by teachers for evaluating these questions. Furthermore, 92% of the evaluations generated by LLMs were considered of high quality by instructors. Moreover, the majority of students rated the quality of this feedback as equivalent to that provided by instructors.

These studies demonstrate how the use of ChatGPT and automated assessment technologies can have a positive impact on the learning process of students. These tools offer the opportunity to provide immediate, personalized, and detailed feedback, allowing students to improve their performance and understanding of concepts. However, none of these works addressed the use of tools like ChatGPT for the correction and feedback of open-ended questions in the field of software engineering, which is the objective of this study.

## 9. Conclusions

This study investigated the use of ChatGPT as a complementary strategy for correcting and providing feedback on open-ended questions. For this purpose, we recruited two experts and 40 developers to answer a set of six questions on two different topics: (1) caching and (2) stress and performance testing. The responses from these individuals were corrected by ChatGPT. Based on this data, several observations were made:

* In general, experts agreed with the corrections provided by ChatGPT. Among the six feedbacks given by ChatGPT, there was only one instance of disagreement from the expert;
* The cosine similarity metric is not always suitable to be used as a proxy for ChatGPT scores, as it loses contextual information that ChatGPT is able to identify.

### Future Work

For future work, we are interested in using grading rubrics annotated by experts with specific weights for certain items that deserve emphasis in the responses. This way, we can direct corrections to the most relevant response items. We also hope to expand our analysis to cover other LLMs, as well as different versions of ChatGPT. Moreover, we plan to explore other teaching materials, such as books, as our database for open questions (and their answers). This approach may further enrich the analysis and offer a more comprehensive view on the effectiveness and applicability of ChatGPT in education. Finally, we still want to understand to what extent ChatGPT recognizes its own answers and, thus, gives better grades to them.
### Artifacts Availability

The data analyzed in this study is available online at: [https://tinyurl.com/chatgpt-for-edu](https://tinyurl.com/chatgpt-for-edu).

## Acknowledgements

We thank all the Zuppers who answered the questionnaire and the reviewers who provided relevant suggestions for improvements. This work is partially supported by FAPESPA (#053/2021) and CNPq (#308623/2022-3).

## AI Tooling

Finally, although this work used ChatGPT as a subject under study, we did not use ChatGPT to generate text content for this paper. This paper was, however, initially submitted in Portuguese and later translated into English with the support of ChatGPT (using the prompt "The text below is written using the LateX markup language. Translate it to English, so it keeps the original LaTeX markup. Do not perform any other textual adjustments.").
2309.06129
LEyes: A Lightweight Framework for Deep Learning-Based Eye Tracking using Synthetic Eye Images
Deep learning has bolstered gaze estimation techniques, but real-world deployment has been impeded by inadequate training datasets. This problem is exacerbated by both hardware-induced variations in eye images and inherent biological differences across the recorded participants, leading to both feature and pixel-level variance that hinders the generalizability of models trained on specific datasets. While synthetic datasets can be a solution, their creation is both time and resource-intensive. To address this problem, we present a framework called Light Eyes or "LEyes" which, unlike conventional photorealistic methods, only models key image features required for video-based eye tracking using simple light distributions. LEyes facilitates easy configuration for training neural networks across diverse gaze-estimation tasks. We demonstrate that models trained using LEyes are consistently on-par or outperform other state-of-the-art algorithms in terms of pupil and CR localization across well-known datasets. In addition, a LEyes trained model outperforms the industry standard eye tracker using significantly more cost-effective hardware. Going forward, we are confident that LEyes will revolutionize synthetic data generation for gaze estimation models, and lead to significant improvements of the next generation video-based eye trackers.
Sean Anthony Byrne, Virmarie Maquiling, Marcus Nyström, Enkelejda Kasneci, Diederick C. Niehorster
2023-09-12T11:08:14Z
http://arxiv.org/abs/2309.06129v3
# LEyes: A Lightweight Framework for Deep Learning-Based Eye Tracking using Synthetic Eye Images ###### Abstract Deep learning has bolstered gaze estimation techniques, but real-world deployment has been impeded by inadequate training datasets. This problem is exacerbated by both hardware-induced variations in eye images and inherent biological differences across the recorded participants, leading to both feature and pixel-level variance that hinders the generalizability of models trained on specific datasets. While synthetic datasets can be a solution, their creation is both time and resource-intensive. To address this problem, we present a framework called Light Eyes or "LEyes" which, unlike conventional photorealistic methods, only models key image features required for video-based eye tracking using simple light distributions. LEyes facilitates easy configuration for training neural networks across diverse gaze-estimation tasks. We demonstrate that models trained using LEyes are consistently on-par or outperform other state-of-the-art algorithms in terms of pupil and CR localization across well-known datasets. In addition, a LEyes trained model outperforms the industry standard eye tracker using significantly more cost-effective hardware. Going forward, we are confident that LEyes will revolutionize synthetic data generation for gaze estimation models, and lead to significant improvements of the next generation video-based eye trackers. ## Main Gaze estimation refers to the computational techniques employed to ascertain an individual's point of visual focus. Commonly, algorithms in this field use eye images as inputs and yield an inferred gaze point or gaze direction, typically represented as x,y coordinates [1]. This field of research has recently witnessed a surge of new interest driven by technological advancements across various domains. Most notably, the widespread adoption of Virtual Reality (VR) head-sets [17, 52], the integration of eye tracking technology into smartphones and tablets [64, 34], and the continuous improvement of both wearable eye tracking devices [59] and augmented reality systems [55] have fueled this growing interest. Further, there has been a remarkable expansion of high-resolution eye tracking experiments recorded in controlled laboratory settings, where participants are often positioned with chin and forehead rests to enable precise eye movement to be captured. This research spans across diverse domains, such as healthcare [54], economics [35, 6, 3], neuroscience and cognitive science [19, 46], and education [62, 4], highlighting the extensive potential and broad applicability of eye tracking technology. A prevalent gaze estimation technique involves recording videos of eye movements and monitoring the position of both the pupil (P) and any corneal reflections (CR) present in each frame. This process, known as P-CR eye tracking, together with a quick calibration procedure estimates where a person is looking and the movement of their eyes [26]. Beyond enhancing the precision of the device itself, a more accurate gaze estimation enables other less obvious benefits such as improved foveated rendering, optimizing GPU resources by rendering detailed areas while reducing peripheral resolution [26, 65]. This lowers the computational load of the system and improves the visual experience for the user [32, 52]. 
Improved gaze estimation also facilitates natural interactions in virtual environments through realistic eye contact between avatars [68], assists users with mobility impairments [40], and serves as a key tool for technical training and evaluation in fields such as surgery [63, 10], dentistry [7], and aviation [48]. The deployment of deep learning algorithms has significantly enhanced the accuracy and robustness of gaze estimation techniques, as evidenced by multiple studies [15, 14, 13, 39, 33, 27, 42]. Deep learning algorithms address issues present in conventional algorithmic approaches, which are vulnerable to unpredictable factors like blinks or reflections in the recording [32]. Yet despite these benefits, the incorporation of deep learning algorithms continues to pose challenges, principally due to the complex task of gathering data for training the model [17, 52]. This data procurement obstacle in gaze estimation can be detailed as follows: 1. **Data scarcity:** While data scarcity is a common issue across many deep learning domains [2], this challenge is particularly acute in the field of eye-tracking research. Collecting a sufficient amount of training data for the development of deep learning models in this area demands significant time and resources [5, 17]. Figure 1: **A.** Images from the four datasets we used to test the LEyes framework. **B.** The LEyes synthetic training sets corresponding to the real eye datasets in A. These images are based on the light distributions of the real eye datasets. **C.** This shows the predictions of the LEyes trained model on the real eye images. **D.** An overview of our approach: First, we establish a set of parameters based on the distributions of the collected data. These distributions pertain to pixel-level details like the iris and pupil intensity. Next, we employ a generator to efficiently produce new synthetic images from these parameters. The generated images are used to train a neural network which is then tested on real eye images recorded from the same device. 2. **Annotated datasets:** The second challenge involves the necessity for annotating segmented regions within eye images. This annotation is essential for creating labels for supervised learning algorithms that train deep learning models. It is a process that not only is time-consuming but also technically demanding, often requiring manual labeling by an experienced researcher [17, 52]. 3. **Differences in recorded eye images:** The third challenge stems from disparities in eye images found in the limited amount of publicly available datasets. Differences can occur not just across recording setups, but also from variation in eye image attributes like iris brightness, which lead to pixel level differences that contribute to sub-optimal network performance [42]. This is a major issue as slight differences can have a substantial impact model performance and generalizability. A proposed solution to these challenges is the use of synthetic datasets which allows for the generation of vast amounts of annotated images [27, 5, 39, 42]. Synthetic data has been used successfully to train deep neural networks in fields such as medical imaging [16], autonomous driving [51], and microscopy [18]. 
Typically in the field of gaze estimation, synthetic eye images creation methods aims for photorealism by employing a 3D model of the human eye and surrounding facial region to produce 2D images akin to those captured by eye-trackers, using render software or game engines such as Blender or Unity. The goal of such processes is to match the synthetic dataset's underlying distribution with the variability seen in real-world eye images [67, 42]. The photorealistic synthetic data approach, however, is not without limitations. One key challenge is the complexity of generating synthetic datasets that accurately emulate the distribution of real eye images. Additionally, concerns exist regarding the potential for achieving state-of-the-art outcomes when compared to models trained on genuine eye images. A study illustrated a decline in model accuracy by \(1^{\circ}\) when comparing a model trained on photorealistic synthetic images to one trained on a subset of real eye images using a neural network [27, 42]. We hypothesize that numerous intricate features must be precisely constructed during synthetic dataset creation, and even minor deviations in design can significantly impact a model's inference capabilities during testing. In this study, we adopt an innovative approach that departs from the traditional practice of creating photorealistic images, choosing instead to capitalize on the inherent simplicity of eye images. Rather than meticulously recreating every visual aspect, we focus on modeling the light distributions of the key features within an eye image required for eye tracking. We have found that LEyes images are not only easy to create, but are fast to generate. Most importantly, our approach based on LEyes provides more accurate results than other synthetic data methods over a range of different eye tracker setups. ## Results ### Overview of the LEyes Framework Previous research [5, 50, 39] has shown that key features in eye images relevant for eye tracking can be effectively represented using 2D Gaussian distributions. Creating a synthetic dataset of eye images necessitates the accurate portrayal of such features, including the pupil, reflections, and pixel-level characteristics such as iris brightness [27, 42, 33], which can be affected by specific lighting conditions altering dimensions and luminosity of features located in the image such as the iris or pupil. Emulating essential hardware attributes, such as lighting conditions and camera parameters, is vital for replicating real-world situations [27, 42, 33]. LEyes shows that by generating abstract images using 2D Gaussian distributions that contain the relevant features for an eye tracker, one can effectively capture both eye features and camera attributes for neural network training. The approach is outlined as follows: First, to model key features such as the pupil, iris, and CRs, luminance attributes are derived by calculating the distributions of recorded data on a given device setup. To ensure generalizability of the model for a wide range of participants we use a larger parameter range than is derived from the distributions. Subsequently, these parameter ranges calculated from the distributions are utilized to craft images by layering and combining the parameter inputs through mathematical operations, achieving simple but realistic portrayals of eye features and noise within the created image. The images are then scaled and discretized to align with standard 8-bit camera output. 
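As a schematic illustration of this idea (not the authors' actual generator, whose exact parameterization is given in the Methods section), the sketch below composes an eye-like training image from a handful of 2D Gaussians: a broad bright blob for the iris region, a darker Gaussian subtracted for the pupil, small bright Gaussians for corneal reflections, plus noise, before quantizing to 8-bit. In LEyes the parameter ranges are derived from the intensity and size distributions of images recorded on the target device rather than the illustrative numbers used here.

```python
import numpy as np

def gaussian_2d(shape, center, sigma):
    """Isotropic 2D Gaussian evaluated over an image grid."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    return np.exp(-((xx - center[0]) ** 2 + (yy - center[1]) ** 2) / (2.0 * sigma ** 2))

def synthetic_eye_image(size=128, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    # All ranges below are illustrative placeholders; LEyes derives them from
    # recorded data for the specific eye-tracker setup being modeled.
    iris_center = rng.uniform(0.3 * size, 0.7 * size, 2)
    pupil_center = iris_center + rng.uniform(-5, 5, 2)
    img = rng.uniform(0.30, 0.45) * gaussian_2d((size, size), iris_center, rng.uniform(25, 40))
    img -= rng.uniform(0.25, 0.40) * gaussian_2d((size, size), pupil_center, rng.uniform(8, 16))
    for _ in range(int(rng.integers(1, 3))):  # one or two corneal reflections
        cr_center = pupil_center + rng.uniform(-15, 15, 2)
        img += rng.uniform(0.5, 0.9) * gaussian_2d((size, size), cr_center, rng.uniform(1.5, 3.0))
    img += rng.normal(0.0, 0.02, img.shape)   # sensor noise
    img = np.clip(img, 0.0, 1.0)
    return (img * 255).astype(np.uint8), pupil_center  # 8-bit image plus pupil-centre label
```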
Refer to the Methods section for a complete description of the process of generating LEyes images along with full descriptions of each model architecture used in the paper. To turn the feature parameters into images to be used to train the deep learning models we utilize the generator function from the DeepTrack 2.1 package [41]. The use of a generator combined with the relatively simple images created from 2D Gaussian distributions enables swift creation of customized synthetic images at reduced computational cost compared to photorealistic models. For example, the NVGaze dataset required 30 seconds to create each image; to create the entire dataset, would take approximately 3.8-years on a single GPU. In practice, this was reduced to a week as the researchers had access to a supercomputer [27]. Our models require no special computational resources and can be trained on platforms such as Google Colab, making them accessible to a wider group of researchers. The generator function also keeps track of both the image and corresponding label during training, which allows the generator to discard images after one pass to prevent over-training. Importantly, no images need to be pre-generated and occupy disk-space when a generator function is used [18]. ### Pupil Localization We begin our analysis by considering the performance of the LEyes framework in a pupil center localization task in a VR setting, a common task for video based eye trackers [27]. To test our model we selected the widely used 2019 EDS challenge dataset (OpenEDS 2019) [17]. We chose this dataset to run a comparative analysis as it has been used extensively to assess the accuracy of other methods in gaze estimation tasks [42, 9, 28, 33, 32]. OpenEDS 2019 [17] was collected using a VR head-mounted display equipped with dual eye-facing cameras, capturing images at 200 Hz under controlled lighting conditions. The dataset encompasses eye-region video footage from 152 participants for a total of 12,759 images featuring pixel-level annotations derived from human-annotated key points of the iris, pupil, and sclera. For a complete description of the data, refer to the original paper [17]. Various deep learning architectures have been proposed for eye segmentation tasks and LEyes simulations are model agnostic, yet, in light of their prevalent use and proven efficacy in eye tracking tasks [11, 66], we chose to train a U-Net model with a ResNet-34 backbone. The model takes a grayscale eye image as its input and outputs a probability map indicating the location of the pupil in the image. To determine the center of the pupil we threshold this mask and employ a center of mass algorithm on the pupil region in the resulting binary image. We compare our results with other state-of-the-art models and frameworks, including Pistol [13], PuRe [58], the EllSeg framework [32, 33], and DeepVog [69]. These models employ a variety of methods, ranging from conventional ellipse fitting to deep learning architectures. Note that we stress the difference between model and framework where a "model" is a specific representation trained to make predictions, while a "framework" is a set of tools and libraries used to develop, train, and deploy such models. Estimation accuracy in eye-tracking applications is often evaluated using the cumulative detection rate, which shows how much of the pupil locations estimated by a method are within a given distance from the ground truth pupil center [27, 32, 14, 58]. 
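The two evaluation steps just described, recovering a pupil centre from the network's probability map and scoring it with the cumulative detection rate, can be sketched as follows; the 0.5 threshold is an assumption for illustration, as the exact value is not stated here.

```python
import numpy as np
from scipy import ndimage

def pupil_center_from_map(prob_map, threshold=0.5):
    """Threshold the U-Net pupil probability map and return the centre of mass
    (x, y) of the binary pupil region, or None if no pixel exceeds the threshold."""
    mask = prob_map > threshold
    if not mask.any():
        return None
    cy, cx = ndimage.center_of_mass(mask)
    return cx, cy

def detection_rate(predicted, ground_truth, max_error=5.0):
    """Fraction of images whose predicted pupil centre lies within
    `max_error` pixels of the ground truth (e.g. 2 or 5 pixels)."""
    errors = np.linalg.norm(np.asarray(predicted) - np.asarray(ground_truth), axis=1)
    return float((errors <= max_error).mean())
```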
Performance is often specifically assessed as the percentage of images for which the pupil location was estimated within a 5-pixel distance from the ground truth [27, 14]. However, recent algorithms have demonstrated superior performance on VR datasets, often reaching ceiling performance well below this 5-pixel threshold. Consequently, we have narrowed our analysis to examine performance for errors up to just 2 pixels. As illustrated in Figure 2, we achieved a 2-pixel error rate of 75.8%, which surpasses EllSeg (model trained on all datasets) at 71.8% and is markedly superior to Pure (65.6%), DeepVOG (60.9%), and Pistol (55.4%). The violin plots in the bottom section of Figure 2 indicate that the distribution of performance across participants in the testing set at the 2-pixel level for a model trained on LEyes is comparable to other models. Notably, its median value at this error level is 80%, outperforming the next best model by 7%. We underscore comparisons with the different variants of the EllSeg frame work [32, 33], one of the few public frameworks leveraging synthetic training data, like ours. The architecture used in the the EllSeg framework, named DenseElNet [33], has a comparable number of trainable parameters (2.24 million) compared to our model. The orange line in Figure 2(B) shows the EllSeg variant trained across multiple eye datasets. Notably, OpenEDS 2019 is one of the datasets included in its training set. Astonishingly, our LEyes model still surpasses this variant, even though there is evident data leakage with 88.6% of the training samples present in the test set. Two further comparisons of EllSeg models trained on purely synthetic datasets (RITeyes [42] and NVGaze [27]) show that the LEyes framework consistently outperforms other publicly available models that use only synthetic data. Taken together, the results highlight that LEyes achieves higher performance against other methods tested on the EDS 2019 dataset. In line with earlier observations on domain discrepancies and generalization in gaze estimation [33, 32, 42], our results demonstrate that models exhibit optimal performance when trained on datasets analogous to their respective test distributions, something that is easily achieved within the LEyes framework. Notably, this efficacy persists even when there are discernible differences, from a human perspective, between the training and test datasets. ### Simultaneous Pupil and Corneal Reflection Localization Pupil and CR localization can be treated as two separate problems, yet since their positions in the eye images co-vary in a systematic way, it is advantageous to consider them together. In this section, we present a new P-CR eye tracking pipeline, trained entirely using LEyes images. Eye trackers often use several light sources to guarantee at least a pair of reflections for every gaze position, and for robust eye tracking it is necessary to reliably associate at least two corneal reflections with their specific light source across all anticipated eye movements [11]. Therefore, our pipeline is not only able to localize the pupil and CR centers in an eye image, but also match the CRs to specific light sources. While previous work has developed models that locate the pupil and CRs simultaneously and perform CR matching [49, 39], we introduce a novel method that importantly streamlines the process of robust P-CR eye tracking by using the maximum value of the model output to select only the 'best' two CRs. 
This ability of our method to robustly select CRs for gaze estimation is especially important due to the complex reality often encountered in eye images, where CRs may be missing or additional, unwanted reflections may be present. Through our novel pipeline, illustrated in Figure 3, we aim to demonstrate the power of LEyes in such challenging scenarios. First, since LEyes requires input images of a certain size that contain the pupil, we have developed a novel adaptive cropping strategy. Second, we demonstrate the success of our strategy to select the two 'best' CRs in an eye image. We test our new P-CR pipeline using two VR datasets. First, we use a dataset compiled by Chugh et al. (2021) [11] which contains eye images of 15 participants captured from a VR headset with an eye tracking attachment. The dataset includes manually annotated \((x,y)\) coordinates for the pupil center and centers of the CRs. Second, we test on the OpenEDS 2020 Challenge dataset (OpenEDS 2020) [52], which consists of 200 participants and includes manually annotated segmentation labels for the pupil for 5% of the data, amounting to a total of 2605 images. The images were captured at a frame rate of 100 Hz under controlled illumination using a VR headset. Since only pupil, but no CR, annotations are provided with the OpenEDS 2020 dataset [52], we provide illustrative examples instead of a quantitative comparison. Through these examples, we want to highlight that our model appears to provide accurate predictions also for the CRs in this dataset, despite it having more CRs (eight instead of five) in a different spatial configuration compared to the Chugh et al. (2021) dataset. Figure 4 (bottom) illustrates how our model performs on a representative selection of eye images from the OpenEDS 2020 dataset. Predictions from all eye images are available in the repository associated with this paper. Figure 2: **A.** We compare the cumulative detection rate on the OpenEDS 2019 dataset of a U-Net model trained using the LEyes method at different pixel errors against PuRe [58], Pistol [13], DeepVOG [69], ELG [53]. **B.** We make special comparisons with several models trained using the EllSeg Framework [31, 33]. **C & D:** The corresponding violin plots for panels A and B respectively, showing the detection rate at 2 pixel error for each participant in the testing set achieved by LEyes compared with the aforementioned models. Figure 3: Flowchart of the simultaneous P-CR pipeline: Using an adaptive cropping strategy, the center of the crop is determined using PuRe’s pupil center prediction (\([X_{PuRe},Y_{PuRe}]\)) if the confidence metric for PuRe’s prediction (\(C\)) is above a given confidence threshold (\(C_{th}\)); otherwise, the crop is determined by the pupil prediction of the LEyes-trained model given a naive center crop (\([X_{img\_center},Y_{img\_center}]\)). The pupil-centered crop is passed through the model, which outputs logits representing likely feature locations for each prediction, illustrated here as heat maps (\(M\)) for both the pupil (\(M_{Pupil}\)) and for each CR (\(M_{CR1\ldots 5}\) in this example). For each CR map, the highest value is located. These peaks are compared between maps and the two highest values across all the maps determine which CRs are selected. The asterisks signify which maps contain the two highest values in this example. However, if the exclusion criteria are met, the image is deemed invalid (see text).
#### Adaptive Cropping Strategy Before inputting the eye image into the model, it needs to be cropped to the input size expected by the model in such a way that the pupil is in the crop. A naive cropping strategy assumes that the pupil is in the center of the eye image. However, this is not always the case, and such a crop may exclude parts of, or even the entire, pupil. To solve this challenge, we employ PuRe [58], a well-known lightweight open-source pupil detection method based on ellipse fitting, to create a 128\(\times\)128 pixel image centered on its detected pupil center. We ran PuRe over the two datasets and found that PuRe had average pixel errors of 13.23 and 20.77 in the OpenEDS 2020 and Chugh et al. 2021 datasets, respectively. Next, providing these crops based on PuRe's pupil center estimate to LEyes still yielded high average errors of 11.0 and 7.76 pixels in those datasets due to cases where PuRe failed to locate the pupil. Therefore, we adopted an adaptive cropping strategy using PuRe's confidence metric. This confidence, ranging between 0 and 1, is based on various metrics outlined in detail in the paper [58], with 0 indicating a poor ellipse outline. In our cropping method, if PuRe's confidence is larger than or equal to a threshold, the crop used as input to the LEyes U-Net is based on PuRe's pupil center estimate. If the confidence is below this threshold, we instead use the naive center crop on the image, with no guarantee that the pupil will be present. This hybrid cropping strategy significantly improves our model accuracy. For the OpenEDS 2020 dataset the lowest average pupil error was 4.18 pixels, achieved when using a confidence threshold of 0.90 (the largest average pixel error was 4.76 for confidence thresholds between 0.50 and 0.95). For the Chugh et al. 2021 dataset, an average pixel error of 4.15 pixels was achieved at the 0.70 confidence threshold (largest error 5.11). Figure 4: Heat maps for both the Chugh et al. 2021 dataset and the OpenEDS 2020 dataset. The maximum of the corresponding logit value is shown under each heat map. In the Chugh et al. 2021 dataset, the labeling of the CRs starts at the top-most IR reflection and then proceeds clockwise (top right). In the OpenEDS 2020 dataset, the labels used when training the model start at the lower right CR and proceed clockwise. Our algorithm selects the two highest logit values from the CR maps along with the pupil value for a complete robust P-CR pipeline. The last column shows the predicted locations of the centers of the pupil and selected CRs on the corresponding eye image. #### Selecting the 'best' CRs using model output The LEyes U-Net model takes a grayscale eye image as input and produces output maps for each feature (the pupil and each CR) that correspond to the confidence the model has that a given feature's center is located at a given position in the input image. We represent these unnormalized output values, which we will refer to as logit values, in the form of heat maps. The maximum value of each heat map corresponds to where the model is most confident of the prediction for the pixel location of each eye feature's center. To robustly select the two CRs the model is most confident about, we choose the two CRs with the highest corresponding logit values across the output heat maps.
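The selection rule just described is simple to state in code. The sketch below is an illustrative reimplementation (array shapes, function names, and the toy maps are our assumptions; the exclusion criterion applied to unusable images is described in the next paragraph and omitted here).

```python
import numpy as np

def select_best_two_crs(cr_maps):
    """Pick the two CRs with the highest peak logit values.

    cr_maps: array of shape (n_crs, H, W), one unnormalized logit map per
    light source. Returns (cr_index, (x, y) peak location, peak logit) for
    the two maps the model is most confident about.
    """
    peaks, centers = [], []
    for m in cr_maps:
        r, c = np.unravel_index(np.argmax(m), m.shape)  # row, col of the peak
        peaks.append(float(m[r, c]))
        centers.append((int(c), int(r)))                # store as (x, y)
    order = np.argsort(peaks)[::-1]                      # most confident first
    return [(int(i), centers[i], peaks[i]) for i in order[:2]]

# Toy example with three 4x4 maps: CRs 0 and 2 have the highest peaks.
maps = np.zeros((3, 4, 4))
maps[0, 1, 2], maps[1, 3, 0], maps[2, 2, 2] = 5.0, 1.5, 3.2
print(select_best_two_crs(maps))
```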
Figure 4 shows the heat maps of each CR and their associated max values derived from real eye images from both datasets along with the predicted locations of the selected CRs overlaid onto the eye image. To exclude eye images clearly unsuitable for eye tracking, for instance images that contain a blink or when both cropping strategies failed to capture the pupil, our method excludes images that fail to produce at least two heatmaps where the max values are greater than or equal to one. To assess our model, we compared its average pixel error to the CR annotations in the Chugh et al. 2021 dataset [11]. They achieved successful matches of at least two CRs within five pixels for 91% of the images in their test set and an average error of 1.5 pixels on these images. It is worth mentioning that Chugh et al. 2021 had to sacrifice 88% of the dataset for both training and validation of the model [39], so their results include only a small part (12%) of the whole dataset. In contrast, since LEyes is trained on synthetic images, we can evaluate our model on the entire dataset. Therefore, direct comparisons between the two models are not straightforward since they are evaluated on a different number of images and use different exclusion criteria. To make the results more comparable, we apply our exclusion criterion that the maximum value of at least two heat maps is larger than one in conjunction with Chugh et al. (2021)'s criterion that evaluates model performance only on the images where the predicted locations of at least 2 CRs were less than 5 pixels away from the ground truth. Using these criteria, our model exhibited an average pixel error of 1.59 across all the CRs. Focusing solely on the best two CRs, this error was reduced by 18% to 1.30 pixels. Further, using both exclusion criteria we retain 70% of images from the dataset. ### High-Resolution Gaze Tracking The Pupil-Corneal Reflection (P-CR) eye tracking method, often employed in controlled lab settings for gaze estimation, requires accurate identification of both the pupil and Corneal Reflections (CRs) [21, 50, 14]. When estimating the smallest and fastest of eye movements, an eye tracker with high spatial and temporal resolution is required. This typically requires sub-pixel localization of the pupil and CR(s). To address these requirements, we developed a dual Convolutional Neural Network (CNN) model trained on LEyes images. One CNN focuses on locating the pupil center, while the other locates the CR center; an illustration of our setup is in Figure 5. Our model was compared with traditional thresholding methods, the LEyes U-Net model used in OpenEDS 2019 but with different parameters used in generator to account for the dataset, and a state-of-the-art commercial eye tracker (SR Research EyeLink 1000 Plus). The data for this high-resolution study was captured in a co-recorded experiment using our custom-built FLEX system [50, 24] and the EyeLink 1000 Plus eye tracker. Such co-recording was required since the eye images captured by the EyeLink are not accessible and a direct comparison of its image processing to the LEyes method is thus not possible. The EyeLink's illuminator was used to deliver illumination for both systems. This setup resulted in eye images containing a single CR. Both the FLEX system and the EyeLink acquired data at 1000 Hz. Since the focus of this comparison is on eye tracking signal quality, data was recorded from 4 expert participants who performed a series of fixation and saccade tasks during eight minutes. 
To provide additional variation in the luminance profiles of the eye images, and thereby test the robustness of our model, the four participants were recorded a second time with the FLEX system configured to a sampling rate of 500 Hz. The captured eye images were brighter at this lower sampling rate due to the longer possible exposure time. The eye images captured by the FLEX system were first processed using a standard thresholding operation [50] to provide an initial localization of the pupil and CR centers. We then took 180\(\times\)180 pixel crops centered on the pupil and the CR features from the original images and fed these to the LEyes CNNs. As shown in Figure 6, both thresholding and in particular the LEyes CNNs provided a significant improvement in the stability of the pupil signal compared to using the LEyes U-Net model, which therefore was omitted from further analysis. Figure 5: Experimental setup: In a co-recorded setup we acquire eye images from the FLEX setup and gaze signals from the EyeLink 1000 Plus. We analyzed the eye images which we recorded from expert participants using a dual CNN approach. The pupil CNN localized the pupil center, while the CR CNN localized the center of the CR located in the eye image. Both CNNs achieved sub-pixel error. Image of co-recording setup adapted from [50]. Example raw pupil and CR signals resulting from the thresholding operations and the LEyes CNNs are shown in Figures 7a (1000 Hz) and 7c (500 Hz). As can be readily appreciated, the sample-to-sample variation in both the pupil center and the CR center signal is lower for the LEyes method than for the standard thresholding method for data acquired at both sampling rates. To formalize this observation, the precision in the form of the root mean square of sample-to-sample deviations in the signal (RMS-S2S [20, 47, 45]) was computed across the dataset and plotted in Figures 7b (1000 Hz) and 7d (500 Hz). This analysis confirms that for both the 1000 and the 500 Hz data sets, the LEyes CNNs consistently demonstrated superior precision (lower values) than the thresholding method. Researchers using eye tracking are rarely interested in the individual pupil and CR signals, but instead use the gaze signal derived from them. Does the improved precision of the pupil and CR center signals lead to an improved gaze signal? To examine this, we derived P-CR gaze signals using pupil and CR centers estimated by thresholding or by the LEyes CNNs and compared both with the gaze signal delivered by the EyeLink. Each signal was calibrated using standard methods and example segments are plotted in Figures 8a (1000 Hz) and 8b (500 Hz). Again, it can be readily appreciated that the gaze signal derived from the LEyes CNNs is smoother and more stable than that derived from standard thresholding operations or delivered by the EyeLink. To quantify this observation, we computed the RMS-S2S precision of these signals, which quantifies short-timescale smoothness, as well as the STD precision [45], which quantifies the spatial spread of the signal and indicates its stability. These evaluations are presented in Figures 8c-e (1000 Hz) and 8f-h (500 Hz). This analysis confirms that the dual LEyes CNN method consistently demonstrated superior RMS-S2S precision (lower values) than the thresholding method and the results from the EyeLink 1000 Plus. It is important to note that all methods processed each video frame independently, without using any temporal information from preceding or future frames.
Thus, the increased precision seen in the CNN method cannot be attributed to the use of temporal information [44, 45]. The signal stability (STD precision) achieved by the LEyes method was on par with the EyeLink for the 1000 Hz dataset and slightly better than the EyeLink for the 500 Hz dataset, and consistently outperformed the thresholding method for both datasets. The accuracy achieved did not systematically differ between the three methods, indicating that the gains in precision did not come at the cost of reduced accuracy. Figure 6: Representative segment of pupil and CR center locations derived from 1000 Hz eye images. The pupil center was determined using three different methods: thresholding (blue), a U-Net trained using the LEyes framework and derived from the EDS2019 U-Net (green), and a CNN trained for pupil center localization using LEyes images (red). Figure 7: CR and pupil center signals. Left column: representative segment of raw pupil and CR center signals derived from eye images recorded at 1000 Hz (a) and 500 Hz (c). Right column (panels b and d): an RMS precision comparison between the thresholding and LEyes CNN methods for the pupil and CR signals on all data of four participants. Error bars depict standard error of the mean. Figure 8: Calibrated gaze signals. Left column: representative segment of calibrated P-CR signals derived from 1000 Hz data (a) and 500 Hz data (b) as derived from pupil and CR center locations determined using either thresholding or the dual LEyes CNN strategy, along with the EyeLink. The signals in both panels contain two small saccades and have been vertically offset for clarity. Further, an RMS precision, STD precision and an accuracy comparison for the 1000 Hz data (middle column, panels c–e) and the 500 Hz data (right column, panels f–h) between the three gaze tracking methods on data of all participants are shown. Error bars depict standard error of the mean. ## Discussion We developed a novel framework named LEyes for training gaze estimation algorithms, achieving cutting-edge results for both virtual reality (VR) and high-resolution, lab-based eye-tracker setups. LEyes outperformed other methods in a pupil center localization task by a margin of at least 4%. In a high-resolution setting, LEyes exceeded the performance of the industry-standard EyeLink 1000 Plus eye tracker across two lighting conditions in a co-recorded experiment. Additionally, we introduced a novel LEyes-trained P-CR pipeline that both simplifies and improves CR detection by considering only the two best CRs in the recorded image. Overall, our results emphasize both the accuracy and flexibility in design of the LEyes framework, highlighting its applicability across gaze estimation applications. LEyes has the potential to be a game-changer for the many companies and startups attempting to enter the VR and eye-tracking space. LEyes enables these companies to bring their devices to market without the necessity of collecting or purchasing potentially millions of eye images from a third party, alleviating both the costs and hurdles related to data acquisition. This opens up a streamlined path to market, making it an attractive option for emerging companies. In an academic setting, LEyes significantly reduces the amount of data required to conduct an eye-tracking study that uses a deep learning model to analyze the data, by eliminating the need to sacrifice recorded data for model training and validation, resulting in both time and cost savings.
For example, our model was able to run inference on the entirety of the Chugh et al. 2021 dataset, while the original authors used 88% of the data for both training and validation and were thus left with only 12% for evaluating their model [11, 39]. Furthermore, using Python, LEyes offers an alternative to the challenging task of creating photorealistic synthetic data. Many researchers may not possess the skills, time, or resources to access and use software platforms like Blender or Unity3D. Finally, when combined with the FLEX system, which has a hardware cost of about $1000 USD, LEyes offers a low-cost and open-source alternative to the EyeLink 1000 Plus. Our study has limitations: First, the models were trained on simulated data but tested on real data. We did not investigate any potential learning differences between synthetic and real eye datasets. Future research may benefit from analyzing these differences to further improve the quality of the synthetic data generation. Second, we aim to explore "Domain Adaptation" techniques such as fine-tuning LEyes-trained models with real eye images to assess the impact on performance. Third, the low participant count in our high-resolution experiment, while sufficient for our purpose of demonstrating the power of a LEyes-trained model, potentially limits the generalization of our findings for this specific test. Despite seeing promising results with the LEyes framework and good generalizability across large participant samples in the other tests, recruiting a broader participant base that encompasses both experts and novices can be seen as a worthwhile further study. Prior to LEyes, the development of gaze estimation algorithms using machine learning was confined to those who possessed the resources to amass large annotated datasets or the technical expertise and large computational resources to generate synthetic data. With LEyes, the training of deep learning models for gaze estimation has become easily accessible to everyone, democratizing the field and opening new avenues for exploration and application. ## Methods We have made the code for the various simulations and model training regimes described below, as well as the trained models and the code for evaluating the model on the various real image data sets, available at the following link: [https://github.com/dcnieho/Byrneetal_LEyes](https://github.com/dcnieho/Byrneetal_LEyes). ### Generating Light Simulations Five different simulations modeling the light distribution of eye images were used for training the U-Net for OpenEDS 2019, the U-Net models with attention used on the OpenEDS 2020 and Chugh et al.'s 2021 datasets, and the CR and pupil CNNs. Here we first present features shared between these simulations, and then detail the individual simulations in order of complexity. The full code to generate LEyes images is available at our GitHub repository linked to this paper. #### Common features Following previous work [5, 39], we developed simulated images that model the light distributions of the relevant aspects of an eye image that the given model would have to deal with during inference.
Blob-like features, such as the pupil and CR were modeled as 2D Gaussian distributions using the equation \[G(x,y)=Ae^{-a(x-x_{c})^{2}-b(x-x_{c})(y-y_{c})-c(y-y_{c})^{2}}, \tag{1}\] where \[a =\frac{\cos(\theta)^{2}}{2\sigma_{\alpha}^{2}}+\frac{\sin(\theta )^{2}}{2\sigma_{\beta}^{2}}, \tag{2}\] \[b =\frac{\sin(2\theta)}{4\sigma_{\alpha}^{2}}-\frac{\sin(2\theta)} {4\sigma_{\beta}^{2}},\] (3) \[c =\frac{\sin(\theta)^{2}}{2\sigma_{\alpha}^{2}}+\frac{\cos(\theta )^{2}}{2\sigma_{\beta}^{2}}, \tag{4}\] and where \(\theta\) is the orientation of the 2D Gaussian and \(\sigma_{\alpha}\) and \(\sigma_{\beta}\) its spread along the minor and major axes, respectively. The luminance of the pupil was determined per simulation by analyzing the eye images on which inference would be run, while the luminance of a CR was always set to full white. Regardless of the Gaussian amplitude \(A\) of the feature, which was varied to create differently steep edges, the minor and major axis radii of the luminance plateau in each feature (the dark part of a pupil, or the bright part of a CR) were kept constant by parameterizing \[\sigma_{r}=r/\sqrt{-2\log\frac{1}{A}},r\in\{\alpha,\beta\}. \tag{5}\] To create the final simulated image, first the relevant features were layered onto a background luminance distribution that differed between simulations. These layers were then collapsed into a single image by subtracting dark features (such as pupils) from the background, and by adding bright features to the collapsed image of the preceding layers using the operation \(max(image,background)\). Pixel noise was added to the final image by adding a value from a Gaussian distribution \(X\sim\mathcal{N}(0,\,\sigma_{n}^{2})\) to the image that was drawn independently for each pixel. Finally, the resulting image was limited to the range \([0,255]\), scaled to the range \([0,1]\) and discretized to \(256\) levels, corresponding to \(8\)-bit camera images. #### CR 500 Hz & CR 1000 Hz The CNN for CR center localization used for the 500 Hz data was the same as presented in previous work[5]. As such, only the key points of this simulation are described. Circular CRs (\(\sigma_{\alpha}=\sigma_{\beta}\in[1,30]\), \(A\in[2,20000]\)) were placed on a background that was made up of two parts, divided by a randomly oriented straight line representing the pupil-iris border that passed close to the CR. On one side of the line the background was dark, with a luminance drawn from an exponential distribution with its scale parameter set to \(10\) pixel intensity values, and offset \(1\). The other part of the background was middle grey (pixel intensity value \(L_{CR}=128\)). The standard deviation of image noise was varied per generated image, with \(\sigma_{n}\in[0,30]\). The simulations used for training the CNN for determining CR centers in the 1000 Hz eye videos were identical to those used for the 500 Hz data, except that the middle-grey part of the background varied in luminance between \(L_{CR}\in[32,153]\). For both the 500 Hz and the 1000Hz models, in the second stage the location of the CR center was constrained to a range spanning \(1.5\) pixels around the image center. #### Pupil 500 Hz & Pupil 1000 Hz The simulated light distributions used for training the CNN for locating pupil centers differed from the simulations for the CR CNNs in a few ways. First, the simulated images contained a 2D Gaussian representing the darker pupil. 
Second, the images contained one or multiple bright 2D Gaussians representing CRs that were randomly positioned and could thus overlap the pupil. Third, instead of a background consisting of dark and grey segments separated by a straight line, the background now consisted of a uniform field at a range of grey levels, representing the iris at various illumination levels. Specifically, a randomly oriented dark 2D Gaussian with minor axis radius \(\alpha_{p}\in[20,60]\) pixels, major axis radius \(\beta_{p}\in[1\alpha_{p},1.3\alpha_{p}]\) and amplitude \(A_{p}\in[2,20000]\) was used to represent the pupil. Its luminance \(L_{p}\) was drawn from an exponential distribution with a scale parameter of \(10\), and offset \(1\). Between \(1\) and \(4\) corneal reflections (CRs) were generated with minor axis radius [4, 12] and major axis radius \(\beta_{c}\in[1\alpha_{c},1.1\alpha_{c}]\) and \(A_{c}\in[2,20000]\) and randomly positioned. Overlap between CRs was avoided by removing CRs whose center location was closer to another CR than \(1.25\) times the sum of the major axis radii of the two CRs, and replacing it with a new randomly positioned CR. The background luminance level representing the iris was \(L_{background}\in[64,179]\) pixel intensity values. The standard deviation of image noise was varied per generated image, with \(\sigma_{n}\in[0,30]\). The simulations used for training the CNN for determining pupil centers in the \(1000\) Hz eye videos were identical to those used for the \(500\) Hz data, except that the background luminance level representing the iris was \(L_{background}\in[32,153]\) pixel intensity values to encompass the iris luminance values in the darker \(1000\) Hz eye images. For both the \(500\) Hz and the \(1000\)Hz models, in the second stage the location of the pupil center was constrained to a range spanning \(1.5\) pixels around the image center and only \(1\) randomly positioned CR was generated. #### U-Net for OpenEDS 2019 In order to ensure that the U-Net reliably detects the pupil and not the iris, the simulations used for training the U-Net contained several more features than those for the pupil CNN. Firstly, a bright background representing the sclera with luminance \(L_{s}\leftarrow\mathcal{N}(217,\,26)\) was generated. On top of this an iris was generated as a randomly positioned and oriented ellipse (\(\alpha_{i}\in[30,42.5]\) and major axis radius \(\beta_{i}\in[1\alpha_{i},1.3\alpha_{i}]\) and \(L_{i}\leftarrow\mathcal{N}(77,\,16)\)) rendered with an edge modulated by a raised cosine function over a range of between \([8,20]\) pixels. Then an irregularly shaped collarette was generated close to the center of the iris consisting of between \(13\) and \(24\) vertices arranged around the collarette center at an average distance \(r_{col}\in[.3\beta_{i},.6\beta_{i}]\), with the individual distance of vertices varied between \([0.05r_{col},0.2r_{col}]\). The resulting polygon was upsampled to five times the number of vertices using periodic cubic spline interpolation to create a shape with a smoothly varying edge, and the resulting polygon was rendered at luminance \(L_{col}=[1.25L_{i},1.6L_{i}]\) with an edge modulated by a raised cosine function over a range of between \([1,4]\) pixels. 
On top of this were layered a randomly positioned and oriented pupil (minor axis radius \(\alpha_{p}\in[10,30]\) and major axis radius \(\beta_{p}\in[1\alpha_{p},1.3\alpha_{i}]\), \(A_{p}\in[2,2000]\) and \(L_{p}\leftarrow\mathcal{N}(34,\,15)\)) and between \(1\) and \(8\) randomly positioned and oriented CRs (minor axis radius \(\alpha_{c}\in[0.8,4]\) and major axis radius \(\beta_{c}\in[1\alpha_{c},1.4\alpha_{c}]\), \(A_{c}\in[2,20000]\) and \(L_{CR}=255\)), again avoiding overlap. The standard deviation of image noise was varied per generated image, with \(\sigma_{n}\in[0,15]\). #### U-Net for Chugh et al. 2021 dataset We use a simulation that improves on previous work [39] to perform pupil and CR localization and CR matching. The pupil is represented by a randomly oriented dark 2D Gaussian with a minor axis radius \(\alpha_{p}\in[6,22.5]\) pixels, major axis radius \(\beta_{p}\in[1\alpha_{p},1.3\alpha_{p}]\) and amplitude \(A_{p}\in[200,100000]\). Its luminance \(L_{p}\) is drawn from an exponential distribution with a scale parameter of \(10\) and offset \(1\). Five randomly oriented CRs are generated, each having a random minor axis \(\alpha_{c}\in[1,2.5]\) pixels, a random major axis \(\beta_{c}\in[\alpha_{p},1.1\alpha_{p}]\), and a random amplitude \(A_{c}\in[200,100000]\). Each CR has a drop-out rate of \(16\%\). Between \(1\) and \(5\) spurious (non-CR) reflections may randomly appear in the image. These are generated in the same way as CRs, each with a random minor axis radius \(\alpha_{s}\in[1,2.5]\) pixels and random major axis radius \(\beta_{s}\in[\alpha_{s},2.5\alpha_{s}]\). The location of each spurious reflection is generated using a rejection sampling method with an inverted Gaussian \((1-G(x,y)_{p}\), c.f. Eq 1) to make them less likely to appear near the pupil center. A grayscale gradient background was created by drawing two random values from a luminance range of \(L_{background}\in[63,178]\) and smoothly varying the luminance from one side to the other along a random axis. This is to prevent the model from interpreting any dark part of the image as part of the pupil. The standard deviation of image noise was varied per generated image, with \(\sigma_{n}\in[0,30]\). As this model not only performs pupil and CR center localization but also matching of CRs to specific illuminators, the positions of the CRs need to follow the same pattern as in the real dataset. Specifically, for Chugh et al.'s 2021 dataset [11], this involves five IR lights that project to a house-shaped polygon that is usually close to the pupil. The polygon is modeled as a rectangle with an additional vertex above the middle of its top edge. The rectangle's base width is randomly sampled from \(w\in[0.1d,0.45d]\) where \(d=128\) pixels, the length of one side of the synthetic image. The rectangle's height is sampled from \([0.5w,0.6w]\), and the height of the roof from \([0.2w,0.5w]\). The polygon is randomly rotated between \(\pm[0,45]\) degrees. In order for the model to learn the matching correctly, the CR positions are always calculated in a certain order, starting from the topmost position and moving clockwise. Training this model was performed in two stages (see below). In the second stage, the maximum number of spurious reflections that could appear in the image is reduced to \(3\), the dropout probability for individual CRs is reduced to \(10\%\) and the range of rotation is reduced to \(\pm[0,35]\). #### U-Net for OpenEDS 2020 We reuse the simulation created for Chugh et al. 
2021 dataset [11], adjusting the polygon so that it has eight vertices corresponding to the eight IR lights in the dataset, starting from the bottom-right CR and moving clockwise. The polygon's radius is randomly sampled from the range \(w\in[0.15d,0.4d]\) where \(d=128\) pixels. As the OpenEDS 2020 dataset contained forward-facing eye images, the random rotation of the polygon is reduced to the range \(\pm[0,0.57]\) degrees. Each CR has a dropout probability of \(20\%\). The pupil luminance \(L_{p}\) is drawn from a Weibull distribution with a scale of \(25\), an offset of \(18\) and shape parameter of \(2\), while no other parameters were changed. ### Neural Network Architectures & Training Regimes #### U-Net model for pupil segmentation For the pupil segmentation task, we utilized an off-the-shelf U-Net [56] from the PyTorch Segmentation Modules library [25]. The encoder part uses a ResNet-34 backbone [30] pre-trained on ImageNet [12]. The decoder part consists of five convolutional layers of dimensions (256, 128, 64, 32, 16). The trained U-Net model accepts grayscale images of arbitrary dimensions and produces a probability map that represents the pupil segmentation. In total, the U-Net model contains 24,430,097 trainable parameters. The masks output by the U-Net (range \([0,1]\)) were binarized using a threshold of 0.99, and then postprocessed with OpenCV (version 4.7.0.68) in Python 3.10. Specifically, morphological operations were performed on the resulting binary masks to fill holes, and the pupil was selected based on shape and size criteria [50]. The center of mass of the selected blob was then computed, and an ellipse was fit to the blob. If the center of mass was closer than the radius of the ellipse's major axis to the edge of the eye image cutout, the cutout was recentered on the center of mass and inference run anew on this cutout. #### U-Net with attention mechanism For both the Chugh et al. 2021 and OpenEDS 2020 datasets, we used a modified U-Net model based on previous work [49]. The encoder and decoder consist of residual modules producing a feature map with a consistent depth, only decreasing/increasing in size using down- and upsampling, respectively. The U-Net contains six residual modules with a consistent channel size of 256. The outputs of the U-Net are passed through two convolution blocks, which produce the heat maps for the CRs and the pupil, respectively. The peak in each heat map is taken as the pixel location of the corresponding eye feature center and is found with an argmax operation. #### CNN models for pupil and CR localization In this task, the model was trained to localize the subpixel center of an eye feature (pupil or CR). Overall, four CNNs were trained: two for localizing the pupil and CR centers in data captured at 500 Hz with the FLEX setup and another two for data captured at 1000 Hz. Each model is composed of seven convolutional layers followed by two dense layers. The CNNs for CR center localization in both 500 Hz and 1000 Hz data as well as the CNN trained to detect the pupil center in 500 Hz data have the following convolution layer dimensions: (64, 64, 128, 128, 256, 256, 512), while the pupil CNN for 1000 Hz data has wider dimensions: (128, 128, 256, 256, 512, 512, 768). The CR CNNs have dense layers with sizes of (64, 32) while the pupil CNNs both have sizes of (64, 64). The CR CNNs both have a total of 6,268,386 trainable parameters while the pupil CNNs for 500 Hz and 1000 Hz data have 6,270,530 and 19,671,426 trainable parameters, respectively.
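For readers who want a feel for the size of these networks, the following plain-PyTorch sketch mirrors only the convolutional and dense layer dimensions listed above; it is not the DeepTrack-based implementation used in the paper, and the kernel sizes, pooling, activations, output head, and 128-pixel crop size are our assumptions, so parameter counts will not match the reported figures.

```python
import torch
import torch.nn as nn

class FeatureCenterCNN(nn.Module):
    """Sketch of a sub-pixel center-localization CNN: a convolutional stack
    followed by two dense layers regressing an (x, y) feature center."""

    def __init__(self, conv_channels=(64, 64, 128, 128, 256, 256, 512),
                 dense_sizes=(64, 32), crop_size=128):
        super().__init__()
        layers, in_ch = [], 1  # grayscale input
        for out_ch in conv_channels:
            layers += [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                       nn.ReLU(), nn.MaxPool2d(2)]
            in_ch = out_ch
        self.features = nn.Sequential(*layers)
        feat = crop_size // (2 ** len(conv_channels))  # spatial size after pooling
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(in_ch * feat * feat, dense_sizes[0]), nn.ReLU(),
            nn.Linear(dense_sizes[0], dense_sizes[1]), nn.ReLU(),
            nn.Linear(dense_sizes[1], 2),  # predicted (x, y) center
        )

    def forward(self, x):
        return self.head(self.features(x))

model = FeatureCenterCNN()
print(model(torch.zeros(1, 1, 128, 128)).shape)  # torch.Size([1, 2])
```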
Each CNN model was built within the DeepTrack 2.1 library [41]. #### Model training regimes To train the U-Net for the OpenEDS 2019 dataset, we chose the AdamW [38] optimizer with an initial learning rate set to \(1e^{-4}\) and an exponential decay scheduler. The loss used is a combination of Binary Cross Entropy loss [57], Dice loss [36], and Focal loss [37]. During the training phase, the model was shown 1000 new simulated images per epoch and the validation set consisted of 400 pre-generated simulated images. The model training ran for 100 epochs reaching a natural plateau. Following our previous work [5, 39], the U-Net model used for Chugh's dataset [11] is trained in two stages. The first stage consisted of a broader range of challenging examples, aimed at enhancing the model's robustness to large variations in eye data while the second stage consisted of images that more closely represent the images captured by the eye tracker. Similar to the U-Net model for EDS 2019, we used the AdamW optimizer with an initial learning rate of \(1e^{-4}\) in the first stage and \(1e^{-5}\) in the second stage, an exponential decay scheduler, and a combination of Binary Cross Entropy loss [57], Dice loss [36], and Focal loss [37] for the loss. The generator was first configured to present the model with 20000 unique images per epoch. In the second stage, the generator is reconfigured to show 1000 images. We let the model train for 30 epochs in the first stage and 20 epochs in the second stage. We incorporated early stopping with a patience of 30 for the first stage and 5 for the second stage. Similarly, the U-Net model for EDS 2020 [52] is trained in two stages. We let the model train for 500 epochs in both stages and incorporated early stopping with a patience of 30. In the first stage, a weight of 100 is added to the Binary Cross Entropy Loss, while the rest of the parameters for both the first and second stages remain the same as the EDS2020 U-Net model. In both stages, the generator was configured to produce 1000 unique images per epoch, early stopping after 175 epochs in the first stage and 81 epochs in the second stage. Similar to the above, we adopted a two-stage approach for training each CNN, training first on simulations with harder examples and then honing in on cases that are closer to the dataset. The generator was configured to present the model with 1000 unique samples per epoch, with batch sizes of 4 for the CR CNNs, 16 for the pupil CNN at 500 Hz, and 8 for the pupil-CNN at 1000 Hz. The batch size was further reduced to 4 for both pupil CNNs during the second stage of training. Additionally, a set of pre-generated synthetic images was used for validation, with a validation set size of 300 for the CR CNNs and 600 for the pupil CNNs. We employed the mean squared error (MSE) loss function for the CR CNNs and the mean absolute error (MAE) loss function for the pupil CNNs, and to assess model performance. To train the models, we used the Adam[29] optimizer for the CR CNNs and the pupil CNN at 1000 Hz, while AdamW was used for the pupil CNN at 500 Hz. In the first stage, the initial learning rate was set to \(1e^{-4}\), which was subsequently decreased to \(1e^{-6}\) in the second stage. An exponential decay scheduler was used for the learning rate in all training regimes. The CR CNNs at 500 Hz and 1000 Hz were trained for a maximum of 700 epochs for the first and second stages, incorporating an early stopping mechanism to prevent overfitting. 
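Schematically, each of the two-stage training regimes described above boils down to the loop sketched below. This is our own simplified outline, not the released training code: the actual runs draw batches from DeepTrack generators, use the loss mixtures and pre-generated validation sets described in the text, and stop early on validation rather than training loss; the decay factor and batch handling here are assumptions.

```python
import torch

def train_stage(model, make_batch, loss_fn, lr, epochs, images_per_epoch,
                batch_size=16, patience=30):
    """One training stage on synthetic images generated on the fly."""
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.99)
    best, stale = float("inf"), 0
    for _ in range(epochs):
        epoch_loss = 0.0
        for _ in range(max(1, images_per_epoch // batch_size)):
            x, y = make_batch(batch_size)      # freshly simulated images + labels
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
            epoch_loss += loss.item()
        sched.step()
        if epoch_loss < best - 1e-6:
            best, stale = epoch_loss, 0
        else:
            stale += 1
            if stale >= patience:              # simple early stopping
                break
    return model

# Stage 1: broad, harder examples; stage 2: lower learning rate on images
# closer to the target eye tracker (learning rates and epoch counts follow the text).
# model = train_stage(model, broad_generator, loss_fn, lr=1e-4, epochs=30, images_per_epoch=20000)
# model = train_stage(model, narrow_generator, loss_fn, lr=1e-5, epochs=20, images_per_epoch=1000)
```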
The first stage of the CR CNN at 500 Hz converged after 286 epochs while the second stage required 555 epochs. The 1000 Hz model reached convergence in 167 epochs for the first stage and 307 epochs for the second stage. In the first stage, the pupil CNNs are allowed to train up to 500 epochs with a patience of 20. In the second stage, the 500 Hz model is trained for up to 40 epochs with a patience of 5 while the 1000 Hz model is trained for up to 100 epochs with a patience of 10. The 500 Hz model reached convergence after 99 epochs in the first stage and 25 epochs in the second stage. The 1000 Hz model achieved convergence after 88 epochs in the first stage and 36 epochs in the second stage. In the second stage, the first convolutional layer of each model is frozen and we used an iterative approach to determine which additional layers to freeze, if any. We chose to freeze the first convolutional layer for the pupil CNNs and the 1000Hz CR CNN, and the first two layers of the CR CNN at 500Hz. ### High-Resolution Eye-Tracking Data Collection High-resolution eye images were recorded from the first, third and last author of the current paper and one further experienced participant with the FLEX setup [50, 24]. Eye movement data were simultaneously recorded with the EyeLink 1000 Plus (SR Research Ltd., Ottawa, Canada). The setup is shown in Figure 5. The EyeLink illuminator was used to illuminate the eye and create the corneal reflection used by both the EyeLink and the FLEX setups. The FLEX setup used a Basler ace acA2500-60um camera equipped with a 50-mm lens (AZURE-5022ML12M) and a near-IR long pass filter (MIDOPT LP715-37.5) that was positioned 50 cm from the participant's eyes. Two datasets were collected using the same participants and tasks: the FLEX 1) acquired images at 1000 Hz and 2) acquired images at 500 Hz. Camera and illuminator settings for the two data sets were as follows: 1. _1000 Hz_. 8-bit images were captured at 672 x 340 pixels, with camera exposure set to 882 us and gain to 12 dB. EyeLink illuminator power was 100%. 2. _500 Hz_. 8-bit images were captured at 896 x 600 pixels, with camera exposure set to 1876 us and gain to 10 dB. EyeLink illuminator power was 75%. Videos were captured with custom software that streamed the recorded frames to mp4 files using libavcodec (FFMPeg) version 5.1.1 and the libx264 h.264 encoder (preset: veryfast, crf: 17, pixel format: gray). For both datasets, simultaneous binocular eye movement recordings were performed at 1000 Hz with an EyeLink 1000 Plus (host software 5.12) in desktop setup using the center-of-mass pupil tracking mode. The EyeLink camera sensor was located 56 cm away from the participant's eyes. To synchronize the acquisition of eye images from the FLEX with eye movement data from the EyeLink, TTL triggers were sent to the EyeLink Host computer at the onset and offset of each FLEX image recording trial. The recordings took place in a dark room with no windows. Several tasks were shown on an Asus VG248QE monitor at 60 Hz (viewing distance 79 cm). Participants performed the following tasks while stabilized on a chin- and forehead rest: 1. Nine 1-second fixations in random order on a 3\(\times\)3 grid of fixation points positioned at \(h=\{-7,0,7\}\) deg and \(v=\{-5,0,5\}\) deg. 2. One 30-second fixation on a point positioned at \(h=0\) deg and \(v=0\) deg while the background luminance alternated between black and white at a cycle time of 3 s. 3. 
Three 30-second fixations on points positioned at \(h=\{-3.5,0,3.5\}\) deg and \(v=0\) deg on a middle grey background, with each position repeated 2 times. 4. Five rightward step-ramp pursuit stimuli from \(h=-10\) deg to \(h=10\) deg at a speed of \(2\,^{\circ}/\)s following a 200 ms leftward step. 5. Fixations on a dot that was presented for 1 second at positions \((x,0),x\in\{-7,-3.5,0,3.5,7\}\) deg, with each position repeated 6 times. 6. Fifteen fixations in random order on a dot that was presented for 1.5 seconds at positions \(h=\{-7,-3.5,0,3.5,7\}\) deg and \(v=\{-5,0,5\}\) deg, with each position repeated 6 times. The fixation point consisted of a blue disk (1.2\({}^{\circ}\) diameter) with a red point (0.2\({}^{\circ}\) diameter) placed on its center. The total recording time for each participant was approximately 8.5 min, resulting in a database containing approximately 437500 FLEX eye images per participant at 1000 Hz and 219300 images at 500 Hz, along with the EyeLink data. #### High resolution eye image analysis Image analysis was performed frame-wise and adapted from [50] and [5]. In a first stage, pupil and CR centers were localized using the thresholding method. Briefly, fixed thresholds and analysis ROIs were manually set per participant to identify the pupil and CR in the images. The analysis was performed at different pupil and CR thresholds for each participant, and the thresholds that resulted in the best-precision pupil and CR signals were used. Using these thresholds, the images were binarized and, after morphological operations to fill holes, the pupil and CR were selected based on shape and size criteria. The centers of mass of these binary blobs were then computed; these will be referred to as the pupil and CR centers localized using the thresholding method. For the pupil, an ellipse was furthermore fit to the binary pupil blob. In a second stage, for both the pupil and the CR, 180\(\times\)180 pixel cutouts centered on the pupil and CR center locations identified by the thresholding method were made. To determine the CR center with the CNN, as was done in [5], a black circular mask with a 48-pixel radius was applied to the cutout before feeding it into the CR CNN. Similarly, before providing the pupil cutout to the pupil CNN, a middle gray elliptical mask was applied to the cutout that was 1.4 times larger than the pupil ellipse determined in stage 1. RMS-S2S precision [20, 47, 45] of the pupil and CR center locations estimated by both the thresholding and CNN methods was computed in a 200 ms window moved over the signals, after which each trial and signal's median RMS values were determined [22, 23, 43]. We computed calibrated gaze signals by subtracting the CR center location from the pupil center location and calibrating the resulting vector with data from the 3\(\times\)3 grid of fixation points from the first task. We used second-order polynomials in \(x\) and \(y\) with first-order interactions to calibrate these P-CR signals [8, 61]. To examine the quality of the resulting calibrated gaze data, we computed accuracy as the offset between the estimated gaze location and the target location for the gaze data from task 6, which involved repeated fixations on 15 targets. We determined the RMS-S2S precision of the calibrated gaze signals for all recorded trials in the same way as for the pupil and CR center signals, and computed the standard deviation of the signals using the same sliding window technique.
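As a hedged illustration of the two computations above, the sketch below implements a sliding-window RMS-S2S measure and a least-squares fit of the second-order polynomial calibration with a first-order interaction term; the design-matrix layout, window stepping, and toy data are our own choices, not those of the analysis code.

```python
import numpy as np

def rms_s2s(x, y):
    """Root mean square of sample-to-sample displacements of a 2D signal."""
    dx, dy = np.diff(np.asarray(x, float)), np.diff(np.asarray(y, float))
    return float(np.sqrt(np.mean(dx**2 + dy**2)))

def median_windowed_rms_s2s(x, y, window):
    """Median RMS-S2S over a sliding window (length in samples, e.g. 200 at 1000 Hz)."""
    vals = [rms_s2s(x[i:i + window], y[i:i + window])
            for i in range(len(x) - window + 1)]
    return float(np.median(vals))

def design_matrix(px, py):
    """Second-order terms in x and y with a first-order interaction term."""
    px, py = np.asarray(px, float), np.asarray(py, float)
    return np.column_stack([np.ones_like(px), px, py, px * py, px**2, py**2])

def fit_pcr_calibration(pcr_xy, target_deg):
    """Map P-CR difference vectors to target positions (deg) by least squares."""
    A = design_matrix(pcr_xy[:, 0], pcr_xy[:, 1])
    coeffs, *_ = np.linalg.lstsq(A, target_deg, rcond=None)
    return coeffs  # shape (6, 2): one column per gaze coordinate

# Toy example: a 3x3 grid of fixation targets and noisy fake P-CR vectors.
targets = np.array([(h, v) for v in (-5, 0, 5) for h in (-7, 0, 7)], float)
pcr = targets * 0.11 + np.random.default_rng(1).normal(0, 0.01, targets.shape)
coeffs = fit_pcr_calibration(pcr, targets)
gaze = design_matrix(pcr[:, 0], pcr[:, 1]) @ coeffs
print("mean absolute calibration residual (deg):", np.abs(gaze - targets).mean())
```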
#### Center of mass calculations In order to determine the center of mass or centroid of a feature, specifically the pupil within an image, we employed the following equations [60]: \[CoM_{x}=\sum_{j=1}^{m}\sum_{i=1}^{n}j\cdot I(i,j)/\sum_{j=1}^{m}\sum_{i=1}^{n}I(i,j) \tag{6}\] \[CoM_{y}=\sum_{j=1}^{m}\sum_{i=1}^{n}i\cdot I(i,j)/\sum_{j=1}^{m}\sum_{i=1}^{n}I(i,j) \tag{7}\] where \(I(i,j)\) represents the pixel intensity value at row \(i\) and column \(j\) in an image \(I\), and \((m,n)\) denote the dimensions of the image. These equations were also used to determine the pupil center location from the annotations provided in the OpenEDS 2019 dataset [17] and the OpenEDS 2020 dataset [52]. ## Code and Data Availability We have made the code used to generate LEyes simulations and the models employed in our experiments available on our GitHub repository at [https://github.com/dcnieho/Byrneetal_LEyes](https://github.com/dcnieho/Byrneetal_LEyes). We do not have permission to share the high-resolution eye videos we have collected.
2309.15974
The Hrushovski Property for Compact Special Cube Complexes
We show that any compact nonpositively curved cube complex $Y$ embeds in a compact nonpositively curved cube complex $R$ where each combinatorial injective partial local isometry of $Y$ extends to an automorphism of $R$. When $Y$ is special and the collection of injective partial local isometries satisfies certain conditions, we show that $R$ can be chosen to be special and the embedding $Y\hookrightarrow R$ can be chosen to be a local isometry.
Brahim Abdenbi, Daniel T. Wise
2023-09-27T19:45:23Z
http://arxiv.org/abs/2309.15974v2
# The Hrushovski property for compact special cube complexes ###### Abstract. We show that any compact nonpositively curved cube complex \(Y\) embeds in a compact nonpositively curved cube complex \(R\) where each combinatorial injective partial local isometry of \(Y\) extends to an automorphism of \(R\). When \(Y\) is special and the collection of injective partial local isometries satisfies certain conditions, we show that \(R\) can be chosen to be special and the embedding \(Y\hookrightarrow R\) can be chosen to be a local isometry. Key words and phrases:Hrushovski Property, Special Cube Complexes, Subgroup Separability 2020 Mathematics Subject Classification: 20F65 Research supported by NSERC ## 1. Introduction A well-known theorem of Hrushovski [10] states that for any finite graph \(X\), there exists a finite graph \(Z\) containing \(X\) as an induced subgraph with the property that every isomorphism between induced subgraphs of \(X\) extends to an automorphism of \(Z\). Hrushovski's motivation stemmed from the study of automorphisms of the countable random graph. This type of property is known as the _Extension Property for Partial Automorphisms_ or the _Hrushovski Property_. More generally, a class of spaces \(\mathcal{C}\) has the Hrushovski property if for each \(X\in\mathcal{C}\), there is \(Z\in\mathcal{C}\) containing \(X\) such that partial isomorphisms between subspaces of \(X\) extend to automorphisms of \(Z\). Often, the embedding \(X\hookrightarrow Z\) is required to have certain properties. For example, in the original Hrushovski property, we require that \(X\) be an induced subgraph of \(Z\). Our generalization to _special cube complexes_ requires the embedding to be a _local isometry_. See Section 2 for definitions. Various classes of spaces were shown to have this property. For example, the Hrushovski property was established for finite metric spaces by Solecki [14] and independently by Vershick [15]. More recently, it was established for generalized metric spaces by Conant [16]. Structures of finite relational languages were shown to have this property by Herwig and Lascar [17], and by Hodkinson and Otto [1]. Similarly, the Hrushovski property was established for various classes of graphs by Herwig [10] and for hypertournaments by Huang, Pawliuk, Sabok, and Wise [12], provided that the partial isomorphisms have disjoint domains and ranges. We direct the reader to recent surveys by Nguyen Van The [18] and Lascar [19] for detailed discussions of related results. In this paper, we establish the Hrushovski property for compact nonpositively curved cube complexes, and under certain conditions, for compact special cube complexes as well. Since their introduction by Gromov [1], nonpositively curved cube complexes have been used in various lines of research within geometric group theory and played an important role in recent developments. Their connection to group theory is due to a construction by Sageev [10] that takes as input a group \(G\) and a codimension-\(1\) subgroup \(H\subset G\), and outputs a CAT(0) cube complex \(X\) (that is, \(X\) is nonpositively curved and simply connected) with a nontrivial group action \(G\curvearrowright X\). Special cube complexes were introduced by Haglund-Wise [13] as nonpositively curved cube complexes whose hyperplanes avoid certain configurations. _Hyperplanes_ are connected subspaces built from midcubes. See Section 2 for details. 
It was shown in [13] that a nonpositively curved cube complex \(X\) is _special_ if and only if \(\pi_{1}X\) embeds in a _raag_ (right angled Artin group). Raags are rather "simple" groups that are known to be linear and have many desirable residual properties. See Charney [1] for a brief introduction to raags. This remarkable connection led to several interesting results in combinatorial group theory and low-dimensional topology (see for example [1] and [15]), and makes special cube complexes a particularly natural generalization of graphs. So, the question of whether special cube complexes have the Hrushovski property is a natural one to pursue. Our approach to this question has similarities with the work of Herwig and Lascar who showed in [14] that the Hrushovski property for certain spaces is related to the _profinite topology_ of free groups. This is the topology generated by finite index subgroups. We make extensive use of this relationship, albeit using a different construction, namely the _horizontal quotients_ of _graphs of spaces_. A topological space \(X\) is a _graph of spaces_ if there is a graph \(\Gamma_{X}\) and quotient map \(X\to\Gamma_{X}\) that induces a decomposition of \(X\) into _vertex-spaces_\(\left\{X_{v}\,:\,v\in\Gamma_{X}^{0}\right\}\) and _thick edge-spaces_\(\left\{X_{e}\times I\,:\,e\in\Gamma_{X}^{1}\right\}\) where \(I=[-1,1]\) and each \(X_{v}\) (resp. \(X_{e}\times I\)) is the preimage of a vertex \(v\) (resp. edge \(e\)) of \(\Gamma_{X}\). The _horizontal quotient_\(X^{E}\) is obtained from \(X\) by collapsing all thick edge-spaces along their \(I\) factor. The general idea is outlined below. See Figure 1. Starting with a space \(Y\) and a collection of partial isomorphisms \(\mathcal{O}=\left\{\varphi_{i}:Y_{i}\subset Y\to Y\right\}_{i}\), we build the _realization_\(X\) of the pair \(\left\{Y,\mathcal{O}\right\}\) by attaching the mapping cylinders of each \(\varphi_{i}\) to \(Y\) in the obvious way. The resulting space \(X\) has a graph of spaces structure \(X\to\Gamma_{X}=B\) where \(B\) is a bouquet of circles. Each covering map \(\overline{B}\to B\) induces a covering map \(\overline{X}\to X\) where \(\overline{X}\to\overline{B}=\Gamma_{\overline{X}}\) is a graph of spaces decomposition so that the following diagram commutes: \[\begin{CD}\overline{X}@>{}>{}>\overline{B}\\ @V{}V{}V@V{}V{}V\\ X@>{}>{}>B\end{CD}\] For each automorphism of \(\overline{B}\), there is an automorphism of \(\overline{X}\) which preserves the commutativity of the above diagram. We then take the horizontal quotient \(\overline{X}\to\overline{X}^{E}\) and observe that any automorphism of \(\overline{X}\) descends to an automorphism of \(\overline{X}^{E}\). The horizontal quotient amounts to gluing the vertex-spaces by identifying them along copies of domains and ranges of the partial isomorphisms. Furthermore, with the right choice of \(\overline{B}\) (and thus of \(\overline{X}\)), we can ensure that \(Y\) embeds in \(\overline{X}^{E}\) and each partial isomorphism of \(Y\subset\overline{X}^{E}\) extends to an automorphism of \(\overline{X}^{E}\). This is achieved using the following theorem by Ribes-Zalesskii [10]: **Theorem 1.1** (Ribes-Zalesskii [10]).: _Let \(H_{1},\dots,H_{m}\) be finitely generated subgroups of a free group \(F\). Then \(H_{1}H_{2}\cdots H_{m}\) is closed in the profinite topology._ This generalizes a result of Hall [1] on finitely generated subgroups of free groups. 
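For orientation, the way Theorem 1.1 is typically applied can be unpacked as follows (our paraphrase of the standard reformulation of closedness, not a statement quoted from [10]): for finitely generated subgroups \(H_{1},\dots,H_{m}\) of a free group \(F\), \[g\notin H_{1}H_{2}\cdots H_{m}\;\Longrightarrow\;\exists\,\phi\colon F\twoheadrightarrow Q\ \text{finite such that}\ \phi(g)\notin\phi(H_{1})\phi(H_{2})\cdots\phi(H_{m}),\] that is, any element outside the product of subgroups is excluded from it in some finite quotient of \(F\); the case \(m=1\) recovers Hall's separability of finitely generated subgroups.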
Theorem 1.1 provides a way of choosing an appropriate regular cover \(\overline{X}\) so that \(\overline{X}^{E}\) has the desired properties. This, however, comes at the cost of imposing some finiteness condition on \(Y\), hence the "compactness" requirement in our results. In the case of special cube complexes, we additionally wish to have \(\overline{X}^{E}\) be special and the embedding \(Y\to\overline{X}^{E}\) be a local isometry. To this end, we require that the collection of the partial local isometries be "controlled". For example, we require that each \(\varphi_{i}\) map non-crossing hyperplanes to non-crossing hyperplanes. See Definition 3.15 and Definition 5.7 for details. This is a necessary condition in order to avoid creating artificial pathologies that would make specialness fail in \(\overline{X}^{E}\) regardless of the choice of the covering space \(\overline{X}\). In our construction, if \(\overline{\Phi}_{i}^{E}\) is an automorphism of \(\overline{X}^{E}\) that extends a partial isomorphism \(\varphi_{i}\), and if \(\varphi_{i}\) restricts to \(\varphi_{j}\), then \(\overline{\Phi}_{i}^{E}\) also extends \(\varphi_{j}\). Thus, our construction can be made more efficient by removing any \(\varphi_{j}\in\mathcal{O}\) that is a restriction of another \(\varphi_{i}\in\mathcal{O}\). One drawback of this construction is the fact that the automorphisms of \(\overline{X}^{E}\) extending \(\varphi\) and \(\varphi^{-1}\) are not necessarily inverses of each other. Similarly, a composition of partial isomorphisms does not necessarily extend to the composition of the corresponding automorphisms. One can possibly remedy this by taking further quotients of \(\overline{X}^{E}\) to ensure the desired automorphisms are equal. We have not explored this avenue. Several statements in this work could be generalized in various directions. For example, this construction works on nonpositively curved metric spaces. See Remark 5.6. However, we focus on compact nonpositively curved cube complexes and partial local isometries. This is arguably a natural generalization of the original statement about graphs. The main results in this text are: **Theorem 1.2**.: _Let \(Y\) be a compact nonpositively curved cube complex and let \(\mathcal{O}\) be the set of injective partial local isometries of \(Y\). Then \(Y\) embeds in a compact nonpositively curved cube complex \(R\) where each \(\varphi\in\mathcal{O}\) extends to an automorphism \(\Phi\in\operatorname{Aut}\left(R\right)\)._ **Theorem 1.3**.: _Let \(Y\) be a compact special cube complex and let \(\mathcal{O}\) be a controlled collection of injective partial local isometries of \(Y\). Then there exists a compact special cube complex \(R\) containing \(Y\) as a locally convex subcomplex such that each \(\varphi\in\mathcal{O}\) extends to an automorphism \(\Phi\in\operatorname{Aut}\left(R\right)\)._ In Sections 2 and 3 we provide definitions and background. Section 4 uses subgroup separability of free groups to find finite covers whose horizontal quotients have certain desired properties. In Section 5 we prove Theorem 5.4 and Theorem 5.10. **Acknowledgement**: We are extremely grateful to Frederic Haglund and Piotr Przytycki for their helpful feedback and corrections. We also thank the anonymous referees for many corrections and suggestions. ## 2. Special Cube Complexes ### Cube Complexes An \(n\)-_cube_ is a copy of \(I^{n}\) where \(I=[-1,1]\subset\mathbb{R}\) and \(n\geq 0\). Its faces are restrictions of some coordinates to \(-1\) or \(1\). 
A _cube complex_ is a cell complex built from cubes glued together along their faces. The dimension of a cube complex is the supremum of the dimensions of the cubes contained in it. Let \(v=\left(\epsilon_{i}\right)_{i=1}^{n}\) be a vertex of \(I^{n}\); so each \(\epsilon_{i}=\pm 1\). The \(v\)-_corner_ of \(I^{n}\) is the simplex spanned by \(\left\{w_{j}\right\}_{j=1}^{n}\) where each \(w_{j}\) is obtained from \(v\) by replacing \(\epsilon_{j}\) by \(\dfrac{\epsilon_{j}}{2}\). Let \(X\) be a cube complex and \(C\subset X\) be the image of a map \(I^{n}\to X\). An \(x\)-_corner_ of \(C\) for \(x\in X^{0}\) is the union of images of \(v\)-corners of \(I^{n}\) where \(v\mapsto x\). In general, if \(J=\prod_{i=1}^{n}\epsilon_{i}\) is an \(m\)-dimensional subcube of \(I^{n}\) where \[\epsilon_{i}\in\left\{\left\{-1\right\},\left\{1\right\},\left[-1,1\right]\right\}\] then the \(J\)-_corner_ of \(I^{n}\) is the simplex spanned by the points \(\left\{w_{j}\right\}_{j=1}^{n-m}\) obtained from \(J\) as follows: Given the _center of mass_ of \(J\), denoted by \(v=\left(t_{k}\right)_{k=1}^{n}\) where \[t_{k}=\left\{\begin{array}{ccc}0&\text{if}&\epsilon_{k}=[-1,1]\\ 1&\text{if}&\epsilon_{k}=\left\{1\right\}\\ -1&\text{if}&\epsilon_{k}=\left\{-1\right\}\end{array}\right.,\] the point \(w_{j}\) is obtained from \(v\) by replacing the \(j^{\text{th}}\) nonzero coordinate \(t\) with \(\dfrac{t}{2}\). Note that each point \(w_{j}\in\left\{w_{j}\right\}_{j=1}^{n-m}\) corresponds to a cube containing \(J\) as a codimension-\(1\) subcube. Let \(D\) be a subcube of an \(n\)-cube \(C\) of \(X\). A \(D\)-_corner_ of \(C\) is the image of a \(J\)-corner of \(I^{n}\) under a map \(I^{n}\to X\), where \(\left(I^{n},J\right)\rightarrow\left(C,D\right)\). The _link_ of \(D\) in \(X\), denoted by \(\operatorname{link}_{X}\left(D\right)\) is the union of all \(D\)-corners of cubes containing \(D\). Note that \(\operatorname{link}_{X}\left(D\right)\) is a simplex complex and it is a subspace of \(X\) but not a subcomplex. We write \(\operatorname{link}\left(D\right)\) instead of \(\operatorname{link}_{X}\left(D\right)\) when \(X\) is clear from context. A cube complex \(X\) is _simple_ if the link of each cube in \(X\) is simplicial. **Lemma 2.1**.: _A cube complex \(X\) is simple if the link of each cube of \(X\) has no loops and no bigons._ Proof.: Let \(D\subset X\) be an \(n\)-cube. Let \(\sigma_{1}\) and \(\sigma_{2}\) be distinct \(m\)-simplices in \(\operatorname{link}\left(D\right)\) with \(\sigma_{1}\cap\sigma_{2}\neq\emptyset\) and \(m\geq 1\). If \(\sigma_{1}\) is not embedded then \(\operatorname{link}\left(D\right)\) has a loop. If \(\partial\sigma_{1}=\partial\sigma_{2}\), then there exists an \(\left(n+m-1\right)\)-cube \(Y\supset D\) such that \(\operatorname{link}\left(Y\right)\) contains a bigon. Indeed, the case \(m=1\) corresponds to \(Y=D\) with a bigon in \(\operatorname{link}\left(D\right)\). For \(m\geq 2\), the \(m\)-simplices \(\sigma_{1}\) and \(\sigma_{2}\) are \(D\)-corners of distinct \(\left(n+m+1\right)\)-cubes \(A_{1}\) and \(A_{2}\) intersecting along their faces. An \(\left(m-2\right)\)-simplex \(\Delta\subset\sigma_{1}\cap\sigma_{2}\) is a \(D\)-corner of an \(\left(n+m-1\right)\)-cube \(Y\supset D\). Moreover, two distinct \(\left(m-1\right)\)-simplices containing \(\Delta\) are \(D\)-corners of distinct \(\left(n+m\right)\)-cubes \(B\supset Y\) and \(B^{\prime}\supset Y\) that are shared faces of \(A_{1}\) and \(A_{2}\). 
We can see that the \(Y\)-corners of \(B\) and \(B^{\prime}\) are \(0\)-simplices that are boundaries of the \(1\)-simplices corresponding to the \(Y\)-corners of \(A_{1}\) and \(A_{2}\). ### Nonpositive curvature A simple cube complex \(X\) is nonpositively curved if it satisfies Gromov's no-\(\triangle\) property [1], which requires that \(3\)-cycles in \(\operatorname{link}\left(D\right)\) bound \(2\)-simplices for each cube \(D\subset X\). An equivalent criterion for nonpositive curvature states that a cube complex is nonpositively curved if the links of its \(0\)-cubes are flag. A simplicial complex is _flag_ if any collection of \(\left(n+1\right)\) pairwise adjacent \(0\)-simplices spans an \(n\)-simplex. ### Local Isometries A subcomplex \(K\) of a simplicial complex \(L\) is _full_ if any simplex of \(L\) whose \(0\)-simplices lie in \(K\) is itself in \(K\). A subcubecomplex \(A\subset B\) is _locally convex_ if \(\operatorname{link}_{A}\left(x\right)\subset\operatorname{link}_{B}\left(x\right)\) is a full subcomplex for every \(0\)-cube \(x\in A\). A map \(X\to Y\) of cube complexes is _combinatorial_ if open cells are mapped homeomorphically to open cells, where each homeomorphism is an isometry. It is _cubical_ if for each \(0\leq k\leq\dim\left(X\right)\), the \(k\)-skeleton of \(X\) is mapped to the \(k\)-skeleton of \(Y\). A combinatorial map \(\Phi:X\to Y\) is an _immersion_ if the restriction \(\operatorname{link}\left(x\right)\rightarrow\operatorname{link}\left(\Phi \left(x\right)\right)\) is an embedding for each \(0\)-cube \(x\in X\). If \(X\) and \(Y\) are nonpositively curved and \(\operatorname{link}\left(x\right)\) embeds as a full subcomplex of \(\operatorname{link}\left(\Phi\left(x\right)\right)\) then \(\Phi\) is a _local isometry_. Equivalently, a combinatorial locally injective map \(\Phi:X\to Y\) of nonpositively curved cube complexes is a local isometry if \(\Phi\) has _no missing squares_ in the sense that if two \(1\)-cubes \(a_{1},a_{2}\) at a \(0\)-cube \(x\) map to \(\Phi\left(a_{1}\right),\Phi\left(a_{2}\right)\) that bound the corner of a \(2\)-cube at \(\Phi\left(x\right)\), then \(a_{1},a_{2}\) already bound the corner of a \(2\)-cube at \(x\). Note that when \(\Phi\) is an injective local isometry, \(\Phi\left(X\right)\) embeds as a locally convex subcomplex of \(Y\). ### Immersed Hyperplanes A _midcube_ of an \(n\)-cube is the subspace obtained by restricting one coordinate to \(0\). Note that a midcube of an \(n\)-cube is isometric to an \(\left(n-1\right)\)-cube. An _immersed hyperplane_\(H\) in a nonpositively curved cube complex \(X\) is a component of the cube complex \(M\left/\sim\right.\) where \(M\) denotes the disjoint union of midcubes of \(X\) and \(\sim\) is the equivalence relation induced by identifying faces of midcubes under the inclusion map into \(X\). A 1-cube of \(X\) is _dual_ to \(H\) if its midcube is in \(H\). We note that the edges dual to \(H\) form an equivalence class generated by _elementary parallelisms_ of 1-cubes, where two 1-cubes are _elementary parallel_ if they appear on opposite sides of a 2-cube. The _carrier_ of \(H\), denoted by \(N\left(H\right)\), is the cubical neighborhood of \(H\) formed by the union of the closed cubes whose intersection with \(H\) is nonempty. ### Special Cube Complexes An immersed hyperplane \(H\) in \(X\)_self-crosses_ if it contains two distinct midcubes from the same cube. It is _two-sided_ if the combinatorial immersion \(H\to X\) extends to \(H\times I\to X\). 
In this case, the 1-cubes dual to \(H\) can be oriented in such a way that any two dual 1-cubes lying in the same 2-cube are oriented in the same direction. An immersed hyperplane that is not two-sided is _one-sided_. \(H\) _self-osculates_ if it is dual to two oriented 1-cubes that share the same initial or terminal 0-cube and do not form a corner of a 2-cube. Two distinct immersed hyperplanes, \(H,H^{\prime}\), _cross_ if they contain distinct midcubes of the same cube. They _osculate_ if they are dual to two 1-cubes that share a 0-cube and do not form a corner of a 2-cube. Two distinct immersed hyperplanes _inter-osculate_ if they both cross and osculate. See Figure 2.

Figure 2. From left to right: Self-crossing, one-sidedness, self-osculation, and inter-osculation.

A nonpositively curved cube complex is _special_ if it satisfies the following:

1. No immersed hyperplane self-crosses;
2. No immersed hyperplane is one-sided;
3. No immersed hyperplane self-osculates;
4. No two immersed hyperplanes inter-osculate.

## 3. Horizontal Quotient of a Graph of Spaces

### Graph of Spaces

An _undirected graph_ \(\Gamma\left(V,E\right)\) is a 1-dimensional \(CW\)-complex whose _vertices_ and _edges_, denoted by \(V=\Gamma^{0}\) and \(E=\Gamma^{1}\), are the 0-cells and open 1-cells, respectively. There exist two _incidence_ maps \(\tau_{1},\tau_{2}:E\to V\) mapping each edge \(e\in E\) to its _boundary vertices_ \(\tau_{1}\left(e\right),\ \tau_{2}\left(e\right)\), called initial and terminal vertex, respectively. A _graph of spaces_ \(X\) with underlying graph \(\Gamma\left(V,E\right)\), _vertex-spaces_ \(\left\{X_{v}\right\}_{v\in V}\), and _thick edge-spaces_ \(\left\{X_{e}{\times}I\right\}_{e\in E}\) is a topological space \(X\) obtained as a quotient of \(\left\{X_{v}\right\}_{v\in V}\) and \(\left\{X_{e}{\times}I\right\}_{e\in E}\) in the following manner: for each edge \(e\in E\) with boundary vertices \(v_{1}=\tau_{1}\left(e\right),v_{2}=\tau_{2}\left(e\right)\), the corresponding thick edge-space \(X_{e}\times I\) is attached to the vertex-spaces \(X_{v_{1}},X_{v_{2}}\) via _attaching maps_ which are also denoted by \(\tau_{1}:X_{e}\times\{-1\}\to X_{v_{1}}\) and \(\tau_{2}:X_{e}\times\{1\}\to X_{v_{2}}\). For simplicity, the isomorphic subspaces \(X_{e}\times\{-1\}\subset X_{v_{1}}\) and \(X_{e}\times\{1\}\subset X_{v_{2}}\) are referred to as _edge-spaces_ of \(X_{v_{1}}\) and \(X_{v_{2}}\), respectively. In this text, we always assume \(X_{e}\) is connected and the attaching maps of \(X_{e}\times I\) are injective and combinatorial. The graph \(\Gamma\left(V,E\right)\) is the quotient of \(X\) obtained by mapping \(X_{v}\) to \(v\) and \(X_{e}\times(-1,1)\) to \(e\) for each \(v\in V\) and \(e\in E\). We will henceforth denote a graph of spaces \(X\) with underlying graph \(\Gamma_{X}\) by the corresponding canonical quotient map \(X\to\Gamma_{X}\).

### Horizontal Quotient

Let \(X\to\Gamma_{X}\) be a graph of spaces and let \(E\) be the edge set of \(\Gamma_{X}\). Given an edge \(e\in E\), let \(\sim_{e}\) be the equivalence relation on \(X_{e}\times I\) where for all \(s,t\in I=[-1,1]\), we have \((x,t)\sim_{e}(y,s)\) if and only if \(x=y\). Let \(X^{e}=X/\sim_{e}\) be the corresponding quotient. The _horizontal quotient_ of \(X\) along the edge \(e\), denoted by \(q_{e}:X\to X^{e}\), is the quotient map \(X\to X^{e}=X/\sim_{e}\).
In general, if \(E^{\prime}=\{e_{1},\ldots,e_{n}\}\subset E\), then the horizontal quotient of \(X\) along \(E^{\prime}\) is the quotient \(X\to X^{E^{\prime}}=X/\sim_{E^{\prime}}\) where \(\sim_{E^{\prime}}\) is the equivalence relation spanned by \(\sim_{e}\) for \(e\in E^{\prime}\). When \(E^{\prime}=E\), we call \(X^{E}\)_the seamless graph of spaces_ associated with \(X\) and the corresponding map is the _horizontal quotient_ which we denote by \(q:X\to X^{E}\). (This terminology was introduced in [10]). Note that the letter \(E\) in \(X^{E}\) is generic in the sense that it refers to the set of all edges of a given graph. For example, given two graphs of spaces \(X\to\Gamma_{X}\) and \(Y\to\Gamma_{Y}\), their horizontal quotients will both be denoted by \(X^{E}\) and \(Y^{E}\), respectively, even when \(\Gamma_{X}\neq\Gamma_{Y}\). The horizontal quotient \(q\) is _strict_ if the restriction of \(q\) to each vertex-space is an embedding. The \(E\)-_parallelism class_ of a subset \(A\subset X\) is \(q^{-1}\left(q\left(A\right)\right)\), that is, the set of all points of \(X\) mapping to \(q\left(A\right)\). When \(A\) is a point, \(q^{-1}\left(q\left(A\right)\right)\) is the _horizontal graph_ associated to \(A\). Note that the restriction of the map \(X\to\Gamma_{X}\) to a horizontal graph in \(X\) is an immersion since the attaching maps of thick edge-spaces are embeddings. In particular, if \(X\to\Gamma_{X}\) is a tree of spaces, then \(q\) is strict since the horizontal graphs are trees that intersect each vertex-space of \(X\) in at most one point. When \(X\) is a graph of cube complexes, an \(n\)-cube \(C\subset X\) is _vertical_ if \(q\left(C\right)\) is also an \(n\)-cube. **Remark 3.1**.: In the case of a graph of cube complexes \(X\), we make the following remarks: 1. The quotient \(X^{E}\) is not necessarily a cube complex as cubes of \(X\) may be quotiented to simplices in \(X^{E}\). 2. When \(q\) is strict, the restriction of \(q\) to each cube \((C\times I)\subset(X_{e}\times I)\), where \(C\subset X_{e}\) corresponds to the orthogonal projection (along \(I\)) onto \(C\). Then \(q\) is cubical and \(X^{E}\) is a cube complex. Moreover, \(q\) preserves the orientation of edges in \(X_{e}\). 3. When \(X\) is nonpositively curved, the horizontal quotient \(X^{E}\) is not necessarily nonpositively curved. See Figure 3. **Lemma 3.2**.: _Let \(X\to\Gamma_{X}\) be a graph of cube complexes with a strict horizontal quotient. Then for each immersed hyperplane \(U\xrightarrow{f}X^{E}\), there exists an immersed hyperplane \(V\xrightarrow{g}X\), with \(f\left(U\right)=\left(q\circ g\right)\left(V\right)\). Furthermore,_ 1. _if_ \(V\) _is two-sided then so is_ \(U\)_;_ 2. _if_ \(U\xrightarrow{f}X^{E}\) _self-crosses, then_ \(V\xrightarrow{g}X\) _self-crosses._ _Consequently, if the hyperplanes of \(X\) are two-sided/embedded then so are the hyperplanes in \(X^{E}\)._ Proof.: Since \(q\) is strict, it is cubical and so \(X^{E}\) is a cube complex. Let \(U\xrightarrow{f}X^{E}\) be an immersed hyperplane. Then the parallelism class of \(1\)-cubes dual to \(U\) lifts to a parallelism class of \(1\)-cubes in \(X\). The latter corresponds to an immersed hyperplane \(V\xrightarrow{g}X\) that quotients onto \(U\), and so \(f\left(U\right)=\left(q\circ g\right)\left(V\right)\). Now suppose \(V\xrightarrow{g}X\) is a two-sided immersed hyperplane. 
If \(g\left(V\right)\subset X_{v}\) for some vertex-space \(X_{v}\), then \(q\left(g\left(V\right)\right)\) is two-sided since \(q\) is a strict horizontal quotient and thus restricts to an embedding on each vertex-space. If on the other hand, \(g\left(V\right)\) has nonempty intersection with some edge-space \(X_{e}\times I\) attached to vertex-spaces \(X_{v_{1}},X_{v_{2}}\), then there exist vertical \(1\)-cubes \(A_{1}\in X_{v_{1}}\) and \(A_{2}\in X_{v_{2}}\) dual to \(g\left(V\right)\) that lie on opposite sides of a \(2\)-cube \(B\subset X_{e}\times I\). Since \(V\) is two-sided, there is a consistent way of orienting \(A_{1}\) and \(A_{2}\) so that their initial points lie on the same \(1\)-cube of \(B\). Taking the horizontal quotient along the edge-space \(X_{e}\times I\), induces an orientation on \(q\left(A_{1}\right)=q\left(A_{2}\right)\) consistent with the orientation of the vertical \(1\)-cubes of \(q\left(g\left(V\right)\right)\). By taking consecutive quotients along all the edge-spaces intersecting \(g\left(V\right)\), the two-sidedness is preserved at each stage and the claim follows. Finally, suppose \(U\xrightarrow{f}X^{E}\) is not injective. Then there exists a \(2\)-cube \(S\subset X^{E}\) where \(f\left(U\right)\) self-crosses. The preimage of \(S\) contains a \(2\)-cube where the immersed hyperplane \(g\left(V\right)\) self-crosses. **Remark 3.3**.: Let \(X\rightarrow\Gamma_{X}\) be a graph of cube complexes and let \(q:X\to X^{E}\) be the horizontal quotient. Let \(V\xrightarrow{g}X\) be an immersed hyperplane. Then \(\left(q\circ g\right)\left(V\right)\) is not necessarily the image of an immersed hyperplane in \(X^{E}\). Indeed, not all midcubes of \(X\) map to midcubes of \(X^{E}\). In particular, each immersed hyperplane \(g\left(V\right)=X_{e}\times\left\{0\right\}\subset\left(X_{e}\times\left[-1,1\right]\right)\) projects to a subcomplex \(q\left(g\left(V\right)\right)\subset X^{E}\) that is not a hyperplane. Figure 3. The horizontal quotients of these graphs of cube complexes fail the link condition for nonpositive curvature. Indeed, from bottom left to bottom right: \(\operatorname{link}\left(v\right)\) is a circle, a bigon, and a \(3\)-cycle that does not bound a \(2\)-simplex. In each case, \(\operatorname{link}\left(v\right)\) is not flag. **Definition 3.4**.: Let \(X\rightarrow\Gamma_{X}\) be a graph of cube complexes and \(q:X\to X^{E}\) be the horizontal quotient. Let \(x\in X^{E}\) be a \(0\)-cube and let \(q^{-1}\left(x\right)\) be the corresponding horizontal graph. Let \(\Gamma_{0}\subset\Gamma_{X}\) be the image of \(q^{-1}\left(x\right)\) under the quotient \(X\rightarrow\Gamma_{X}\). Let \(V_{0}\) and \(E_{0}\) be the vertices and edges of \(\Gamma_{0}\) and let \(\left\{X_{v}\ :\ v\in V_{0}\right\}\) and \(\left\{X_{e}\times I\ :\ e\in E_{0}\right\}\) be the corresponding vertex-spaces and thick edge-spaces in \(X\), respectively. Let \(\left\{x_{1},\ldots\right\}\) be the \(0\)-cubes of \(q^{-1}\left(x\right)\). _The induced graph of links_ of \(x\) is the graph of spaces \(Y\subset X\) with underlying graph \(q^{-1}\left(x\right)\), whose vertex-spaces are \(\operatorname{link}_{X_{v_{i}}}\left(x_{i}\right)\) and whose thick edge-spaces are \(\left(\operatorname{link}_{X_{e_{ij}}}\left(x_{i}\right)\times I\right)\), where \(X_{v_{i}}\in\left\{X_{v}\ :\ v\in V_{0}\right\}\) is the vertex-space containing \(x_{i}\) and \(X_{e_{ij}}\times I\in\left\{X_{e}\times I\ :\ e\in E_{0}\right\}\) are the thick edge-spaces containing \(x_{i}\). 
Note that taking the quotient \(X\to X^{E}\) induces a quotient \(Y\to Y^{E}\) where \(\operatorname{link}_{X^{E}}\left(x\right)=Y^{E}\).

**Remark 3.5**.: When the edge-spaces of \(X\) are embedded locally convex subcomplexes, the edge-spaces of an induced graph of links are embedded full subcomplexes. However, the vertex-spaces of an induced graph of links are not necessarily connected.

**Lemma 3.6**.: _Let \(Y=A\cup_{{}_{C}}B\) where \(A,B\) are simplicial complexes and \(C\) embeds as a full subcomplex in \(A\) and \(B\). Then \(Y\) is simplicial and \(A\) embeds as a full subcomplex of \(Y\)._

Proof.: We show the nonempty intersection of two simplices is a simplex. Let \(\sigma_{1},\sigma_{2}\subset Y\) be simplices with \(\sigma_{1}\cap\sigma_{2}\neq\emptyset\). Each simplex of \(Y\) is either in \(A\) or in \(B\). Suppose \(\sigma_{1}\subset A,\ \sigma_{2}\subset B\) with \(\sigma_{1}\not\subset B\) and \(\sigma_{2}\not\subset A\). Let \(Z\) be the set of \(0\)-simplices of \(\sigma_{1}\cap\sigma_{2}\) and note that \(Z\subset C\). Then \(Z\) spans simplices \(\delta_{1}\subset A\) and \(\delta_{2}\subset B\). Since \(C\) is full in \(A\) and \(B\), we see that \(\delta_{1}\) and \(\delta_{2}\) are the same simplex of \(C\). That is, \(\sigma_{1}\cap\sigma_{2}\) is a simplex. To show \(A\hookrightarrow Y\) is full, we show that whenever a set of \(0\)-simplices \(S\subset A\) spans a simplex \(\Delta\), we have \(\Delta\subset A\). Indeed, suppose \(\Delta\subset B\); then \(S\subset C\). But \(C\) is full in \(B\) and so \(\Delta\subset C\subset A\).

**Lemma 3.7**.: _Let \(Y=A\cup_{{}_{C}}B\) where \(A,B\) are flag complexes and \(C\) embeds as a full subcomplex in \(A\) and \(B\). Then \(Y\) is flag and \(A\) embeds as a full subcomplex of \(Y\)._

Proof.: \(Y\) is simplicial by Lemma 3.6. To show flagness, let \(K\subset Y\) be an \(n\)-clique. We claim that \(K\subset A\) or \(K\subset B\). We proceed by induction on \(n\). The base case \(n=0\) is trivial. Suppose the claim holds for all cliques of size \(\leq n\) and let \(K\) be an \((n+1)\)-clique. By induction, every proper subclique of \(K\) lies in either \(A\) or \(B\). Suppose \(K\not\subset A\) and let \(\sigma_{1}\in K^{0}\) be a \(0\)-simplex with \(\sigma_{1}\notin A\). Then \(\sigma_{1}\in B\) and for any \(0\)-simplex \(\sigma_{2}\in K^{0}\), the \(1\)-simplex \(\sigma_{1}\sigma_{2}\) lies in \(B\). Indeed, if \(\sigma_{1}\sigma_{2}\) lies in \(A\), then \(\sigma_{1}\) lies in \(A\) which is a contradiction. Therefore, \(\sigma_{2}\in B\) and so \(K^{0}\subset B\). Moreover, given \(0\)-simplices \(\sigma_{2}\) and \(\sigma_{3}\) in \(K^{0}\), the \(1\)-simplex \(\sigma_{2}\sigma_{3}\) lies in \(B\). To see this, suppose \(\sigma_{2}\sigma_{3}\in A\). Then \(\sigma_{2}\) and \(\sigma_{3}\) lie in \(A\cap B=C\). But \(C\) is full in \(A\) and so \(\sigma_{2}\sigma_{3}\in C\subset B\). Since \(B\) is flag, \(K\) bounds a simplex. Let \(K\subset Y\) be a clique such that \(K^{0}\subset A\). Then by the previous part, \(K^{1}\subset A\) and it spans a simplex \(\Delta\subset A\). Hence \(A\) embeds as a full subcomplex of \(Y\).

**Lemma 3.8**.: _Let \(Y\) be a tree of spaces where each vertex-space is a flag complex and each edge-space embeds as a full subcomplex in its vertex-space. Then \(Y^{E}\) is flag._

Proof.: Any failure of flagness arises in a quotient of a finite subtree. Therefore, it suffices to prove the claim for finite trees. This follows by induction from Lemma 3.7.
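For illustration (these examples are ours and not used later), the fullness hypothesis in Lemma 3.6 and Lemma 3.7 cannot be dropped. If \(A\) and \(B\) are single \(1\)-simplices glued along their common boundary \(C=\{x,y\}\), then \(C\) is not full in either factor and \(A\cup_{{}_{C}}B\) is a bigon, which is not simplicial. Similarly, gluing a path of two \(1\)-simplices \(x\)–\(z\)–\(y\) to a single \(1\)-simplex \(xy\) along \(C=\{x,y\}\) (here \(C\) fails to be full only in the second factor) yields an empty \(3\)-cycle, which is simplicial but not flag.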
Note that a full subcomplex of a full subcomplex is full. **Corollary 3.9**.: _Let \(\widehat{X}\rightarrow\Gamma_{\widehat{X}}\) be a tree of nonpositively curved cube complexes where the attaching maps of edge-spaces are injective local isometries. Then \(\widehat{X}^{E}\) is nonpositively curved._ Proof.: Let \(x\) be a \(0\)-cube in \(\widehat{X}^{E}\) and let \(Y\) be the corresponding induced graph of links with underlying graph \(q^{-1}\left(x\right)\). Since \(q^{-1}\left(x\right)\) immerses in \(\Gamma_{\widehat{X}}\), it is a tree. Then \(Y\) is a tree of flag complexes with embedded full edge-spaces. By Lemma 3.8, the horizontal quotient \(Y^{E}\) is flag, and so is \(\operatorname{link}_{\widehat{X}^{E}}\left(x\right)\). **Definition 3.10**.: Let \(X\) be a graph of cube complexes with horizontal quotient \(q:X\to X^{E}\). Let \(G\) be a connected subgraph of a horizontal graph in \(X\). Then: 1. A hyperplane \(U\)_osculates with_\(G\) if \(U\) is dual to a vertical \(1\)-cube whose initial or terminal \(0\)-cube lies in \(G\). 2. A two-sided hyperplane \(U\)_self-osculates at_\(G\) if \(U\) is dual to oriented vertical \(1\)-cubes \(a\) and \(b\) whose initial (or terminal) \(0\)-cubes \(t_{a}\) and \(t_{b}\) lie in \(G\), where \(q\left(a\right)\) and \(q\left(b\right)\) are not consecutive \(1\)-cubes of a \(2\)-cube in \(X^{E}\), and \(q\left(a\right)\neq q\left(b\right)\). When \(t_{a}\neq t_{b}\), the hyperplane \(U\)_remotely self-osculates at_\(G\), in which case we say \(X\) has _remote self-osculation_. 3. A pair of distinct crossing hyperplanes \(U\) and \(V\)_inter-osculate at_\(G\) if there are vertical \(1\)-cubes \(a\) and \(b\), with \(a\) dual to \(U\) and \(b\) dual to \(V\), with boundary \(0\)-cubes \(t_{a}\) and \(t_{b}\) lying in \(G\), but \(q\left(a\right)\) and \(q\left(b\right)\) are not consecutive \(1\)-cubes of a \(2\)-cube in \(X^{E}\). When \(t_{a}\neq t_{b}\), the hyperplanes \(U\) and \(V\)_remotely inter-osculate at_\(G\) in which case we say \(X\) has _remote inter-osculation_. Note that Definition 3.10 agrees with the definitions in Section 2.5 when \(t_{a}=t_{b}\). **Remark 3.11**.: Remote self-osculations and inter-osculations in \(X\) are not actual self-osculations and inter-osculations, but they project to self-osculations/inter-osculations under the horizontal quotient \(q:X\to X^{E}\) whenever \(q\) is cubical. **Lemma 3.12**.: _Let \(X\) be a graph of cube complexes and suppose \(X\) has no one-sided, self-crossing, self-osculating, or inter-osculating hyperplanes. If the horizontal quotient \(q:X\to X^{E}\) is cubical and \(X^{E}\) has self-osculation/inter-osculation then \(X\) has remote self-osculation/inter-osculation._ Proof.: Let \(U\xrightarrow{f}X^{E}\) be a self-osculating hyperplane. By Lemma 3.2, there is a hyperplane \(V\xrightarrow{g}X\) with \(q\circ g\left(V\right)=f\left(U\right)\). Since \(X\) has no self-crossing hyperplanes, \(g\) and (hence) \(f\) are embeddings, and so we can identify \(U\) and \(V\) with their images. Since the hyperplanes of \(X\) are \(2\)-sided, and \(q\) is orientation-preserving, the \(1\)-cubes of \(X^{E}\) can be oriented consistently with the orientations of \(1\)-cubes of \(X\). Let \(a_{u}\) and \(b_{u}\) be distinct oriented \(1\)-cubes dual to \(U\) that share the \(0\)-cube \(t\) where the self-osculation occurs. We can assume without loss of generality that \(t\) is the terminal \(0\)-cube of \(a_{u}\) and \(b_{u}\). 
Let \(a_{v}\) and \(b_{v}\) be oriented \(1\)-cubes dual to \(V\) and mapping to \(a_{u}\) and \(b_{u}\), respectively. Let \(G=q^{-1}\left(t\right)\) be the horizontal graph mapping to \(t\). Let \(t_{a}\) and \(t_{b}\) be terminal points of \(a_{v}\) and \(b_{v}\). See Figure 4. Then \(t_{a}\) and \(t_{b}\) lie in \(G\) and since \(X\) has no self-osculating hyperplanes, \(t_{a}\neq t_{b}\). Since \(q\left(a_{v}\right)=a_{u}\neq b_{u}=q\left(b_{v}\right)\), the hyperplane \(V\) remotely self-osculates at \(G\). Let \(U_{1}\) and \(U_{2}\) be inter-osculating hyperplanes in \(X^{E}\), and let \(V_{1}\) and \(V_{2}\) be the crossing hyperplanes in \(X\) mapping to \(U_{1}\) and \(U_{2}\), respectively. Suppose the inter-osculation occurs at \(1\)-cubes \(a_{u_{1}}\) and \(b_{u_{2}}\) dual to \(U_{1}\) and \(U_{2}\) and meeting at a \(0\)-cube \(t\). Let \(a_{v_{1}}\) and \(b_{v_{2}}\) be \(1\)-cubes dual to \(V_{1}\) and \(V_{2}\) and mapping to \(a_{u_{1}}\) and \(b_{u_{2}}\), respectively. Since \(X\) has no inter-osculating hyperplanes, \(G=q^{-1}\left(t\right)\) is nontrivial and contains the distinct \(0\)-cubes \(t_{a}\) and \(t_{b}\) of \(a_{v_{1}}\) and \(b_{v_{2}}\). Moreover, since \(a_{u_{1}}\) and \(b_{u_{2}}\) do not form a consecutive pair of edges of a \(2\)-cube, \(V_{1}\) and \(V_{2}\) remotely inter-osculate at \(G\).

**Lemma 3.13**.: _Let \(X\) be a graph of cube complexes and let \(G\) be a horizontal graph in \(X\). Suppose \(X\) has no self-osculation and no inter-osculation._

1. _If a hyperplane_ \(U\) _of_ \(X\) _remotely self-osculates at_ \(G\)_, then_ \(G\cap N\left(U\right)\) _is disconnected._
2. _If crossing hyperplanes_ \(U\) _and_ \(V\) _of_ \(X\) _remotely inter-osculate at_ \(G\)_, then_ \(G\cap\left(N\left(U\right)\cup N\left(V\right)\right)\) _is disconnected._

Proof of \(\left(1\right)\).: Let \(U\) be a remotely self-osculating hyperplane in \(X\). Let \(a\) and \(b\) be the oriented \(1\)-cubes dual to \(U\) with terminal \(0\)-cubes \(t_{a}\) and \(t_{b}\) in \(G\), as in Definition 3.10. Then \(t_{a},t_{b}\in N\left(U\right)\) and so \(G\cap N\left(U\right)\neq\emptyset\). We claim that \(t_{a}\) and \(t_{b}\) lie in distinct components of \(G\cap N\left(U\right)\). Suppose otherwise. Since \(t_{a}\neq t_{b}\), there is a nontrivial horizontal path \(\gamma\to G\cap N\left(U\right)\) from \(t_{a}\) to \(t_{b}\). Express \(\gamma\) as a concatenation of horizontal \(1\)-cubes, \(\gamma=e_{1}\cdots e_{n}\), where \(e_{1}\) contains \(t_{a}\) and \(e_{n}\) contains \(t_{b}\). Since the attaching maps of edge-spaces are injective, any horizontal \(1\)-cube in \(N\left(U\right)\) lies in a \(2\)-cube whose opposite \(1\)-cube is also horizontal. Since \(X\) has no self-osculation, the hyperplane \(U\) does not self-osculate, and so there is a \(2\)-cube \(S_{1}\subset N\left(U\right)\) that contains \(a\) and \(e_{1}\). Let \(a_{1}\) and \(e_{1}^{\prime}\) be the \(1\)-cubes in \(S_{1}\) opposite to \(a\) and \(e_{1}\), respectively. Then \(e_{1}^{\prime}\subset N\left(U\right)\) is horizontal and intersects \(a\) in its initial \(0\)-cube \(i_{a}\). Furthermore, the \(1\)-cubes \(a_{1}\subset S_{1}\) and \(e_{2}\subset\gamma\) share a \(0\)-cube. By the same argument, there is a \(2\)-cube \(S_{2}\subset N\left(U\right)\) containing \(a_{1}\) and \(e_{2}\), where the opposite \(1\)-cube of \(e_{2}\) is a horizontal \(1\)-cube \(e_{2}^{\prime}\) that shares a common \(0\)-cube with \(e_{1}^{\prime}\) and \(a_{1}\).
By induction, there is a sequence of horizontal \(1\)-cubes \(e_{1}^{\prime},\ldots,e_{n}^{\prime}\) in \(N\left(U\right)\) where \(e_{1}^{\prime}\) intersects \(a\) in its initial \(0\)-cube \(i_{a}\) and where \(e_{n}^{\prime}\) intersects \(b\) in its initial \(0\)-cube \(i_{b}\). We distinguish two cases. See Figure 5. Figure 4. The hyperplane \(V\) osculates with \(G=q^{-1}\left(t\right)\) at two points \(t_{a}\) and \(t_{b}\). Case 1: There is a sequence \(e_{1}^{\prime},\ldots,e_{n}^{\prime}\) that forms a connected horizontal path from \(i_{a}\) to \(i_{b}\). In this case there is a ladder from \(a\) to \(b\) showing that \(q\left(a\right)=q\left(b\right)\) which is a contradiction. Case 2: No sequence \(e_{1}^{\prime},\ldots,e_{n}^{\prime}\) forms a horizontal path from \(i_{a}\) to \(i_{b}\). Then there is a sequence \(e_{1}^{\prime},\ldots,e_{n}^{\prime}\) and consecutive 1-cubes \(e_{j}\) and \(e_{j+1}\) of \(\gamma\) meeting at a 0-cube \(x\), where the corresponding horizontal 1-cubes \(e_{j}^{\prime}\) and \(e_{j+1}^{\prime}\) do not intersect. Then \(x\) is a point of self-osculation for \(U\) which is a contradiction. Proof of \((2)\).: Let \(U\) and \(V\) be remotely inter-osculating hyperplanes in \(X\). Let \(a\) and \(b\) be the vertical 1-cubes dual to \(U\) and \(V\), respectively, with boundary 0-cubes \(t_{a}\neq t_{b}\) in \(G\), as in Definition 3.10. We claim that \(t_{a}\) and \(t_{b}\) lie in distinct components of \(G\cap\left(N\left(U\right)\cup N\left(V\right)\right)\). Suppose otherwise. Then there is a nontrivial horizontal path \(\gamma\rightarrow\left(N\left(U\right)\cup N\left(V\right)\right)\) from \(t_{a}\) to \(t_{b}\). Let \(\gamma=\gamma_{u}\cdot\gamma_{v}\), where \(\gamma_{u}\to N\left(U\right)\) and \(\gamma_{v}\to N\left(V\right)\), and suppose without loss of generality that \(\gamma_{u}\) is nontrivial. Let \(x\in\gamma_{u}\cap\gamma_{v}\) and let \(a_{x}\) and \(b_{x}\) be the vertical 1-cubes dual to \(U\) and \(V\) with boundary 0-cube \(x\). Let \(\gamma_{u}=e_{1}\cdots e_{n}\) be the horizontal path from \(t_{a}\) to \(x\). As in part (1), there is a sequence \(e_{1}^{\prime},\ldots,e_{n}^{\prime}\) that forms a path in \(N\left(U\right)\) since otherwise, \(U\) self-osculates which is a contradiction. So, \(a\) and \(a_{x}\) lie in the same parallelism class. Similarly, if \(\gamma_{v}\) is nontrivial, the 1-cubes \(b\) and \(b_{x}\) are in the same E-parallelism class. If \(\gamma_{v}\) is trivial, then \(x=t_{b}\) and \(b=b_{x}\). So we have shown that both \(a\) and \(b\) are in the same E-parallelism classes as the consecutive 1-cubes \(a_{x}\) and \(b_{x}\). By assumption, \(U\) and \(V\) remotely inter-osculate, and so \(a_{x}\) and \(b_{x}\) do not bound a corner of a square. But this means that \(U\) and \(V\) inter-osculate at \(x\) which is a contradiction. **Definition 3.14**.: Let \(X\) be a cube complex. A subcomplex \(X^{\prime}\subset X\)_self-osculates_ if there is a hyperplane \(U^{\prime}\) of \(X^{\prime}\) that extends to a hyperplane \(U\) of \(X\) dual to a 1-cube whose intersection with \(X^{\prime}\) consists of 0-cubes. **Definition 3.15**.: A graph of cube complexes is _controlled_ if for each thick edge-space \(X_{e}\times I\) attached to vertex-spaces \(X_{v_{1}}\) and \(X_{v_{2}}\), the following hold for each \(i\in\{1,2\}\): 1. distinct hyperplanes of \(X_{e}\) extend to distinct hyperplanes of \(X_{v_{i}}\) (wall-injectivity); 2. 
non-crossing hyperplanes of \(X_{e}\) extend to non-crossing hyperplanes of \(X_{v_{i}}\) (cross-injectivity);
3. the edge-space \(X_{e}\) is non self-osculating in \(X_{v_{i}}\).

Figure 5. Case 1 on the left. Case 2 on the right.

**Lemma 3.16**.: _Let \(\widehat{X}\to\Gamma_{\widehat{X}}\) be a controlled tree of cube complexes and suppose each vertex-space of \(\widehat{X}\) has embedded hyperplanes. Then each hyperplane \(U\) of \(\widehat{X}\) dual to a vertical \(1\)-cube splits as a tree of spaces \(U\to\Gamma_{U}\) so that the following diagram commutes:_

_Moreover, \(\Gamma_{U}\to\Gamma_{\widehat{X}}\) is an embedding, each hyperplane splits as a tree of connected spaces, each of which embeds in \(\widehat{X}\), and consequently, \(U\) embeds in \(\widehat{X}\) and \(U\cap X_{v}\) is connected for each vertex-space \(X_{v}\subset\widehat{X}\)._

Proof.: Let \(U\to\Gamma_{U}\) be a graph of spaces decomposition induced by \(\widehat{X}\to\Gamma_{\widehat{X}}\). Since \(U\) is dual to a vertical \(1\)-cube, \(U\) has nonempty intersection with at least one vertex-space. The vertex-spaces of \(U\) are the components of intersections with the vertex-spaces of \(\widehat{X}\), and likewise for edge-spaces. Wall-injectivity implies that \(U\cap X_{v}\) is a single hyperplane for each vertex-space \(X_{v}\) intersecting with \(U\). So \(\Gamma_{U}\to\Gamma_{\widehat{X}}\) is an immersion and thus an injection. Therefore, \(\Gamma_{U}\) is a tree and \(U\to\widehat{X}\) is an embedding.

**Lemma 3.17**.: _Let \(\widehat{X}\to\Gamma_{\widehat{X}}\) be a controlled tree of cube complexes and let \(X_{e}\) be an edge-space in a vertex-space \(X_{v}\). Let \(U\subset\widehat{X}\) be an embedded hyperplane dual to a vertical \(1\)-cube \(a\in X_{v}\). If \(a\cap X_{e}\) consists of \(0\)-cubes then \(U\cap X_{e}=\emptyset\). See Figure 6._

Proof.: By Lemma 3.16, the intersection \(U\cap X_{v}\) is connected. Since \(X_{e}\) is not self-osculating in \(X_{v}\), we have \(U\cap X_{e}=\emptyset\).

Figure 6. The edge-space \(X_{e}\) osculates with the hyperplane \(U\). If \(X_{e}\cap U\neq\emptyset\), then either \(X_{e}\) self-osculates (left) or wall-injectivity fails in some edge-space \(X_{e^{\prime}}\) (middle). \(X_{e}\cap U=\emptyset\) (right).

**Lemma 3.18**.: _Let \(\widehat{X}\xrightarrow{p}\Gamma_{\widehat{X}}\) be a controlled tree of cube complexes with embedded hyperplanes. Then \(\widehat{X}\) has no remote self-osculation/inter-osculation._

Proof.: The horizontal graphs of \(\widehat{X}\) are trees that intersect each vertex-space of \(\widehat{X}\) in at most one \(0\)-cube. Suppose \(U\) is a hyperplane that remotely self-osculates at a horizontal tree \(T\). By Lemma 3.13, \(T\cap N\left(U\right)\) is not connected. Let \(K_{1}\) and \(K_{2}\) be components of \(T\cap N\left(U\right)\). Let \(t_{1},t_{2}\in T\) be the closest \(0\)-cubes in \(T\) with \(t_{1}\in K_{1}\) and \(t_{2}\in K_{2}\). Let \(a_{1}\) be the \(1\)-cube dual to \(U\) and containing \(t_{1}\). Let \(\gamma=e_{1}\cdots e_{n}\) be the shortest horizontal path in \(T\) from \(t_{1}\) to \(t_{2}\), where each \(1\)-cube \(e_{i}\) is in the edge-space \(X_{e_{i}}\). Note that \(\gamma\) is nontrivial since \(t_{1}\neq t_{2}\). The \(1\)-cube \(e_{1}\) with initial \(0\)-cube \(t_{1}\) does not lie in \(N\left(U\right)\) for otherwise, the terminal \(0\)-cube of \(e_{1}\) is in \(K_{1}\) and is closer to \(t_{2}\). Then \(a_{1}\) is not in \(X_{e_{1}}\).
Since \(a_{1}\cap X_{e_{1}}\) consists of \(0\)-cubes, we have by Lemma 3.17, \(U\cap X_{e_{1}}=\emptyset\). On the other hand, since \(U\) splits as a graph of spaces \(U\rightarrow\Gamma_{U}\) where \(\Gamma_{U}\) is a subtree of \(\Gamma_{\widehat{X}}\), the image \(\left(\gamma\rightarrow\Gamma_{\widehat{X}}\right)\hookrightarrow\Gamma_{U}\) and so \(U\cap X_{e_{1}}\neq\emptyset\) which is a contradiction. Suppose \(U\) and \(V\) are hyperplanes that remotely inter-osculate at a horizontal tree \(T\). By Lemma 3.13, \(T\cap\left(N\left(U\right)\cup N\left(V\right)\right)\) is not connected. Let \(t_{1}\in N\left(U\right)\) and \(t_{2}\in N\left(V\right)\) be the closest \(0\)-cubes lying in distinct components of \(T\cap\left(N\left(U\right)\cup N\left(V\right)\right)\). Let \(\gamma_{1}=e_{1}\cdots e_{n}\) be the nontrivial horizontal path from \(t_{1}\) to \(t_{2}\), where each \(1\)-cube \(e_{i}\) lies in \(X_{e_{i}}\). Let \(a_{1}\) and \(a_{2}\) be the \(1\)-cubes dual to \(U\) and \(V\) and containing \(t_{1}\) and \(t_{2}\), respectively. As in part (1), we have \(a_{1}\notin X_{e_{1}}\) and \(a_{2}\notin X_{e_{n}}\), and so by Lemma 3.17, we have \(U\cap X_{e_{1}}=\emptyset\) and \(V\cap X_{e_{n}}=\emptyset\). Since \(\widehat{X}\) is a tree of spaces, each pair of vertex-spaces is joined by at most one edge-space. Thus, \(U\cap X_{e_{1}}=\emptyset\) implies \(U\cap X_{e_{i}}=\emptyset\) for all \(1\leq i\leq n\). Similarly, \(V\cap X_{e_{i}}=\emptyset\) for all \(1\leq i\leq n\). Since \(U\) crosses \(V\), there is a \(0\)-cube \(x\in N\left(U\right)\cap N\left(V\right)\), and a path \(\gamma_{2}=f_{1}\cdots f_{m}\) from \(t_{2}\) to \(t_{1}\) passing through \(x\), where \(f_{j}\in\left(N\left(U\right)\cup N\left(V\right)\right)\). The concatenation \(\gamma_{1}\cdot\gamma_{2}\) projects to a closed path in the tree \(\Gamma_{\widehat{X}}\). Since \(\gamma_{1}\) is horizontal, \(\gamma_{1}\xrightarrow{p}\Gamma_{\widehat{X}}\) is an embedding. Hence there is a \(1\)-cube \(f_{j}\in\gamma_{2}\) so that \(p\left(f_{j}\right)=p\left(e_{1}\right)\). If \(f_{j}\in N\left(U\right)\), then \(U\cap X_{e_{1}}\neq\emptyset\) and if \(f_{j}\in N\left(V\right)\), then \(V\cap X_{e_{1}}\neq\emptyset\), both leading to contradictions.

**Proposition 3.19**.: _Let \(\widehat{X}\rightarrow\Gamma_{\widehat{X}}\) be a controlled tree of nonpositively curved cube complexes with embedded locally convex edge-spaces. Let \(q:\widehat{X}\rightarrow\widehat{X}^{E}\) be the horizontal quotient. If \(\widehat{X}\) is special then so is \(\widehat{X}^{E}\)._

Proof.: By Corollary 3.9, \(\widehat{X}^{E}\) is nonpositively curved. Since \(\widehat{X}\) is a tree of spaces, the horizontal quotient \(q:\widehat{X}\rightarrow\widehat{X}^{E}\) is strict. By Lemma 3.2, each hyperplane of \(\widehat{X}^{E}\) is embedded and two-sided. By Lemma 3.12, self-osculation/inter-osculation in \(\widehat{X}^{E}\) arises from remote self-osculation/inter-osculation in \(\widehat{X}\). By Lemma 3.18, \(\widehat{X}\) has no remote self-osculation/inter-osculation.

## 4. Subgroup Separability

The collection of finite index cosets of a group \(F\) forms a basis for the _profinite topology_ on \(F\). The multiplication and inversion are continuous with respect to this topology. A subset \(S\subset F\) is _separable_ if it is closed in the profinite topology. A subgroup \(H\subset F\) is separable if and only if \(H\) is the intersection of finite index subgroups.
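As a quick illustration (not needed in what follows): every finite index subgroup \(H\subset F\) is both open and closed, since \(F\) is the disjoint union of the finitely many cosets of \(H\), each of which is a basic open set. For a subgroup of infinite index, consider \(\langle a\rangle\subset F(a,b)\): the element \(b\) is separated from \(\langle a\rangle\) by the kernel of the homomorphism \(F(a,b)\to\mathbb{Z}/2\) sending \(a\mapsto 0\) and \(b\mapsto 1\), and Hall's theorem, mentioned in the introduction, provides such a finite quotient for every element outside \(\langle a\rangle\), so \(\langle a\rangle\) is separable.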
**Theorem 4.1** (Ribes-Zalesskii [1]).: _Let \(H_{1},\ldots,H_{m}\) be finitely generated subgroups of a free group \(F\). Then \(H_{1}H_{2}\cdots H_{m}\) is closed in the profinite topology._ It follows that \(g_{1}H_{1}g_{2}H_{2}\cdots g_{m}H_{m}\) is also closed in the profinite topology, for finitely generated subgroups \(H_{i}\subset F\) and \(g_{i}\in F\) with \(1\leq i\leq m\). Starting with a tree of nonpositively curved cube complexes \(\widehat{X}\to\Gamma_{\widehat{X}}\) and using separability properties of the free group action on \(\widehat{X}\), we find compact quotients \(\widehat{X}\to\overline{X}\) where the horizontal quotient \(\overline{X}\to\overline{X}^{E}\) is cubical, \(\overline{X}^{E}\) is nonpositively curved with well-behaved hyperplanes whenever \(\widehat{X}\) is controlled and special. **Lemma 4.2**.: _Let \(X\to\Gamma_{X}\) be a compact graph of cube complexes with one vertex-space \(Y\). Then \(X\) has a finite regular cover \(\overline{X}\) such that:_ 1. \(\overline{X}\to\Gamma_{\overline{X}}\) _is a graph of spaces whose vertex-spaces are isomorphic to_ \(Y\)_;_ 2. _The horizontal quotient_ \(\overline{X}\to\overline{X}^{E}\) _is strict._ _Furthermore, any finite regular cover \(\overline{X}^{\prime}\to\overline{X}\) induced by a cover of underlying graphs \(\Gamma^{\prime}_{\overline{X}}\to\Gamma_{\overline{X}}\) satisfies properties 1 and 2._ Proof.: We find a covering space that splits as a graph of cube complexes with vertex-spaces isomorphic to \(Y\) and whose horizontal quotient is strict. The underlying graph \(X\to\Gamma_{X}\) is a bouquet of circles. Let \(\widetilde{\Gamma_{X}}\to\Gamma_{X}\) be the universal covering map and let \(\widehat{X}\to X\) be the corresponding covering map so that the following diagram commutes: Then \(\pi_{1}\Gamma_{X}\) acts freely and cocompactly on \(\widehat{X}\). Let \(N\subset\pi_{1}\Gamma_{X}\) be a finite index normal subgroup, and let \(N\backslash\widehat{X}=\overline{X}\to X\) be the covering map induced by \(N\backslash\Gamma_{\widehat{X}}=\Gamma_{\overline{X}}\to\Gamma_{X}\) so that the following diagram commutes: Then \(\overline{X}\) is a graph of cube complexes where each vertex-space is isomorphic to \(Y\). We need to choose \(\overline{X}\), and thus \(N\), so that no vertex-space has two points in the same \(E\)-parallelism class. In our cubical setting, it is sufficient to ensure that no two \(0\)-cubes of a vertex-space of \(\overline{X}\) lie in the same \(E\)-parallelism class. Recall that the attaching maps of edge-spaces are assumed to be injective. By compactness, there are finitely many \(0\)-cubes \(\{C_{i}\}_{i=1}^{n}\subset X^{0}\). Let \(K_{i}\) be the subgroup generated by the horizontal closed paths based at \(C_{i}\) for each \(i\). Fix a \(0\)-cube \(C_{i}\). Then \(K_{i}\) is finitely generated since \(X\) is compact. Moreover, since horizontal paths immerse in the underlying graph, the map \(X\to\Gamma_{X}\) induces an injective homomorphism \(K_{i}\to\pi_{1}\Gamma_{X}\). Identify \(K_{i}\) with its image. Let \(\{\gamma_{ij}\}_{j=1}^{m}\) be the set of all embedded non-closed horizontal paths between \(0\)-cubes \(C_{i}\) and \(C_{j}\). Each \(\gamma_{ij}\) maps to an essential closed path in \(\Gamma_{X}\) and thus represents a nontrivial element \(w_{ij}\in\pi_{1}\Gamma_{X}\). Furthermore, \(w_{ij}\notin K_{i}\). 
Indeed, since the attaching maps of edge-spaces are injective, the horizontal graphs immerse in \(\Gamma_{X}\), and so the elements represented by \(\gamma_{ij}\) are distinct from elements of \(K_{i}\). In particular, the products of finitely many cosets \(K_{i}w_{ii_{1}}K_{i_{1}}w_{i_{1}i_{2}}K_{i_{2}}\cdots\) does not contain the identity element. Note that there are finitely many such products of cosets. By Theorem 4.1, there exists a finite index normal subgroup \(N\unlhd\pi_{1}\Gamma_{X}\) that is disjoint from all such multiple cosets. Let \(p:\overline{X}\to X\) be the covering map corresponding to \(N\). Let \(Z\subset\overline{X}\) be a vertex-space and \(\overline{C_{i}},\overline{C_{j}}\in Z\) be \(0\)-cubes mapping to \(0\)-cubes \(C_{i},C_{j}\in X\). Then, \(\overline{C_{i}}\) and \(\overline{C_{j}}\) are not in the same \(E\)-parallelism class of \(\overline{X}\). Indeed, if \(\overline{\gamma}\) is a horizontal path in \(\overline{X}\) from \(\overline{C_{i}}\) to \(\overline{C_{j}}\), then \(p\left(\overline{\gamma}\right)\) is a horizontal path \(\gamma\) in \(X\) which represents an element in \(K_{i}w_{ii_{1}}K_{i_{1}}w_{i_{1}i_{2}}K_{i_{2}}\cdots w_{i_{n}j}K_{j}\), where \(w_{ii_{1}},w_{i_{1}i_{2}},\ldots,w_{i_{n}j}\) are the elements of \(\pi_{1}\Gamma_{X}\) representing non closed embedded paths between \(0\)-cubes \(C_{i},C_{i_{1}},C_{i_{2}}\cdots,C_{j}\), respectively. However, \(N\) contains no such elements, and thus \(q:\overline{X}\rightarrow\overline{X}^{E}\) is strict. Finally, any normal finite index subgroup \(N^{\prime}\subset N\) induces a finite cover \(\overline{X}^{\prime}\rightarrow\overline{X}\to X\) with the same properties as \(\overline{X}\). That is, \(\overline{X}^{\prime}\) splits as a graph of spaces with vertex-spaces isomorphic to \(Y\) and the horizontal quotient \(\overline{X}^{\prime}\rightarrow\overline{X}^{\prime E}\) is strict. **Lemma 4.3**.: _Let \(X\rightarrow\Gamma_{X}\) be a graph of nonpositively curved cube complexes and \(q:X\to X^{E}\) be a strict horizontal quotient, where \(X^{E}\) is nonpositively curved. Let \(Y\) be a vertex-space of \(X\). If \(X\) has no inter-osculating hyperplanes, then \(q\left(Y\right)\subset X^{E}\) is a locally convex subcomplex._ Proof.: It suffices to show that \(q\left(Y\right)\) has no missing squares in \(X^{E}\). To do so, we show that for each \(0\)-cube \(y\in q\left(Y\right)\), the inclusion \(\operatorname{link}_{q\left(Y\right)}\left(y\right)\subset\operatorname{link }_{X^{E}}\left(y\right)\) is full. Let \(y\in q\left(Y\right)\) be a \(0\)-cube, and let \(e\in\operatorname{link}_{X^{E}}\left(y\right)\) be a \(1\)-simplex whose boundary \(0\)-simplices \(x_{1}\) and \(x_{2}\) lie in \(\operatorname{link}_{q\left(Y\right)}\left(y\right)\) with \(e\notin\operatorname{link}_{q\left(Y\right)}\left(y\right)\). Since \(q\) is strict, there are consecutive \(1\)-cubes \(a_{1},a_{2}\in q\left(Y\right)\) containing \(y\) that are identified with consecutive \(1\)-cubes of a \(2\)-cube \(S_{e}\not\subset q\left(Y\right)\). Since \(X^{E}\) is nonpositively curved, \(e\) is the only \(1\)-simplex containing \(x_{1}\) and \(x_{2}\) and so \(a_{1}\) and \(a_{2}\) are not consecutive \(1\)-cubes of a \(2\)-cube in \(q\left(Y\right)\). 
Then \(X\) contains inter-osculating hyperplanes \(U_{1},U_{2}\) which cross in \(S_{e}^{\prime}\subset q^{-1}\left(S_{e}\right)\), and are dual to \(a_{1}^{\prime}\subset q^{-1}\left(a_{1}\right)\) and \(a_{2}^{\prime}\subset q^{-1}\left(a_{2}\right)\), respectively, where \(a_{1}^{\prime},a_{2}^{\prime}\) share a common \(0\)-cube \(y^{\prime}\in q^{-1}\left(y\right)\) but don't bound a corner of a \(2\)-cube in \(Y\). A contradiction. See Figure 7.

Figure 7. Inter-osculation arising from consecutive \(1\)-cubes not bounding a \(2\)-cube in \(Y\).

The strategy for obtaining \(\overline{X}^{E}\) that is special is to use multiple coset separability properties of \(F\) acting on \(\widehat{X}\) to obtain a compact special cube complex \(\overline{X}\) whose horizontal quotient \(\overline{X}^{E}\) is special. The property that hyperplanes are embedded and \(2\)-sided is preserved under the map \(\overline{X}\rightarrow\overline{X}^{E}\). However, non-inter-osculation and non-self-osculation are not necessarily preserved by \(\overline{X}\to\overline{X}^{E}\). We are therefore forced to revisit and prove a more powerful form of Theorem 5.8, that provides an intermediate cover \(\overline{X}\) for which \(\overline{X}^{E}\) retains all desired properties.

**Lemma 4.4**.: _Let \(\widehat{X}\to\Gamma_{\widehat{X}}\) be a controlled tree of compact nonpositively curved cube complexes with isomorphic vertex-spaces. Let \(F\) be a free group acting freely and cocompactly on \(\Gamma_{\widehat{X}}\) and \(\widehat{X}\), so that \(\widehat{X}\to\Gamma_{\widehat{X}}\) is \(F\)-equivariant. Suppose \(\widehat{X}\) is special. Then there is a finite index normal subgroup \(N\subset F\) and a covering map \(\widehat{X}\to N\backslash\widehat{X}=\overline{X}\) where \(\overline{X}\) splits as a graph of cube complexes whose horizontal quotient \(\overline{X}^{E}\) contains no self-osculating hyperplanes and no inter-osculating hyperplanes._

Proof.: Since \(\widehat{X}\) has no self-crossing hyperplanes, we can identify each immersed hyperplane with its image in \(\widehat{X}\). We first find a finite graph of cube complexes \(\overline{X}\) whose horizontal quotient has no inter-osculating hyperplanes. We do so by finding an appropriate finite index subgroup \(N\subset F\) and taking the quotient \(N\backslash\widehat{X}=\overline{X}\). Note that Lemma 4.2 allows us to pass to a finite cover, if necessary, to ensure that the horizontal quotient is a cube complex. By Lemma 3.12, the horizontal quotient \(\overline{X}^{E}\) has inter-osculation if \(\overline{X}\) has remote inter-osculation. Remote inter-osculation in \(\overline{X}\) occurs if there are crossing hyperplanes \(A,B\) of \(\widehat{X}\) and an element \(g\in F\) such that \(gB\) and \(A\) osculate with a horizontal graph \(T\) in \(\widehat{X}\). Such an element is called a _remote inter-osculator_ at \(T\). Let \(\mathcal{R}\subset F\) be the set of remote inter-osculators. We characterize the elements of \(\mathcal{R}\) and use subgroup separability to find a finite index subgroup of \(F\) that is disjoint from the set \(\mathcal{R}\). By \(F\)-cocompactness, there are finitely many \(F\)-orbits of horizontal graphs. Let \(\left\{T_{i}\right\}_{i=1}^{m}\) be their representatives. For each tree \(T_{i}\in\left\{T_{i}\right\}_{i=1}^{m}\) there are finitely many \(\operatorname{Stab}\left(T_{i}\right)\)-orbits of hyperplanes that osculate with \(T_{i}\). Let \(\left\{A_{ij}\right\}_{j=1}^{r_{i}}\) be their representatives.
Similarly, for each hyperplane \(A_{ij}\in\left\{A_{ij}\right\}_{j=1}^{r_{i}}\), there are finitely many \(\operatorname{Stab}\left(A_{ij}\right)\)-orbits of hyperplanes crossing \(A_{ij}\). Let \(\left\{B_{ijk}\right\}_{k=1}^{s_{ij}}\) be their representatives. See Figure 8. For each \(B_{ijk}\) and \(A_{ir}\), if there is an element \(h_{ijkr}\) mapping \(B_{ijkr}\) to \(A_{ir}\), then the set of all elements \(g\) with \(gB_{ijk}=A_{ir}\) is: \[\mathcal{O}_{ijkr}=\operatorname{Stab}\left(A_{ir}\right)h_{ijkr}\operatorname {Stab}\left(B_{ijk}\right)\] Figure 8. The hyperplane \(B_{ijk}\) crosses \(A_{ij}\) which osculate with the horizontal graph \(T_{i}\). The element \(g\) maps \(B_{ijk}\) to \(A_{ir}\) which also osculates with \(T_{i}\). Furthermore, by precomposing \(g\in\mathcal{O}_{ijkr}\) with elements of \(\operatorname{Stab}\left(A_{ij}\right)\operatorname{Stab}\left(T_{i}\right)\), postcomposing \(g\) with elements of \(\operatorname{Stab}\left(T_{i}\right)\), and then taking the union over \(j,k,r\), we obtain the set of remote inter-osculators at \(T_{i}\): \[\mathcal{O}_{i}=\bigcup_{jkr}\operatorname{Stab}\left(T_{i}\right) \operatorname{Stab}\left(A_{ir}\right)h_{ijkr}\operatorname{Stab}\left(B_{ijk} \right)\operatorname{Stab}\left(A_{ij}\right)\operatorname{Stab}\left(T_{i}\right)\] Let \(\mathcal{O}=\bigcup_{i}\mathcal{O}_{i}\). Each horizontal graph \(T\) is a translate of some \(T_{i}\). Thus each remote inter-osculator at \(T\) is conjugate to an element of \(\mathcal{O}\). By assumption, \(\widehat{X}\) contains no inter-osculating hyperplanes. By Lemma 3.18, \(\widehat{X}\) has no remote inter-osculation and thus, \(1_{F}\notin\mathcal{O}\). By cocompactness, the stabilizers are finitely generated. By Theorem 4.1, the set \(\mathcal{O}\) is closed in the profinite topology, and so there exists a finite index normal subgroup \(N\) disjoint from \(\mathcal{O}\), and hence disjoint from \(\mathcal{R}\). Then the horizontal quotient of \(N\backslash\widehat{X}\to\left(N\backslash\widehat{X}\right)^{E}\) has no inter-osculating hyperplanes. Similarly, to find \(\overline{X}\to\overline{X}^{E}\) with no self-osculating hyperplanes, we use the same method and follow the steps sketched below. An element \(g\in F\) gives rise to self-osculation in \(\overline{X}^{E}\) if \(gA=A^{\prime}\) where \(A\) and \(A^{\prime}\) are hyperplanes osculating with the same horizontal graph \(T\). Such elements are called _remote self-osculators_ at \(T\). The set of remote self-osculators at \(T_{i}\) is: \[\mathcal{S}_{i}=\bigcup_{jr}\operatorname{Stab}\left(T_{i}\right) \operatorname{Stab}\left(A_{ir}\right)h_{ijr}\operatorname{Stab}\left(A_{ij} \right)\operatorname{Stab}\left(T_{i}\right)\] Then any remote self-osculator is conjugate to an element of \(\mathcal{S}=\bigcup_{i}\mathcal{S}_{i}\). By Lemma 3.18, we have \(1_{F}\notin\mathcal{S}\). Then there exists a finite index normal subgroup \(N^{\prime}\subset F\) such that \(N^{\prime}\backslash\widehat{X}\to\left(N^{\prime}\backslash\widehat{X}\right) ^{E}\) has no self-osculating hyperplanes and the following diagram commutes: The map \(\widehat{X}\to\left(N\cap N^{\prime}\right)\backslash\widehat{X}=\overline{X}\) provides the desired covering map. **Remark 4.5**.: By taking double covers, if necessary, we can ensure that the hyperplanes in \(\overline{X}\) are two-sided, which, by Lemma 3.2, means that the hyperplanes of \(\overline{X}^{E}\) are two sided as well. 
Moreover, any finite cover induced by a finite index subgroup of \(N\cap N^{\prime}\) has the properties stated in Lemma 4.4. Up until this point, we have shown how to find a compact quotient where the pathologies precluding specialness do not appear in the horizontal quotients. In the remainder of this section, we show how to ensure that the horizontal quotient is nonpositively curved. **Definition 4.6** (\(k\)-corners).: For \(k\in\{1,2,3\}\), a _\(k\)-cycle of squares_ is a planar complex \(S_{k}\) formed by gluing \(k\) squares around a vertex \(v\). A \(k\)-cycle of squares has \(k\) hyperplanes \(\{\alpha_{i}\mid 1\leq i\leq k\}\) and \(k\) codimension-2 hyperplanes \(\{\beta_{j}\mid 1\leq j\leq k\}\). Recall that a codimension-2 hyperplane is the intersection of two distinct pairwise intersecting hyperplanes, and the carrier of a codimension-2 hyperplane is the cubical neighborhood containing the intersection. See Figure 9. Let \(X\) be a cube complex and \(D\subset X\) be an \(n\)-cube. An \((n+2)\)-dimensional _\(k\)-corner of \(X\) at \(D\)_ is a combinatorial immersion \((Z_{k},I^{n})\rightarrow(X,D)\) where \(Z_{k}=S_{k}\times I^{n}\) and \(I^{n}\) is identified with \(\{v\}\times I^{n}\) in \(Z_{k}\). We write \(Z_{k}\to X\) when the map \(I^{n}\to D\) is clear from the context. A \(k\)-corner is _empty_ if \((Z_{k},I^{n})\rightarrow(X,D)\) does not extend to \(\left(I^{n+3},I^{n}\right)\rightarrow(X,D)\). Note that 1-corners and 2-corners are always empty. Furthermore, under the immersion \(Z_{k}\to X\), hyperplanes map to hyperplanes and crossing hyperplanes to crossing hyperplanes. **Remark 4.7**.: Nonpositive curvature can be expressed in terms of \(k\)-corners. Specifically, a cube complex is nonpositively curved if it has no empty \(k\)-corners. Indeed, if \(\operatorname{link}_{X}\left(D\right)\) has a loop [a bigon] then \(X\) has a 1-corner [a 2-corner] at \(D\). Furthermore, if the no-\(\triangle\) property fails at \(D\), then \(X\) has an empty 3-corner at \(D\). We also note that if \(X\) has an empty \(k\)-corner at \(D\), then \(\operatorname{link}_{X}\left(x\right)\) is not flag for each 0-cube \(x\) of \(D\). **Definition 4.8** (\(k\)-precorners).: Let \(X\rightarrow\Gamma_{X}\) be a graph of cube complexes and let \(q:X\to X^{E}\) be the horizontal quotient where \(q\) is cubical. Let \(Z_{k}\xrightarrow{\varphi}X^{E}\) be an \((n+2)\)-dimensional \(k\)-corner and let \(\left\{A_{i}=\alpha_{i}\times I^{n}\mid 1\leq i\leq k\right\}\) be hyperplanes of \(Z_{k}=S_{k}\times I^{n}\) where \(\left\{\alpha_{i}\mid 1\leq i\leq k\right\}\) are the hyperplanes of \(S_{k}\). Let \(\left\{B_{j}=\beta_{j}\times I^{n}\mid 1\leq j\leq k\right\}\) be codimension-2 hyperplanes of \(Z_{k}=S_{k}\times I^{n}\) where \(\left\{\beta_{j}\mid 1\leq j\leq k\right\}\) are the codimension-2 hyperplanes of \(S_{k}\). Let \(\left\{H_{i}\xrightarrow{h_{i}}X\mid 1\leq i\leq k\right\}\) be the immersed hyperplanes of \(X\) such that \(\varphi\left(A_{i}\right)\subset\left(q\circ h_{i}\right)\left(H_{i}\right)\), and let \(N\left(H_{i}\right)\rightarrow X\) be their immersed carriers. The \((n+2)\)-dimensional \(k\)-_precorner_\(P_{k}\) over the \((n+2)\)-dimensional \(k\)-corner \(Z_{k}\) is the disjoint union of the corresponding immersed carriers \(N\left(H_{i}\right)\to X\) amalgamated along the carriers of the codimension-2 hyperplanes of \(H_{i}\) that contain the preimages \(h_{i}^{-1}\left(q^{-1}\left(B_{j}\right)\right)\). See Figure 10. 
Note that there is a _global_ map \(h:P_{k}\to X\) that restricts to \(h_{i}\) on each immersed hyperplane \(H_{i}\). A \(k\)-precorner \(P_{k}\xrightarrow{h}X\) over a \(k\)-corner \(Z_{k}\xrightarrow{\varphi}X^{E}\) is _empty_ if \(Z_{k}\xrightarrow{\varphi}X^{E}\) is empty. \(P_{k}\xrightarrow{h}X\) is _trivial_ if if \(\varphi\) lifts to a combinatorial map \(Z_{k}\to X\) such that the following diagram commutes: Figure 9. 1-cycle, 2-cycle, and 3-cycle of squares with their dual curves. **Remark 4.9**.: The map \(P_{k}\xrightarrow{h}X\) induces a splitting of \(P_{k}\) as a graph of spaces as in the following commutative diagram: Specifically, the vertex-spaces of \(P_{k}\) are the components of the preimages of vertex-spaces of \(X\) and the edge-spaces of \(P_{k}\) are the components of the preimages of edge-spaces of \(X\). The graph \(\Gamma_{P_{k}}\) is the quotient of \(P_{k}\) obtained by identifying vertex-spaces and edge-spaces of \(P_{k}\) with vertices and edges of \(\Gamma_{P_{k}}\), respectively. The composition \(P_{k}\to X\to\Gamma_{X}\) induces a graph morphism \(\Gamma_{P_{k}}\to\Gamma_{X}\) that maps vertices to vertices and open edges to open edges. Note that when a \(k\)-precorner \(P_{k}\to X\to X^{E}\) over a \(k\)-corner \(Z_{k}\xrightarrow{\varphi}X^{E}\) is trivial, the lift of \(\varphi\) maps \(Z_{k}\) into a vertex-space of \(X\). This induces a map \(Z_{k}\to P_{k}\) whose range lies in a vertex-space of \(P_{k}\) such that the following diagram commutes: **Lemma 4.10**.: _Let \(\widehat{X}\to\Gamma_{\widehat{X}}\) be a tree of nonpositively curved cube complexes where the attaching maps of edge-spaces are injective local isometries. Let \(\widehat{X}\to\widehat{X}^{E}\) be the horizontal quotient and let \(P_{k}\to\widehat{X}\) be a \(k\)-precorner over a \(k\)-corner \(Z_{k}\xrightarrow{\varphi}\widehat{X}^{E}\). Then \(P_{k}\) is trivial and hence nonempty._ Proof.: Let \(T\subset\widehat{X}\) be a minimal connected subtree of spaces containing \(k\) cubes \(\left\{C_{i}\subset P_{k}\right\}_{i=1}^{k}\) that map onto \(\varphi\left(Z_{k}\right)\). Then \(T\) is finite since any \(k\) cubes mapping onto \(\varphi\left(Z_{k}\right)\) must lie in a finite connected subcomplex of \(\widehat{X}\). Note that the minimality is under inclusion and over all possible collections of \(k\) cubes mapping onto \(\varphi\left(Z_{k}\right)\). Let \(T\to\Gamma_{T}\) be the underlying tree. We claim that \(\Gamma_{T}\) is a vertex. Note that if \(k=1\) then there is only one cube that lies in a single vertex-space which by the minimality of \(T\), implies that \(\Gamma_{T}\) is a vertex. So we can assume \(2\leq k\leq 3\). Suppose that \(\Gamma_{T}\) has a spur \(e\) incident on vertices \(v_{1}\) and \(v_{2}\), where \(\deg\left(v_{1}\right)=1\). Let \(T_{e}\) be the corresponding edge-space attached to the vertex-spaces \(T_{v_{1}}\) and \(T_{v_{2}}\). By the minimality of \(T\), we can assume without loss of generality that \(T_{v_{1}}\) contains exactly one cube \(C_{i}\). There exist distinct immersed hyperplanes \(H_{1}\to\widehat{X}\) and \(H_{2}\to\widehat{X}\) that cross in \(C_{i}\) and extend to \(T_{v_{2}}\) through \(T_{e}\). Since the attaching maps are local isometries, \(C_{i}\) must be in the edge-space. But in that case, the edge-space \(T_{e}\times[-1,1]\) contains \(C_{i}\times[-1,1]\) and so the vertex-space \(T_{v_{2}}\) contains \(C_{i}\times\{-1\}\). 
Therefore, there exists a proper subtree \(T^{\prime}\subset T\) containing \(k\) cubes mapping onto \(\varphi\left(Z_{k}\right)\), contradicting the minimality of \(T\). Since \(\Gamma_{T}\) is finite and has no spurs, \(T\) is a vertex-space. Moreover, \(\widehat{X}\) is a tree of spaces, and so the restriction \(q|_{T}:T\to\widehat{X}^{E}\) is an embedding. This provides the required map \(Z_{k}\to T\subset\widehat{X}\). So \(P_{k}\) is trivial. By assumption, the vertex-spaces of \(\widehat{X}\) are nonpositively curved. By Remark 4.7, \(Z_{k}\) (and hence \(P_{k}\)) is a nonempty \(k\)-corner (\(k\)-precorner). **Definition 4.11**.: Let \(X\to\Gamma_{X}\) be a graph of cube complexes and let \(F\) be a group acting on \(X\). Given \(k\in\{1,2,3\}\), a _\(k\)-chain_ is an ordered \((k+1)\)-tuple of immersed hyperplanes \(\left(H_{t}\right)_{t=0}^{k}\) where \(H_{t-1}\) crosses \(H_{t}\) for all \(1\leq t\leq k\). See Figure 11. An element \(g\in F\) is a _closing element_ if \(g\) maps \(H_{k}\) in some \(k\)-chain \(\left(H_{t}\right)_{t=0}^{k}\) to \(H_{0}\) giving rise to an empty \(k\)-precorner. We say \(\left(H_{t}\right)_{t=0}^{k}\) is _closed_ by \(g\). **Remark 4.12**.: Let \(\widehat{X}\) be a tree of compact isomorphic cube complexes and let \(F\) be a group acting freely and cocompactly on \(\widehat{X}\). If for some subgroup \(G\subset F\), the quotient \(G\setminus\widehat{X}\) has an empty \(k\)-precorner, then \(G\) contains a closing element of some \(k\)-chain in \(\widehat{X}\). Note that closing elements map codimension-2 hyperplanes to codimension-2 hyperplanes. **Definition 4.13**.: Let \(B\) be a compact bouquet of circles and let \(X\to B\) be a graph of cube complexes with one compact vertex-space. Let \(\widehat{X}\to X\) be the covering map induced by the universal covering map \(\widetilde{B}\to B\) so that the following diagram commutes: Figure 11. From left: 1-chain, 2-chain, and 3-chain. Figure 10. A 2-precorner. _Case \(k=1\)_: Let \(\{U_{r}\mid 1\leq r\leq n_{1}\}\) be \(\operatorname{Stab}\left(A_{1}\right)\)-representatives of orbits of codimension-2 hyperplanes in \(N\left(A_{1}\right)\). Then \(U=fa_{0}^{\prime}a_{1}U_{r}\) for some \(a_{1}\in\operatorname{Stab}\left(A_{1}\right)\) and \(U_{r}\in\{U_{r}\mid 1\leq r\leq n_{1}\}\). So \(gU=V\ \Rightarrow\ gfa_{0}^{\prime}a_{1}U_{r}=fa_{0}V_{s}\ \Rightarrow\left(a_{0}^{-1}f^{-1}gfa_{0}^{\prime}a_{1}\right)U_{r}=V_{s}\). Therefore \(\left(a_{0}^{-1}f^{-1}gfa_{0}^{\prime}a_{1}\right)\in J_{C}\), for \(C=\left(A_{t}\right)_{t=0}^{1}\), and so \[f^{-1}gf\in\operatorname{Stab}\left(A_{0}\right)J_{C}\operatorname{Stab}\left(A_{1}\right)\operatorname{Stab}\left(A_{0}\right)\] _Case \(k=2\)_: We have \(B_{2}=fa_{0}^{\prime}a_{1}A_{2}\) for some \(A_{1}\in L_{1}\) and \(a_{1}\in\operatorname{Stab}\left(A_{1}\right)\). Then \(U=fa_{0}^{\prime}a_{1}a_{2}U_{r}\) where \(a_{2}\in\operatorname{Stab}\left(A_{2}\right)\) and \(U_{r}\) is a \(\operatorname{Stab}\left(A_{2}\right)\)-representative in \(\{U_{r}\mid 1\leq r\leq n_{1}\}\). So, \(gU=V\ \Rightarrow\ g\left(fa_{0}^{\prime}a_{1}a_{2}\right)U_{r}=fa_{0}V_{s}\). 
Therefore, \[f^{-1}gf\in\operatorname{Stab}\left(A_{0}\right)J_{C}\operatorname{Stab}\left(A_{2}\right)\operatorname{Stab}\left(A_{1}\right)\operatorname{Stab}\left(A_{0}\right)\] _Case \(k=3\)_: Similarly, \(U=fa_{0}^{\prime}a_{1}a_{2}a_{3}U_{r}\) where \(a_{3}\in\operatorname{Stab}\left(A_{3}\right)\) and \(U_{r}\) is a \(\operatorname{Stab}\left(A_{3}\right)\)-representative in \(\{U_{r}\mid 1\leq r\leq n_{1}\}\). Thus, \(\left(gU=V\right)\Rightarrow g\left(fa_{0}^{\prime}a_{1}a_{2}a_{3}\right)U_{r}=fa_{0}V_{s}\), and so \[f^{-1}gf\in\operatorname{Stab}\left(A_{0}\right)J_{C}\operatorname{Stab}\left(A_{3}\right)\operatorname{Stab}\left(A_{2}\right)\operatorname{Stab}\left(A_{1}\right)\operatorname{Stab}\left(A_{0}\right)\] See Figure 12 for case \(k=2\). **Lemma 4.15**.: _Let \(B\) be a compact bouquet of circles and let \(X\to B\) be a graph of cube complexes with one compact nonpositively curved vertex-space and embedded locally convex edge-spaces. Let \(\widehat{X}\to X\) be the covering map induced by the universal covering map \(\widetilde{B}\to B\) where \(\widehat{X}\) is a tree of compact nonpositively curved cube complexes and the following diagram commutes:_ _Let \(F=\pi_{1}B\) be the free group acting freely and cocompactly on \(\Gamma_{\widehat{X}}=\widetilde{B}\) inducing a free cocompact \(F\)-action on \(\widehat{X}\). Then there exists a compact graph of cube complexes \(\overline{X}\to\Gamma_{\overline{X}}\) and a regular covering map \(\widehat{X}\to\overline{X}\) such that the following diagram commutes and the horizontal quotient \(\overline{X}\to\overline{X}^{E}\) is nonpositively curved:_ Figure 12. Case \(k=2\) _Furthermore, any intermediate covering map \(\widehat{X}\to\overline{X}^{\prime}\to\overline{X}\) induced by a finite index normal subgroup of \(\pi_{1}\Gamma_{\overline{X}}\) splits as a graph of cube complexes with nonpositively curved horizontal quotient._ Proof.: Using Lemma 4.2, we can ensure that any finite cover \(\overline{X}\) we find below admits a cubical horizontal quotient. Fix collections \(L\) and \(\mathcal{C}\) as in Definition 4.13. Let \[\mathcal{O}=\bigcup_{1\leq k\leq 3}\,\bigcup_{C\in\mathcal{C}}\,(\operatorname{Stab}\,(A_{0})\,J_{C}\operatorname{Stab}\,(A_{k})\cdots\operatorname{Stab}\,(A_{0}))\] where \(C=\left(A_{t}\right)_{t=0}^{k}\) and \(J_{C}\) is as in Definition 4.13. Note that the elements of \(\mathcal{O}\) are closing elements by definition. Any empty \(k\)-precorner in \(\overline{X}\) results from a \(k\)-chain in \(\widehat{X}\) that is closed by some element \(g\in F\). By Lemma 4.14, any closing element in \(F\) is conjugate to some element in \(\mathcal{O}\). By Lemma 4.10, \(\widehat{X}\) admits only trivial \(k\)-precorners where each trivial \(k\)-precorner is over a \(k\)-corner that lifts into a single vertex-space of \(\widehat{X}\). By assumption, the vertex-spaces of \(\widehat{X}\) are nonpositively curved and thus contain only nonempty \(k\)-corners. Thus \(\widehat{X}\) contains no closed \(k\)-chains and so, \(1_{F}\notin\mathcal{O}\). By Theorem 4.1, there exists a finite index normal subgroup \(G\triangleleft F\) that is disjoint from \(\mathcal{O}\). Let \(\overline{X}=G\backslash\widehat{X}\to G\backslash\Gamma_{\widehat{X}}=\Gamma_{\overline{X}}\) and \(\widehat{X}\to\overline{X}\) be the corresponding compact quotient and the regular covering map, respectively. 
By Remark 4.12, \(\overline{X}\) contains no empty \(k\)-precorners, and thus the horizontal quotient \(\overline{X}^{E}\) has no empty \(k\)-corners. By Remark 4.7, \(\overline{X}^{E}\) is nonpositively curved. Finally, we note that any finite index normal subgroup of \(G\) contains no closing elements and so, the corresponding finite cover splits as a graph of spaces with nonpositively curved horizontal quotient. ## 5. The Construction **Definition 5.1**.: Let \(Y\) be a compact nonpositively curved cube complex, and let \(Y^{\prime}\subset Y\) be a subcomplex. The map \(\varphi:Y^{\prime}\subset Y\to Y\) is a _partial local isometry_ if \(\varphi\) is a local isometry and both \(Y^{\prime}\) and \(\varphi\left(Y^{\prime}\right)\) are locally convex subcomplexes of \(Y\). **Definition 5.2**.: Let \(Y\) be a nonpositively curved cube complex and let \(\mathcal{O}=\left\{\varphi_{j}:Y_{j}\subset Y\to Y\right\}_{j=1}^{n}\) be a collection of injective partial local isometries of \(Y\) where each \(Y_{j}\) is connected. The _realization_ of the pair \(\left(Y,\mathcal{O}\right)\) is the cube complex \(X\) obtained as the following quotient space: \[X\ =\ Y\bigsqcup_{j=1}^{n}\left(Y_{j}\times I\right)\Big{/}\{(y,0)\sim y,\ (y,1)\sim\varphi_{j}\left(y\right),\ \forall\ y\in Y_{j}\}_{j=1}^{n}\] The space \(X\) decomposes as a graph of spaces via the map \(X\to B\) with \(Y\mapsto v\) and \(Y_{j}\times I\mapsto\gamma_{j}\) where \(B\) is the bouquet of \(n\) circles \(\left\{\gamma_{j}\right\}_{j=1}^{n}\) incident to a vertex \(v\). **Lemma 5.3**.: _Let \(\overline{X}\rightarrow\Gamma_{\overline{X}}\) be a compact graph of cube complexes with a strict horizontal quotient \(\overline{X}\rightarrow\overline{X}^{E}\) and isomorphic vertex-spaces. Let \(\Phi\in\operatorname{Aut}\left(\Gamma_{\overline{X}}\right)\) and let \(\overline{\Phi}\in\operatorname{Aut}\left(\overline{X}\right)\) be a combinatorial automorphism that maps vertex-spaces to vertex-spaces isometrically. Suppose that the left square of the diagram below commutes. Then there exists an automorphism \(\overline{\Phi}^{E}\in\operatorname{Aut}\left(\overline{X}^{E}\right)\) such that the right square of the diagram below commutes:_ Proof.: Define \(\overline{\Phi}^{E}:\overline{X}^{E}\rightarrow\overline{X}^{E}\) by \(\overline{\Phi}^{E}\left(y\right)=q\left(\overline{\Phi}\left(q^{-1}\left(y \right)\right)\right)\). Then \(\overline{\Phi}^{E}\) is well-defined. Indeed, \(q^{-1}\left(y\right)\subset\overline{X}\) is either a point or a horizontal graph. Since \(\overline{\Phi}\) is a combinatorial automorphism, it maps points to points and (by the commutativity of the left square) horizontal graphs to horizontal graphs. In both cases, \(q\left(\overline{\Phi}\left(q^{-1}\left(y\right)\right)\right)\) is a single point. Moreover, for each point \(x\in\overline{X}\), we have \(\overline{\Phi}^{E}\left(q\left(x\right)\right)=q\left(\overline{\Phi}\left( q^{-1}\left(q\left(x\right)\right)\right)\right)\). Since \(q\left(x\right)\) is a point, \(q^{-1}\left(q\left(x\right)\right)\) is either the point \(x\) or a horizontal graph containing \(x\). In both cases, \(\overline{\Phi}^{E}\left(q\left(x\right)\right)=q\left(\overline{\Phi}\left( q^{-1}\left(q\left(x\right)\right)\right)\right)=q\left(\overline{\Phi}\left( \left(x\right)\right)\right)\) and thus the right square commutes. 
By the commutativity of the left square, \(\overline{\Phi}\) permutes the vertex-spaces of \(\overline{X}\) which makes \(\overline{\Phi}^{E}\) an automorphism of \(\overline{X}^{E}\) that permutes copies of the vertex-spaces. **Theorem 5.4**.: _Let \(Y\) be a compact nonpositively curved cube complex and let \(\mathcal{O}\) be the set of injective partial local isometries of \(Y\). Then \(Y\) embeds in a compact nonpositively curved cube complex \(R\) where each \(\varphi\in\mathcal{O}\) extends to an automorphism \(\Phi\in\operatorname{Aut}\left(R\right)\)._ Proof.: We construct a compact graph of spaces \(\overline{X}\) whose horizontal quotient \(\overline{X}^{E}=R\) has the desired properties. Let \(\mathcal{O}=\left\{\varphi_{j}:Y_{j}\subset Y\to Y\right\}_{j=1}^{n}\) be the collection of injective partial local isometries of \(Y\) and let \(X\to B\) be the realization of the pair \(\left(Y,\mathcal{O}\right)\). Let \(\gamma_{j}\to B\) be the closed path giving the loop in \(B\) that corresponds to \(\varphi_{j}\). Let \(F=\pi_{1}B\) and let \(\widehat{X}\to X\) be the covering map induced by the universal covering \(\widetilde{B}\to B\) such that the following diagram commutes: Then \(\widehat{X}\rightarrow\Gamma_{\widehat{X}}=\widetilde{B}\) is a nonpositively curved tree of cube complexes. By Lemma 4.2 and Lemma 4.15, there exists a finite regular cover \(\overline{X}\to X\) that splits as a graph of spaces according to the following commutative diagram and such that the horizontal quotient \(\overline{X}\to\overline{X}^{E}\) is strict and \(\overline{X}^{E}\) is nonpositively curved. Note that each vertex-space of \(\overline{X}\) is a copy of \(Y\) according to some fixed isomorphism. Fix a vertex \(v\in\Gamma_{\overline{X}}\) and let \(\overline{X}_{v}\) be the corresponding vertex-space of \(\overline{X}\). By subgroup separability of free groups, we can assume that \(\Gamma_{\overline{X}}\) has no loops. Thus \(\overline{X}_{v}\) is adjacent to \(2n\) vertex-spaces \(\left\{\overline{X}_{v_{i}}\right\}_{i=1}^{2n}\). For each \(\varphi_{j}\in\mathcal{O}\), there are two vertex-spaces \(\overline{X}_{v_{j}}\) and \(X_{v_{2j}}\) that are joined to \(\overline{X}_{v}\) by copies of \(Y_{j}\times[-1,1]\) where \(Y_{j}\times[-1,1]\) joins a copy of \(\varphi_{j}\left(Y_{j}\right)\) in \(\overline{X}_{v}\) to a copy of \(Y_{j}\) in \(\overline{X}_{v_{j}}\), and it joins a copy of \(Y_{j}\) in \(\overline{X}_{v}\) to a copy of \(\varphi_{j}\left(Y_{j}\right)\) in \(\overline{X}_{v_{2j}}\). Each \(Y_{j}\times[-1,1]\) corresponds to a unique map \(\varphi_{j}\in\mathcal{O}\) and thus to a unique closed path \(\gamma_{j}\to B\). The lift of \(\gamma_{j}\) at \(v\) specifies a unique automorphism \(\Phi_{j}\in\operatorname{Aut}\left(\Gamma_{\overline{X}}\right)\) that maps \(v\) to \(v_{j}\). Then there is an automorphism \(\overline{\Phi}_{j}\in\operatorname{Aut}\left(\overline{X}\right)\) that maps \(\overline{X}_{v}\) to \(\overline{X}_{v_{j}}\) such that the following diagram commutes: Note that \(\overline{\Phi}_{j}\) maps copies of \(Y_{j}\) and \(\varphi_{j}\left(Y_{j}\right)\) in \(\overline{X}_{v}\) to copies of \(Y_{j}\) and \(\varphi_{j}\left(Y_{j}\right)\) in \(\overline{X}_{v_{j}}\), respectively. However, in \(\overline{X}^{E}\) the copy of \(Y_{j}\) in \(q\left(\overline{X}_{v_{j}}\right)=q\left(\overline{\Phi}_{j}\left(\overline{ X}_{v}\right)\right)\) is identified with a copy of \(\varphi_{j}\left(Y_{j}\right)\) in \(q\left(\overline{X}_{v}\right)\). 
By Lemma 5.3, any automorphism \(\overline{\Phi}\in\operatorname{Aut}\left(\overline{X}\right)\) induced by an automorphism of the underlying graph \(\Phi\in\operatorname{Aut}\left(\Gamma_{\overline{X}}\right)\) descends to an automorphism \(\overline{\Phi}^{E}\in\operatorname{Aut}\left(\overline{X}^{E}\right)\). So \(\overline{\Phi}_{j}^{E}\left(q\left(\overline{X}_{v}\right)\right)=q\left( \overline{\Phi}_{j}\left(\overline{X}_{v}\right)\right)=q\left(\overline{X}_{ v_{j}}\right)\). Note that \(\overline{\Phi}_{j}^{E}\) maps a copy of \(Y_{j}\) in \(q\left(\overline{X}_{v}\right)\) to a copy of \(\varphi_{j}\left(Y_{j}\right)\) in \(q\left(\overline{X}_{v}\right)\). Identify \(Y\) with \(q\left(\overline{X}_{v}\right)\) and assume \(\varphi_{j}:Y_{j}\subset q\left(\overline{X}_{v}\right)\to q\left( \overline{X}_{v}\right)\). Then \(Y\) embeds in \(\overline{X}^{E}\) and the restriction \(\overline{\Phi}_{j}^{E}|_{Y_{j}}=\varphi_{j}\). **Remark 5.5**.: Note that \(\dim\left(\overline{X}^{E}\right)=\dim\left(Y\right)\). **Remark 5.6**.: Following the _Simple Local Gluing_ Lemma in [1], Theorem 5.4 can be generalized to nonpositively curved metric spaces provided that some finiteness conditions are satisfied and the edge-spaces are locally convex, closed, and complete subspaces. **Definition 5.7**.: Let \(Y\) be a compact nonpositively curved cube complex. A collection of injective partial local isometries \(\mathcal{O}=\left\{\varphi_{j}:Y_{j}\subset Y\to Y\right\}_{j=1}^{n}\) is _controlled_ if the corresponding realization \(X\to B\) is a controlled graph of spaces. **Theorem 5.8** (Haglund-Wise [10]).: _Let \(X\) decompose as a finite graph of spaces, where each vertex-space \(X_{v}\) and edge-space \(X_{e}\) is special with finitely many hyperplanes. Then \(X\) has a finite special cover provided the attaching maps of edge-spaces satisfy the following:_ 1. _the attaching maps_ \(X_{e}\to X_{\iota(e)}\) _and_ \(X_{e}\to X_{\tau(e)}\) _are injective local-isometries;_ 2. _distinct hyperplanes of_ \(X_{e}\) _map to distinct hyperplanes of_ \(X_{\iota(e)}\) _and_ \(X_{\tau(e)}\)_;_ 3. _noncrossing hyperplanes map to noncrossing hyperplanes;_ 4. _no hyperplane of_ \(X_{e}\) _extends in_ \(X_{\iota(e)}\) _or_ \(X_{\tau(e)}\) _to a hyperplane dual to an edge that intersects_ \(X_{e}\) _in a single vertex._ **Remark 5.9**.: The finite special cover in Theorem 5.8 corresponds to a finite index subgroup \(N\) of the fundamental group of the underlying graph of \(X\) where any subgroup of \(N\) induces a cover that is special. See [10] for details. This makes Theorem 5.8 compatible with the prevailing methods of finding finite covers in this text, namely finding covers of graphs of spaces induced by finite covers of their underlying graphs. **Theorem 5.10**.: _Let \(Y\) be a compact special cube complex and let \(\mathcal{O}\) be a controlled collection of injective partial local isometries of \(Y\). Then there exists a compact special cube complex \(R\) containing \(Y\) as a locally convex subcomplex such that each \(\varphi\in\mathcal{O}\) extends to some automorphism \(\Phi\in\operatorname{Aut}\left(R\right)\)._ Proof.: Let \(X\) be the realization of the pair \(\left\{Y,\mathcal{O}\right\}\). Then \(X\) is a cube complex that splits as a graph of spaces \(X\to\Gamma_{X}\) where \(\Gamma_{X}\) is a compact bouquet of circles and \(\pi_{1}\Gamma_{X}=F\) is a free group. Since \(Y\) is compact and \(\mathcal{O}\) is controlled, \(X\) is a compact controlled graph of spaces. 
The claim then follows from Remark 5.9, Theorem 5.8, Theorem 5.4, Lemma 4.4, Remark 4.5, Lemma 4.3, and Lemma 3.2.
2309.09385
A Data-Driven Model for Abundances in Metal-poor Stars and Implications for Nucleosynthetic Sources
We present a data-driven model for abundances of Fe, Sr, Ba, and Eu in metal-poor (MP) stars. The production patterns for core-collapse supernovae (CCSNe) and binary neutron star mergers (BNSMs) are derived from the data of Holmbeck et al. (arXiv:2007.00749) on [Sr/Fe], [Ba/Fe], and [Eu/Fe] for 195 stars. Nearly all the data can be accounted for by mixtures of contributions from these two sources. We find that on average, the Sr contribution to an MP star from BNSMs is $\approx 3$ times that from CCSNe. Our model is also consistent with the solar inventory of Fe, Sr, Ba, and Eu. We carry out a parametric $r$-process study to explore the conditions that can give rise to our inferred production patterns and find that such conditions are largely consistent with those from simulations of CCSNe and BNSMs. Our model can be greatly enhanced by accurate abundances of many $r$-process elements in a large number of MP stars, and future results from this approach can be used to probe the conditions in CCSNe and BNSMs in much more detail.
Axel Gross, Zewei Xiong, Yong-Zhong Qian
2023-09-17T21:50:15Z
http://arxiv.org/abs/2309.09385v1
# A Data-Driven Model for Abundances in Metal-poor Stars and Implications for Nucleosynthetic Sources ###### Abstract We present a data-driven model for abundances of Fe, Sr, Ba, and Eu in metal-poor (MP) stars. The production patterns for core-collapse supernovae (CCSNe) and binary neutron star mergers (BNSMs) are derived from the data of Holmbeck et al. (2020) on [Sr/Fe], [Ba/Fe], and [Eu/Fe] for 195 stars. Nearly all the data can be accounted for by mixtures of contributions from these two sources. We find that on average, the Sr contribution to an MP star from BNSMs is \(\approx 3\) times that from CCSNe. Our model is also consistent with the solar inventory of Fe, Sr, Ba, and Eu. We carry out a parametric \(r\)-process study to explore the conditions that can give rise to our inferred production patterns and find that such conditions are largely consistent with those from simulations of CCSNe and BNSMs. Our model can be greatly enhanced by accurate abundances of many \(r\)-process elements in a large number of MP stars, and future results from this approach can be used to probe the conditions in CCSNe and BNSMs in much more detail. Galaxy chemical evolution (580); Stellar abundances (1577); Population II stars (1284); R-process (1324); Core-collapse supernovae (304); Compact objects (288) 0000-0002-4870-2886]Axel Gross 0000-0002-2882-7885]Zewei Xiong 0000-0002-4880-7885]Yong-Zhong Qian ## 1 Introduction It is well known that Type Ia (SNe Ia) and core-collapse supernovae (CCSNe) are major sources for Fe, that elements heavier than the Fe group are mainly produced by the rapid (\(r\)) and slow (\(s\)) neutron-capture processes, and that asymptotic giant branch (AGB) stars of low to intermediate masses are the site of the main \(s\)-process producing Sr and heavier elements (see e.g., Arcones & Thielemann, 2023 for a review). The spectacular multimessenger observations of GW170817 (Abbott et al., 2017) provided strong support of binary neutron star mergers (BNSMs) being a site of the \(r\)-process (e.g. Kasen et al., 2017), and many theoretical studies have been devoted to this topic both before (see e.g., Thielemann et al., 2017 for a review) and after this event (e.g., Curtis et al., 2023; Just et al., 2023; Kiuchi et al., 2023). In addition, theoretical studies suggest that CCSNe may produce some elements heavier than the Fe group (e.g., Woosley & Hoffman, 1992; Hoffman et al., 1997; Wanajo et al., 2018; Wang & Burrows, 2023) and that a subset of them may even be a site of the \(r\)-process (e.g., Nishimura et al., 2015; Siegel et al., 2019; Fischer et al., 2020). Despite the above advances, we are still far from being able to make precise ab initio predictions for the nucleosynthesis of astrophysical sources. In particular, the extreme conditions in the dynamic environments of CCSNe and BNSMs are inherently difficult to simulate, and there are large uncertainties in the nuclear input for simulating these sources and the associated \(r\)-process. On the other hand, because both sources are associated with rapidly-evolving massive stars, they are expected to have dominated the chemical evolution of the universe during the first \(\sim 1\) Gyr, before Fe contributions from SNe Ia and \(s\)-process contributions from AGB stars became significant. Consequently, metal-poor (MP) stars formed during this early epoch provide an excellent fossil record for deciphering the nucleosynthesis of CCSNe and BNSMs. 
For example, Qian & Wasserburg (2001, 2008) took the observed elemental abundance patterns in two MP stars as the production patterns of two distinct sources and showed that the data for other stars could be largely explained as mixtures of those two patterns. With hindsight, they should have identified those two sources as CCSNe and BNSMs rather than two distinct subsets of CCSNe. In this Letter we take a data-driven approach to infer the average production patterns of CCSNe and BNSMs. Unlike Qian & Wasserburg (2001, 2008), we do not take these patterns from two individual MP stars. Instead, we derive them from the latest data on [Sr/Fe], [Ba/Fe], and [Eu/Fe] (Holmbeck et al., 2020) provided by the \(R\)-Process Alliance (RPA) search for \(r\)-process-enhanced stars in the Galactic Halo. We attribute the pattern with dominant production of Fe and Sr to CCSNe and that with dominant production of Sr, Ba, and Eu to BNSMs. We show that nearly all the RPA data can be accounted for by mixtures of contributions from these two sources, and that their contributions over the Galactic history are also consistent with the solar inventory of Fe, Sr, Ba, and Eu (SS2). We then carry out a parametric study of the \(r\)-process to explore the conditions that can give rise to the inferred production patterns and compare such conditions with those found in simulations of CCSNe and BNSMs (SS3). Finally, we summarize our results and discuss how our approach can be greatly enhanced by accurate abundances of many \(r\)-process elements in a large number of MP stars and how future results from this approach can be used to further probe the conditions in CCSNe and BNSMs (SS4). ## 2 Model for Abundances in MP Stars We model the abundance of element E in an MP star as a mixture of contributions from two distinct sources, each with a fixed characteristic production pattern. The number ratio of E to Fe atoms in the star is given by \[\left(\frac{\rm E}{\rm Fe}\right)=x\left(\frac{\rm E}{\rm Fe}\right)_{1}+(1-x )\left(\frac{\rm E}{\rm Fe}\right)_{2}, \tag{1}\] where \(({\rm E}/{\rm Fe})_{1}\) and \(({\rm E}/{\rm Fe})_{2}\) represent the production of E relative to Fe by sources 1 and 2, respectively, and \(x\) is the fraction of Fe contributed by source 1. Because abundance data are commonly presented in terms of \([{\rm E}/{\rm Fe}]=\log({\rm E}/{\rm Fe})-\log({\rm E}/{\rm Fe})_{\odot}\), we rewrite Eq. (1) as \[10^{[{\rm E}/{\rm Fe}]}=x\times 10^{[{\rm E}/{\rm Fe}]_{1}}+(1-x)\times 10^{[{ \rm E}/{\rm Fe}]_{2}}. \tag{2}\] The RPA data (Holmbeck et al., 2020) consist of complete measurements of [Sr/Fe], [Ba/Fe], and [Eu/Fe] for 211 MP stars with \(-3\lesssim[{\rm Fe}/{\rm H}]\lesssim-1\). We suspect that 16 of these stars received \(s\)-process contributions (from relatively massive AGB stars or through binary mass transfer from former AGB companions) based on their high values of \([{\rm Ba}/{\rm Eu}]>0\), and therefore, exclude them from further consideration. As a first test of the model, we determine two sets of parameters \(\{[{\rm Sr}/{\rm Fe}]_{i},[{\rm Ba}/{\rm Fe}]_{i},[{\rm Eu}/{\rm Fe}]_{i}\}\)\((i=1,2)\) that best reproduce all the data for the remaining 195 stars taking into account the uncertainty of \(\sigma\approx 0.2\) dex for each measurement. 
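To make the two-source mixture of Eq. (2) concrete, the short sketch below (added here for illustration; it is not part of the original analysis) evaluates the predicted [Sr/Fe], [Ba/Fe], and [Eu/Fe] of a star for a given Fe fraction \(x\) from source 1, using the two production patterns quoted in the first test of the model below; the value \(x=0.8\) is arbitrary.

```python
import numpy as np

def mix_abundance(x, pattern1, pattern2):
    """Eq. (2): predicted [E/Fe] of a star as a mixture of two sources.

    x        : fraction of the star's Fe contributed by source 1
    pattern1 : [E/Fe] production pattern of source 1 (one entry per element)
    pattern2 : [E/Fe] production pattern of source 2 (one entry per element)
    """
    return np.log10(x * 10.0**np.asarray(pattern1)
                    + (1.0 - x) * 10.0**np.asarray(pattern2))

# Production patterns for ([Sr/Fe], [Ba/Fe], [Eu/Fe]) from the first test of the model:
source1 = [-0.49, -3.00, -0.77]   # Fe- and Sr-dominated source
source2 = [0.90, 0.91, 1.30]      # Sr-, Ba-, and Eu-dominated source
print(mix_abundance(0.8, source1, source2))  # star drawing 80% of its Fe from source 1
```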
We obtain these parameters by minimizing \[Q=\sum_{j,{\rm E}}H\left(\frac{[{\rm E}/{\rm Fe}]_{j}-[{\rm E}/{\rm Fe}]_{*,j}}{\sigma}\right), \tag{3}\] where \([{\rm E}/{\rm Fe}]_{j}\) and \([{\rm E}/{\rm Fe}]_{*,j}\) refer to the predicted value and the measured mean for the \(j\)th star, respectively, and \(H(y)\) is the Huber loss function defined by \(H(y)=y^{2}/2\) for \(|y|\leq 1\) and \(H(y)=|y|-1/2\) for \(|y|>1\). The use of \(H(y)\) reduces sensitivity to the outliers in the data. With \(\{[{\rm Sr}/{\rm Fe}]_{1},[{\rm Ba}/{\rm Fe}]_{1},[{\rm Eu}/{\rm Fe}]_{1}\}=\{-0.49,-3.00,-0.77\}\) and \(\{[{\rm Sr}/{\rm Fe}]_{2},[{\rm Ba}/{\rm Fe}]_{2},[{\rm Eu}/{\rm Fe}]_{2}\}=\{0.90,0.91,1.30\}\), we find that all the data on [Sr/Fe], [Ba/Fe], and [Eu/Fe] for 140 (190) out of the 195 stars can be reproduced within \(1\sigma\) (\(2\sigma\)). Note that the fraction \(x\) of Fe contributed by source 1 to each star is also optimized during the above calculation. The above production patterns are inferred from a mathematical procedure without considering the mechanisms of synthesizing Fe, Sr, Ba, and Eu. In this sense, they represent important observational constraints on the sources for these elements. Sources 1 and 2 have drastically different production of Sr, Ba, and Eu relative to Fe. With respect to the solar composition, source 1 has high Fe production while source 2 has high production of Sr, Ba, and Eu. The extremely low production of Ba by source 1 with \([{\rm Ba}/{\rm Fe}]_{1}=-3.00\) is especially striking. Because the above characteristics of sources 1 and 2 bear strong resemblance to those of CCSNe and BNSMs, respectively, we carry out a second test of our model by incorporating predefined features of these two sources. Specifically, we assume that CCSNe produce Fe and Sr but no Ba or Eu while BNSMs produce Sr, Ba, and Eu but no Fe. In this simplified but well-motivated case, the production patterns are characterized by \([{\rm Sr}/{\rm Fe}]_{\rm SN}\), \([{\rm Ba}/{\rm Sr}]_{\rm NSM}\), and \([{\rm Eu}/{\rm Ba}]_{\rm NSM}\). The values of [Sr/Fe], [Ba/Fe], and [Eu/Fe] for a star are given by \[[{\rm Sr}/{\rm Fe}]=\left[{\rm Sr}/{\rm Fe}\right]_{\rm SN}+\log(1+\alpha), \tag{4}\] \[[{\rm Ba}/{\rm Fe}]=\left[{\rm Sr}/{\rm Fe}\right]_{\rm SN}+\left[{\rm Ba}/{\rm Sr}\right]_{\rm NSM}+\log\alpha, \tag{5}\] \[[{\rm Eu}/{\rm Fe}]=\left[{\rm Ba}/{\rm Fe}\right]+\left[{\rm Eu}/{\rm Ba}\right]_{\rm NSM}, \tag{6}\] where \(\alpha\) is the ratio of the Sr contribution from BNSMs to that from CCSNe. Note that \(\alpha\) plays a similar role to \(x\) in Eq. (2) because now only Sr is produced by both sources. By minimizing \(Q\) in Eq. (3), we find that with \([{\rm Sr}/{\rm Fe}]_{\rm SN}=-0.54\), \([{\rm Ba}/{\rm Sr}]_{\rm NSM}=0.00\), and \([{\rm Eu}/{\rm Ba}]_{\rm NSM}=0.43\), all the data on [Sr/Fe], [Ba/Fe], and [Eu/Fe] for 141 (189) out of the 195 stars can be reproduced within \(1\sigma\) (\(2\sigma\)). So the same level of agreement with the data is achieved for the model represented by Eq. (2) and that by Eqs. (4)-(6). We focus on the latter in the discussion below. Next, we estimate the uncertainties in the inferred production patterns using Bayesian techniques. 
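As an illustration of the point-estimate fit just described (a minimal sketch added here, not the authors' code), the three global parameters of Eqs. (4)-(6) and one \(\log\alpha\) per star can be obtained by minimizing \(Q\) of Eq. (3) with an off-the-shelf optimizer; the two-star data array below is a toy placeholder.

```python
import numpy as np
from scipy.optimize import minimize

def huber(y):
    """Huber loss H(y) of Eq. (3)."""
    y = np.abs(y)
    return np.where(y <= 1.0, 0.5 * y**2, y - 0.5)

def predict(sr_fe_sn, ba_sr_nsm, eu_ba_nsm, log_alpha):
    """Eqs. (4)-(6): predicted [Sr/Fe], [Ba/Fe], [Eu/Fe] for one star."""
    sr_fe = sr_fe_sn + np.log10(1.0 + 10.0**log_alpha)
    ba_fe = sr_fe_sn + ba_sr_nsm + log_alpha
    eu_fe = ba_fe + eu_ba_nsm
    return np.array([sr_fe, ba_fe, eu_fe])

def Q(params, data, sigma=0.2):
    """Eq. (3), summed over stars and over Sr, Ba, Eu."""
    m, log_alpha = params[:3], params[3:]
    pred = np.array([predict(*m, a) for a in log_alpha])
    return huber((pred - data) / sigma).sum()

# data: (N_star, 3) measured [Sr/Fe], [Ba/Fe], [Eu/Fe]; toy values here
data = np.array([[0.2, 0.0, 0.4], [-0.3, -0.6, -0.2]])
x0 = np.concatenate(([-0.5, 0.0, 0.4], np.zeros(len(data))))
res = minimize(Q, x0, args=(data,), method="Nelder-Mead")
print(res.x[:3])  # fitted [Sr/Fe]_SN, [Ba/Sr]_NSM, [Eu/Ba]_NSM
```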
For the \(j\)th star, the likelihood of reproducing the data \(D_{j}=\{[{\rm Sr}/{\rm Fe}]_{*,j},[{\rm Ba}/{\rm Fe}]_{*,j},[{\rm Eu}/{\rm Fe}]_{*,j}\}\) by the model with the parameters \(M=\{[{\rm Sr}/{\rm Fe}]_{\rm SN},[{\rm Ba}/{\rm Sr}]_{\rm NSM},[{\rm Eu}/{\rm Ba}]_{\rm NSM}\}\) and \(A_{j}=\log\alpha_{j}\) is \[P_{j}(D_{j}|M,A_{j})={\cal N}_{j}({\rm Sr}){\cal N}_{j}({\rm Ba}){\cal N}_{j}({\rm Eu}), \tag{7}\] where, for example, \({\cal N}_{j}({\rm Sr})\) is the normal distribution of \([{\rm Sr}/{\rm Fe}]_{j}\) centered at \([{\rm Sr}/{\rm Fe}]_{*,j}\) with a standard deviation of \(\sigma\). Assuming uniform prior probabilities for \(M\) and all the \(A_{j}\)'s, we obtain the posterior probability of the model \[P(M,\{A_{j}\}|\{D_{j}\})=\frac{\prod_{j}P_{j}(D_{j}|M,A_{j})}{P(\{D_{j}\})}, \tag{8}\] where \(P(\{D_{j}\})=\int dM\prod_{j}\int dA_{j}P_{j}(D_{j}|M,A_{j})\). Various marginal distributions can be obtained by integrating \(P(M,\{A_{j}\}|\{D_{j}\})\) over the parameters of no concern. For example, \(P(\left[\rm{Sr/Fe}\right]_{\rm SN})\), \(P(\left[\rm{Sr/Fe}\right]_{\rm SN},\left[\rm{Ba/Sr}\right]_{\rm NSM})\), and similar distributions are presented in Fig. 1. We find \(\left[\rm{Sr/Fe}\right]_{\rm SN}=-0.45^{+0.09}_{-0.11}\), \(\left[\rm{Ba/Sr}\right]_{\rm NSM}=0.03\pm 0.05\), and \(\left[\rm{Eu/Ba}\right]_{\rm NSM}=0.44\pm 0.02\). The optimal values found above by the second test of the model are in good agreement with these results (within \(1\sigma\)). Because CCSNe are the only source for Fe and BNSMs are the only source for Ba and Eu, the model predicts \[10^{\left[\rm{Sr/Fe}\right]}=10^{\left[\rm{Sr/Fe}\right]_{\rm SN}}+10^{\left[\rm{Ba/Fe}\right]-\left[\rm{Ba/Sr}\right]_{\rm NSM}}, \tag{9}\] \[\left[\rm{Eu/Ba}\right]=\left[\rm{Eu/Ba}\right]_{\rm NSM}. \tag{10}\] The above predictions are compared with the data in Fig. 2. Taking into account the measurement errors, we find that nearly all the stars follow these predictions. The same is also true of the relation between [Sr/Fe] and [Eu/Fe] (not shown), which follows from the above two predictions. In Fig. 2a, one star lies far below while three lie significantly above the relation between [Sr/Fe] and [Ba/Fe]. We will discuss these anomalous stars further in §4. The normalized histogram of the optimal \(A_{j}\) for each star is shown in Fig. 3. We take the algebraic mean of all the marginal distributions of \(A_{j}\) to be the distribution of \(A=\log\alpha\) for MP stars, which is also shown in Fig. 3. This distribution is very similar to the histogram and gives \(A=0.47^{+0.29}_{-0.35}\), which means that on average, the Sr contribution to a star from BNSMs is \(\alpha=10^{A}\approx 3\) times that from CCSNe. Assuming that CCSNe and BNSMs have operated the same way over the Galactic history, we expect that the solar system material represents the average mixture of their contributions very well, and therefore, \[\alpha_{\odot}\approx\frac{10^{\left[\rm{Sr/Eu}\right]_{\rm NSM}}}{10^{\left[\rm{Sr/Fe}\right]_{\rm SN}}}\frac{\left(\rm{Fe/H}\right)_{\odot}}{\left(\rm{Fe/H}\right)_{\odot,\rm SN}}\approx 3, \tag{11}\] where \(\left(\rm{Fe/H}\right)_{\odot,\rm SN}\) is the CCSN contribution to the solar Fe inventory, and we have taken \(\left(\rm{Eu/H}\right)_{\odot,\rm NSM}\approx\left(\rm{Eu/H}\right)_{\odot}\) because almost all of the solar Eu inventory came from the \(r\)-process. 
With \(\left[\rm{Sr/Eu}\right]_{\rm NSM}=-\left[\rm{Ba/Sr}\right]_{\rm NSM}-\left[\rm{Eu/Ba}\right]_{\rm NSM}=-0.47\) and \(\left[\rm{Sr/Fe}\right]_{\rm SN}=-0.45\), we obtain \(\left(\rm{Fe/H}\right)_{\odot,\rm SN}\approx\left(\rm{Fe/H}\right)_{\odot}/3\). Studies of Galactic chemical evolution (e.g., Matteucci & Greggio, 1986; Timmes et al., 1995) estimated that CCSNe contributed \(\approx 1/3\) to \(2/3\) of the solar Fe inventory. In view of the very different approaches taken here and in those studies, it is remarkable that they give similar results. Our model predicts that the net \(r\)-process contribution from CCSNe and BNSMs to the solar Sr inventory is \[\frac{\left(\rm{Sr/H}\right)_{\odot,r}}{\left(\rm{Sr/H}\right)_{\odot}}\approx\frac{1+\alpha_{\odot}}{\alpha_{\odot}}\times 10^{\left[\rm{Sr/Eu}\right]_{\rm NSM}}\approx 0.45, \tag{12}\] and that the net \(r\)-process contribution from BNSMs to the solar Ba inventory is \[\frac{\left(\rm Ba/H\right)_{\odot,r}}{\left(\rm Ba/H\right)_{\odot}}\approx 10^{-\left[\rm Eu/Ba\right]_{\rm NSM}}\approx 0.36. \tag{13}\] Figure 1: Corner plot for parameters characterizing the production patterns of CCSNe and BNSMs. The marginal distribution of each parameter as well as \(1\sigma\), \(2\sigma\), and \(3\sigma\) contours for pairs of parameters are shown. Filled circles indicate the best-fit parameters, which are close to those (crosses) inferred from a different method. Figure 2: Comparison of the model with the data (crosses) for (a) [Sr/Fe] and [Ba/Fe], and (b) [Eu/Ba] and [Fe/H]. The dark curve corresponds to the best-fit parameters and the gray region reflects \(1\sigma\) uncertainties in the parameters. The error bars indicate the uncertainties in each measurement. Figure 3: Normalized histogram of optimal \(A_{j}\) for each star and distribution of \(A\) for MP stars. The latter (black curve) is taken to be the algebraic mean of all the marginal distributions of \(A_{j}\). The conventional way of estimating the solar \(r\)-process inventory of an element was to subtract the \(s\)-process contribution from its net solar inventory. Because the \(s\)-process contribution was estimated from a parametric model, this procedure could have large uncertainties. For example, for \({}^{88}\)Sr, the dominant isotope of Sr, Goriely (1999) estimated that the \(r\)-process contributed \(\approx 23\%\) of its solar inventory, but with a possible range of (0-27)%. While the higher end of this range is \(\approx 1.6\) times lower than our estimate, the discrepancy is less in terms of the \(s\)-process contribution, with his estimate (73%) being \(\approx 1.3\) times higher than ours (55%). Goriely (1999) also estimated that the \(r\)-process contributed \(\approx 15\%\) of the solar inventory of \({}^{135,137,138}\)Ba, the dominant isotopes of Ba, but with a possible range of (0-38)%. Our estimate is just below the higher end of this range. Considering the drastically different approach used here, we regard our estimates of the \(r\)-process contributions to the solar inventory of Sr and Ba as quite reasonable. Qian & Wasserburg (2001) advocated similar estimates to make their model fit the data on MP stars available then. The difference is that our estimates are the direct consequences of our model while theirs were introduced into their model as corrections. 
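As a quick numerical check of Eqs. (11)-(13) (a back-of-the-envelope evaluation added here, not from the original Letter), the best-fit values quoted above reproduce the stated solar-inventory fractions:

```python
sr_eu_nsm = -0.47   # [Sr/Eu]_NSM = -[Ba/Sr]_NSM - [Eu/Ba]_NSM
sr_fe_sn = -0.45    # [Sr/Fe]_SN
eu_ba_nsm = 0.44    # [Eu/Ba]_NSM
alpha_sun = 3.0     # solar-mix value of alpha, from A = log(alpha) ~ 0.47

fe_sn_fraction = 10.0**(sr_eu_nsm - sr_fe_sn) / alpha_sun        # Eq. (11) rearranged
sr_r_fraction = (1.0 + alpha_sun) / alpha_sun * 10.0**sr_eu_nsm  # Eq. (12)
ba_r_fraction = 10.0**(-eu_ba_nsm)                               # Eq. (13)

print(fe_sn_fraction, sr_r_fraction, ba_r_fraction)
# ~0.32, ~0.45, ~0.36: about 1/3 of the solar Fe from CCSNe, and the quoted
# 45% and 36% r-process shares of the solar Sr and Ba inventories.
```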
## 3 Implications for CCSNe and BNSMs Because CCSNe and BNSMs are inherently difficult to simulate and there are large uncertainties in the nuclear input for simulating these sources and the associated \(r\)-process, it is not very instructive to compare the production patterns calculated from specific simulations with the average patterns derived above from the data on MP stars. Instead, we carry out a parametric study of nucleosynthesis to explore the astrophysical conditions that may produce our inferred patterns, and then compare such conditions with those found in simulations. In each parametric run, we follow the expansion of some ejecta that could occur in CCSNe or BNSMs. We start each run at time \(t=0\) from an initial state with temperature \(T_{0}=10\) GK, density \(\rho_{0}\), and electron fraction \(Y_{e,0}\). The subsequent evolution of density is specified by \[\rho(t)=\begin{cases}\rho_{0}e^{-t/\tau},&t\leq t_{\rm tr},\\ \rho_{0}e^{-t_{\rm tr}/\tau}(t_{\rm tr}/t)^{3},&t>t_{\rm tr},\end{cases} \tag{14}\] where \(\tau\) is a characteristic timescale and \(t_{\rm tr}=3(1-\ln 0.6)\tau\) corresponds to the transition between the two expansion regimes as suggested by Lippuner & Roberts (2015). We determine \(\rho_{0}\) from \(\phi_{0}=k_{B}^{4}T_{0}^{3}m_{N}/(\hbar^{3}c^{3}\rho_{0})\), where \(k_{B}\) is the Boltzmann constant, \(\hbar\) is the Planck constant, \(c\) is the speed of light, and \(m_{N}\) is the nucleon rest mass. We assume nuclear statistical equilibrium between \(T_{0}\) and \(T=8\) GK and use a reaction network to evolve the nuclear composition for \(T<8\) GK. For a specific set of \(\phi_{0}\), \(\tau\), and \(Y_{e,0}\), the change of composition at each time step is accompanied by energy release. We calculate the corresponding change of temperature based on the thermodynamics associated with this energy release and the expansion as described in Mendoza-Temis et al. (2015). Consequently, \(T(t)\), \(Y_{e}(t)\), and the entropy \(S(t)\) are updated along with \(\rho(t)\) at each time step. We logarithmically sample \(\phi_{0}\) between 1 and \(100\,k_{B}\) per baryon and \(\tau\) between \(10^{-3}\) and 1 s, and uniformly sample \(Y_{e,0}\) between 0.05 and 0.55. Each parameter takes 21 values, so there are a total of 9261 parametric runs. We use the same nuclear reaction network as employed in Collins et al. (2023) and adopt the nuclear mass model FRDM for the \(r\)-process (see Mendoza-Temis et al., 2015). Because the evolution at \(T\lesssim 5\) GK is more pertinent to the final nucleosynthesis outcome, we use the expansion timescale \(\tau_{\rm exp}=|d\ln\rho(t)/dt|^{-1}\) along with the \(Y_{e}\) and \(S\) at \(T=5\) GK to characterize the outcome of each parametric run below. Our main interest is in the \(r\)-process production of Sr, Ba, and Eu, which requires neutron-rich conditions with \(Y_{e}<0.5\). For fixed \(Y_{e}\), the qualitative outcome of this nucleosynthesis approximately correlates with \(S^{3}/\tau_{\rm exp}\) (e.g., Hoffman et al., 1997). As shown in Fig. 4, regions of \(\log(S^{3}/\tau_{\rm exp})\) and \(Y_{e}\) fall into three categories, in which significant production occurs for Sr only, Ba and Eu only, and all of them, respectively. As a quantitative criterion, an element is produced significantly if its mass fraction exceeds 10% of the corresponding value for the solar \(r\)-process pattern. 
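The sketch below (an illustration added here; the actual yields of course require the full reaction-network calculation) shows the initial density implied by \(\phi_{0}\) at \(T_{0}=10\) GK, the density evolution of Eq. (14), and the parameter grid described above. Physical constants are taken from scipy.constants, with the neutron mass standing in for the nucleon rest mass \(m_{N}\).

```python
import numpy as np
from scipy.constants import k as k_B, hbar, c, m_n  # SI units

T0 = 1.0e10  # K, i.e. 10 GK

def rho0_from_phi0(phi0):
    """Initial density [kg/m^3] from phi_0 = k_B^4 T_0^3 m_N / (hbar^3 c^3 rho_0),
    with phi_0 given in units of k_B per baryon."""
    return k_B**3 * T0**3 * m_n / (hbar**3 * c**3 * phi0)

def rho(t, phi0, tau):
    """Density evolution of Eq. (14): exponential decay, then (t_tr/t)^3 expansion."""
    rho_0 = rho0_from_phi0(phi0)
    t_tr = 3.0 * (1.0 - np.log(0.6)) * tau
    return np.where(t <= t_tr,
                    rho_0 * np.exp(-t / tau),
                    rho_0 * np.exp(-t_tr / tau) * (t_tr / t)**3)

# The 21 x 21 x 21 = 9261-point parameter grid described in the text:
phi0_grid = np.logspace(0, 2, 21)       # 1 to 100 k_B per baryon, logarithmic
tau_grid = np.logspace(-3, 0, 21)       # 1 ms to 1 s, logarithmic
ye0_grid = np.linspace(0.05, 0.55, 21)  # initial electron fraction, uniform

print(rho0_from_phi0(1.0) / 1e3, "g/cm^3")  # about 1e8 g/cm^3 for phi_0 = 1
```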
It was expected long ago that CCSNe are a significant source for Sr (e.g., Woosley & Hoffman, 1992) with the production occurring in the neutrino-driven wind (e.g., Hoffman et al., 1997). The parameter region with significant production of Sr only in Fig. 4 can be compared to the conditions found in CCSN models (e.g., Roberts et al., 2010; Wanajo et al., 2018; Xiong et al., 2019, 2020; Sieverding et al., 2020; Wang & Burrows, 2023). The relevant ejecta have \(S\sim 10\)-\(80\,k_{B}\) per baryon and \(\tau_{\rm exp}\sim 0.01\)-\(0.6\) s, with lower \(S\) associated with smaller \(\tau_{\rm exp}\). The corresponding range of \(\log(S^{3}/\tau_{\rm exp})\) is \(\sim 5\)-\(6\), where \(S\) is in units of \(k_{B}\) per baryon and \(\tau_{\rm exp}\) is in units of s. These ejecta typically have \(Y_{e}\sim 0.45\)-\(0.55\), but some models have \(Y_{e}\) as low as \(\sim 0.38\) (e.g., Wanajo et al., 2018), and more models can have similarly low \(Y_{e}\) if \(\nu_{e}\) and \(\bar{\nu}_{e}\) are appropriately mixed with sterile species that do not interact with matter (e.g., Xiong et al., 2019). It can be seen from Fig. 4 that those CCSNe with ejecta having \(\log(S^{3}/\tau_{\rm exp})\sim 5\)-\(6\) and \(Y_{e}\sim 0.38\)-\(0.45\) would be a significant source for Sr. With a typical Fe yield of \(\sim 0.1\,M_{\odot}\), CCSNe should produce \(\sim 10^{-6}\,M_{\odot}\) of Sr on average to account for \(\left[\rm Sr/Fe\right]_{\rm SN}=-0.45\). This amount is broadly consistent with the mass of the relevant ejecta (e.g., Wang & Burrows, 2023). BNSMs are favored as the dominant source for \(r\)-process elements beyond Sr because they have very neutron-rich ejecta. The various components of their ejecta also facilitate the production of a wide range of \(r\)-process elements (e.g., Kiuchi et al., 2023; Curtis et al., 2023; Just et al., 2023). The most neutron-rich component is the tidally-disrupted dynamical ejecta, which have \(Y_{e}\sim 0.05\)-0.45, \(S\sim 3\)-\(30\,k_{B}\) per baryon, and \(\tau_{\rm exp}\lesssim 0.01\) s. Subsequent to the formation of an accretion disk, the ejecta are dominated by the polar neutrino-driven wind during the life of the hypermassive neutron star (HMNS) and by the outflow, or torus, associated with viscous heating following the collapse of the HMNS into a black hole (BH). Both the HMNS wind and the BH torus interact with neutrinos, and therefore, have more correlated conditions than the dynamical ejecta. The HMNS wind is more affected by neutrinos and has \(Y_{e}\sim 0.3\)-0.53, \(S\sim 15\)-\(60\,k_{B}\) per baryon, and \(\tau_{\rm exp}\sim 4\times 10^{-3}\) to 0.03 s. The BH torus has \(Y_{e}\sim 0.23\)-0.45, \(S\sim 10\)-\(30\,k_{B}\) per baryon, and \(\tau_{\rm exp}\sim 0.01\) to 0.3 s. The conditions for each of the above three ejecta components from the model symn1-a6 simulated by Just et al. (2023) are shown in terms of \(\log(S^{3}/\tau_{\rm exp})\) and \(Y_{e}\) in Fig. 4. The amount of ejecta in each component as a function of \(Y_{e}\) can be found in their Fig. 3a. It can be seen from these two figures that both the HMNS wind and the BH torus mostly have significant production of Sr only, with a very small fraction of the latter having significant production of Ba and Eu only. In contrast, the dynamical ejecta mostly have significant production of Sr only or Ba and Eu only, with a small fraction having significant production of Sr, Ba, and Eu. Although the nuclear input for the \(r\)-process in Just et al. 
(2023) was different from that in our parametric study, their results on nucleosynthesis in the three ejecta components are in good qualitative agreement with the above discussion. We note that while regions of significant production for individual elements are well defined in terms of \(S^{3}/\tau_{\rm exp}\) and \(Y_{e}\), the production ratio for a pair of elements is much more sensitive to the detailed conditions, and therefore, is better defined in terms of \(S\), \(\tau_{\rm exp}\), and \(Y_{e}\). For example, Fig. 5 shows contours of \(\left[{\rm Eu/Ba}\right]-\left[{\rm Eu/Ba}\right]_{\rm NSM}\) as functions of \(\log S\) and \(\log\tau_{\rm exp}\) for \(Y_{e}=0.1\) and 0.25, respectively. It can be seen that for fixed \(Y_{e}\), [Eu/Ba] can change drastically for a constant value of \(S^{3}/\tau_{\rm exp}\). Nonetheless, the regions of significant production in Fig. 4 provide good guidance to the conditions that could give rise to the inferred BNSM production pattern. In fact, the three categories of regions can be combined in numerous ways to obtain this pattern. For example, most of the points in the Ba-Eu-only region reproduce the inferred \(\left[{\rm Eu/Ba}\right]_{\rm NSM}\) to within a few dexes, so a mixture can easily give the correct result. Then, such a mixture can be mixed further with any of a large number of points in the Sr-only region to give the inferred \(\left[{\rm Ba/Sr}\right]_{\rm NSM}\). The above mixtures can also be obtained with the reasonable requirement that each type of ejecta involved contribute, for example, at least 10% of the total mass. Therefore, we expect that when averaged over various BNSMs, the superposition of the \(r\)-process production in the dynamical ejecta, HMNS wind, and BH torus would account for the inferred BNSM pattern. Figure 4: Regions of \(\log(S^{3}/\tau_{\rm exp})\) and \(Y_{e}\) for significant production of Sr only, Ba and Eu only, and all of them, respectively. Conditions in the dynamical ejecta, HMNS wind, and BH torus from the BNSM model sym-n1-a6 simulated by Just et al. (2023) are shown for comparison. ## 4 Discussion and Conclusions We have presented a data-driven model for abundances of Fe, Sr, Ba, and Eu in MP stars with \(-3\lesssim\rm[Fe/H]\lesssim-1\). The production patterns of two distinct sources are derived from the RPA data on [Sr/Fe], [Ba/Fe], and [Eu/Fe] for 195 stars (Holmbeck et al., 2020). We simplify these two sources as CCSNe producing Fe and Sr but no Ba or Eu and BNSMs producing Sr, Ba, and Eu but no Fe. Nearly all the data can be accounted for by mixtures of contributions from these two sources, which are characterized by \(\rm[Sr/Fe]_{SN}=-0.45\), \(\rm[Ba/Sr]_{NSM}=0.03\), and \(\rm[Eu/Ba]_{NSM}=0.44\). We find that on average, the Sr contribution to an MP star from BNSMs is \(\approx 3\) times that from CCSNe. Assuming that CCSNe and BNSMs have operated the same way over the Galactic history, we find that CCSNe contributed \(\approx 1/3\) of the solar Fe inventory, in agreement with estimates from studies of Galactic chemical evolution that use Fe yields and rates of occurrence for CCSNe and SNe Ia. The \(r\)-process contributions to the solar inventory of Sr and Ba predicted by our model are also reasonable in comparison with estimates based on subtraction of the \(s\)-process contributions when the large uncertainties in the latter approach are taken into account. Four stars show large deviations from the relation between [Sr/Fe] and [Ba/Fe] predicted by our model (see Fig. 2a). 
The star J09471921-4127042 with \(\rm[Fe/H]=-2.67\) has a very low value of \(\rm[Sr/Fe]=-1.57\) while J15230675-7930072 with \(\rm[Fe/H]=-2.55\) has a high value of \(\rm[Sr/Fe]=0.71\). Their low [Fe/H] values suggest that they might have formed from materials enriched by a few special CCSNe and BNSMs. On the other hand, for J06195001-5312114 with \(\rm[Fe/H]=-2.06\) and J06320130-2026538 with \(\rm[Fe/H]=-1.56\), their high values of \(\rm[Sr/Fe]=1.00\) and 1.44, respectively, may reflect that due to the rarity of BNSMs, deviations from our average BNSM production pattern can still occur up to \(\rm[Fe/H]\sim-1.6\). Of course, it is also possible that more than two distinct production patterns are required to characterize CCSNe and BNSMs, even in the average sense. Unfortunately, with the data on [Sr/Fe], [Ba/Fe], and [Eu/Fe] only, we are unable to derive three well-defined production patterns. Clearly, exploration of more than two distinct production patterns requires precise measurements of more \(r\)-process elements in a large number of MP stars. We have also carried out a parametric study to explore the conditions in CCSNe and BNSMs that may give rise to our inferred production patterns. We find that such conditions are largely consistent with the results from simulations. We emphasize that the production ratio for a pair of elements is much more sensitive to the detailed conditions (see Fig. 5). However, to narrow down the combinations of \(Y_{e}\), \(S\), and \(\tau_{\rm exp}\) that can give rise to a production pattern, we need an extensive pattern covering many elements. Therefore, data on abundances of more \(r\)-process elements are required to probe the conditions in CCSNe and BNSMs in more detail. For this purpose and for exploring the possibility of more than two distinct production patterns, we strongly urge large surveys of MP stars to cover many \(r\)-process elements in addition to Sr, Ba, and Eu. ## Acknowledgments We thank Oliver Just for providing the nucleosynthesis conditions from the BNSM model sym-n1-a6. This work was supported in part by the US Department of Energy under grant DE-FG02-87ER40328 (A.G. and Y.Z.Q.) and by the European Research Council under ERC Advanced Grant KILONOVA No. 885281 of the European Union's Horizon 2020 research and innovation program (Z.X.). The parametric nucleosynthesis calculations were performed on the VIRGO cluster at GSI. The results from these calculations and the data on MP stars were analyzed with resources of the Minnesota Supercomputing Institute.
2308.00088
The physics of optical computing
There has been a resurgence of interest in optical computing over the past decade, both in academia and in industry, with much of the excitement centered around special-purpose optical computers for neural-network processing. Optical computing has been a topic of periodic study for over 50 years, including for neural networks three decades ago, and a wide variety of optical-computing schemes and architectures have been proposed. In this paper we provide a systematic explanation of why and how optics might be able to give speed or energy-efficiency benefits over electronics for computing, enumerating 11 features of optics that can be harnessed when designing an optical computer. One often-mentioned motivation for optical computing -- that the speed of light $c$ is fast -- is not a key differentiating physical property of optics for computing; understanding where an advantage could come from is more subtle. We discuss how gaining an advantage over state-of-the-art electronic processors will likely only be achievable by careful design that harnesses more than one of the 11 features, while avoiding a number of pitfalls that we describe.
Peter L. McMahon
2023-07-31T19:00:04Z
http://arxiv.org/abs/2308.00088v1
# The physics of optical computing ###### Abstract There has been a resurgence of interest in optical computing over the past decade, both in academia and in industry, with much of the excitement centered around special-purpose optical computers for neural-network processing. Optical computing has been a topic of periodic study for over 50 years, including for neural networks three decades ago, and a wide variety of optical-computing schemes and architectures have been proposed. In this paper we provide a systematic explanation of why and how optics might be able to give speed or energy-efficiency benefits over electronics for computing, enumerating 11 features of optics that can be harnessed when designing an optical computer. One often-mentioned motivation for optical computing--that the speed of light \(c\) is fast--is _not_ a key differentiating physical property of optics for computing; understanding where an advantage could come from is more subtle. We discuss how gaining an advantage over state-of-the-art electronic processors will likely only be achievable by careful design that harnesses more than one of the 11 features, while avoiding a number of pitfalls that we describe. ## I Introduction There has been a resurgence of interest in optical computing over the past decade, both in industry and academia [1; 2; 3; 4]. What is the fundamental physical basis upon which we can expect an optical computer to outperform an electronic computer, at least for some tasks? In this Perspectives piece we enumerate and discuss 11 features of optics and optical computing that can contribute to an advantage for an optical computer. Any optical computer that achieves an advantage in practice will likely need to harness more than one of these features. An explicit list of features can help to make clear what ingredients the architect of an optical computer has to work with. It also allows us to systematically identify the fundamental physical principles behind the operation of different proposed optical computers, aid us in analyzing what advantage they can hope to achieve, and how their designs might be improved by exploiting further features. The design of a successful optical computer must be carefully engineered to avoid bottlenecks or overhead that would outweigh the optical benefits. We discuss some of the pitfalls and approaches one can take to mitigate them. The high bar set by electronic processors has contributed to periods when there has been pessimism about the prospects for optical computing (for example, see Refs. [5; 6] from just over a decade ago). Given the continued improvements in complementary metal-oxide-semiconductor (CMOS) technology [7], **why is there now renewed excitement about optical computing7**Footnote 7: Optical correlators have been released as commercial products during several periods over the past few decades [14], so this is not a new direction even commercially, but one that has been revitalized. One of the major criticisms of optical computing has been that optical transistors are not competitive with their electronic counterparts. The current wave of interest in optical computing is primarily focused on optical-computer architectures that are not based on replicating digital logic with optical transistors. Instead of trying to construct general-purpose, digital computers, the community is largely targeting building special-purpose, analog computers. Both these shifts--to _special-purpose_, and to _analog_ processing--are important. 
Trying to build performant general-purpose processors with optics remains out of reach2, but one can alternatively build optical processors that are specialized to particular applications for which completely error-free operation is not necessary. 
There are several application areas being targeted by special-purpose optical computers presently, including: (i) neural networks [1]; (ii) scientific computing [11]; (iii) combinatorial optimization [4]; (iv) cryptography [10; 12; 13]. Matrix-vector multiplications are a key algorithmic primitive in all four application areas and are the target of much of the current research in optical computing. Fourier transforms and convolutions have applicability across neural networks, scientific computing, and cryptography, contributing to their prominence in current research.3 There is also a substantial thrust in performing computations for neural networks that are not explicitly engineered to be matrix-vector multiplications or convolutions [1; 15; 16; 17; 18; 19]. A commonality among all four application areas is that the subroutines performed optically are still useful even if they suffer from some error (noise). 
This is crucial since it is difficult to achieve an effective precision greater than 10 bits in any analog computer, including analog optical computers, so applications of analog optical computers should be robust to this level of noise. Neural networks are a particularly good match because, at least during inference (as opposed to training), neural networks do not suffer a substantial decrease in accuracy even if they are restricted to integer arithmetic with fewer than 8 bits of precision [1; 20].4

Footnote 4: A concern for any analog neural-network processor, including analog optical processors, is the potential for accumulation of errors in executing deep neural networks. This has recently been theoretically analyzed, with a conclusion that deleterious effects of noise accumulation can be mitigated, even in the case of correlated noise [21]. Uncorrelated noise that merely leads to an effective low-bit-precision has been shown in simulations of deep optical neural networks (having 60 optically executed layers) to yield accuracies that are the same as or better than that of digital electronic processors executing the same neural network with 8-bit integer arithmetic [22], i.e., the simulations predicted that the accumulation of error in an optical implementation of the neural network would not have a noticeable impact on accuracy versus a standard digital electronic implementation. However, for all applications of analog optical processors—not just neural networks—intuition and simulations about resilience to noise ultimately need to be validated by optical experiments.

With this context, we can now give a fuller answer to why there is renewed excitement in optical computing:5 (i) _The rise of neural networks_--over the past decade, neural networks have become a dominant approach in machine learning and have become extremely compute-resource-intensive. This has led to strong interest in alternative hardware approaches specialized to neural networks, and the intrinsic resilience of neural networks to noise makes them well-suited to analog optical implementations. (ii) _CMOS improvements won't be enough to satisfy application demand_--while there has been remarkable progress in CMOS hardware [7], it is also simultaneously true that both for neural networks and for some other applications (such as combinatorial optimization), the anticipated future improvements in CMOS hardware [23] are less than users would like and will limit application capabilities [24].6 (iii) _Improvements in photonics hardware_--driven largely by the consumer-electronics and the optical-communications industries, there have been enormous advances in the scale, speed, and energy efficiency of photonic devices over the past 30 years since the last big surge of interest in optical neural networks.7 This period has also seen the development and commercialization of photonic integrated circuits [29], giving a miniaturized alternative to bulk optics; there have also been substantial developments in optical materials and devices [30; 31; 32; 33; 34; 35; 36].

Footnote 5: More directly and colloquially, one can ask: _we tried optical computing for neural networks in the 1980s and it fizzled, so why are we trying again now?_ The three reasons we offer are: (i) _Neural networks have become important again_—people lost interest in neural networks in the 1990s and it is only over the past decade that neural networks have returned to the fore, this time stronger and more important than ever.
(ii) _Electronics was advancing fast enough then, but is not now_—CMOS-technology improvements, both in transistor count and in clock frequency, rapidly outpaced the developments of any alternative technology in the 1980s, whereas now CMOS electronic processors are constrained by heat dissipation and energy costs, and the anticipated improvements in CMOS technology are insufficient to satisfy the growth of neural networks. (iii) _Photonics technology has improved a lot since the 1980s_—technologies spanning light generation, manipulation, and detection have all improved dramatically.

Footnote 6: The number of parameters in neural networks—one measure of their size and computational demand—has been growing much faster than hardware improvements [25], primarily because of the finding that increased scale often leads to increased capability or accuracy [26; 27].

Footnote 7: As examples, Samsung now offers a camera with 200 million pixels [28], and 400-gigabit-per-second optical transceivers using on the order of 10 W of power are commercially available.

A complementary trend in the electronics community (both in CMOS and beyond-CMOS technologies), which has provided further support for the development of optical computers for neural networks, has been the development of special-purpose electronic chips for neural-network processing [37]. In many cases these chips also perform analog rather than digital matrix-vector multiplications; this has led to the development of methods for training neural networks to work well on analog hardware, many of which are also applicable to analog optical neural networks. Both analog and digital electronic neural-network chips often have dataflow architectures, especially systolic-array architectures. They also often implement the concept of compute-in-memory, meaning that the physical element storing an element of a neural network's weight matrix, for example, is also the physical element in which the multiplication by that weight takes place [38]; often the stored values can only be updated slowly, but this is acceptable for neural-network inference or other scenarios where the weights will be reused many times. Systolic-array and especially compute-in-memory architectures can have a close mapping to optical processors in which information encoded in optical signals flows through processing elements, be they arrays of spatial-light-modulator pixels (e.g., Ref. [39]), meshes of Mach-Zehnder interferometers (e.g., Ref. [8]), crossbars of phase-change-memory cells (e.g., Ref. [40]), or networks of microring resonators (e.g., Ref. [41]). This parallel between the architectures of analog electronic neural-network processors and analog optical neural-network processors has allowed optical-computer architects to borrow insights from the electronic-processor community.
Architectural similarities also make it easier to predict how the performance of future electronic and photonic implementations is likely to compare.8 There are likewise architectural and algorithmic parallels between many special-purpose electronic processors for combinatorial optimization, and optical approaches for the same application area [4].

Footnote 8: Not every optical computer for neural networks is based on similar architectures to electronic neural-network processors—and there are good reasons to deviate [18; 42]—but in the cases where the architectures and algorithms are comparable, performance analysis is simpler because one doesn't have to disentangle the effects of different algorithms and different architectures, and can focus on the underlying physical differences: how many parallel elements are there, how fast can data be sent through them, and so on.

In this Perspective, we limit ourselves to discussing _classical_ optical computing and do not review the benefits of optics for building _quantum_ computers [43]. We will also not attempt to compare optical classical computers with optical quantum computers, other than to say that both are competing against classical digital electronic computers but with rather different applications targeted for potential advantage [44].

**What do optical computers need to beat?**

Before we discuss how an optical computer could beat an electronic computer, let's first briefly describe what they are up against and why this makes electronic processors such stiff competition. There is both a hardware and an algorithms or software component to this. On the hardware side, electronic processors based on CMOS transistors have enormous parallelism, with up to \(\sim\)10\({}^{11}\) transistors per chip, operating at a clock rate of between \(\sim\)1 GHz and \(\sim\)10 GHz, and a switching energy of \(<10\) aJ (i.e., \(<10^{-17}\) J) [7]. This allows modern processors to have enormous computing throughput--for example, the Nvidia H100 processor [45] can perform \(4\times 10^{15}\) 8-bit scalar multiplications per second, which corresponds to performing approximately \(4\times 10^{6}\) multiplications in parallel per clock cycle; the chip draws \(<1000\) W of power. On the software side, in parallel with the \(>50\) years of effort that has gone into improving transistor-based hardware, there has been \(>50\) years of effort in designing algorithms9, and in many cases the algorithms have been implicitly or explicitly designed to be optimized for the kinds of hardware that were or are available at the time [42], raising the barrier to entry for new hardware paradigms.

Footnote 9: Ref. [23] notes that in some cases, improvements in algorithms over the past several decades have been responsible for almost as much benefit as improvements in hardware.

We will now proceed to explain what physics differences between electronics and optics can contribute to an advantage for optical computers, and then in the Discussion section we will talk about why practical advantage from optical computers has remained elusive and what paths there are to achieving advantage.

## The 11 Features

Paraphrasing H. L. Mencken, there is an explanation for optical computing's potential advantage that is neat, plausible, and wrong: the fact that light travels fast.
We list below 11 features of either optics itself, or of a way computing can be done with optics, that are ingredients for the construction of optical computers; these features allow for explanations of how optics can deliver an advantage that are subtler but correct. We also address how the speed of light _is_ related to optical computing, even though it is not the cause of optical advantage. 1. **Bandwidth**: photonics has a \(\sim\)\(100,000\times\) larger bandwidth \(B\) than electronics (\(\sim\)\(500\,\mathrm{THz}\) vs \(\sim\)\(5\,\mathrm{GHz}\); see Figure 1a)10. This leads to two potential benefits: 1. There is **massive frequency-multiplexing parallelism**, e.g., there can be \(>10^{7}\) comb lines in a frequency comb [50] and \(>10^{9}\) frequency modes in a long fiber-ring cavity; data represented in each comb line (frequency mode) can be acted on in parallel (Figure 1b)--not just individually (i.e., element-wise), but also with operations that, for example, add or multiply data in different frequency modes [18]. The parallelism of optical frequency modes is commonly taken advantage of in optical communications, where wavelength-division multiplexing enables communication over a single-mode fiber at rates \(>10^{13}\) bits per second [51]; this technology can also be used for computing (e.g., Ref. [16], which used a bandwidth of \(B\sim 5\,\mathrm{THz}\)). 2. The **dynamics of optical systems can be very fast**, which can translate to very high operation speeds, which in turn can lead to higher computing throughput and lower latency11: the limit in the delay for an operation, \(\tau_{\mathrm{delay}}\gtrsim 1/B\), can be \(\sim\)\(100,000\times\) smaller for optics than electronics if the full bandwidth of optics is used. However, some subtlety is needed in the interpretation of this perspective on potential optical advantage from bandwidth. For one, the bandwidth limit on \(\tau_{\mathrm{delay}}\) is just a limit and the delay can be substantially longer than the limit if the device has a propagation length such that the time taken for light to travel through the device is long compared to \(1/B\) (i.e., a speed-of-light limit begins to dominate; see also [12]).12 For another, the delay for an individual modern electronic transistor under typical load is \(\sim\)\(1\,\mathrm{ps}\)[53] so if one compared photonics to electronics at the level of an individual switch, the bandwidth benefit of optics would be much smaller than \(\sim\)\(100,000\times\) (perhaps "only" \(\sim\)\(1,000\times\)). At the level of an entire chip, electronic processors are clocked \(\sim\)\(10-100\times\) more slowly than the circuit delays [54] would suggest are possible, largely due to limits on power dissipation [24]. In contrast, photonic processors can have low dissipation (3), and so at a system level it is a combination of intrinsic bandwidth _and_ low dissipation that gives rise to a \(\sim\)\(100,000\times\) potential system-wide bandwidth advantage for optics. Optical switching of \(\sim\)\(46\,\mathrm{fs}\) pulses has been demonstrated [55]--highlighting the fast speeds possible with THz-bandwidth optical pulses and the quasi-instantaneous nature of nonlinear-optical operations. Figure 1: **1 Bandwidth.****a**, An optical signal with bandwidth \(>300\,\mathrm{THz}\). 
**b**, An example of the use of frequency multiplexing in optical computing: kernel weights for a convolution are input as intensity modulations of spectral lines in a frequency comb; the use of multiple comb lines allows multiple computations to be performed in parallel. Panel a adapted from Ref. [56], © Optica Publishing. Panel b adapted from Ref. [57], © Springer Nature. 2. **Spatial parallelism**: photonic systems can exploit a large number (\(>10^{6}\)) of parallel spatial modes [58]. Consumer electronics using \(>10^{8}\) spatial modes in a \(\sim\)2.5 cm\({}^{2}\) area have been realized [28], illustrating that massive parallelism can be achieved in practice. For photonic systems in which light is confined in a single two-dimensional plane, such as in two-dimensional photonic integrated circuits, the density of photonic _components_ can be as high as \(\sim\)\(10^{6}\) per cm\({}^{2}\)[59], and we can roughly think of each component as enabling \(\geq 1\) computing operation (such as a multiplication) to be performed in parallel.13 While this component density is in absolute terms a high number, we should compare it against the spatial parallelism available in CMOS electronics, where the achieved density of transistors is \(\sim\)\(10^{10}\) per cm\({}^{2}\)[45].14 In this setting of two-dimensional photonic integrated circuits, optics is at a disadvantage versus electronics in the pure density of fabricable components (since the transistor density in electronics is \(\sim\)\(10^{4}\times\) larger than the component density in on-chip photonics).15 On the other hand, if the third spatial dimension is used [1; 64], optics may gain a several-orders-of-magnitude advantage16 in spatial parallelism because electronics is in practice limited to very modest three-dimensional integration.17 Footnote 13: Why did we write \(\geq 1\) operation and not just exactly 1 operation? There are multiple reasons. For example, one is that a single component in space can act on many frequency modes in parallel, as mentioned in 1, or on multiple polarization modes. Another is that depending on one’s definition of an operation, and one’s definition of a single component, a component may naturally perform multiple operations in a single pass of light through it, such as a single 50:50 coupler arguably performing two multiplications and two additions. Footnote 14: As another point of comparison, to give an example of a candidate future electronics technology, an analog matrix-vector-multiplier core based on a crossbar array of phase-change memory, built by IBM [60], featured 65536 phase-change-memory cells within a chip area of \(\sim\)0.6 mm\({}^{2}\). This is a density of \(\sim\)\(10^{7}\) cells per cm\({}^{2}\), and each cell can be interpreted as performing one scalar, analog multiplication per clock cycle. Footnote 15: The density of transistors versus photonic components is arguably the most relevant comparison, since transistor-based electronic processors are, in most cases, the systems to beat. However, even two-dimensional photonics can have a spatial-parallelism advantage over two-dimensional microwave electronics: for example, photonic-crystal cavities (resonators) can have areas \(\sim\)1 μm\({}^{2}\)[61; 62], whereas electronic microwave resonators are typically orders of magnitude larger (for example, see Ref. [63]). Footnote 16: Let us use an example to make a rough estimate of the kind of advantage that is in principle possible. 
Consider a 2D photonic device with dimensions \(L\times L\) and a 3D photonic device with dimensions \(L\times L\times L\). Assume we address each device with light having wavelength \(\lambda\approx 500\) nm and that the device length is \(L\approx 5\) cm. The number of resolvable spots in the former case is on the order of \((L/\lambda)^{2}=10^{10}\) whereas the number of resolvable voxels in the latter case is on the order of \((L/\lambda)^{3}=10^{15}\)—an advantage of \((L/\lambda)=10^{5}\) times when going from 2D to 3D. We can also compare these numbers with the counts of transistors in electronic processors: at the state-of-the-art fabrication density of \(\sim\)\(10^{10}\) transistors per cm\({}^{2}\), a 5 cm \(\times\) 5 cm chip would have \(2.5\times 10^{11}\) transistors. This is an order of magnitude greater than the number of resolvable spots in the same-area photonic device, but several orders of magnitude smaller than the number of voxels in the same-length 3D device. Of course an addressable voxel of material is not the same thing as a transistor; one ultimately needs to carefully analyze the computation and memory that is achieved using a particular device in a particular way, but these crude estimates hopefully convey two key messages: that by going from 2D to 3D devices, there can be an orders-of-magnitude increase in the achievable complexity of the device stemming from the fact that \((L/\lambda)\) can be a large number, and that while 2D photonic devices offer lower spatial parallelism than transistor-based electronic chips, moving to 3D devices may enable an orders-of-magnitude benefit in spatial parallelism for optics over electronics.

Footnote 17: A typical modern electronic chip is thin—on the order of 1 mm—and comprises only tens of layers [65], whereas optical processors that are centimeters or even meters thick, e.g., using propagation through bulk crystals [64; 66] or multimode optical fiber [17], have been constructed. In the specific case of NAND memory, electronic integrated circuits have been scaled to 128 layers [67]—which suggests that for memory rather than computing, photonics has less room for advantage over electronics by extending in the third dimension.
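The estimate in Footnote 16 is simple enough to check directly. The short sketch below is purely a restatement of that arithmetic (not a device model), using the values quoted in the text: a wavelength of 500 nm, a device dimension of 5 cm, and a transistor density of \(10^{10}\) per cm\({}^{2}\).

```python
# Back-of-the-envelope reproduction of the 2D-vs-3D spatial-parallelism estimate.
wavelength = 500e-9          # lambda ~ 500 nm (from the text)
L = 5e-2                     # device dimension ~ 5 cm (from the text)

spots_2d = (L / wavelength) ** 2      # resolvable spots in an L x L device
voxels_3d = (L / wavelength) ** 3     # resolvable voxels in an L x L x L device

transistor_density_per_cm2 = 1e10     # state-of-the-art CMOS density (from the text)
transistors = transistor_density_per_cm2 * 5 * 5   # a 5 cm x 5 cm chip

print(f"2D resolvable spots : {spots_2d:.1e}")     # ~1e10
print(f"3D resolvable voxels: {voxels_3d:.1e}")    # ~1e15
print(f"transistors on chip : {transistors:.1e}")  # ~2.5e11
```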
Figure 2: **2 Spatial parallelism.** Part of a state-of-the-art silicon-photonic device with 16,384 pixels on a \(10\times 11\)-mm\({}^{2}\) chip, illustrating the degree of spatial parallelism possible in modern photonic devices. Adapted from Ref. [73], © Springer Nature.

3. **Nearly dissipationless dynamics**: photons can propagate through free-space (or even some on-chip18) optical setups with nearly no energy loss, and perform computation by their mere propagation. How much computation? We consider the cases of _linear-_ and _nonlinear-optical_ systems:

Footnote 18: For example, thin-film lithium niobate chips can have waveguide propagation losses of \(0.06\,\mathrm{dB}\,\mathrm{cm}^{-1}\)[74].

_Linear optics_: an example of this phenomenon is that a single lens effectively performs a two-dimensional Fourier transform on light that impinges on it [75]--optical correlators [14] and convolutional layers in optical neural networks [1] both take advantage of this. More generally, propagation of light through a linear-optical system can be modeled by a matrix-vector multiplication, so matrix-vector multiplication can be performed by merely shining light encoding a vector (of dimension \(N\)) in its spatial19 pattern onto an optical system [1]. As a rather extreme example, shining light through white paint can be used to perform the multiplication of a vector by a random matrix with dimension \(>10^{6}\times 10^{6}\)[76]. A variety of linear-optical systems in which the matrix can be programmed20 have also been demonstrated [1; 59], although in these cases the matrix size has generally been limited by the number of programmable elements (such as spatial-light-modulator pixels21) that can be engineered. In principle, the dissipationless nature of optical propagation can lead to matrix-vector multiplications being performed that beat the Landauer limit [77] for multiplications performed on digital electronic processors--intuitively because in a coherent setup, the optical interference that occurs is a reversible process [78].

Footnote 19: For the sake of concreteness, in this paragraph we give examples of vectors encoded in space, but this is not the only possibility: the propagation of light in just a single spatial mode can also result in nearly dissipationless computation of inputs encoded in other ways, such as in frequency or time [2].

Footnote 20: As opposed to the example of the multiplication using light propagation through white paint, in which the matrix is fixed and random.

Footnote 21: Spatial light modulators with \(\sim\)\(10^{7}\) pixels are commercially available; each pixel can be used to represent a single programmable element of a matrix.

_Nonlinear optics_: propagation of light through nonlinear-optical systems can also exhibit nearly dissipationless dynamics that can be harnessed for computation.
For example, propagation of light through an optical medium with a nonzero second-order nonlinear-optical susceptibility, \(\chi^{(2)}\), can in general result in sum-frequency- and difference-frequency-generation processes where the optical amplitude of the output scales as the product of the amplitudes of light at two frequencies at the input, e.g., \(E_{\mathrm{out}}(\omega_{1}+\omega_{2})\propto E_{\mathrm{in}}(\omega_{1})E_{ \mathrm{in}}(\omega_{2})\)[79]. We can interpret such a nonlinear-optical process as performing a scalar multiplication of the two numbers \(E_{\mathrm{in}}(\omega_{1})\) and \(E_{\mathrm{in}}(\omega_{2})\)[18]. Nonlinear-optical dynamics enable the implementation of mathematical functions that are nonlinear--which is essential in deep neural networks [80] and in computing more generally [81]. For example, in a \(\chi^{(2)}\) process, if the frequencies of the input light are equal (\(\omega_{1}=\omega_{2}\)), then one may obtain output light at twice the frequency with amplitude \(E_{\mathrm{out}}(2\omega_{1})\propto(E_{\mathrm{in}}(\omega_{1}))^{2}\), so the function realized is \(f(x)=x^{2}\), which is nonlinear. Furthermore, just as the propagation of multiple spatial beams through a linear-optical system can be seen as performing a matrix-vector product, propagation of multiple spatial beams through a nonlinear-optical system can realize a higher-dimensional generalization of matrix-vector multiplication, namely tensor contraction involving tensors of order \(n+1\), where \(n\) is the order of the nonlinear-optical susceptibility, \(\chi^{(n)}\). This is an impressive feature for computing [17; 18]: with the lowest-order nonlinearity, \(n=2\), the computation performed--again, by the mere propagation of the light through the system--is a tensor contraction that comprises \(\sim\)\(N^{3}\) multiplication operations, where \(N\) is again the number of spatial modes. Higher orders of optical nonlinearity can result in even larger amounts of computation being performed by a single pass of light through the system, since even-higher-order tensors are involved.

The fact that computations can be performed nearly dissipationlessly in optics has two potential benefits:

1. **Higher energy efficiency**: the obvious benefit is that one can potentially harness dissipationless dynamics to perform computation using less energy than would have been needed in a different platform that did have substantial dissipation (such as electronics).

2. **Higher performance**: dissipation doesn't just cause a computation to cost more energy, but can also limit the clock speed and parallelism of a processor, ultimately limiting its total computing throughput (operations per second) and latency. Modern CMOS electronic processors are limited--both in clock speed and in three-dimensional density of transistors--by our ability to extract dissipated heat from them [24].22 By dramatically reducing dissipation per computing operation, one potentially allows for a dramatic increase in both the clock speed and spatial parallelism (number of operations performed simultaneously per unit volume).
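As a purely mathematical illustration of the operation counts described above (not a simulation of any optical hardware), the sketch below contrasts the \(\sim\)\(N^{2}\) multiplications implied by one pass through a linear system with the \(\sim\)\(N^{3}\) multiplications implied by one pass through a \(\chi^{(2)}\) medium, modeling the former as a matrix-vector product and the latter as an order-3 tensor contraction. The matrix and tensor entries are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64   # number of spatial modes (illustrative)

# Linear optics: one pass of light through a fixed linear system acts as a
# matrix-vector product, i.e. ~N**2 scalar multiplications per pass.
T = rng.standard_normal((N, N))         # transfer matrix of the linear system
x = rng.standard_normal(N)              # input field amplitudes
y_linear = T @ x
print(f"linear pass : ~{N**2:,} multiplications")

# chi(2) nonlinear optics: sum-/difference-frequency generation mixes pairs of
# inputs, so one pass implements a contraction with an order-3 tensor,
# i.e. ~N**3 scalar multiplications per pass.
chi2 = rng.standard_normal((N, N, N))   # effective order-3 coupling tensor
y_nonlinear = np.einsum('ijk,j,k->i', chi2, x, x)
print(f"chi(2) pass : ~{N**3:,} multiplications")
```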
Footnote 22: Photonics has another potential benefit over electronics with regards to extracting heat from dissipation within a three-dimensional chip: whereas the loss of electrical energy in a chip is generally by the generation of heat at the point where the energy is lost—e.g., resistive heating of a wire—the situation in photonics can be quite different because the loss of optical energy is often _not_ due to absorption and accompanying generation of heat, but rather by scattering. This is true for waveguides in silicon photonics integrated circuits, for example, and suggests that if you construct a three-dimensional silicon-photonic chip, the losses of waveguides within the chip will primarily not cause heating, but instead will result in photons being scattered within the chip until they emerge at the surfaces. In summary, nearly dissipationless dynamics in optics enables us to create three-dimensional photonic chips that don't suffer from the extreme heat-extraction challenges of three-dimensional electronic chips, and even the small photonic dissipation that does occur does not cause heating within the bulk of the chip if it is due to scattering, so we may not even need to worry about the residual photon loss causing heat-management difficulties provided that components that absorb photons are avoided.

There is however a snag, namely _input/output costs_: how does the input data for the computation get loaded and the result get read out? If the input comes from an electronic memory and the result needs to be stored in an electronic memory then even though the computation itself can happen nearly "for free", one needs to convert electronic data to the optical domain for the data input, and then convert the optical answer back to the electronic domain. This memory access and transduction, which will typically also involve digital-to-analog and analog-to-digital conversion, will cost substantial energy (and be limited in speed when compared to optical bandwidths of THz). Fortunately this energy cost will only scale as the size of the input vector, \(N\), whereas the amount of computation being performed may scale as \(N^{2}\) (linear propagation) or \(N^{3}\) (or even higher powers; nonlinear propagation), and so for sufficiently large \(N\), the energy cost of the input/output will be small compared to the cost that the computation would have required in an electronic processor. Similarly, the time required for input/output for \(N\)-dimensional vectors can, for sufficiently large \(N\), be very small compared to the time the \(N^{2}\)- or \(N^{3}\)-complexity computation would have taken on an electronic processor. The loading of coefficients, such as the matrix elements in the case of linear propagation, in general also has a cost in both energy and time but this can be amortized over many runs, such as in the case of batched inference with neural networks [8].

Figure 3: **3 Nearly dissipationless dynamics.** An example of computing with _linear_ optics: light propagating through a lens undergoes a Fourier transform, and in a two-lens \(4f\) system with a scattering medium in between, a convolution is performed on the input light. In the absence of optical loss (e.g., from absorption in the lenses), the computation of the convolution happens without any energy loss.
However, if one considers how to use this building block in an end-to-end computing system, there _is_ typically an energy cost associated with converting an electrical signal into an optical input, and there is also typically an energy cost associated with converting the optical output back into an electrical signal. Adapted from Ref. [1], © Springer Nature. **4 Low-loss transmission**: the energy cost to transmit information "long" distances with light is much lower than with electrical signals [82], mostly23 because signal attenuation (energy loss) per unit length is much higher in electrical wires than in optical fibers or waveguides (Figure 4). Footnote 23: There are several subtleties in evaluating the energy cost of optical and electrical communication, discussed in detail in Refs. [82; 83; 48], which necessitate the use of the word “mostly” here. For one, optical communications between electronic devices require transduction of signals from electrical to optical, and back to electrical, and the transduction devices have energy costs [82]. For another, electrical signal transmission along a wire requires energy that increases with length because the wire’s resistance increases with length—but this is not the end of the story: for thin wires, such as those used in CMOS electronic processors, the wire delay grows quadratically with length and to mitigate this, repeaters are used to regain a linear scaling of delay with length, and the repeaters also have an energy cost (associated with the switching of their driver transistors) [83; 48]. For on-chip photonic processors, commercial foundries such as AIM Photonics can produce silicon-nitride waveguides with losses \(\sim 0.06\,\mathrm{dB\,cm^{-1}}\) for wavelengths \(\sim 1600\,\mathrm{nm}\) to \(1640\,\mathrm{nm}\) and \(<0.25\,\mathrm{dB\,cm^{-1}}\) across the telecommunications C band (\(\sim 1530\,\mathrm{nm}\) to \(1565\,\mathrm{nm}\)) [84]. An important caveat for both free-space and on-chip optical processors is that while propagation losses between components can be very low, typically there will be losses from reflections or scattering as light propagates into or out of a component (e.g., Fresnel reflections due to mismatch in refractive index). As a result, optical processors still need careful design to avoid excessive overall optical loss. The low-loss transmission of optics is already being taken advantage of in electronic computing: optical links in datacenters [85], and even directly between chips [86], use light to communicate information over length scales from centimeters to many meters. It is anticipated that even some communications within a single chip might eventually use optics [82; 85]. A major reason that light is not already used for communications within single electronic-processor chips, especially over very short distances, is that the optoelectronic components to transduce signals between the optical and electrical domains cost both space and energy, and it is only worth paying these costs when the distance the signal needs to travel is long enough [82]. An optical computer, on the other hand, could in principle take advantage of optics for low-energy-cost, nearly-dissipationless information transmission at _all_ length scales, and without paying space or energy costs for transduction24—because the signals would already be optical. 
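A small numerical illustration of why the attenuation difference matters: a loss of \(A\) dB leaves a fraction \(10^{-A/10}\) of the signal power. The optical figure below is the \(\sim\)0.06 dB cm\({}^{-1}\) waveguide loss quoted above; the electrical figure is a hypothetical value chosen only to convey the contrast, not a measured specification of any particular interconnect.

```python
def power_fraction(loss_db_per_cm, length_cm):
    """Fraction of signal power remaining after propagating length_cm."""
    return 10 ** (-loss_db_per_cm * length_cm / 10)

length_cm = 10.0                 # illustrative transmission distance

optical_loss_db_per_cm = 0.06    # low-loss waveguide figure quoted in the text
electrical_loss_db_per_cm = 5.0  # hypothetical lossy electrical line (illustrative only)

print(f"optical   : {power_fraction(optical_loss_db_per_cm, length_cm):.1%} of the power remains")
print(f"electrical: {power_fraction(electrical_loss_db_per_cm, length_cm):.1e} of the power remains")
```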
Footnote 24: An optical processor will inevitably need to use some energy for transduction, for example to load the initial input data for the computation and/or to read out the final answer, which will typically need to be in the electrical domain. But the transductions—and their costs—that would have occurred within a computation can be avoided.

Figure 4: **4 Low-loss transmission.** For both on-chip and off-chip transmission, the signal attenuation (in dB per meter) is orders of magnitude lower (better) with optical instead of electrical signals. For example, electrical signals at 10 GHz have \(\sim\)10\({}^{4}\times\) higher attenuation than equivalent on-chip or off-chip transmission with optical signals. Inspired by Ref. [87, Figure 4.3]. _Data sources_: On-chip electrical interconnect: Ref. [88]; Off-chip electrical coaxial cable: Ref. [89]; On-chip optical interconnect: Ref. [90]; Off-chip optical fiber: Ref. [91, Figure 22.2] and Ref. [92]. This figure is intended to give a heuristic comparison; it does not comprehensively cover all transmission technologies, but is based on just a few illustrative examples that convey the relevant orders of magnitude. For more examples and details, see: Ref. [93] (electrical interconnects and cables); Ref. [88] (on-chip electrical interconnects with different dimensions); Ref. [94] (electrical interconnects on printed circuit boards); Ref. [95] (integrated-photonics waveguides with lithium niobate).

5. **Optical beams and "wires" can cross whereas electrical wires cannot**: optical beams can pass through one another without suffering from cross-talk25, and optical on-chip "wires" (waveguides; see Figure 5) can cross with very low cross-talk--not just in principle, but also in practice in the presence of fabrication imperfections. In contrast, electrical wires need their own region of isolated physical space, and in addition to not being able to pass through one another, also often suffer from cross-talk even if they are merely close to one another [97]. This provides the possibility for photonic processors to be more compact than electronic processors when interconnect is an important contributor to processor size, although the use of optical beams for communicating information is not without its own cross-talk challenges due to diffraction, scattering, and unwanted reflections [98].26 One can interpret the ability for optical beams to cross as a key enabler of many free-space, spatially-multiplexed optical implementations of convolution and matrix-vector multiplication [1]. For example, in implementations (e.g., Ref. [100]) of matrix-vector multipliers that use arrays of lenses for fan-out (Figure 7b), the rays between the input vector and the fanned-out copies cross. The crossing supports the implementation, in principle, of large convolutions and dense matrix-vector multiplications in small volumes. Optical switches such as the one shown in Figure 6a provide another example of where crossing of beams enables a more compact design.

Figure 5: **Optical beams and "wires" can cross.** It is not only in free space that optical paths can cross: in integrated photonics, waveguides can pass through one another with minimal impact on the signal propagation. The waveguide crossing in this figure had a crosstalk of \(<-50\,\mathrm{dB}\). Adapted from Ref. [101], © Optica Publishing.
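To get a feeling for what a \(-50\,\mathrm{dB}\) crossing means at system scale, the sketch below assumes, as a deliberately pessimistic simplification, that the power leaked at each crossing adds incoherently across crossings; even after a thousand crossings the accumulated crosstalk remains around the percent level. The crossing counts are arbitrary illustrative choices, not taken from any specific layout.

```python
import math

crosstalk_db_per_crossing = -50.0                         # per-crossing figure quoted above
leak_fraction = 10 ** (crosstalk_db_per_crossing / 10)    # fraction of power leaked per crossing

for n_crossings in (1, 100, 1_000, 10_000):
    # Deliberately pessimistic estimate: leaked power adds incoherently.
    accumulated = n_crossings * leak_fraction
    print(f"{n_crossings:>6} crossings -> accumulated crosstalk ~ {accumulated:.1e} "
          f"({10 * math.log10(accumulated):.0f} dB)")
```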
6. **Optical beams can be steered programmably at high speed whereas electrical wires are either fixed or reconfigurable only slowly**: free-space optical beams can easily be redirected (for example, using an acousto-optic deflector, with a delay on the order of microseconds), enabling the creation of reconfigurable optical interconnects [102; 103]. In contrast, electrical wires on chips are fixed at the time of fabrication, and wires joining nodes in an interconnect between processors, boards, or racks can only be moved slowly (typically on the order of seconds).27

Footnote 27: How do electronic processors deal with quasi-fixed interconnects? The disadvantage of having a fixed network is typically mitigated by using multi-hop communications—relying on there being a path between a sender and a receiver involving some intermediate nodes—and switching, which achieves fast rerouting of signals within a fixed network topology. These strategies come with the cost of increased latency and potential bandwidth bottlenecks.

Figure 6: **Optical beams can be steered programmably.****a**, Optical beams inside a micro-electro-mechanical systems (MEMS) optical switch can be rerouted on timescales on the order of milliseconds using arrays of MEMS-actuated micromirrors. **b**, Optical-tweezer beams can be reconfigured to trap atoms in arbitrary geometries in 3D; the results shown here are from an experiment in which a liquid-crystal-based spatial light modulator was used to program the beams; such modulators can also be updated on a timescale on the order of milliseconds. Part a reproduced from Ref. [104], © IEEE. Part b adapted from Ref. [105], © Springer Nature.

7. **Fan-in (summation) and fan-out (copying) work differently in optics**: copying data to be processed in parallel (fan-out), and summing the outputs from a number of parallel-processing units (fan-in) are important primitives in parallel processing. Both can be implemented in optics in a way that is different than in electronics, and have different tradeoffs [102; 106]. Optics has a potential advantage from supporting large (\(>1000\)) fan-in and fan-out without the \(RC\) and \(LC\) delays28 of fan-in/fan-out with electrical wires, for which fan-in/fan-out is typically kept lower than 10 in digital processors, necessitating multiple buffering stages (and hence further delay) whenever larger fan-in/fan-out is needed [107; 108].

Footnote 28: As Ref. [102] points out, when evaluating an optical scheme, one needs to take care to evaluate the \(RC\) and \(LC\) delays of photodetectors that are involved.

In free space, fan-in of signals encoded in spatial modes can be performed by directing beams to a common point in space (e.g., via the use of a lens; see Figure 7a), at which there could be, for example, a photodetector (if the next processing step required conversion from optical to electrical signals), a holographic element (to combine the beams traveling in different directions into a beam that travels in one direction, albeit at the cost of loss of optical power) [102], or an intensifier (which can amplify the summed beams and re-emit a single optical signal) [100].
Fan-out of a signal in a single spatial mode to multiple spatial modes can also be performed conceptually easily in free space, where it happens essentially without any special engineering effort (Figure 7b): imagine an optical display (such as a light-emitting-diode display on a cell phone) that emits in multiple directions--multiple people looking at the display from different vantage points can all see the same image, and we can interpret what happened is that multiple copies of the data on the display were made and transmitted to different receivers (people).29 Arrays of lenslets (microlenses) can be used to collimate the image copies [100; 109].30 Footnote 29: Another example of optical fan-out in everyday life is in a kaleidoscope. Footnote 30: Free-space fan–out can also be implemented and understood in the Fourier domain [110]. Both fan-in and fan-out for spatial modes can also readily be implemented in integrated-photonics platforms [111]. However, in an on-chip setting light propagation is typically practically restricted to be in a single plane, whereas in free space it is natural for signals to propagate in all three dimensions, enabling a much higher degree of fan-in and fan-out. For this reason, it is easier to imagine gaining an advantage over on-chip electronic processors (which are also quasi-planar) from the use of optical fan-in or fan-out in free-space settings. Up to here, we have discussed fan-in and fan-out in the context of spatial modes. For optical computers using frequency or temporal modes, fan-in and fan-out may be realized using other means than the spatial approaches referred to so far. For example, fan-out of data input as electronic signals can be performed in the frequency domain by modulating an optical frequency comb [57], and weighted fan-in can be performed using wavelength-division multiplexing, including in on-chip platforms [2]. To reason about _why_ or _when_ optical fan-in or fan-out may have an advantage over electrical fan-in or fan-out, it is useful to consider the bandwidth (1) and low-loss transmission possible in optics (4), and that optical beams can cross (5).31 However, the fan-in/fan-out possibilities of optics (this point, 6) are distinct from the potential benefits of bandwidth, low-loss transmission and beam-crossing in optics, and it is fruitful to think of fan-in and fan-out in optics as special features that can be used in an optical-computing architecture, even though they may also use other features of optics to operate well. Footnote 31: Teasing out the source of a potential advantage can be quite subtle. For example, fan-in arguably plays an important role in enabling vector-vector or matrix-vector multiplication engines that use extremely small amounts of optical energy per multiplication [39]—in which the amount of optical energy needed to achieve a particular signal-to-noise ratio for a vector-vector dot product is fixed regardless of the vector size—but similar efficiency can be achieved with optoelectronic fan-in, in which summation is performed in the electrical domain [78; 112]. Purely analog electronic approaches to compute vector-vector dot products can also show favorable energy consumption versus digital electronic approaches [113], so for any given computing scheme using optical fan-in, one can ask: which part of the potential benefit comes from performing the summation in an analog rather than digital fashion, and which part comes from using optics instead of electronics? 
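The scaling argument in the footnote above can be restated in one line of arithmetic: if a fixed photon budget at the detector yields the required signal-to-noise ratio for a whole length-\(N\) dot product, then the optical energy per multiply-accumulate falls as \(1/N\). The photon budget below is an arbitrary illustrative number, not a measured figure from any demonstration.

```python
photons_per_dot_product = 1_000   # illustrative fixed detector photon budget per dot product

for n in (10, 100, 1_000, 10_000):
    photons_per_mac = photons_per_dot_product / n
    print(f"vector length N = {n:>6}: ~{photons_per_mac:8.3f} photons per multiply-accumulate")
```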
Figure 7: **Optical fan-in and fan-out.****a**, Fan-in can be performed in free space using a lens; here, a lens causes many beams to converge on a single-pixel detector. **b**, Fan-out can be performed in free space using an array of lenses, where each lens “captures” a copy of the incoming image. Part a adapted from Ref. [39], (C) Springer Nature. Part b reproduced from Ref. [114], (C) Adobe Systems. 8. **One-way propagation**: light naturally propagates in one direction, from its source to a receiver32, whereas electrical signals can propagate backwards (Figure 8). In electronic processors, backwards propagation (from inputs to other inputs, or from the output to the inputs) can cause unwanted dynamics as well as unnecessary power consumption. This leads to an advantage of optics over electronics for some analog architectures. While backwards propagation is a general feature of electrical circuits--without isolating elements such as buffers or diodes in a circuit, any time there is a voltage difference between two connected circuit nodes there will be a current flow between them, even if those two nodes are inputs--concerns about backwards propagation have arisen mostly in the context of analog crossbar-array processors, related to their fan-in stage [115] and also the _sneak path_ issue [116]. Analog _optical_ matrix-vector-product engines [1] generally feature one-way propagation, avoiding some of the issues that arise in analog _electronic_ matrix-vector-product engines (i.e., crossbar arrays), and there is a broader notion of optics providing natural isolation [117] that can be useful in computing. A caveat is that while perfectly one-way propagation is possible if light does not pass through any interfaces, any useful optical processor will involve at least some interfaces (e.g., light going from air into a glass lens), and as a consequence have some unavoidable reflections. The reflections can be made small by appropriate choices of geometry and materials but will never be completely eliminated. In many cases there may be an engineering tradeoff between, for example, the compactness of the optical processor and the magnitude of the reflections (i.e., the one-way-ness) in the system. Footnote 32: More pedantically, one can construct optical systems in which the propagation is naturally one-way; if one, for example, forms an optical cavity in part of the system, then the situation becomes more complicated. Figure 8: **One-way propagation.** An electrical fan-in (weighted sum of voltage inputs \(v_{i}\) by conductance weights \(g_{i}\)) exhibiting undesired backwards flow of current. The current contributions from the input \(v_{0}\) to the output (desired) and, if \(v_{1}<v_{\rm out}\), from the input \(v_{0}\) to the input \(v_{1}\) (undesired) are shown in pink. Only the current contributions from \(v_{0}\) to the output and to \(v_{1}\) are illustrated here, but in general current will flow backwards from the common node \(v_{\rm out}\) to \(v_{i}\) if \(v_{i}<v_{\rm out}\). In contrast, one-way, forward-only propagation of light in a fan-in is shown in Figure 7(a). Adapted from Ref. [115], © IEEE. 
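The situation in Figure 8 can be checked with elementary circuit analysis. In the sketch below (with made-up voltages and conductances), the common node of a passive resistive fan-in settles at the conductance-weighted average of the inputs, and current then flows backwards into every input whose voltage sits below that node voltage, exactly as described above.

```python
import numpy as np

# Passive resistive fan-in: inputs v_i drive a common output node through
# conductances g_i; with no other load, Kirchhoff's current law fixes v_out.
v = np.array([1.0, 0.2, 0.8, 0.1])   # input voltages (made-up values)
g = np.array([1.0, 1.0, 1.0, 1.0])   # branch conductances (made-up values)

v_out = np.sum(g * v) / np.sum(g)    # conductance-weighted average

# Current flowing from the common node back into each input branch;
# a positive value means current flows "backwards" into that input.
i_back = g * (v_out - v)
for k in range(len(v)):
    direction = "backwards into input" if i_back[k] > 0 else "forwards from input"
    print(f"input {k}: v = {v[k]:.2f} V, branch current = {i_back[k]:+.3f} A ({direction})")
print(f"common-node voltage v_out = {v_out:.3f} V")
```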
9. **Adiabatic, least-action and least-power-dissipation principles in physics have different realizations in optical and electrical systems**: there are general physics principles--such as adiabaticity, the Principle of Least Action and the Principle of Least Energy Dissipation--that can lead to a physical system heuristically solving optimization problems [118], and different variations of these principles can be leveraged to construct optimization machines (such as Ising machines [4])33.

Footnote 33: Given how central optimization is in machine learning, and especially in neural networks, computers designed to perform optimization are often also well-suited to perform machine learning—so an advantage on optimization can quite plausibly be translated into an advantage in machine learning too. Similarly, one can recast the problem of solving partial differential equations as a variational optimization problem [119], providing another potential application of physics optimization principles to a broader class of computations.

For example, Fermat's Principle of Least Time for optics states that light follows the path that minimizes its time to travel between two points34, but this principle doesn't have a direct analog in electrical circuits--so a computer performing optimization using Fermat's principle is more natural to try to create with optics.

Footnote 34: Feynman gave an explanation of this principle with a path-integral formulation in which the light can take all possible paths but only the paths that constructively interfere contribute substantially, and paths with substantially different propagation times than Fermat's solution destructively interfere [120]. This perspective is possibly helpful for thinking about how to design optimization machines that use Fermat's principle.

Onsager's Principle of Least Energy Dissipation can apply in both optics and in electronics, but the behavior and resulting computing performance may be different because of differences in the underlying physics. For example, lasers and parametric oscillators in optics have a threshold when gain is equal to loss, and the fact that they will first oscillate in the mode with lowest loss can be used to design optical Ising machines [121; 122]. Electrical circuits, including oscillators, also have dynamics that heuristically minimize the energy dissipated [118], but they are not identical to lasers or optical parametric oscillators and in general will have different behavior. It is an open question whether, or in which situations, optics systems using Onsager's principle have an advantage over electronics realizations, but the possibility is one that a designer of an optical computer may wish to explore.35

Footnote 35: The question has multiple facets: if the equations governing the optics and electronics dynamics were identical, one might still achieve an advantage of optics over electronics for some of the other reasons described in this article, such as bandwidth. However, one can also ask if the differences between the underlying equations lead to different behavior beyond a faster timescale resulting from higher bandwidth, or a larger system size resulting from larger spatial parallelism—in other words, differences beyond the other optics-vs-electronics distinctions drawn so far.

Figure 9: **Optimization principles.****a**, The Principle of Least Time in optics. Light travels between starting point A and ending point B by taking the path of least time.
A computational interpretation is that the light solves an optimization problem (of finding the path of least time), given the constraints of where the path starts and ends. **b**, A network of oscillators—which in optics could, for example, be optical parametric oscillators or laser oscillators—will in principle oscillate in the collective mode/configuration corresponding to the lowest loss if the gain is set to be equal to the minimum loss. Panel a adapted from Ref. [120], © Princeton University Press. Panel b adapted from Ref. [122], © Springer Nature. 10 The quantum nature of light is accessible at room temperature:36 it is possible to store and process information encoded with single optical-frequency photons, and it is possible to detect individual optical photons with low noise. This is in contrast to the situation at microwave frequencies, where thermal noise at room temperature rapidly swamps any information stored in single photons, and low-noise single-photon detection is not available.37 Footnote 36: In this paper, we do not consider quantum information processing [123]; here, when we talk of operating in the quantum regime, we mean in the sense that light comprises photons and we are operating at such low powers that the quantum noise and discrete nature of the light is relevant to modeling the operation of the computer. The topic of using quantum phenomena such as entanglement to build quantum computers is exciting but beyond the scope of this paper; Ref. [124] provides a helpful description delineating the first and second quantum revolutions, and it is only the former that we consider here. Footnote 37: The quantum nature of microwave photons is accessible at temperatures \(\sim\)10 mK, but such cold temperatures are generally only achievable using a dilution refrigerator, which is bulky and expensive (in money and energy). For classical information processing, the fact that small numbers of photons can be manipulated and measured naturally leads to a potential reduction in energy cost versus if more photons were needed for reliable operation [39; 78]. It is also possible to produce and measure squeezed states of light at room temperature [125]; the reduced noise in squeezed states could prove useful in classical information processing, for example for achieving higher numerical precision with a fixed energy budget (average number of photons). The lack of a strong single-photon nonlinearity in optics, which is an advantage for communicating information without cross-talk but can be a disadvantage for processing information with small numbers of photons, can be circumvented using single-photon detection. The nonlinearity of the detection process itself is a feature one can use [1; 78; 126], but it is also possible to use photodetection to probabilistically induce nonlinear operations across multiple optical modes [127].38 Footnote 38: Ref. [127] develops and motivates probabilistic nonlinear operations for use in quantum computing, but these operations could potentially also be used for classical computing. Figure 10: **The quantum nature of light is accessible at room temperature.****a**, The energy of optical photons is much higher than that of the thermal energy scale \(k_{\mathrm{B}}T\) at room temperature (\(T_{\mathrm{room}}\approx 300\,\mathrm{K}\)), whereas microwave photons have much lower energy than \(k_{\mathrm{B}}T_{\mathrm{room}}\). 
Consequently thermal noise “drowns out” quantum effects of microwave signals at room temperature, but quantum effects in optical signals can be observed. **b**, An array of 250,000 single-photon detectors, which is sensitive to light at visible wavelengths and operates at room temperature. Part b reproduced from Ref. [128], © IEEE.

11. **Wave physics**39: it is easy to observe the wave nature of individual photons--observing interference of single photons in a Mach-Zehnder interferometer is an undergraduate lab experiment [129], and photon coherence is well-preserved in on-chip photonic processors [130]--but it is difficult to observe the wave nature of individual electrons40. Footnote 39: This point could have been presented as just a part of _The quantum nature of light is accessible at room temperature_, since the wave-particle duality and the wave behavior of both photons and of electrons is part of quantum physics, but we have elevated it to being its own point because the wave nature of electrons being difficult to observe and exploit is not just due to cryogenic temperatures being required—on-chip electron coherence lengths are also much more dependent on the properties of the material host than on-chip photon coherence lengths. Footnote 40: Even in advanced on-chip electron-transport experiments, the electron coherence length is less than \(\sim\)250 μm, with values between 1 and 20 μm [131] more typical, and only at cryogenic temperatures. While this is true, a counterpoint is that even though the wave nature of _individual electrons_ is impractical to observe, wave phenomena of _microwave signals in electronics_ can easily be observed and exploited for computation [132]. However, these are not wave phenomena of single electrons, but rather of signals that comprise many microwave photons. A key engineering consequence of this distinction is that electronic microwave signals have long wavelengths (e.g., GHz signals have centimeter-scale wavelengths) and this dramatically limits the possible spatial parallelism relative to the parallelism possible with optical-frequency photonic signals--leading to a potential advantage of optics over electronics (and in particular, microwaves).41 Footnote 41: A completely different kind of microwave signal can also be created and used for computation: an acoustic wave at microwave frequencies [133]. These waves can have short wavelengths despite their low frequencies, but at the cost of propagating at vastly slower speeds than photonic signals—the speed of sound instead of the speed of light—which is a disadvantage for computing with them.

Figure 11: **Wave physics.** Interference can be observed in a Mach-Zehnder interferometer with only a single photon input at a time. This schematic is from an undergraduate-laboratory experiment using just a few commercial optical components, highlighting the relative ease of observing wave phenomena at the single-photon level with optics. (The counts at Photodetector A will oscillate as a function of the position of Mirror M2, which controls a phase difference between the upper and lower arms of the interferometer.) Adapted from Ref. [129], © AAPT.
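To make the photon-energy comparison in point 10 concrete, here is a quick back-of-the-envelope check (this snippet is not from the original article; the 1550 nm and 5 GHz example frequencies are arbitrary illustrative choices):

```python
# Rough comparison of photon energy h*nu with thermal energy k_B*T at room
# temperature, for an optical photon (1550 nm) and a microwave photon (5 GHz).
# Illustrative values only; not taken from the article.
import scipy.constants as const

T_room = 300.0                      # kelvin
kT = const.k * T_room               # thermal energy scale, joules

nu_optical = const.c / 1550e-9      # ~193 THz telecom-band photon
nu_microwave = 5e9                  # 5 GHz microwave photon

for name, nu in [("optical (1550 nm)", nu_optical), ("microwave (5 GHz)", nu_microwave)]:
    E = const.h * nu
    print(f"{name:>18}: h*nu / k_B*T_room = {E / kT:.2e}")

# Typical output: the optical photon energy is roughly 30x k_B*T_room, while the
# microwave photon energy is roughly 1000x *smaller* than k_B*T_room, which is
# why single-photon effects are accessible at room temperature in optics but
# not at microwave frequencies.
```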
12. **The speed of light is fast**: the speed of light is often brought up as a reason for how optical computing will obtain a large speed advantage over electronic computers, but this is misleading because both optical and electrical signals can travel at roughly the same speed: in vacuum, light (and microwaves) travel at speed \(c\); in silicon-photonic waveguides, light travels at speed \(\sim\)0.4\(c\) [134]; in wires on printed circuit boards, signals can travel at speed \(\sim\)0.43\(c\) [49, Chapter 4]; and in CMOS electronic circuits, signals can travel at speed \(\sim\)0.2\(c\) [82].42 There is a mere 5\(\times\) difference between the speed of light in vacuum and the speed of signal propagation in wires in CMOS electronic processors, so the speed of light is not a key distinction of optics. The notion of "computing at the speed of light" [1] is more useful to think of as a _goal_ for an optical computer, rather than a _cause_ of advantage. The speed of light provides a physical limit on how fast a computer can operate [135] and one framing of the optical computer engineer's goal is to design a computer that leverages the benefits of optics (1-11) to reach this limit for a particular computing task, in as small a volume as possible, so that the total time for a computation is as small as possible.43

Some of the features listed above are interrelated, and some of them even have a common physical root but are listed separately because the root leads to multiple features of light or has multiple consequences for computing. For example, the large bandwidth of optics (1) relies on the large carrier frequency \(\omega\) of optical signals. The wavelength of light \(\lambda\) is directly connected with its frequency \(\omega\): \(\lambda\) is proportional to \(1/\omega\), so the large values of \(\omega\) for light make it possible to achieve large spatial parallelism (2) and to observe and exploit wave physics in small volumes (11). The fact that optical photons have a large energy \(\hbar\omega\) relative to thermal energy \(k_{\rm B}T\) at room temperature \(T\approx 300\) K44 is directly responsible for the quantum nature of light being accessible at room temperature (10 and 11). Low-dissipation dynamics (3) and transmission of information with optics (4) are also connected with the short wavelength \(\lambda\) for optical photons, which allows tight waveguided confinement with nearly lossless dielectrics rather than with metals.45 So all six of these features are connected by the fact that \(\omega\) is large, since multiple aspects of optical physics are influenced by the value taken by \(\omega\). Footnote 44: \(k_{\rm B}\) is Boltzmann's constant. Footnote 45: Microwave signals propagating through media such as metal coaxial cables or metal on-chip transmission lines suffer from substantial loss unless the metal is superconducting. Not all of these features are equally important for obtaining advantage in optical computing but they are also not presented in order of importance, partially because determining such an order would require knowing what ingredients future optical computers will ultimately most heavily rely on. Nevertheless, in the next section we will discuss how these features may be used and opine on which ones are most likely to be critical.

## III Discussion

**How might optical computers beat electronic computers?** We will describe some strategies for the design of optical computers that may enable them to have an advantage over electronic computers.
There are three main metrics of computing performance for which we might aim to achieve an advantage: **latency**, **throughput**, and **energy efficiency**. Which of the three (or which combination) one targets in designing an optical computer depends on the user's goals, but there are arguments for how optics could enable advantage in all three of these metrics.46 Footnote 46: There are several other metrics of computers that are important, such as _size_, _robustness_, _cost_, _security_ (susceptibility to hacking), and _accuracy_. We don't have any reason to believe that an optical computer could deliver superior _accuracy_, for example, than all possible electronic computers, so accuracy is not a metric we expect an optical advantage for, but instead we will typically aim to achieve an advantage in latency, throughput, and/or energy efficiency for a specified accuracy. Similarly the other metrics (size, etc.) provide other constraints that an optical computer must satisfy to be competitive for some particular use case. We now briefly describe these metrics using a particular computing example: machine-learning inference, and even more specifically, face recognition in an image. _Latency_ (also called _delay_) refers to the time it takes for the computer to make a prediction of the name of the person in an image from the moment the computer is given the input image. _Throughput_ refers to how many inferences can be performed per second; for face recognition in images, a throughput metric is images processed per second.47 _Energy efficiency_ refers to how much energy is used by the computer to complete a single inference computation with a specified accuracy; for face recognition in images, an energy-efficiency metric is joules per image processed. Footnote 47: Note that in general, (\(1/\)Latency) \(\neq\) Throughput; by pipelining [52], throughput can be much higher than the inverse of latency. As an intuitive example of this, consider a factory producing cars using an assembly line (pipeline): from start to finish, it might take the factory one day to manufacture a car (latency), but the total number of cars manufactured per day could be hundreds (throughput). There may be trade-offs when optimizing for these three metrics, so it is important to decide before starting the design of a computer what one's goals are. For example, while minimizing _latency_ is sometimes the main goal (e.g., in high-frequency trading [136]), often improving the throughput of a processor or its energy efficiency is the more important goal--and in many cases the goal will involve all three metrics, such as maximizing throughput and energy efficiency, subject to the constraint that the latency meets a particular target (e.g., in neural-network inference [137], where in many applications--such as language translation--we may require the latency to be \(<1\) second). Despite the fact that there will typically be trade-offs in the optimization of computer performance metrics (e.g., between latency and throughput), the following strategies should help in designing a computer that optimizes any combination of latency, throughput, and energy efficiency:

1. **Avoid or mitigate input/output bottlenecks and overheads.** Optical computers generally do not operate entirely with optics: typically some inputs to the computer originate in electronics, and/or the output from the computer is ultimately electronic.
For example, if an optical processor is used for determining if there is a pedestrian walking in front of a self-driving car, the output needs to be electronic so that it can be input to the control systems in the car, which can use the information to actuate the brakes. If the processor uses a neural network, the trained parameters for the neural network may well be stored in electronic memory and need to be input to the processor in some way. Unfortunately the interfaces between optics and electronics can cause major bottlenecks in speed and be a major source of energy usage by a processor. For an optical processor to offer an advantage over electronic processors--in any of latency, throughput, or energy efficiency--the processor architecture needs to be designed to minimize the negative impact of transduction between optical and electrical signals, and the conversion between analog and digital signals. To illustrate some of the challenges that can arise from optics-electronics interfaces, imagine an optical processor that intrinsically has a processing bandwidth of \(100\,\mathrm{THz}\) (1). If data can only be input to the processor at a rate of \(10\,\mathrm{GHz}\), limited by, for example, the bandwidth of electro-optic modulators and digital-to-analog converters, then without careful design, the intrinsic bandwidth benefit of the optical system--which could have led to improved latency and/or improved throughput--may go to waste. Similarly, while an optical processor can be designed to perform computation on optical signals nearly dissipationlessly, there is an energy cost to optical/electrical transduction and analog/digital conversion for getting electronic data into and out of the optical processor, and these costs may be so large that they don't just dominate the total energy cost of the optical processor, but make the energy cost so high that the processor is less energy efficient than an all-electronic processor. A crucial mitigation strategy is to **re-use data** that is input as much as possible--once you have paid both the time and energy penalty for sending electronic data into an optical processor, you would like to extract as much benefit as possible from that data. This applies both to data converted into optical signals and to data that may remain as electrical signals but that nevertheless has time and energy costs to be input to the processor. Re-use of optical signals can be enabled by various forms of optical memory [138], as well as by copying via fanout (7)48. As an example of the re-use of electrical control signals, optical processors performing neural-network inference (as opposed to training) can load the neural-network weights into phase shifters that consume either little or no static power [1, 8] and then use those weights many times by performing many inference computations with them (for example, by batching individual inferences [78]). This allows both the time and energy costs of loading the weights to be amortized. Another example of data re-use in photonic neural-network processors is in convolutional neural networks: the same convolutional kernel can be applied to many different subsets of the input data, so the kernel weights can--at least conceptually--be loaded once and used many times [1, 40, 57]. Footnote 48: Consequently an optical-computer designer is usually motivated to make the fan-out factor be as large as possible. 
In an optical matrix-vector multiplier, fanning out \(10^{3}\) or more copies of the input vector is desirable, and likely necessary to achieve a substantial advantage over electronics. A general design principle is that it is--all else held equal--**better to perform more computations per bit of input data**. This is essentially the concept of maximizing _arithmetic intensity_ in conventional computer architecture [52]. Data re-use is one way to achieve this, but an important complementary conceptual approach is to choose computational tasks such that the optical processor for that task performs computations whose complexity scales rapidly with the input data size. For example, a computation on input data of size \(N\) that requires only \(O(N)\) operations is less attractive than one that needs \(O(N^{2})\) operations, and a computation requiring \(O(N^{3})\) operations is even better. The cost in time and energy of inputting data of size \(N\) is generally \(O(N)\), so if the computation performed by the optical system has complexity \(O(N^{2})\) (and we assume that, through a combination of 1-11, the cost of this computation in optics is far lower than it is in electronics) then there exists some threshold size such that for any \(N\) larger than the threshold, the costs of loading the data can be compensated for by the benefits of doing the \(O(N^{2})\) computations optically--leading the optical computer to outperform electronic computers even when the data-transfer costs are considered. A key practical fact is that for current speed and energy numbers for CMOS electronics, it seems likely that optical processors will need to support very large values of \(N\) (e.g., \(N>10^{4}\)) to reach the crossover point where they start delivering a throughput or energy-efficiency advantage for computations based on matrix-vector multiplication (which is an \(O(N^{2})\) computation, for square matrices) [22]. This fact motivates both scaling optical matrix-vector-multiplication processors to large sizes and designing optical processors with computations that have complexity greater than \(O(N^{2})\). From this perspective, combinatorial optimization such as Ising solving [4] is an attractive problem for optical computing because the computing effort is generally expected to scale exponentially, i.e., as \(O(2^{N})\), with respect to the number \(N\) of variables being optimized, and also with the amount of data required to specify the optimization problem49. Footnote 49: For example, an \(N\)-spin Ising problem is specified by \(O(N^{2})\) numbers. Because the cost of loading data is generally larger for optical processors than for electronic processors50, there is a strong motivation to choose algorithms for optical processors that have higher intrinsic data re-use or higher algorithmic complexity. This kind of hardware-software co-design can lead to considerable improvements when compared with fixing the algorithm based on what works well on current electronic processors and trying to forcibly design an optical processor to work in the same way. While minimizing and compensating for the costs of loading input data is crucial, it is also important to avoid having the output of data be too costly in time or energy. It is similarly beneficial to **minimize how much data needs to be output**, by doing as much of the computation and data reduction within the optical processor as possible. 
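As a rough illustration of the data-loading amortization argument above, the following toy estimate (the per-operation and per-element cost numbers are placeholder assumptions, not values from the article) shows how \(O(N^{2})\) optical computation can amortize \(O(N)\) input/output costs as \(N\) grows:

```python
# Toy crossover estimate: the electronic cost is assumed to be e_mac per scalar
# multiply-accumulate; the optical processor is assumed to pay only an
# input/output cost of e_io per vector element (the optical MACs themselves
# are treated as nearly free). All numbers are illustrative placeholders.
def electronic_energy(N, e_mac=1e-12):        # joules for an NxN matrix-vector product
    return e_mac * N * N

def optical_energy(N, e_io=10e-12):           # joules, dominated by I/O conversion
    return e_io * 2 * N                       # N inputs + N outputs

for N in [10, 100, 1_000, 10_000, 100_000]:
    ratio = electronic_energy(N) / optical_energy(N)
    print(f"N = {N:>7}: electronic / optical energy ~ {ratio:,.1f}x")

# With these placeholder costs the optical approach only wins for N above a few
# tens; with realistic conversion costs the crossover is much larger (the text
# quotes N > 10^4 as a likely requirement), but the scaling logic is the same:
# O(N^2) useful work amortizing O(N) input/output cost.
```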
This design principle motivates choosing algorithms that require a large amount of computation relative to the size of the output. As an example, this is typically true in machine-learning inference--where for the overall computation the answer may be just a few tens of bits, outputting the predicted class of the input data. Footnote 50: When an optical processor loads data from electronic memory, there is not only a cost for the memory access—which an electronic processor would also have had to pay—but there is a cost for transducing the data from an electrical to an optical signal, and potentially also a digital-to-analog conversion involved, which also has a cost. 2. **Don't try to directly take on digital electronic processors at their own game.** Arguably the biggest challenge in building optical processors that surpass electronic processors in throughput or energy efficiency is overcoming the limiting performance of electronics-to-optics and optics-to-electronics conversion technology. If we start with data in electronics--as is most typically the case--and want our computed answers to end up in electronics--as is also most often the case--then we have little choice but to apply the strategies above and hope to be able to amortize the input/output costs. However, given how large state-of-the-art CMOS electronic processors are and that they have a home-ground advantage in working on data that is already in electronics, it seems likely that modern optical processors won't first gain an advantage as drop-in replacement accelerators in conventional electronic processing workflows. Instead, we can **target applications where the inputs and/or outputs are naturally optical--**and in this way eliminate the conversion costs. Machine-learning applications where the input is conventionally an image from a camera is an example [1, 100, 139]: one can replace the camera and subsequent electronic neural network with an optical neural network that directly processes the scene in front of it, e.g., in self-driving cars [140], microscopy [100], or spectroscopy. It is not necessary to replace all the electronic image-processing computation with optics if the output is ultimately going to be electronic anyway--one can adopt the strategy of using optics to pre-process the optical image data [141, 72], intelligently encoding it so that the output conversion from optics to electronics has much lower bandwidth than naively digitizing the images to begin with, which could lead to benefits in latency, throughput, and energy efficiency [100]. While image processing enables the elimination of the input conversion stage because the input can be directly optical, applications where both the input and the output are optical may be even more promising for immediate attack. Optical communications have inputs and outputs that are both optical, but current approaches involve a number of stages at which optical signals are converted to electrical signals for electronic processing, and then converted back to the optical domain. This makes optical communications signal processing a natural target for all-optical signal processing, which could reduce latency, increase throughput, and improve energy efficiency [142, 143, 144, 21]. Many neural-network models have become large enough that they can no longer practically be run on a single electronic processor, which has motivated the design of optical interconnects specifically for neural-network processing [145]. 
This trend provides another motivation for neural-network processing as an application for optical processors: if the electronic-processor competition needs to pay the relatively high energy costs of conversion between optics and electronics too, then these conversion costs are at least not an exclusive disadvantage of using optical processors. One can think of a single processor in an optically-interconnected datacenter for performing neural-network processing as a system whose inputs and outputs are both optical--so from this perspective, it is a promising candidate to try to replace with an optical processor.

3. **Combine multiple optical features to try to gain an advantage.** This might sound trite, but it is important--any optical processor that has an advantage over the best equivalent electronic processors will most likely need to take advantage of not just one of the features of optics (1-11), but will need to carefully combine several of them. For example, just taking advantage of the large bandwidth of optics (1) in a single spatial mode--even if we ignore for now input/output bottlenecks--is probably not sufficient to enable a throughput benefit since electronic processors compensate for lower bandwidth with enormous spatial parallelism (having on the order of \(10^{11}\) transistors in modern chips). Similarly only relying on spatial parallelism (2) will likely also be insufficient: while the spatial parallelism of optics is considerable, especially in three-dimensional systems, the spatial parallelism of transistors is typically even more impressive.51 However, if one can combine the bandwidth and spatial-parallelism features of optics in a single system, then there is potential to surpass electronics. For example, imagine being able to process data in \(10^{7}\) spatial modes in parallel at a clock rate of \(10\,\mathrm{THz}\), or processing data in parallel in \(10^{7}\) spatial modes, each with \(10^{7}\) frequency modes--in other words, \(10^{14}\) parallel spatio-frequency modes.52 Although it is far from a solved problem how to fully take advantage of the combination of bandwidth and spatial parallelism afforded by optics, when combined with the fact that operations can be performed nearly dissipationlessly in optics (3), there is great potential for optics to outperform electronics. Footnote 51: Optical multiplication of vectors by random matrices is an exception where the spatial parallelism is so large that even very low bandwidth doesn't prevent the system from having higher throughput than electronic processors [76]. Even in this case though, more than one property of optics is being used: for example, not just spatial parallelism (2), but also nearly dissipationless dynamics (3). Footnote 52: Where did the numbers \(10^{7}\) and \(10^{7}\) come from? They were chosen somewhat arbitrarily but as believably practical, since, for example, we already have technology—spatial light modulators—for manipulating \(10^{7}\) spatial modes. We could have even higher numbers of spatial and frequency modes though—this was an example, not a bound.

Accurately predicting the future of technology is difficult, but it seems reasonable to hypothesize that of the 11 features explored in this paper, bandwidth (1), spatial parallelism (2), and nearly dissipationless dynamics (3) are most likely to play a key role in any future optical processor that does deliver an overall advantage (in latency, throughput, or energy efficiency).
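For a sense of scale, here is a toy comparison of the hypothetical parallelism numbers above with the electronic throughput figure quoted later in the Scale discussion (illustrative arithmetic only, ignoring all input/output bottlenecks):

```python
# Toy throughput comparison. The optical numbers are the hypothetical ones from
# the text (10^7 spatial modes at a 10 THz clock, or 10^7 spatial x 10^7
# frequency modes); the electronic number (~10^6 8-bit multiplications per
# nanosecond) is the figure quoted later in the Scale discussion.
optical_modes_spatial = 1e7
optical_clock_hz      = 10e12                 # 10 THz
optical_freq_modes    = 1e7

electronic_ops_per_s  = 1e6 * 1e9             # ~10^6 multiplications per ns

print(f"optical, 1e7 modes x 10 THz : {optical_modes_spatial * optical_clock_hz:.1e} mode-ops/s")
print(f"optical, 1e7 x 1e7 modes    : {optical_modes_spatial * optical_freq_modes:.1e} parallel modes")
print(f"electronic reference        : {electronic_ops_per_s:.1e} ops/s")
```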
However, many of the other features (4-11) may very well end up playing important roles too, so should not be ignored--but they will probably need to be combined with one of the "big three" (1-3) for a processor using them to achieve an overall advantage over electronics. Many of the demonstrations of optical processors to date have shown a proof of principle of the use of some feature of optics for computing in a way that could lead to an advantage, but with a system that doesn't suitably leverage some of the other available features, ultimately leading to a prototype that is inferior to current electronic processors. An example of this from my own group is Ref. [39], which uses spatial parallelism (2) to realize \(>500,000\) scalar multiplications per pass of light through a free-space optical processor, but the prototype is extremely limited in bandwidth (1) due to the speed limits of the input and output stages, leading to performance that is ultimately many orders of magnitude worse than an electronic processor. In this work we were not expecting to beat an electronic processor but rather were aiming to demonstrate how few photons are needed for matrix-vector multiplication in optical neural networks; nevertheless, to advance this proof-of-principle system to be competitive with electronics would require dramatically increasing the system bandwidth.53 Footnote 53: Besides spatial parallelism (2), the optical processor presented in Ref. [39] also used some other features of optics, such as nearly dissipationless dynamics (3)—without which the ultra-low optical energy usage demonstrated would not have been possible—and optical fan-in (7). My opinion is that the most likely route to building an optical processor that delivers a large advantage over electronic processors in throughput or energy efficiency (or both) in the near term is by constructing a free-space optical matrix-vector-multiplier that takes advantage of large spatial parallelism (2) and nearly dissipationless dynamics (3) [1]. With a vector dimension of \(N\approx 10^{4}\) and a matrix size of \(N\times N\), it seems promising that one can achieve an advantage provided that the system can be operated at a rate of one matrix-vector multiplication per nanosecond and the surrounding electronics for input and output operate with state-of-the-art energy efficiency [22, 39]. This will require careful optical and electronic engineering to realize--it is a major engineering undertaking whose difficulty should not be underplayed--but is all based on existing technology components that can in principle be appropriately scaled. I find this candidate architecture the most promising in the near term largely because it has been well-studied and many of the necessary building blocks are fairly advanced. An optical matrix-vector-multiplier whose inputs are optical, such as when it is used as a preprocessor for visual scenes [100], would have a lower bar to deliver an advantage over electronic solutions, so I expect that if an optical matrix-vector-multiplier does outperform an electronic processor it will probably first be for an application involving optical inputs. However, I certainly don't want to give the impression that I think a free-space spatially multiplexed architecture is the only one worth pursuing.
There are a multitude of other architectures [1, 2], including those based on photonic integrated circuits and on additionally taking advantage of the large bandwidth of optics (1), that are appealing and very much worth pursuing. When evaluating an optical-computing scheme--especially one relying on Features 9-11, for which it is often unclear if the optical scheme is not just _different_ from a standard digital-electronics solution, but _better_--it can be helpful to determine what the cost of simulating the scheme with a digital-electronic processor would be. For example, wave physics (11) can be simulated by digital electronic processors54, so when seeking an advantage for optics from wave phenomena, one needs to consider the cost of equivalent digital electronic approaches, and depending on the wave phenomena being exploited, the digital approaches may be competitive or outright superior. Some intuition for how wave physics in optics could be exploited to give an advantage over digital simulation of wave physics is that in the single- or few-photon regime, the optical energy used could be very small, which relates to the feature of nearly dissipationless dynamics in optics (3). As another example, least-power-dissipation principles (9) can be used to realize Ising optimizers from networks of coupled optical oscillators [4], but simulating the equations of motion of the network on a digital-electronic computer can yield the same behavior as a physical, optical implementation, so the intrinsic least-power-dissipation phenomenon doesn't automatically give rise to a computing benefit. Instead one also needs to leverage other benefits of optics, such as parallelism and low dissipation. Footnote 54: In the case of simple interference, this can be as easy as adding two complex numbers.

We conclude by summarizing some of the **major outstanding challenges** that, if addressed, would move us substantially closer to realizing practically useful optical computers:

* **Optical-processor architecture design.** There is a major challenge to design optical-processor architectures that most effectively use the features of optics to gain an advantage. It is not obvious that the existing optical-processor architectures (using free space or integrated photonics)--some of which are decades old [14]--are optimal, and there is an opportunity to invent refined or completely new designs to meet this challenge.
* **Applications.** We need to find good applications to target with optical processors. Since some of the major roadblocks to achieving advantage with optical computing are issues associated with input/output, we want to find valuable applications where we can avoid or mitigate input/output bottlenecks and costs. For example, it has proven very difficult to build an optical matrix-vector multiplier at a scale (\(N\)) at which the input/output costs can be sufficiently amortized, even though an optical matrix-vector multiplier can perform \(O(N^{2})\) operations with input/output costs of just \(O(N)\).55 Given that even matrix-vector multiplication, with its \(O(N^{2})\) complexity, does not have a high enough ratio of computation to input data, it would be helpful to find useful subroutines, algorithms, or applications that have higher complexity than \(O(N^{2})\) for input and output data sizes \(\sim\)\(N\). An additional direction is to find applications that could benefit from other aspects of optical computing besides potential performance advantages.
For example, direct optical processing of visual scenes could give a privacy advantage: an electronic processor of images captured by a camera that stores the images in memory could be hacked, but an optical processor that directly processes what it "sees" and never converts the full incoming images to electronic format could be a lot harder to maliciously copy images from. Footnote 55: For simplicity, the expressions given here assume a square matrix of size \(N\times N\).

* **Nonlinearity.** Nonlinearity is crucial in many computations and a low-energy, fast, small-footprint, reliably manufacturable nonlinearity would be a useful building block. The nonlinearity need not necessarily be all-optical--optoelectronic nonlinearity can also be useful [146], although generally one can hope to benefit from higher bandwidths and possibly lower energy consumption in all-optical nonlinearities [55]. A fast, few-photon nonlinearity capable of attojoule switching has recently been demonstrated [147]; one important direction is in scalably manufacturing the nonlinearities that have already been established.
* **Cascadability.** In many computations--for example, in deep neural networks--the input data is fed not through just one function but a sequence of functions. An optical implementation of the computation then often involves passing an optical signal either through the same optical setup multiple times or through multiple different optical setups (or both). This requires being able to cascade optical processes in time or space. Three of the challenges56 that can arise in cascading optical processing stages are attenuation of the optical signal due to optical loss, effective attenuation of the optical signal due to weakness in optical nonlinearity57, and nonlinear-optical processes generating output light that is at wavelengths incompatible with being input to the next optical stage58. Designing suitably cascadable systems can be approached in multiple ways: for example, at the level of processor architecture, one may opt to insert gain into the system to compensate for the signal attenuation--which leads to further architectural and system-design decisions about the type of gain (purely optical or optoelectronic, in which case the gain is essentially provided electronically by transistors59), and its required speed, preservation of information encoded in the optical spectrum, and so on, as well as new engineering challenges in realizing suitable gain components. One may also approach cascadability challenges at the component or physical-implementation level, seeking to realize lower-loss optical systems, or materials with higher nonlinear coefficients. Footnote 57: Because optical nonlinearity is generally weak [79], less than 100% of the light input to a nonlinear stage will generally be acted on nonlinearly.
* **Three-dimensional systems.** Making use of three-dimensional optical systems (6) can also bring benefits [148; 149]. The key question here is _how_ to engineer and fabricate programmable, large-scale, possibly dense, three-dimensional processors [150; 151; 64].
* **Energy costs for electronic and optoelectronic components.** The energy cost of optical processors is typically dominated by the energy costs of the electronic parts of the computer (for example, in an analysis of optical neural networks running large Transformer models, the optical energy used accounts for \(<1\%\) of the total energy cost [22]; see also Refs. [152; 78]).
Many optical-computing schemes could benefit from--and to deliver advantage, may even require--the availability of large arrays of high-speed, low-power, and low-cost detectors, analog-to-digital converters, modulators, and digital-to-analog converters. Increasing the energy efficiency of these components is an important challenge. * **Scale.** Most optical-computing schemes rely on parallelism--be it from frequency or time multiplexing (1), or spatial multiplexing (2), or a combination--for part of how they will achieve an advantage over electronics. However, throughput and energy-efficiency advantages typically only materialize when the system size (i.e., the number of parallel operations) is very large [22].60 For example, we would like optical matrix-vector multipliers to be large enough to amortize the energy costs of loading the input vector and reading out the output vector. We would also like them to be large enough to be able to compete in throughput with electronic processors, which can perform \(>10^{6}\) 8-bit-precision scalar multiplications per nanosecond [45]--so if vectors are input at a rate of 1 GHz, we would like the optical processor to also be able to perform \(>10^{6}\) scalar multiplications in parallel. However, in optical matrix-vector multipliers made from arrays of Mach-Zehnder interferometers [1], even a state-of-the-art commercial prototype with a \(64\times 64\) array [153] does \(>100\times\) fewer parallel operations than seems necessary to compete in throughput with state-of-the-art electronics solutions. A major challenge is how to scale arrays of size \(64\times 64\) to something much larger, like \(1000\times 1000\), which would put them roughly on par with the degree of parallelism in a single state-of-the-art electronic chip [45], or \(10^{4}\times 10^{4}\), which would then be in the regime where a substantial throughput advantage could be achieved provided the system were clocked at a comparable rate to electronics (i.e., at \(\sim\)1 GHz). How can Mach-Zehnder-interferometer arrays be scaled from sizes \(\sim\)\(64\times 64\) to sizes \(\sim\)\(10^{4}\times 10^{4}\)? This is a major challenge for the community working on this approach. The challenge of scaling to achieve a far greater degree of parallelism than current prototypes is certainly not unique to optical matrix-vector multipliers or Mach-Zehnder-interferometer arrays--most optical-computing schemes face a major scaling challenge for them to be able to deliver a practical advantage. In some cases, we don't even have a solid practical roadmap for how to scale yet: for example, what is a feasible way to scale a scheme that combines spatial and frequency multiplexing (such as that in Ref. [40], using 16 spatial and 4 frequency degrees of freedom) to a point where it can achieve advantage? There is the potential for very large numbers of both spatial and frequency modes to be harnessed to perform parallel computations (e.g., \(>10^{14}\) spatio-frequency modes being operated on in parallel), but how can we reach this scale for a concrete scheme that performs useful computation? 
Footnote 60: The situation for _latency_ advantages, as opposed to throughput or energy-efficiency advantages, is more subtle in that it is more application-dependent: if an application requires a certain amount of highly parallelizable computation (such as matrix-vector multiplication) to be performed in as little time as possible, so long as an optical processor is large enough to perform all that computation in parallel, it is big enough and won't necessarily benefit from larger scale (from the perspective of latency). A latency advantage could then arise from how the system is designed to minimize the time it takes to get the data into and out of the constituent parallel-processing units. But on the other hand, an optical processor could also deliver a latency advantage that is directly attributable to its scale: if it has parallelism far beyond that of an electronic processor it may achieve a throughput advantage that then will typically give a latency advantage as a side benefit for large tasks where an electronic processor would need to perform the computation in multiple stages in series on account of the task being larger than the electronic processor's parallel-processing capacity.

* **Robustness, reliability, and fabrication variation.** While many optical components, such as those appearing in consumer-electronics devices like cellphones and in optical-fiber-communications systems, are generally very reliable, there are many optical technologies that are being considered for use in optical computers that present challenges in robustness (e.g., how well they can perform in the presence of environmental perturbations such as temperature changes or mechanical vibrations), reliability (e.g., how likely they are to keep functioning correctly under normal operation conditions), and fabrication variation (e.g., how much fabricated devices will differ in specifications from their designed values). For example, many optical phase-change-memory technologies have stringent limits on how many times they can be switched, and it is desirable for these limits to be raised [154, 155]. As another example, in integrated photonics, Mach-Zehnder interferometers typically suffer from the constituent splitters having small deviations from the ideal splitting ratio due to variations in fabrication; one research direction is to improve the fabrication processes, and another is to construct designs that can compensate for these fabrication errors [156]. Generally for each photonic technology platform that might be used in an optical computer, there are open problems in how to stabilize them--passively or actively.
* **Storage.** To avoid the costs of converting between electronics and optics, and to avoid the cost of electronic memory accesses (which are a dominant cost even in electronic computing [24]), we would often like to be able to store data for use in optical processing. For example, in matrix-vector multipliers, we typically want to be able to store matrices with as low energy cost as possible for maintaining the storage, but in a way that the matrix can be updated on demand many times, at reasonably high accuracy (e.g., 8 bits), and also with relatively low energy cost [59, 154]. In some applications or architectures, it is advantageous to be able to store optical signals (e.g., corresponding to intermediate calculation results) so that conversion from optics to electronics and then back to optics can be avoided. There is active study and much room for improvement in both these use cases of storage.
* **Pushing toward quantum limits.** Operating optical computers in a regime where the quantum nature of light cannot be ignored, e.g., by using ultra-low optical powers where signals comprise small numbers of photons and are measured by single-photon detectors, is a path toward minimizing optical energy consumption. Optical computers will inevitably involve some electronics, if only for control or readout, and it is often the electronics energy costs that dominate [152], so it is only in some cases that there is strong benefit to minimizing the optical power used. Nevertheless, for these situations, there is much work to be done in both designing architectures and realizing practical devices that benefit from operating in the quantum regime [157, 158, 159, 160, 126].61 Footnote 61: In this paper, we do not consider quantum information processing [123]; here, when we talk of operating in the quantum regime, we mean in the sense that light comprises photons and we are operating at such low powers that the quantum noise and discrete nature of the light is relevant to modeling the operation of the computer. The topic of using quantum phenomena such as entanglement to build quantum computers is exciting but beyond the scope of this paper; Ref. [124] provides a helpful description delineating the first and second quantum revolutions, and it is only the former that we consider here. Constructing an optical computer that beats an electronic computer in any metric is challenging given how advanced electronic processors are. However, the physics of optical computing gives promise that if optical computers are carefully engineered, for certain classes of tasks--especially those involving data that is already in an optical format or that has a very high ratio of computation to data--they may deliver orders-of-magnitude benefits in latency, throughput, or energy efficiency. ## Acknowledgements I gratefully acknowledge many helpful conversations with colleagues including Daniel Brunner, Ryan Hamerly, Hideo Mabuchi, Arka Majumdar, Alireza Marandi, Edwin Ng, Tatsuhiro Onodera, Tianyu Wang, Logan Wright, and Yoshihisa Yamamoto; these conversations over several years have shaped my understanding of optical computing. I also gratefully acknowledge Sapan Agarwal for explanations about analog-electronic crossbars, and Bal Govind for discussions about electrical interconnects. I thank Maxwell Anderson, Tianyu Wang, and Fan Wu for providing detailed feedback on a draft of this manuscript. This work has been financially supported in part by the National Science Foundation (Award CCF-1918549), NTT Research, and a David and Lucile Packard Foundation Fellowship.
2305.19525
Discovering New Interpretable Conservation Laws as Sparse Invariants
Discovering conservation laws for a given dynamical system is important but challenging. In a theorist setup (differential equations and basis functions are both known), we propose the Sparse Invariant Detector (SID), an algorithm that auto-discovers conservation laws from differential equations. Its algorithmic simplicity allows robustness and interpretability of the discovered conserved quantities. We show that SID is able to rediscover known and even discover new conservation laws in a variety of systems. For two examples in fluid mechanics and atmospheric chemistry, SID discovers 14 and 3 conserved quantities, respectively, where only 12 and 2 were previously known to domain experts.
Ziming Liu, Patrick Obin Sturm, Saketh Bharadwaj, Sam Silva, Max Tegmark
2023-05-31T03:26:18Z
http://arxiv.org/abs/2305.19525v3
# Discovering New Interpretable Conservation Laws as Sparse Invariants ###### Abstract Discovering conservation laws for a given dynamical system is important but challenging. In a _theorist_ setup (differential equations and basis functions are both known), we propose the **S**parse **I**nvariant **D**etector (SID), an algorithm that auto-discovers conservation laws from differential equations. Its algorithmic simplicity allows robustness and interpretability of the discovered conserved quantities. We show that SID is able to rediscover known and even discover new conservation laws in a variety of systems. For two examples in fluid mechanics and atmospheric chemistry, SID discovers 14 and 3 conserved quantities, respectively, where only 12 and 2 were previously known to domain experts. ## I Introduction Conservation laws are important concepts in physics, yet discovering them is challenging. Ideally, the set of discovered conserved quantities should be _complete_, _independent_ and _interpretable_. Although several attempts have been made to automate the discovery process with machine learning [1; 2; 3; 4; 5; 6; 7; 8], their complicated setups and blackbox nature make it hard to guarantee all these desirable properties. This paper considers a simple yet realistic setup where all these desirable properties can be met. "Discovering conservation laws" can mean wildly different things for _experimentalists_, _computationalists_ and _theorists_, as shown in Table I. Most prior work [1; 2; 3; 4; 5; 6] takes on the experimentalist setup, assuming knowledge of neither the differential equations nor the form of conservation laws. [7] takes the computationalist setup, assuming knowledge of differential equations. This work explores the theorist setup, where both differential equations and basis functions of conservation laws are known. Admittedly, this setup is simpler than the other two, but is still realistic when theorists have the differential equations at hand and have educated guesses about the basis functions that may span the conserved quantities. We propose the **S**parse **I**nvariant **D**etector (SID), an algorithm that reveals conservation laws. SID is incredibly simple in the sense that it only requires linear algorithms (except for sparsification), so the results are much more trustworthy and interpretable than blackbox machine learning methods. Note that SID does not replace us human scientists, but rather acts as a helpful assistant: while humans need to input basis functions (i.e., _formulating_ hypotheses) to SID, SID is good at computing conserved quantities (i.e., _testing_ hypotheses) based on the given prompt. In this manner, human scientists can focus on the more creative part of the job, while SID does the technical and tedious work. This paper gives two examples where new conserved quantities are successfully discovered by SID: one in fluid mechanics, and another in atmospheric chemistry (see Table II). In the former one, although the new conserved quantities are somewhat expected in hindsight, humans alone may need several more months to find them. In the latter one, a new conserved quantity is found, which was unintended in the design of the model.
## II Method

### Problem setup

We consider a first-order differential equation \(\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x})\), where \(\dot{\mathbf{x}}\doteq\frac{d\mathbf{x}}{dt}\), \(\mathbf{x}\equiv(x_{1},\cdots,x_{d})\in\mathbb{R}^{d}\) is the state vector and \(\mathbf{f}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) is a vector field. This ODE formulation is more general than it seems: (1) Hamiltonian systems are subsumed as \(\mathbf{x}\equiv(\mathbf{x}^{\prime},\mathbf{p}^{\prime})\); (2) Higher-order differential equations (e.g., \(\ddot{\mathbf{y}}=\mathbf{f}(\mathbf{y})\)) are included as \(\mathbf{x}\equiv(\mathbf{y},\dot{\mathbf{y}},\cdots)\); (3) Partial differential equations (PDEs) become ODEs once discretized.

| | Fluid (2D) | Fluid (3D) | Atmosphere |
|---|---|---|---|
| Known | **8** | 12 | 2 |
| SID | **8** (simpler) | **14** | **3** |

Table II: The number of conserved quantities known to experts and discovered by SID

| Setup | Experimentalist | Computationalist | Theorist |
|---|---|---|---|
| Model-based | No | Yes | Yes |
| Known basis | No | No | Yes |
| Independence | Partial | Yes | Yes |
| Completeness | No | Partial | Yes |
| Interpretability | Partial | Partial | Yes |
| Reference | [1; 2; 3; 4; 5; 6] | [7] | This work |

Table I: Three setups of conservation law discovery

A _conserved quantity_ is a scalar function \(H(\mathbf{x}):\mathbb{R}^{d}\rightarrow\mathbb{R}\), such that its value remains constant along any trajectory obeying \(\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x})\) [9]. As proved in [7], a necessary and sufficient condition for \(H(\mathbf{x})\) being a conserved quantity is \(\nabla H(\mathbf{x})\cdot\mathbf{f}(\mathbf{x})=0\), since \[0=\dot{H}=\nabla H(\mathbf{x})\cdot\dot{\mathbf{x}}=\nabla H(\mathbf{x})\cdot \mathbf{f}(\mathbf{x}). \tag{1}\] Given the differential equation \(\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x})\), we hope to find a set of conserved quantities \(\{H_{1},\cdots,H_{c}\}\) which satisfies these three properties:

* _Independence_: they are functionally independent, i.e., \(g(H_{1},\cdots,H_{c})=0\Rightarrow g=0\).
* _Completeness_: any conserved quantity \(H\) (in the function space spanned by basis functions) can be expressed by them, i.e., there exists \(g\) such that \(H=g(H_{1},\cdots,H_{c})\).
* _Interpretability_: conserved quantities can be written as (hopefully simple) symbolic formulas.

### Solving the linear equation and completeness

The prior work [7] parametrizes the conserved quantities \(H_{\mathbf{\theta}}(\mathbf{x})\) as neural networks and learns the parameters \(\mathbf{\theta}\) to make \(|\nabla H_{\mathbf{\theta}}(\mathbf{x})\cdot\mathbf{f}(\mathbf{x})|^{2}\) close to zero. However, neural network training may get stuck at local minima, so the results are not reliable. Moreover, the parameterized conserved quantities are not immediately interpretable. We consider a simpler setup. Assume that we know \(H_{\mathbf{\theta}}(\mathbf{x})\) to be a linear combination of \(K\) predefined basis functions \(b_{i}(\mathbf{x})\) (\(1\leq i\leq K\)) such that \[H_{\mathbf{\theta}}(\mathbf{x})=\sum_{i=1}^{K}\theta_{i}b_{i}(\mathbf{x})\equiv \mathbf{\theta}\cdot\mathbf{b}(\mathbf{x}), \tag{2}\] where only \(\mathbf{\theta}\in\mathbb{R}^{K}\) are learnable parameters to be determined and the vector \(\mathbf{b}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{K}\) defines the basis functions.
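As a quick sanity check of the criterion in Eq. (1), a candidate conserved quantity can be verified symbolically; the sketch below is not part of the paper's code and uses the 1D harmonic oscillator (\(\dot{x}=p\), \(\dot{p}=-x\)) that also appears as an example later:

```python
# Toy symbolic check of Eq. (1): H is conserved iff grad(H) . f == 0.
# Example system: 1D harmonic oscillator, xdot = p, pdot = -x (illustrative).
import sympy as sp

x, p = sp.symbols("x p")
f = sp.Matrix([p, -x])                 # vector field f(x)

def is_conserved(H):
    grad_H = sp.Matrix([sp.diff(H, x), sp.diff(H, p)])
    return sp.simplify(grad_H.dot(f)) == 0

print(is_conserved(x**2 + p**2))       # True  (the energy)
print(is_conserved(x**2 - p**2))       # False (not conserved)
```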
Since the number of conserved quantities can exceed one, we define a set of parameters \(\mathbf{\Theta}\equiv\{\mathbf{\theta}_{1},\mathbf{\theta}_{2},\cdots\}\) and their corresponding functions \(H_{\mathbf{\Theta}}\equiv\{H_{\mathbf{\theta}}|\mathbf{\theta}\in\mathbf{\Theta}\}\). As shown in FIG. 1, Eq. (2) is equivalent to a neural network whose last linear layer contains the only trainable parameters. With Eq. (2), the conservation condition Eq. (1) becomes: \[\mathbf{g}(\mathbf{x})^{T}\mathbf{\theta}=0,\quad\mathbf{g}(\mathbf{x})\equiv( \nabla\mathbf{b}(\mathbf{x}))\mathbf{f}(\mathbf{x}), \tag{3}\] which is a linear equation of \(\mathbf{\theta}\). Remember that in our setup, both \(\mathbf{b}(\mathbf{x})\) and \(\mathbf{f}(\mathbf{x})\) are known, so \(\mathbf{g}(\mathbf{x})\equiv(\nabla\mathbf{b}(\mathbf{x}))\mathbf{f}(\mathbf{x})\) is known as well. In practice, we draw \(P\) random points \(\mathbf{x}_{i}\) (\(1\leq i\leq P\)) from phase space. A solution \(\mathbf{\theta}\) should make Eq. (3) hold for all \(\mathbf{x}_{i}\), or more explicitly, \[\underbrace{\begin{pmatrix}g_{1}(\mathbf{x}_{1})&g_{2}(\mathbf{x}_{1})&\cdots&g_{K}(\mathbf{x}_{1})\\ g_{1}(\mathbf{x}_{2})&g_{2}(\mathbf{x}_{2})&\cdots&g_{K}(\mathbf{x}_{2})\\ \vdots&\vdots&&\vdots\\ g_{1}(\mathbf{x}_{P})&g_{2}(\mathbf{x}_{P})&\cdots&g_{K}(\mathbf{x}_{P})\end{pmatrix}}_{\mathbf{G}}\,\mathbf{\theta}=\mathbf{0}, \tag{4}\] which is simply linear regression. In practice, we apply singular value decomposition to \(\mathbf{G}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{T}\), where \(\mathbf{U}\in\mathbb{R}^{P\times P}\) and \(\mathbf{V}\in\mathbb{R}^{K\times K}\) are orthogonal matrices, \(\mathbf{\Sigma}\in\mathbb{R}^{P\times K}\) is diagonal with singular values \(0\leq\sigma_{1}\leq\sigma_{2}\leq\cdots\). We count \(\sigma_{i}\) as effectively zero if \(\sigma_{i}<\epsilon\equiv 10^{-8}\). The number of vanishing singular values, denoted \(M\), is equal to the dimensionality of the solution space (null space), which is spanned by the first \(M\) columns of \(\mathbf{V}^{T}\), denoted \(\mathbf{\Theta}^{(1)}\equiv(\mathbf{\theta}_{1}^{(1)},\mathbf{\theta}_{2}^{(1)},\cdots,\mathbf{\theta}_{M}^{(1)})\in\mathbb{R}^{K\times M}\). The linear structure obviously gives **completeness** (in the space spanned by basis functions), since any solution \(\mathbf{\theta}\) can be expressed as a linear combination of columns of \(\mathbf{\Theta}^{(1)}\).

Figure 1: SID workflow: **Inputs** are differential equations, basis functions and sample points; **Outputs** are a set of conserved quantities which are complete, independent and interpretable.

### Interpretability

In order to gain more interpretability, we want \(\mathbf{\Theta}^{(1)}\) to be sparse. Note that if \(\mathbf{R}\in\mathbb{R}^{M\times M}\) is an orthogonal matrix, the columns of \(\mathbf{\Theta}^{(2)}=\mathbf{\Theta}^{(1)}\mathbf{R}\) also form a set of complete and orthogonal solutions. Therefore we can encourage sparsity by finding and applying the orthogonal matrix that minimizes the following: \[\mathbf{R}^{*}=\underset{\mathbf{R}^{T}\mathbf{R}=\mathbf{I}}{\text{argmin}}\ ||\mathbf{\Theta}^{(1)}\mathbf{R}||_{1},\quad\mathbf{\Theta}^{(2)}=\mathbf{\Theta}^{(1)}\mathbf{R}^{*}, \tag{5}\] where \(||\mathbf{M}||_{1}\equiv\sum_{ij}|M_{ij}|\) denotes the \(L_{1}\)-norm of a matrix \(\mathbf{M}\), encouraging sparsity.

### Independence

Although columns of \(\mathbf{\Theta}^{(2)}\) are linearly independent, \(H_{\mathbf{\Theta}^{(2)}}\) are not guaranteed to be functionally independent.
Take the 1D harmonic oscillator \(\mathbf{x}=(x,p)\), for example. Restricting basis functions to be polynomials in \(x\) and \(p\) up to the 4th order, there are two solutions: \[H_{\mathbf{\theta}_{1}}=x^{2}+p^{2},\quad H_{\mathbf{\theta}_{2}}=H_{\mathbf{\theta}_{1}}^{2}=x^{4}+2x^{2}p^{2}+p^{4}, \tag{6}\] where \(\mathbf{\theta}_{1}\) and \(\mathbf{\theta}_{2}\) are orthogonal (hence independent), but \(H_{\mathbf{\theta}_{2}}=H_{\mathbf{\theta}_{1}}^{2}\), so they are not functionally independent. Consequently, we want a subset of \(\mathbf{\Theta}^{(2)}\), denoted \(\mathbf{\Theta}^{(3)}\), such that \(H_{\mathbf{\Theta}^{(3)}}\) is both independent and complete (i.e., can generate \(H_{\mathbf{\Theta}^{(2)}}\)). The first question is: how many elements, denoted \(c\), does \(\mathbf{\Theta}^{(3)}\) have? As shown in [7], \(c\) is equal to the rank of the following matrix: \[\mathbf{A}=\begin{pmatrix}\frac{\partial H_{\mathbf{\theta}_{1}}}{\partial x_{1}}&\frac{\partial H_{\mathbf{\theta}_{2}}}{\partial x_{1}}&\dots&\frac{\partial H_{\mathbf{\theta}_{M}}}{\partial x_{1}}\\ \frac{\partial H_{\mathbf{\theta}_{1}}}{\partial x_{2}}&\frac{\partial H_{\mathbf{\theta}_{2}}}{\partial x_{2}}&\dots&\frac{\partial H_{\mathbf{\theta}_{M}}}{\partial x_{2}}\\ \vdots&\vdots&&\vdots\\ \frac{\partial H_{\mathbf{\theta}_{1}}}{\partial x_{d}}&\frac{\partial H_{\mathbf{\theta}_{2}}}{\partial x_{d}}&\dots&\frac{\partial H_{\mathbf{\theta}_{M}}}{\partial x_{d}}\end{pmatrix} \tag{7}\] which hinges on the fact that gradients of functionally dependent functions are linearly dependent [10]. In practice, applying singular value decomposition to \(\mathbf{A}\) gives \(\mathbf{A}=\mathbf{U}^{\prime}\boldsymbol{\Sigma}^{\prime}\mathbf{V}^{\prime T}\), where \(\mathbf{U}^{\prime}\in\mathbb{R}^{d\times d}\) and \(\mathbf{V}^{\prime}\in\mathbb{R}^{M\times M}\) are orthogonal matrices, and \(\boldsymbol{\Sigma}^{\prime}\) is a diagonal matrix with singular values \(s_{1}\geq s_{2}\geq\dots\geq 0\). We count \(s_{i}\) as effectively non-zero if \(s_{i}>\epsilon=10^{-8}\). The number of non-zero singular values is equal to rank(\(\mathbf{A}\)), which is in turn equal to \(c\). After determining \(c\), we aim to obtain \(\mathbf{\Theta}^{(3)}\) by selecting \(c\) elements from \(\mathbf{\Theta}^{(2)}\). The selection process is as follows: (1) We assign each conserved quantity a complexity score (based on entropy [11]) and sort them from the simplest to the most complex. (2) Starting from an empty set \(\mathbf{\Theta}^{(3)}\), looping over elements \(\mathbf{\theta}\in\mathbf{\Theta}^{(2)}\), we add \(\mathbf{\theta}\) to \(\mathbf{\Theta}^{(3)}\) if \(H_{\mathbf{\theta}}\) is independent of \(H_{\mathbf{\Theta}^{(3)}}\) (functions already added), until \(\mathbf{\Theta}^{(3)}\) contains \(c\) elements.

## III Results

To better illustrate SID, we apply it to three dynamical systems.

### Systems biology

Our first application is from systems biology. The Lotka-Volterra equations (LV hereafter) describe how the populations of many species evolve in time via interspecies interactions. We study this particular equation: \[\dot{x}=x(y-z),\quad\dot{y}=y(z-x),\quad\dot{z}=z(x-y) \tag{8}\] with two known conserved quantities \(H_{1}=x+y+z\) and \(H_{2}=xyz\). We define basis functions to be all polynomials up to 3rd order (including \(K=19\) terms, shown in FIG. 2b).
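Before turning to the numerical results, a minimal sketch of SID's linear step (Eqs. (3) and (4)) for this system is given below, using NumPy and SymPy; it is illustrative only, the sparsification of Eq. (5) and the independence selection are omitted, and the sampling follows the description in the next paragraph:

```python
# Minimal sketch of SID's linear step (Eqs. (3)-(4)) for the Lotka-Volterra
# system of Eq. (8). Sparsification (Eq. (5)) and independence selection are
# omitted; this only counts the conserved quantities found in the basis.
import itertools
import numpy as np
import sympy as sp

x, y, z = sp.symbols("x y z")
state = (x, y, z)
f = sp.Matrix([x*(y - z), y*(z - x), z*(x - y)])          # Eq. (8)

# Basis: all monomials of total degree 1..3 in (x, y, z), i.e. K = 19 terms.
basis = [x**a * y**b * z**c
         for a, b, c in itertools.product(range(4), repeat=3)
         if 1 <= a + b + c <= 3]

# g_i(x) = grad b_i(x) . f(x)  (Eq. (3)), compiled to a numerical function.
g_exprs = [sum(sp.diff(b, v) * f[k] for k, v in enumerate(state)) for b in basis]
g_num = sp.lambdify(state, g_exprs, "numpy")

# Draw P = 100 standard-Gaussian sample points and assemble G of Eq. (4).
rng = np.random.default_rng(0)
points = rng.standard_normal((100, 3))
G = np.array([g_num(*pt) for pt in points])               # shape (P, K)

# The null space of G spans the coefficient vectors of conserved quantities.
sigma = np.linalg.svd(G, compute_uv=False)
M = int(np.sum(sigma < 1e-8 * sigma.max()))               # the paper uses an absolute 1e-8 cutoff
print("conserved quantities found in this basis:", M)     # expected: 4
```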
We draw \(P=100\) data points from the standard Gaussian distribution, i.e., \(\mathbf{x}_{i}\equiv(x_{i},y_{i},z_{i})\sim\mathcal{N}(0,\mathbf{I}_{3\times 3 })\) (\(1\leq i\leq P\)). Within the function space spanned by the basis functions, \(M=4\) conserved quantities are found, since FIG. 2a top shows that there are 4 vanishing singular values \(\sigma_{i}\). The coefficients of four CQs are somewhat mixed, shown in FIG. 2b top. After sparsification (Eq. (5)), the coefficients become less entangled, shown in FIG. 2b middle, although their represented conserved quantities are still dependent. Among the 4 CQs, only \(c=2\) are independent, since FIG. 2a bottom shows that there are 2 non-vanishing singular values \(s_{i}\). FIG. 2b bottom shows the final outputs: the two conserved quantities agree with our prior knowledge. What will happen if we choose the set of basis functions to be smaller or larger? (1) smaller: if we include polynomials up to the 1st or the 2nd order (\(K=3\) or \(K=9\)) only, then only \(H_{1}\) can be discovered by SID, while \(H_{2}\) is not discovered. (2) larger: if we instead include polynomials up to order 4,5,6 (\(K=34,55,83\)), then both \(H_{1}\) and \(H_{2}\) are still discovered, although the sparsification and independence process may take longer than with only 3rd order polynomials. Detailed results are included in Appendix A. The take-home message is that SID does not replace human scientists since it requires the input of basis functions (formulate hypothesis) from human scientists. SID is good at testing hypotheses (which could be technical and tedious), however, it is human scientists who formulate hypotheses (which requires creativity). Figure 2: Three-species Lokta-Volterra equation. SID correctly discovers that: (a) there are 4 CQs (top) in polynomials up to 3rd order, yet only 2 of them (bottom) are independent. (b) coefficients of conserved quantities \(\mathbf{\Theta}^{(i)}(i=1,2,3)\), with more interpretability. ### Fluid mechanics Arguably the biggest puzzle in fluid mechanics is turbulence [12; 13]. Turbulence, and chaos in general, are due to lack of sufficient conserved quantities. Therefore, studying conserved quantities of fluid systems is relevant to understanding turbulence. As a preliminary step, we study conserved quantities of a fluid element in ideal fluid (zero viscosity and incompressible). In 2D (3D), The fluid element is a triangle (tetrahedron), which is represented by its 3 (4) vertices [14]. Effectively, we can view the system as 3 (4) "free" particles, with the only constraint being that the area (volume) of the triangle (tetrahedron) should remain unchanged. The equations of motion are included in Appendix C, which appear a bit intimidating (especially for 3D). Fluid dynamics experts (including some authors of this paper) have attempted to find the conserved quantities with pencil and paper. Given the complexity of the calculations, it is impressive that they found 8 (12) conserved quantities for 2D (3D). However, they were unsure whether there were more undiscovered conserved quantities, and whether the discovered ones were in their simplest. So we turn to SID for help. The results below are for the basis function set selected to be polynomials up to 2nd (3rd) order for 2D (3D), but more polynomial orders are also tried in Appendix C. For the 2D case, SID finds 8 conserved quantities, agreeing with experts' expectation. Interestingly, the conserved quantities found by SID appear to be simpler. 
In fact, all the conserved quantities discovered by SID are 1st or 2nd order polynomials, while experts found a 4th order polynomial, which we find to be a combination of two of the 2nd order conserved quantities discovered by SID. For the 3D case, SID finds 14 conserved quantities, while experts found only 12. The two new conserved quantities can be interpreted as the angular momentum in the center of mass (COM) frame. They are non-trivial because it is easy to (falsely) think that the COM angular momentum is dependent on the angular momentum and the linear momentum [15]. Although humans alone will probably get the results right at the end of the day without SID, SID can take care of subtle details automatically, thus saving human experts' mental labor to a great extent.

### Atmospheric chemistry

We next apply SID to a truncated atmospheric chemistry model of photochemical ozone production [16], where an exotic new conserved quantity is found. This simplified dynamical system contains 11 species and 10 reactions involved in ozone formation, including NOx, organic, and radical chemistry [17]. A key characteristic of this system is the conservation of carbon and nitrogen atoms, \(H_{C}\) and \(H_{N}\), respectively. Though the species in this model contain two other elements, hydrogen and oxygen, neither is conserved, as \(H_{2}O\) is not one of the 11 species whose concentrations are tracked, and diatomic oxygen \(O_{2}\) is treated as an infinite source and sink due to its abundance. \(H_{C}\) and \(H_{N}\) are implied in the coefficients of a stoichiometric matrix \(\mathbf{B}\in\mathbb{Z}^{11\times 10}\) used in prior work to enforce conservation of atoms in machine learning surrogate models [16]. \(H_{C}\) and \(H_{N}\) can be represented by linear combinations of species concentrations, the coefficients of which form a basis for the null space of \(\mathbf{B}^{T}\). Further details are provided in Appendix B. We applied SID to simulation trajectories expecting to discover up to 2 conserved quantities which are linear combinations of concentrations. The training data are points on simulation trajectories at pressure \(P=0.95\) atm and temperature \(T=20.0^{\circ}\)C.

Figure 3: (a)(b) SID discovers 3 independent conserved quantities in the ozone photochemical production model. (c) The coefficients of these linear conserved quantities. The first two correspond to the known carbon and nitrogen conservation, while \(CQ_{3}\) is identified for the first time.

Figure 4: The evolution of concentrations in simulation. Under both conditions, \(CQ_{3}\) is well conserved.

As shown in FIG. 3, besides \(H_{C}\) and \(H_{N}\), SID surprisingly discovers a third conserved quantity \(CQ_{3}\) that is a linear combination of species concentrations (\(C_{X}\) denotes the concentration of \(X\)): \[\begin{split} CQ_{3}\approx&\ 6C_{O_{3}}-5C_{NO}+C_{NO_{2}}+3C_{HCHO}\\ &+9C_{HO_{2}}+6C_{HO_{2}H}+2C_{OH}+6C_{O}\\ &+4C_{HNO_{3}}-3C_{CO}-2.21C_{H_{2}}\end{split} \tag{9}\] This additional quantity is linearly independent of \(H_{C}\) and \(H_{N}\) and is not in the null space of \(\mathbf{B}^{T}\). \(CQ_{3}\) has a relative variation of less than \(0.1\%\) in 995 of 1000 simulated cases. Two representative simulation trajectories are shown in Figure 4, where \(CQ_{3}\) holds under different chemical and meteorological conditions. The evolving concentrations of \(O_{3}\), \(NO\), and \(NO_{2}\) are included as contrasts to the invariance of \(CQ_{3}\).
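Two of the checks described above are simple enough to restate as code. The sketch below is illustrative only; the trajectory array, the stoichiometric matrix \(\mathbf{B}\), and the coefficient vector are placeholders for the model data, which are not reproduced here. It computes the relative variation of a candidate linear conserved quantity along a trajectory (the criterion behind the \(0.1\%\) figure) and recovers element-conservation vectors from the left null space of \(\mathbf{B}\).

```python
import numpy as np

def relative_variation(traj, coeffs):
    """Relative variation of a linear candidate CQ along one trajectory.

    traj   : array (T, n_species) of concentrations over time
    coeffs : array (n_species,), e.g. the CQ_3 coefficients of Eq. (9)
    """
    q = traj @ coeffs
    return (q.max() - q.min()) / np.mean(np.abs(q))

def element_conservation(B, tol=1e-12):
    """Vectors w with w^T B = 0 (left null space of the species-by-reaction
    stoichiometric matrix B); each w gives a conserved combination w . c."""
    _, s, Vt = np.linalg.svd(B.T)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T          # shape (n_species, n_species - rank)
```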
We have not yet identified the underlying cause of \(CQ_{3}\), nor whether it is physically exact or only numerically approximate. We have ruled out a symmetry corresponding to hydrogen conservation: when explicitly incorporating the production of \(H_{2}O\) as an additional buildup species, SID identifies approximate hydrogen conservation as well as a fourth conserved quantity (see Appendix B.3). This implies that \(CQ_{3}\) might be a non-trivial conserved quantity that is worth thorough study in future work.

## IV Conclusions

We have presented an algorithm, SID, to automatically discover conserved quantities from dynamical equations. In contrast to previous black-box models, SID is guaranteed to be robust and interpretable thanks to its algorithmic simplicity. We demonstrate the power of SID on two examples in atmospheric chemistry and fluid mechanics, revealing new conserved quantities hitherto unknown to human experts. Although SID does not replace human scientists, it is a helpful assistant that can facilitate the discovery process. Promising future directions include applying SID to a broader range of applications and explicitly dealing with symmetries that users may want to impose.
2308.16832
Magnon Orbital Angular Momentum of Ferromagnetic Honeycomb and Zig-Zag Lattices
By expanding the gauge $\lambda_n(k)$ for magnon band $n$ in harmonics of momentum ${\bf k} =(k,\phi )$, we demonstrate that the only observable component of the magnon orbital angular momentum $O_n({\bf k})$ is its angular average over all angles $\phi$, denoted by $F_n(k)$. For both the FM honeycomb and zig-zag lattices, we show that $F_n(k)$ is nonzero in the presence of a Dzyalloshinzkii-Moriya (DM) interaction. The FM zig-zag lattice model with exchange interactions $0<J_1< J_2$ provides a new system where the effects of orbital angular momentum are observable. For the zig-zag model with equal exchange interactions $J_{1x}$ and $J_{1y}$ along the $x$ and $y$ axis, the magnon bands are degenerate along the boundaries of the Brillouin zone with $k_x-k_y =\pm \pi/a$ and the Chern numbers $C_n$ are not well defined. However, a revised model with $J_{1y}\ne J_{1x}$ lifts those degeneracy and produces well-defined Chern numbers of $C_n=\pm 1$ for the two magnon bands. When $J_{1y}=J_{1x}$, the thermal conductivity $\kappa^{xy}(T)$ of the FM zig-zag lattice is largest for $J_2/J_1>6$ but is still about four times smaller than that of the FM honeycomb lattice at high temperatures. Due to the removal of band degeneracies, $\kappa^{xy}(T)$ is slightly enhanced when $J_{1y}\ne J_{1x}$.
R. S. Fishman, T. Berlijn, J. Villanova, L. Lindsay
2023-08-31T16:08:20Z
http://arxiv.org/abs/2308.16832v2
# Magnon Orbital Angular Momentum of Ferromagnetic Honeycomb and Zig-Zag Lattices ###### Abstract By expanding the gauge \(\lambda_{n}(\mathbf{k})\) for magnon band \(n\) in harmonics of momentum \(\mathbf{k}=(k,\phi)\), we demonstrate that the only observable component of the magnon orbital angular momentum \(O_{n}(\mathbf{k})\) is its angular average over all angles \(\phi\), denoted by \(F_{n}(k)\). For both the FM honeycomb and zig-zag lattices, we show that \(F_{n}(k)\) is nonzero in the presence of a Dzyalloshinzkii-Moriya (DM) interaction. The FM zig-zag lattice model with exchange interactions \(0<J_{1}<J_{2}\) provides a new system where the effects of orbital angular momentum are observable. For the zig-zag model with equal exchange interactions \(J_{1x}\) and \(J_{1y}\) along the \(x\) and \(y\) axis, the magnon bands are degenerate along the boundaries of the Brillouin zone with \(k_{x}-k_{y}=\pm\pi/a\) and the Chern numbers \(C_{n}\) are not well defined. However, a revised model with \(J_{1y}\neq J_{1x}\) lifts those degeneracy and produces well-defined Chern numbers of \(C_{n}=\pm 1\) for the two magnon bands. When \(J_{1y}=J_{1x}\), the thermal conductivity \(\kappa^{xy}(T)\) of the FM zig-zag lattice is largest for \(J_{2}/J_{1}>6\) but is still about four times smaller than that of the FM honeycomb lattice at high temperatures. Due to the removal of band degeneracies, \(\kappa^{xy}(T)\) is slightly enhanced when \(J_{1y}\neq J_{1x}\). + Footnote †: corresponding author: [email protected] This manuscript has been authored in part by UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the US Department of Energy (DOE). The US government retains and the publisher, by accepting the article for publication, acknowledges that the US government retains a nonexclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for US government purposes. DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan ([http://energy.gov/downloads/doe-public-access-plan](http://energy.gov/downloads/doe-public-access-plan)) ## I Introduction The past 13 years have seen remarkable advances in the field of "magnonics" [1; 2; 3], which focuses on the quanta of spin excitations known as magnons. One of the main goals of magnonics is the storage and processing of information. Because they can travel over centimeter distances without incurring any costs in Joule heating [4], magnons offer many advantages over electrons in the next generation of technological devices. Due to their much lower velocities, magnons are also better suited than electrons to creating small devices. In quick succession, experimentalists have discovered that magnons can produce the thermal Hall [5; 6; 7; 8; 9], Seebeck [10; 11], and Nernst [12; 13; 14] effects. Almost all previous theoretical work in magnonics has been based on the Berry curvature, which produces a fictitious magnetic field in the presence of dipole-dipole or Dzyalloshinzkii-Moriya (DM) interactions, both associated with spin-orbit (SO) coupling [15; 16]. Because it was borrowed from the theory of electronic structure [17; 18; 19], the Berry phase is usually formulated in a semi-classical language. 
For a Bloch function \(|u_{n}(\mathbf{k})\rangle\) with energy \(\epsilon_{n}(\mathbf{k})=\hbar\omega_{n}(\mathbf{k})\), the Berry curvature \[\mathbf{\Omega}_{n}(\mathbf{k})=\frac{i}{2\pi}\bigg{\{}\frac{\partial}{ \partial\mathbf{k}}\times\langle u_{n}(\mathbf{k})|\frac{\partial}{\partial \mathbf{k}}|u_{n}(\mathbf{k})\rangle\bigg{\}} \tag{1}\] of a ferromagnetic (FM) insulator requires that a magnon wavepacket centered at \(\mathbf{r}_{c}\) obeys the equation of motion [17; 18; 19] \[\frac{d\mathbf{r}_{c}}{dt}=\frac{\partial\epsilon_{n}(\mathbf{k})}{\hbar\, \partial\mathbf{k}}-\frac{d\mathbf{k}}{dt}\times\mathbf{\Omega}_{n}(\mathbf{k }). \tag{2}\] Therefore, the Berry curvature causes the wavepacket to bend away from the expected direction \(\partial\epsilon_{n}(\mathbf{k})/\partial\mathbf{k}\) for a free magnon with \(\mathbf{\Omega}_{n}(\mathbf{k})=0\). Prior to the first observation of the magnon Hall effect in the FM insulator Lu\({}_{2}\)V\({}_{2}\)O\({}_{7}\)[5], it was predicted by Katsura _et al._[20] based on a Kubo formula for the temperature dependence of the thermal conductivity \(\kappa^{xy}(T)\). This Kubo formula involves an integral of the Berry curvature \(\Omega_{nz}(\mathbf{k})\) perpendicular to the sample over the first Brillouin zone (BZ): \[\kappa^{xy}(T)=-\frac{k_{\mathrm{B}}^{2}T}{2\pi\hbar}\sum_{n}\int_{BZ}d^{2}k\,c _{2}\big{(}\rho(\epsilon_{n}(\mathbf{k}))\big{)}\,\Omega_{nz}(\mathbf{k}), \tag{3}\] where \(\rho(\epsilon)=1/(\exp(\epsilon/k_{\mathrm{B}}T)-1)\) is the Boltzmann distribution with zero chemical potential for magnons and \(c_{2}(\rho)\) is defined in Section V. The above expression includes the effects of the magnon edge currents travelling around the sample as well as the effects of the magnon wavepacket "self-rotation" due to its orbital motion [21; 22]. The prediction and subsequent observation of the magnon Hall effect based on the Berry curvature was one of the great early achievements in the field of magnonics. Because of SO coupling, magnons carry both spin and orbital angular momentum. By constructing a Lagrangian that produces the correct equation of motion for the magnetization \({\bf M}_{i}=2\mu_{\rm B}{\bf S}_{i}\) at site \(i\), Tsukernik _et al._[23; 24] demonstrated that the orbital angular momentum (OAM) of magnons along \({\bf z}\) can be written in terms of Bloch functions as \[O_{n}({\bf k})=-\frac{i\hbar}{2}\biggl{\{}{\bf k}\times\langle u_{n}({\bf k})| \frac{\partial}{\partial{\bf k}}|u_{n}({\bf k})\rangle\biggr{\}}\cdot{\bf z}. \tag{4}\] But Tsukernik and coworkers failed to realize that the OAM defined above is not directly observable because, unlike the Berry curvature \({\bf\Omega}_{n}({\bf k})\), \(O_{n}({\bf k})\) is not gauge invariant [25]. This can be seen by expanding the spin Hamiltonian \(H\) to second order in powers of the deviation of the spin operators \({\bf S}_{i}\) from their equilibrium values. Then the Bloch functions \(|u_{n}({\bf k})\rangle\) satisfy the eigenvalue equation \[H_{2}|u_{n}({\bf k})\rangle=\epsilon_{n}({\bf k})|u_{n}({\bf k})\rangle. \tag{5}\] Because \(H_{2}\) is translationally invariant, a new Bloch function obtained by the transformation \[|u_{n}({\bf k})\rangle\rightarrow|u_{n}({\bf k})\rangle\,e^{-i\lambda_{n}({ \bf k})} \tag{6}\] also satisfies the above eigenvalue equation. Under this gauge transformation, the Berry curvature \({\bf\Omega}_{n}({\bf k})\) of Eq. (1) remains unchanged but the OAM of Eq. 
(4) changes to \[O_{n}({\bf k}) \to O_{n}({\bf k})+\frac{\hbar}{2}\biggl{(}k_{x}\frac{ \partial}{\partial k_{y}}-k_{y}\frac{\partial}{\partial k_{x}}\biggr{)}\lambda _{n}({\bf k}) \tag{7}\] \[=O_{n}({\bf k})+\frac{\hbar}{2}\frac{\partial}{\partial\phi} \lambda_{n}(k,\phi),\] where the gauge \(\lambda_{n}({\bf k})=\lambda_{n}(k,\phi)\) depends only on the band index \(n\) and the two-dimensional wavevector \({\bf k}=(k,\phi)\). Quantities like \(O_{n}({\bf k})\) that depend on a gauge \(\lambda_{n}({\bf k})\) cannot be physically observed [25]. However, the average of \(O_{n}({\bf k})\) over all angles \(\phi\) \[F_{n}(k)=\int_{0}^{2\pi}\frac{d\phi}{2\pi}\,O_{n}({\bf k}) \tag{8}\] does not depend on the gauge \(\lambda_{n}({\bf k})\)[26]. Therefore, \(F_{n}(k)\) can be physically observed. Regarding the magnon Hall effect as an indirect observation of the magnon OAM, then direct observation of the magnon OAM through the angular average \(F_{n}(k)\) should be possible by coupling magnons to other quasiparticles that carry OAM. For example, magnons may couple to chiral phonons [27] in crystals with broken inversion symmetry. High-energy electron beams separated into orbital components by a grating [28] may also couple directly to the magnon OAM. In this paper, we demonstrate that \(F_{n}(k)\) is the _only_ observable component of the OAM. We then evaluate \(F_{n}(k)\) for FM honeycomb (HC) and square zig-zag (ZZ) lattices with DM interaction \(D\). Along with their antiferromagnetic (AF) counterparts, these lattices are sketched in Fig. 1. Each model is described by the general Hamiltonian \[H = -\frac{1}{2}\sum_{i,j}J_{ij}\,{\bf S}_{i}\cdot{\bf S}_{j}-D\sum_{ i,j}({\bf S}_{i}\times{\bf S}_{j})\cdot{\bf z} \tag{9}\] \[- K\sum_{i}({\bf S}_{i}\cdot{\bf z})^{2},\] with the DM interaction \(-D({\bf S}_{i}\times{\bf S}_{j})\cdot{\bf z}\) oriented along bond \(i,j\) with spin \({\bf S}_{j}\) at the endpoint and spin \({\bf S}_{i}\) at the starting point of the arrow in Fig. 1. In all four lattices, DM interactions are allowed by broken inversion symmetry. For the HC lattices, inversion symmetry is broken by lattice topology. For the ZZ lattices, inversion symmetry is broken by the different environment to either side of an exchange path when \(|J_{2}/J_{1}|\neq 1\). While the FM lattices exhibit nonzero values of \(F_{n}(k)\) when \(D\neq 0\), the AF lattices do not: \(F_{n}(k)=0\) even when \(D\neq 0\). For all four models, easy-axis anisotropy \(K\) along \({\bf z}\) is required to keep the DM interaction from tilting the spins away from the \(z\) axis. In all four lattices, the DM interactions only act between sites \(r\) of the same kind in each magnetic unit cell. For the FM lattices of Figs. 1(a) and (c), the DM interactions act between spins of type 1 or of type 2. Similarly for the AF HC lattice in Fig. 1(b). For the AF ZZ lattice in Fig. 1(d), the DM interactions act between spins of type 1, 2, 3, or 4. OAM was earlier predicted [29] to appear in the two Figure 1: Case studies: HC lattices with (a) FM interaction \(J>0\) and (b) AF interaction \(J<0\) with two sites in the magnetic unit cell each; ZZ lattices with (c) FM interactions \(0<J_{1}<J_{2}\) and two sites in the magnetic unit cell, and (d) AF interaction \(J_{1}<0\) and FM interaction \(J_{2}>0\) and four sites in the magnetic unit cell. The DM interaction \(D\) and its orientation is shown by the dashed line. Up spins are solid circles and down spins are empty circles. ZZ lattices of Figs. 
1(c) and (d) with \(J_{2}\neq J_{1}\) but \(D=0\). Unfortunately, OAM without DM interactions cannot be observed due to its lack of gauge invariance [26]. In this paper, we find that a FM ZZ lattice with a nonzero DM interaction \(D\neq 0\) as in Fig. 1(c) creates a new class of materials where the effects of OAM are observable. Due to the inequivalent DM interactions on either side of each bond, the FM ZZ lattice then circumvents the "no-go" theorem of Refs. [20] and [30], which is based on the edge sharing of equivalent cells. However, because the magnon bands are always degenerate along the upper left and lower right boundaries of the BZ with \(k_{x}-k_{y}=\pm\pi/a\), the Chern numbers \(C_{n}\) of the magnon bands are not well defined. The degeneracy of the FM ZZ bands along the upper left and lower right boundaries of the BZ can be lifted by allowing the FM exchange interaction \(J_{1}\) along the \(x\) and \(y\) axis (called \(J_{1x}\) and \(J_{1y}\)) to be slightly different. The Chern numbers of the magnon bands are then well defined and given by \(\pm 1\). This paper is divided into five sections. Section II demonstrates that \(F_{n}(k)\) is the only component of \(O_{n}(k)\) that is physically observable. Sections III reviews results for the FM HC model. Section IV discusses the FM ZZ model as well as the revised model with \(J_{1y}/J_{1x}\neq 1\). Section V compares the predicted magnon Hall effects of the FM HC and ZZ models and contains a conclusion. To focus attention on the FM cases where OAM can be observed, we treat the AF HC and ZZ lattices in Appendices B and C. Since dipole-dipole interactions [15] are neglected in this paper, DM interactions provide the only source of SO coupling. ## II Components of the OAM To set the stage for the results provided in the next two sections, we briefly review the spin-wave (SW) formalism for the OAM and Berry curvature, specializing to collinear spin systems. Rotated into the local spin reference frame with \(\bar{\mathbf{z}}_{i}\) pointing along the spin direction, the spins \(\bar{\mathbf{S}}_{i}\) are given in terms of the Boson SW creation and annihilation operators \(a_{i}\) and \(a_{i}^{\dagger}\) by \(\bar{S}_{iz}=S-a_{i}^{\dagger}a_{i}\), \(\bar{S}_{i+}=S_{ix}n_{iz}+iS_{iy}=\sqrt{2}Sa_{i}\), and \(\bar{S}_{i-}=S_{ix}n_{iz}-iS_{iy}=\sqrt{2}Sa_{i}^{\dagger}\), where \(n_{iz}=1\) for up spins and \(n_{iz}=-1\) for down spins. Then the Hamiltonian \(H\) can be expanded in powers of the Fourier transformed SW operators \(a_{\mathbf{k}}^{(r)}\) and \(a_{\mathbf{k}}^{(r)\dagger}\) (\(r\) is one of the \(M\) sites in the magnetic unit cell) as \(H=E_{0}+H_{2}+\ldots\) with second-order Hamiltonian \[H_{2}={\sum_{\mathbf{k}}}^{\prime}\mathbf{v}_{\mathbf{k}}^{\dagger}\cdot \underline{L}(\mathbf{k})\cdot\mathbf{v}_{\mathbf{k}}, \tag{10}\] where the prime indicates that the summation over \(\mathbf{k}\) is restricted to the first BZ of the magnetic unit cell. The \(2M\)-dimensional vector operators \[\mathbf{v}_{\mathbf{k}}=(a_{\mathbf{k}}^{(1)},a_{\mathbf{k}}^{(2)},\ldots a_{ \mathbf{k}}^{(M)},a_{-\mathbf{k}}^{(1)\dagger},a_{-\mathbf{k}}^{(2)\dagger}, \ldots a_{-\mathbf{k}}^{(M)\dagger}) \tag{11}\] satisfy \([\mathbf{v}_{\mathbf{k}},\mathbf{v}_{\mathbf{k}^{\prime}}^{\dagger}]= \underline{N}\,\delta_{\mathbf{k},\mathbf{k}^{\prime}}\), where \(\underline{N}\) is defined in terms of the \(M\)-dimensional identity matrix \(\underline{I}\) by \[\underline{N}=\left(\begin{array}{cc}\underline{I}&0\\ 0&-\underline{I}\end{array}\right). 
\tag{12}\] The \(2M\times 2M\) matrix \(\underline{L}(\mathbf{k})\) can be compactly written \[\underline{L}(\mathbf{k})=\left(\begin{array}{cc}\underline{P}(\mathbf{k})& \underline{Q}(\mathbf{k})\\ \underline{Q}^{\prime}(\mathbf{k})&\underline{\underline{P}}^{\prime}( \mathbf{k})\end{array}\right), \tag{13}\] where \(\underline{P}(\mathbf{k})\), \(\underline{Q}(\mathbf{k})\), \(\underline{P}^{\prime}(\mathbf{k})\), and \(\underline{Q}^{\prime}(\mathbf{k})\) are \(M\times M\) matrices. Because \(\underline{L}(\mathbf{k})\) is Hermitian, \[\underline{P}^{\prime}(\mathbf{k})=\underline{P}(-\mathbf{k})^{\star}, \tag{14}\] \[\underline{Q}^{\prime}(\mathbf{k})=\underline{Q}(-\mathbf{k})^{\star}. \tag{15}\] These relations will prove useful in the following two sections and in Appendices B and C. Within the quantum SW notation, the semiclassical eigenvalue relation of Eq. (5) is replaced by \[\underline{\Delta}(\mathbf{k})\cdot\underline{X}^{-1}(\mathbf{k})=\hbar \omega_{n}(\mathbf{k})\,\underline{X}^{-1}(\mathbf{k}), \tag{16}\] where \(\underline{\Delta}(\mathbf{k})=\underline{N}\cdot\underline{L}(\mathbf{k})\) is non-Hermitian. Hence, the Bloch functions \(|u_{n}(\mathbf{k})\rangle\) are replaced by the complex matrices \(X^{-1}(\mathbf{k})_{rn}\), which can be considered the \(n\)th eigenfunctions of the \(2M\times 2M\) energy matrix \(\underline{\Delta}(\mathbf{k})\). In the quantum SW language, the Berry curvature and OAM are given by \[\mathbf{\Omega}_{n}(\mathbf{k})=\frac{i}{2\pi}\sum_{r=1}^{M} \biggl{\{}\frac{\partial X^{-1}(\mathbf{k})_{rn}^{\star}}{\partial\mathbf{k}} \times\frac{\partial X^{-1}(\mathbf{k})_{rn}}{\partial\mathbf{k}}\] \[-\frac{\partial X^{-1}(\mathbf{k})_{r+M,n}^{\star}}{\partial \mathbf{k}}\times\frac{\partial X^{-1}(\mathbf{k})_{r+M,n}}{\partial\mathbf{k}} \biggr{\}}, \tag{17}\] \[O_{n}(\mathbf{k})=\frac{\hbar}{2}\sum_{r=1}^{M}\Bigl{\{}X^{-1}( \mathbf{k})_{rn}\,\hat{l}_{z\mathbf{k}}\,X^{-1}(\mathbf{k})_{rn}^{\star}\] \[-X^{-1}(\mathbf{k})_{r+M,n}\,\hat{l}_{z\mathbf{k}}\,X^{-1}( \mathbf{k})_{r+M,n}^{\star}\Bigr{\}}, \tag{18}\] where \[\hat{l}_{z\mathbf{k}}=-i\biggl{(}k_{x}\frac{\partial}{\partial k_{y}}-k_{y} \frac{\partial}{\partial k_{x}}\biggr{)} \tag{19}\] is the OAM operator. The normalization condition for the Bloch functions \(\langle u_{n}(\mathbf{k})|u_{n}(\mathbf{k})\rangle=1\) is then replaced by the condition \[\sum_{r=1}^{M}\Bigl{\{}|X^{-1}(\mathbf{k})_{rn}|^{2}-|X^{-1}(\mathbf{k})_{r+M,n}|^{2}\Bigr{\}}=1 \tag{20}\] for the complex matrices \(X^{-1}(\mathbf{k})_{rn}\) and \(X^{-1}(\mathbf{k})_{r+M,n}\). With the Berry curvature defined above, the Chern number for band \(n\) is given by \[C_{n}=\int_{BZ}d^{2}k\,\Omega_{nz}({\bf k}), \tag{21}\] where \({\bf k}\) is integrated over the first BZ [31]. A customary factor of \(1/2\pi\) is missing from this expression for \(C_{n}\) because it is included in Eqs. (1) and (17) for the Berry phases. The Chern number \(C_{n}\) takes an integer value so long as the magnons in band \(n\) are nondegenerate, i.e. disconnected from all other magnons in frequency for all \({\bf k}\). It is physically associated with edge modes [21; 22; 32; 33] whose dispersion bridges the gap between bulk magnon bands. Since the sum of Berry curvatures over all bands vanishes, \(\sum_{n}C_{n}=0\). 
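Equation (16) is a non-Hermitian eigenvalue problem because \(\underline{\Delta}(\mathbf{k})=\underline{N}\cdot\underline{L}(\mathbf{k})\). As an illustrative sketch (not taken from the text), the magnon frequencies can be obtained numerically from any user-supplied \(2M\times 2M\) Hermitian \(\underline{L}(\mathbf{k})\) with the block structure of Eq. (13); note that the eigenvectors returned by a generic solver still have to be rescaled to satisfy the para-unitary normalization of Eq. (20) before they are used in Eqs. (17) and (18).

```python
import numpy as np

def magnon_frequencies(L_k):
    """Positive-energy solutions of Eq. (16), with Delta(k) = N . L(k).

    L_k : Hermitian (2M x 2M) array with the block form of Eq. (13).
    Returns the M magnon energies hbar*omega_n(k) in ascending order and
    the raw (un-normalized) eigenvector matrix X^{-1}(k).
    """
    M = L_k.shape[0] // 2
    N = np.diag([1.0] * M + [-1.0] * M)     # metric of Eq. (12)
    w, X_inv = np.linalg.eig(N @ L_k)       # non-Hermitian, so eig rather than eigh
    # For a stable collinear state the spectrum is real and comes in +/- pairs;
    # keep the positive branch.
    pos = np.sort(w.real[w.real > 1e-12])
    return pos[:M], X_inv
```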
In the quantum language, each eigenfunction \(X^{-1}({\bf k})_{rn}\) can be multiplied by an arbitrary phase factor so that \[X^{-1}({\bf k})_{rn}\to X^{-1}({\bf k})_{rn}\,e^{-i\lambda_{n}({\bf k})}, \tag{22}\] where the gauge \(\lambda_{n}({\bf k})\) may depend on band index \(n\) and \({\bf k}=(k,\phi)\) but not on site \(r\). Of course, \(\lambda_{n}(k,\phi)\) must also be a single-valued function of \(\phi\) so that \(\lambda_{n}(k,0)=\lambda_{n}(k,2\pi)\). Under a gauge transformation, the OAM changes by \[O_{n}({\bf k})\to O_{n}({\bf k})+\frac{\hbar}{2}\frac{\partial}{\partial\phi} \lambda_{n}(k,\phi), \tag{23}\] in agreement with the semiclassical expression of Eq. (7). Now expand \(O_{n}(k,\phi)\) in powers of \(\cos l\phi\) and \(\sin l\phi\) so that \[O_{n}(k,\phi)=\sum_{l=0}\Bigl{\{}A_{ln}(k)\cos l\phi+B_{ln}(k)\sin l\phi \Bigr{\}}. \tag{24}\] Following a gauge transformation, the \(rhs\) of Eq. (23) becomes \[\sum_{l=0}\Bigl{\{}A_{ln}(k)\cos l\phi+B_{ln}(k)\sin l\phi\Bigr{\}}+ \frac{\hbar}{2}\frac{\partial}{\partial\phi}\lambda_{n}(k,\phi)\] \[=A_{0n}(k)+\sum_{l=1}\Bigl{\{}A_{ln}(k)\cos l\phi+B_{ln}(k)\sin l \phi\Bigr{\}}\] \[+\frac{\hbar}{2}\frac{\partial}{\partial\phi}\lambda_{n}(k,\phi)= A_{0n}(k), \tag{25}\] where we have set \[\lambda_{n}(k,\phi) =-\frac{2}{\hbar}\sum_{l=1}\frac{1}{l}\Bigl{\{}A_{ln}(k)\sin l\phi \tag{26}\] \[-B_{ln}(k)\cos l\phi\Bigr{\}}.\] The \(l=0\) component cannot be included on the \(rhs\) because it would produce a term proportional to \(\phi\), violating the assumption that \(\lambda_{n}(k,0)=\lambda_{n}(k,2\pi)\). Hence, the appropriate gauge \(\lambda_{n}(k)\) can be used to remove all components of the OAM except for \[A_{0n}(k)=F_{n}(k)\equiv\int_{0}^{2\pi}\frac{d\phi}{2\pi}\,O_{n}({\bf k}). \tag{27}\] Not only does this prove that \(F_{n}(k)\) is observable but it also demonstrates that \(F_{n}(k)\) is the _only_ observable component of the OAM. In the absence of DM interactions, the OAM \(O_{n}({\bf k})\) is an odd function of \({\bf k}\) so that \(O_{n}({\bf k})=-O_{n}(-{\bf k})\) and \(F_{n}(k)=0\). When the DM interaction \(D\) enters \(O_{n}({\bf k})\) linearly, then \(O_{n}({\bf k})=O_{n}(-{\bf k})\) is an even function of \({\bf k}\) and \(F_{n}(k)\) can be nonzero. More generally, we expand \(O_{n}({\bf k})\) in powers of the DM interaction as \[O_{n}({\bf k}) = O_{n}^{(0)}({\bf k})+D\,O_{n}^{(1)}({\bf k})+D^{2}\,O_{n}^{(2)}( {\bf k}) \tag{28}\] \[+D^{3}\,O_{n}^{(3)}({\bf k})+\ldots.\] Then, the even components \(O_{n}^{(2m)}({\bf k})=-O_{n}^{(2m)}(-{\bf k})\) are odd in \({\bf k}\) and the odd components \(O_{n}^{(2m+1)}({\bf k})=O_{n}^{(2m+1)}(-{\bf k})\) are even in \({\bf k}\). Of course, only the odd components \(O_{n}^{(2m+1)}({\bf k})\) contribute to \(F_{n}(k)\). If we then write \[O_{n}({\bf k})=O_{n}^{({\rm odd})}({\bf k})+O_{n}^{({\rm even})}({\bf k}), \tag{29}\] only \(O_{n}^{({\rm even})}({\bf k})\) (containing terms of order \(D^{2m+1}\)) contributes to the physically measureable \(F_{n}(k)\). Since only wavevectors \({\bf k}\) within the first BZ of the magnetic unit cell enter Eq. (10), we use periodic boundary conditions to evaluate the integral over angles \(\phi\) in \(F_{n}(k)\) when required to translate wavevectors \({\bf k}\) outside the first BZ to wavevectors inside the first BZ. Alternatively, \(F_{n}(k)\) can be evaluated by tiling all of \({\bf k}\) space with the first BZ of \(O_{n}^{({\rm even})}({\bf k})\). 
For the FM ZZ lattice, Appendix A shows that the resulting pattern for \(O_{n}^{({\rm tiled})}({\bf k})\) is both periodic in \({\bf k}\) and continuous as a function of \({\bf k}\) at the zone boundaries. We shall give examples of the tiling procedure for the FM HC and ZZ models in the following sections. Nevertheless, bear in mind that \(O_{n}^{({\rm tiled})}({\bf k})\) is not unique and is just a tool to evaluate the physically observable quantity \(F_{n}(k)\). ## III FM HC lattice Most details of the solution for the FM HC lattice sketched in Fig. 1(a) with exchange \(J>0\) and DM interaction \(D\) between like sites were previously provided in Ref. [26]. The \(4\times 4\) matrix \(\underline{L}({\bf k})\) defined by Eq. (10) is given by \[\underline{L}({\bf k})=\frac{3JS}{2}\left(\begin{array}{cccc}A_{ \bf k}^{-}&-\Gamma_{\bf k}^{*}&0&0\\ -\Gamma_{\bf k}&A_{\bf k}^{*+}&0&0\\ 0&0&A_{\bf k}^{+}&-\Gamma_{\bf k}^{*}\\ 0&0&-\Gamma_{\bf k}&A_{\bf k}^{*}\end{array}\right), \tag{30}\] where \(A_{\bf k}^{\pm}=1\pm d\,\Theta_{\bf k}+\kappa\), \(d=-2D/3J\), \(\kappa=2K/3J\), \[\Theta_{\bf k}=4\cos(3k_{x}a/2)\sin(\sqrt{3}k_{y}a/2)-2\sin(\sqrt{3}k_{y}a), \tag{31}\] and \[\Gamma_{\bf k}=\frac{1}{3}\Big{\{}e^{ik_{x}a}+2e^{-ik_{x}a/2}\cos(\sqrt{3}k_{y }a/2)\Big{\}}. \tag{32}\] We caution the reader that matrix element \(A_{\bf k}^{\pm}\), DM parameter \(d\), and anisotropy parameter \(\kappa\) shall be defined differently for the FM ZZ lattice in the next section. Since \(\Theta_{-\bf k}=-\Theta_{\bf k}\) and \(\Gamma_{-\bf k}=\Gamma_{\bf k}^{*}\), it can be easily shown that the upper and lower quadrants of \(\underline{L}({\bf k})\) satisfy Eq. (14) or that \(\underline{P}^{\prime}({\bf k})=\underline{P}(-{\bf k})^{\star}\). Magnon energies for bands 1 and 2 are given by \[\hbar\omega_{1}({\bf k})=3JS(1-\eta_{\bf k}+\kappa), \tag{33}\] \[\hbar\omega_{2}({\bf k})=3JS(1+\eta_{\bf k}+\kappa), \tag{34}\] where \(\eta_{\bf k}=\sqrt{|\Gamma_{\bf k}|^{2}+(d\,\Theta_{\bf k})^{2}}\). Notice that these energies are simply shifted by the easy-axis anisotropy \(\kappa\). The magnon band gap is given by \[\hbar\Delta\omega({\bf k})=\hbar(\omega_{2}({\bf k})-\omega_{1}({\bf k}))=6 JS\eta_{\bf k}. \tag{35}\] The normalized gap \[\delta({\bf k})\equiv\frac{\hbar\Delta\omega({\bf k})}{6JS}=2\eta_{\bf k}=2 \sqrt{|\Gamma_{\bf k}|^{2}+(d\,\Theta_{\bf k})^{2}} \tag{36}\] is plotted versus \({\bf k}\) for \(d=0\) and \(-0.4\) in Fig. 2. When \(d=0\), the magnon gap vanishes in triangular \({\bf k}\)-space regions around \({\bf k}=0\). When \(d=-0.4\), the smallest normalized gap \(\delta({\bf k})\) is \(2/3\). In fact, any nonzero \(d\) introduces a gap such that the \(\delta({\bf k})>0\) and modes 1 and 2 are distinct for all \({\bf k}\). Using a particularly simple form for the gauge, we earlier found [26] (correcting a minus sign) that \[O_{1}({\bf k})=\frac{\hbar}{4}\bigg{\{}1-\frac{d\,\Theta_{\bf k}}{\eta_{\bf k }}\bigg{\}}\frac{\Gamma_{\bf k}}{|\Gamma_{\bf k}|}\,\hat{l}_{z{\bf k}}\,\frac{ \Gamma_{\bf k}^{*}}{|\Gamma_{\bf k}|}, \tag{37}\] \[O_{2}({\bf k})=\frac{\hbar}{4}\bigg{\{}1+\frac{d\,\Theta_{\bf k}}{\eta_{\bf k }}\bigg{\}}\frac{\Gamma_{\bf k}}{|\Gamma_{\bf k}|}\,\hat{l}_{z{\bf k}}\,\frac{ \Gamma_{\bf k}^{*}}{|\Gamma_{\bf k}|}. \tag{38}\] Unlike the mode frequencies, however, solutions for the OAM are not unique. Since the \(d=0\) portion of the OAM is not observable, the first terms in the brackets of Eqs. (37) and (38) can be neglected. 
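As a quick numerical aside before completing the derivation, the quantities entering Eqs. (31)-(36) are straightforward to evaluate on a \(\mathbf{k}\) grid. The sketch below is illustrative only; the lattice constant is set to \(a=1\) and the values of \(J\), \(S\), \(d\), and \(\kappa\) are placeholders.

```python
import numpy as np

def Gamma(kx, ky, a=1.0):
    """Gamma_k of Eq. (32)."""
    return (np.exp(1j * kx * a)
            + 2 * np.exp(-1j * kx * a / 2) * np.cos(np.sqrt(3) * ky * a / 2)) / 3

def Theta(kx, ky, a=1.0):
    """Theta_k of Eq. (31)."""
    return (4 * np.cos(3 * kx * a / 2) * np.sin(np.sqrt(3) * ky * a / 2)
            - 2 * np.sin(np.sqrt(3) * ky * a))

def hc_bands(kx, ky, J=1.0, S=0.5, d=-0.1, kappa=0.0):
    """Magnon energies of Eqs. (33)-(34) and normalized gap of Eq. (36)."""
    eta = np.sqrt(np.abs(Gamma(kx, ky)) ** 2 + (d * Theta(kx, ky)) ** 2)
    w1 = 3 * J * S * (1 - eta + kappa)
    w2 = 3 * J * S * (1 + eta + kappa)
    return w1, w2, 2 * eta
```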
Because \(\eta_{\bf k}\) is an even function of \(d\), we then find \[O_{1}^{\rm(even)}({\bf k})=-O_{2}^{\rm(even)}({\bf k})=-\frac{d\,\hbar}{4} \frac{\Theta_{\bf k}}{\eta_{\bf k}}\frac{\Gamma_{\bf k}}{|\Gamma_{\bf k}|}\, \hat{l}_{z{\bf k}}\,\frac{\Gamma_{\bf k}^{*}}{|\Gamma_{\bf k}|} \tag{39}\] and \[F_{1}(k)=-F_{2}(k)=-\frac{d\,\hbar}{4}\int\frac{d\phi}{2\pi}\frac{\Theta_{\bf k }}{\eta_{\bf k}}\frac{\Gamma_{\bf k}}{|\Gamma_{\bf k}|}\,\hat{l}_{z{\bf k}}\, \frac{\Gamma_{\bf k}^{*}}{|\Gamma_{\bf k}|}. \tag{40}\] Figure 3: The pattern \(O_{1}^{\rm(tiled)}({\bf k})/\hbar\) for the FM HC lattice with \(d=-0.1\). The first BZ of the magnetic unit cell is denoted by the solid white lines. Figure 3 uses \(O_{1}^{\rm(even)}({\bf k})/\hbar\) with \(d=-0.1\) to construct \(O_{1}^{\rm(tiled)}({\bf k})/\hbar\). Notice that the tiled pattern is both a periodic function of \({\bf k}\) and a continuous function of \({\bf k}\) at the boundaries of the first BZ, denoted by the solid lines. The solutions for \(F_{1}(k)/\hbar\) for \(d\) running from \(-0.01\) to \(-0.1\) are plotted in Fig. 4. We find that \(F_{1}(k)/\hbar\) grows quite rapidly with \(d\) and peaks when \(ka/2\pi=2\sqrt{3}/9=0.385\), which coincides with the corners of the first BZ. While Fig. 3 suggests that the OAM is largest at the corners of the BZ and Fig. 4 does not disavow that claim, we remind the reader that Fig. 4 only states that the angular average of the OAM is largest when \(ka/2\pi\) intercepts the corners of the BZ. It does _not_ imply that the largest OAM lies at the BZ corners. Again correcting a minus sign, the Berry curvature of the FM HC lattice is given analytically by \[\Omega_{1z}({\bf k})=-\Omega_{2z}({\bf k})\] \[=i\frac{d}{4\pi}\,\frac{\Gamma_{\bf k}^{*}}{|\Gamma_{\bf k}|} \Bigg{\{}\frac{\partial\Theta_{\bf k}/\eta_{\bf k}}{\partial{\bf k}}\times \frac{\partial\Gamma_{\bf k}/|\Gamma_{\bf k}|}{\partial{\bf k}}\Bigg{\}}\cdot{ \bf z}. \tag{41}\] For the lower band, \(\Omega_{1z}({\bf k})\) is plotted in Fig. 5 for \(d=-0.1\) and \(-0.4\). The Chern numbers \(C_{n}\) of the lower and upper magnon bands are \(-1\) and \(+1\), respectively, for all \(d<0\). The Chern number is an integer due to the nonzero gap [31] between the magnon modes for all \({\bf k}\). ## IV FM ZZ lattice The FM ZZ lattice is sketched in Fig. 1(c) with exchange interactions \(0<J_{1}<J_{2}\) and DM interaction \(D\) between like sites along \((1,-1)\). Then \[\underline{L}({\bf k})=(J_{1}+J_{2})S\left(\begin{array}{cccc}A_{\bf k}^{+} &-\Psi_{\bf k}^{*}&0&0\\ -\Psi_{\bf k}&A_{\bf k}^{-}&0&0\\ 0&0&A_{\bf k}^{-}&-\Psi_{\bf k}^{*}\\ 0&0&-\Psi_{\bf k}&A_{\bf k}^{+}\end{array}\right), \tag{42}\] where \(A_{\bf k}^{\pm}=1\pm d\,\tau_{\bf k}+\kappa\), \(d=-2D/(J_{1}+J_{2})\), \(\kappa=K/(J_{1}+J_{2})\), \(\tau_{\bf k}=\sin(k_{x}a-k_{y}a)\), and \[\Psi_{\bf k}=\frac{J_{1}\xi_{\bf k}^{*}+J_{2}\xi_{\bf k}}{2(J_{1}+J_{2})} \tag{43}\] with \(\xi_{\bf k}=\exp(ik_{x}a)+\exp(ik_{y}a)\). Using \(\tau_{-\bf k}=-\tau_{\bf k}\) and \(\Psi_{-\bf k}=\Psi_{\bf k}^{*}\), it is easy to verify that the upper and lower quadrants of \(\underline{L}({\bf k})\) satisfy Eq. (14) or that \(\underline{P}^{\prime}({\bf k})=\underline{P}(-{\bf k})^{*}\). The magnon energies are given by \[\hbar\omega_{1}({\bf k})=2(J_{1}+J_{2})S(1-\mu_{\bf k}+\kappa), \tag{44}\] \[\hbar\omega_{2}({\bf k})=2(J_{1}+J_{2})S(1+\mu_{\bf k}+\kappa), \tag{45}\] where \(\mu_{\bf k}=\sqrt{|\Psi_{\bf k}|^{2}+(d\,\tau_{\bf k})^{2}}\). 
As for the FM HC lattice, the magnon bands are just shifted by \(\kappa\). The gap between the magnons is then given by \[\hbar\Delta\omega({\bf k})=\hbar(\omega_{2}({\bf k})-\omega_{1}({\bf k}))=4( J_{1}+J_{2})S\mu_{\bf k}, \tag{46}\] with a normalized gap \[\delta({\bf k})\equiv\frac{\hbar\Delta\omega({\bf k})}{2(J_{1}+J_{2})S}=2\mu_ {\bf k}=2\sqrt{|\Psi_{\bf k}|^{2}+(d\,\tau_{\bf k})^{2}} \tag{47}\] that only depends on \(d\) and \(r=J_{2}/J_{1}\). The normalized gap is plotted versus \({\bf k}\) on the top two panels of Fig. 6 for \(d=0\) and \(r=8\) or \(1\). For \(r>1\), \(\delta({\bf k})=0\) at the upper left and lower right borders of the BZ, which is sketched by the rotated square. For \(r=1\), \(\delta({\bf k})=0\) at all four borders of the BZ. The bottom two panels of Fig. 6 plot \(\delta({\bf k})\) for the same values of \(r\) with \(d=-0.4\). Even for \(d\neq 0\), the gap \(\delta({\bf k})\) continues to vanish at the upper left and lower right boundaries of the BZ with \(k_{x}-k_{y}=\pm\pi/a\) because \(\tau_{\bf k}=0\). The solutions for \(O_{n}^{\rm(even)}({\bf k})\) and \(F_{n}(k)\) are formally quite similar to those for the FM HC lattice in Eq. (39) and (40): \[O_{1}^{\rm(even)}({\bf k})=-O_{2}^{\rm(even)}({\bf k})=\frac{d\,\hbar}{4}\frac{ \tau_{\bf k}}{\mu_{\bf k}}\frac{\Psi_{\bf k}}{|\Psi_{\bf k}|}\,\hat{l}_{z{\bf k }}\,\frac{\Psi_{\bf k}^{*}}{|\Psi_{\bf k}|} \tag{48}\] and \[F_{1}(k)=-F_{2}(k)=\frac{d\,\hbar}{4}\int\frac{d\phi}{2\pi}\frac{\tau_{\bf k}} {\mu_{\bf k}}\frac{\Psi_{\bf k}}{|\Psi_{\bf k}|}\,\hat{l}_{z{\bf k}}\,\frac{\Psi _{\bf k}^{*}}{|\Psi_{\bf k}|}. \tag{49}\] The pattern \(O_{1}^{\rm(tiled)}({\bf k})/\hbar\) is plotted in Fig. 7 for \(d=-0.4\) and \(r=50\), \(8\), \(1.5\), and \(1.1\). For \(r=50\) or \(8\), \(O_{1}^{\rm(tiled)}({\bf k})/\hbar\) contains wide troughs of minima close to zero near avenues of maxima close to \(0.08\) or \(0.11\,\hbar\), both along \((1,1)\). Narrow lanes of vanishing OAM appear at the \(k_{x}-k_{y}=\pm\pi/a\) boundaries of the BZ, also along \((1,1)\). For \(r=1.5\) and \(1.1\), peaked regions in \(O_{1}^{\rm(tiled)}({\bf k})/\hbar\) appear at the \(k_{x}+k_{y}=\pm\pi/a\) boundaries of the BZ along \((1,-1)\). These regions become increasingly narrow as \(r\to 1\). The observable function \(F_{n}(k)/\hbar\) for the FM ZZ lattice is plotted in Fig. 8 for \(d=-0.4\) and these same four values of \(r\). For \(r=1.5\) and \(1.1\), the large maxima of \(F_{1}(k)/\hbar\) at \(ka/2\pi\approx 0.4\) are associated with the peaked regions of \(O_{1}^{\rm(tiled)}({\bf k})/\hbar\) at the lower left and upper right boundaries of the BZ (\(k_{x}+k_{y}=\pm\pi/a\)) in Fig. 7. The observable portion of the OAM becomes increasingly narrow and disappears as \(r\to 1\). Since the magnon Hall effect may be observed in the FM ZZ lattice with DM interaction, we also provide results for its Berry curvature. For the lower band, the Berry curvature along \(\mathbf{z}\) may be written as \[\Omega_{1z}(\mathbf{k})=-\Omega_{2z}(\mathbf{k})\] \[=-i\frac{d}{4\pi}\frac{\Psi_{\mathbf{k}}^{*}}{|\Psi_{\mathbf{k}} |}\left\{\frac{\partial\,\tau_{\mathbf{k}}/\mu_{\mathbf{k}}}{\partial\mathbf{ k}}\times\frac{\partial\,\Psi_{\mathbf{k}}/|\Psi_{\mathbf{k}}|}{\partial\mathbf{k}} \right\}\cdot\mathbf{z}, \tag{50}\] which is formally similar to the expression for the Berry curvature of the FM HC lattice given by Eq. (41). Using the same parameters as in Figs. 7 and 8, we plot \(\Omega_{1z}(\mathbf{k})\) in Fig. 9. Comparing Figs. 
6 and 9 reveals that the Berry curvature vanishes at the upper left and lower right boundaries of the BZ with \(k_{x}-k_{y}=\pm\pi/a\), where the magnon gap \(\Delta(\mathbf{k})\) also vanishes. For any \(r\), the DM interaction does not affect the gap at \(k_{x}=k_{y}=\pm 0.5\pi/a\), where \(\tau_{\mathbf{k}}=0\) and \(\delta(\mathbf{k})=2|\Psi_{\mathbf{k}}|=2|1-r|/(1+r)\). So at \(r=1\), the gap between the bands closes at those two points. Close to \(r=1\), strong peaks in the Berry curvature are found at those same \(\mathbf{k}\) points in Fig. 8. However, the Berry curvature disappears when the exchange becomes homogeneous as \(r\to 1\). Evaluating the Chern number \(C_{n}\) for the FM ZZ model by integrating \(\Omega_{nz}(\mathbf{k})\) over all \(\mathbf{k}\) within the first BZ zone, we obtain the surprising result that \(C_{n}\) are non-integer. Recall that the Chern numbers for the FM HC lattice are \(\pm 1\) for all \(d\). The Chern numbers for the FM ZZ model are non-integer due to the degeneracy of the magnon bands at the upper right and lower left boundaries of the BZ. As mentioned earlier, the degeneracy of the magnon bands along the BZ boundaries can be lifted by allowing the exchange \(J_{1x}\) along the \(x\) axis to be different than the exchange \(J_{1y}\) along the \(y\) axis where \(J_{1}<J_{2}\) is the smaller of the two FM interactions in the ZZ Figure 7: The pattern \(O_{1}^{\rm(tiled)}(\mathbf{k})/\hbar\) for the FM ZZ lattice with \(d=-0.4\) and \(r=\) (a) 50, (b) 8, (c) 1.5, and (d) 1.1. The first BZ of the magnetic unit cell is denoted by the solid white lines. Figure 6: The normalized gap \(\delta(\mathbf{k})=2\mu_{\mathbf{k}}\) between bands of the FM ZZ lattice evaluated using \(d=0\) and \(r=\) (a) 8 and (b) 1 on the top two panels or \(d=-0.4\) and \(r=\) (c) 8 and (d) 1 on the bottom two panels. The first BZ of the magnetic unit cell is sketched by the solid lines. model. Using \(J_{1y}/J_{1x}=1.5\), the normalized gap \(\delta(\mathbf{k})\) is plotted versus \(\mathbf{k}\) in Fig. 10. With the revised definition \(\delta(\mathbf{k})=\hbar\Delta\omega(\mathbf{k})/(J_{1x}+J_{1y}+2J_{2})S\), the minimum value of \(\delta(\mathbf{k})\) increases from \(2.9\times 10^{-3}\) to \(2.5\times 10^{-2}\) as \(|d|\) increases from \(0\) to \(0.4\). The short-dash curve in Fig. 8 for \(r=2J_{2}/(J_{1x}+J_{1y})=1.5\) uses \(J_{1y}/J_{1x}=1.1\), indicating a redistribution of OAM to values of \(k\) near its peak. The Berry curvatures of the revised model with \(J_{1y}/J_{1x}=1.5\) and \(d=-0.4\) are plotted in Fig. 11 for four different values of \(r\). Notice that the range of Berry curvatures now extends over both positive and negative values with the upper negative bounds for \(d=-0.4\) exceeding the range for \(J_{1y}/J_{1x}=1\) in Fig. 9. These negative bounds for the Berry curvature can be found on the upper right and lower left boundaries of the BZ where the magnon modes were degenerate and the Berry curvature vanished for \(J_{1y}/J_{1x}=1\). For \(d<0\) and \(J_{1y}\neq J_{1x}\), the Chern numbers \(C_{n}\) for the lower and upper bands are \(-1\) and \(+1\), respectively. Of course, the Berry curvature and Chern numbers change sign when \(d\) changes sign. When \(J_{1y}=J_{1x}\) but \(J_{2y}\neq J_{2x}\) and \(d\neq 0\), the Chern numbers of both lower and upper bands vanish despite the appearance of a magnon band gap. 
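Before turning to thermal transport, it may help to note that the observable OAM of Eq. (49) can be evaluated numerically by sampling a ring of constant \(k\) and applying \(\hat{l}_{z\mathbf{k}}=-i\,\partial/\partial\phi\) with periodic finite differences. The sketch below is illustrative only: it takes \(J_{1}=1\), \(J_{2}=r\), \(a=1\), and a placeholder grid size, and returns \(F_{1}(k)\) in units of \(\hbar\). Rings that cross the zone boundaries \(k_{x}-k_{y}=\pm\pi/a\) pass near zeros of \(\Psi_{\mathbf{k}}\) and need a fine \(\phi\) grid.

```python
import numpy as np

def ring_fields(kx, ky, d, r, a=1.0):
    """Psi_k/|Psi_k| and tau_k/mu_k for the FM ZZ model, Eqs. (43)-(47)."""
    J1, J2 = 1.0, r
    xi = np.exp(1j * kx * a) + np.exp(1j * ky * a)
    Psi = (J1 * np.conj(xi) + J2 * xi) / (2.0 * (J1 + J2))
    tau = np.sin(kx * a - ky * a)
    mu = np.sqrt(abs(Psi) ** 2 + (d * tau) ** 2)
    return Psi / abs(Psi), tau / mu

def F1(k, d=-0.4, r=8.0, n_phi=1440):
    """Angular average F_1(k) of Eq. (49), in units of hbar."""
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    u = np.empty(n_phi, dtype=complex)   # Psi/|Psi| on the ring |k| = k
    w = np.empty(n_phi)                  # tau/mu on the same ring
    for j, p in enumerate(phi):
        u[j], w[j] = ring_fields(k * np.cos(p), k * np.sin(p), d, r)
    dphi = 2.0 * np.pi / n_phi
    # l_z acting on Psi*/|Psi| is -i d/dphi, taken by periodic central differences.
    duc = (np.roll(np.conj(u), -1) - np.roll(np.conj(u), 1)) / (2.0 * dphi)
    O1 = (d / 4.0) * w * u * (-1j) * duc      # integrand of Eq. (49)
    return float(np.mean(O1).real)
```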
## V Discussion and Conclusion The magnon Hall effect was first predicted for a FM Kagome lattice [20] with DM interactions due to broken inversion symmetry. The subsequent observation and theory of the magnon Hall effect was performed for FM pyrochlore systems [5; 30]. Nonzero Berry curvatures and Chern numbers were also found in the FM star lattice [34], which has similarities to both Kagome and HC lattices. Earlier work showed [12; 35; 26] that OAM can be observed in FM HC lattices. Our current paper demonstrates that OAM can also be observed in FM ZZ lattices with distinct exchange interactions \(0<J_{1}<J_{2}\) and \(D\neq 0\). As shown in Appendices B and C, OAM is not observable in AF HC and ZZ geometries even when \(D\neq 0\). For materials based on either the FM HC or ZZ lattices, direct observation of the magnon OAM should be possible by coupling magnons to other quasiparticles carrying OAM, for example chiral phonons [27] or electrons separated into orbital components by a grating [28]. The finite Berry curvatures in both models also imply that their thermal conductivities are nonzero. The magnon Figure 10: The normalized gap \(\delta(\mathbf{k})=2\mu_{\mathbf{k}}\) between bands of the FM ZZ lattice evaluated using \(J_{1y}/J_{1x}=1.5\), \(r=8\), and (a) \(d=0\) or (b) \(-0.4\). The first BZ of the magnetic unit cell is sketched by the solid lines. Figure 9: The Berry curvature \(\Omega_{1z}(\mathbf{k})\) of the FM ZZ lattice evaluated using \(d=-0.4\) and \(r=\) (a) \(50\), (b) \(8\), (c) \(1.5\), and (d) \(1.1\). The first BZ of the magnetic unit cell is sketched by the solid lines. Hall effect [31] is evaluated in terms of the Berry curvature using Eq. (3), where \[c_{2}(\rho)=(1+\rho)\Big{(}\log\frac{1+\rho}{\rho}\Big{)}^{2}-(\log\rho)^{2}-2 \text{Li}_{2}(-\rho) \tag{51}\] and \(\text{Li}_{2}(z)\) is the dilogarithmic function. To account for the different scaling of the magnon energies \(\epsilon_{n}(\mathbf{k})=\hbar\omega_{n}(\mathbf{k})\) for the two FM models, we define \(\tilde{T}=k_{\text{B}}T/3JS\) and \(\tilde{\kappa}^{xy}=\hbar\kappa^{xy}/3k_{\text{B}}JS\) (HC) or \(\tilde{T}=k_{\text{B}}T/2(J_{1}+J_{2})S\) and \(\tilde{\kappa}^{xy}=\hbar\kappa^{xy}/2k_{\text{B}}(J_{1}+J_{2})S\) (ZZ) with the dimensionless thermal conductivity given by \[\tilde{\kappa}^{xy}(\tilde{T})=-\frac{\tilde{T}}{2\pi}\sum_{n}\int_{BZ}d^{2}k \,c_{2}\big{(}\rho(\epsilon_{n}(\mathbf{k}))\big{)}\,\Omega_{nz}(\mathbf{k}) \tag{52}\] for both models. To examine the effect of the ratio \(r=J_{2}/J_{1}>1\) for the FM ZZ model, we plot \(\tilde{\kappa}^{xy}(\tilde{T})\) versus \(r\) for several values of \(d\) in Fig. 12, which sets \(\tilde{T}=0.3\), and \(\kappa=1.5\). As expected \(\tilde{\kappa}^{xy}(\tilde{T})\to 0\) as \(r\to 1\). More unexpectedly, \(\tilde{\kappa}^{xy}(\tilde{T})\) reaches a plateau at about \(r\approx 6\), which implies that the magnon Hall effect will be most easily observed in materials with large \(r\). The magnon Hall effect has been observed in several experimental realizations of the FM HC lattice, such as CrBr\({}_{3}\)[36] and CrIr\({}_{3}\)[37]. We compare theoretical values of \(\tilde{\kappa}^{xy}(\tilde{T})\) for the FM HC and ZZ models in Fig. 13. Taking \(r=10\) for the FM ZZ model and setting \(\tilde{T}\approx 0.6\), we find that \(\tilde{\kappa}^{xy}(\tilde{T})\) is about four times larger for the HC model than for the ZZ model. 
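For completeness, Eqs. (51) and (52) translate directly into a short routine. The version below is a sketch: it takes precomputed band energies and Berry curvatures on a uniform \(\mathbf{k}\) grid (both in the dimensionless units defined above), assumes all magnon energies are positive (easy-axis gap), and uses SciPy's convention \(\mathrm{Li}_{2}(x)=\mathrm{spence}(1-x)\) for the dilogarithm.

```python
import numpy as np
from scipy.special import spence    # spence(z) = Li_2(1 - z)

def c2(rho):
    """c_2 of Eq. (51); Li_2(-rho) = spence(1 + rho)."""
    return ((1.0 + rho) * np.log((1.0 + rho) / rho) ** 2
            - np.log(rho) ** 2 - 2.0 * spence(1.0 + rho))

def kappa_xy(T, energies, berry_z, cell_area):
    """Dimensionless thermal Hall conductivity, Eq. (52).

    T         : k_B T in units of 3JS (HC) or 2(J1+J2)S (ZZ)
    energies  : array (n_bands, Nk), band energies in the same units (> 0)
    berry_z   : array (n_bands, Nk), Omega_nz on the same k grid
    cell_area : k-space area per grid point, so the sum approximates the BZ integral
    """
    rho = 1.0 / (np.exp(energies / T) - 1.0)    # Bose factor, zero chemical potential
    return -(T / (2.0 * np.pi)) * np.sum(c2(rho) * berry_z) * cell_area
```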
In order to estimate the effect of the partially gapped magnons on the ZZ model, we plot the thermal conductivities with \(J_{1y}/J_{1x}=1.5\) in Figs. 12 and 13. Notice that the magnon band gap produced by \(J_{1y}/J_{1x}=1.5\) enhances \(\tilde{\kappa}^{xy}(\tilde{T})\) only slightly, with the largest change at small \(r\) in Fig. 12. So the disappearance of the magnon band gap and the absence of a well-defined Chern number when \(J_{1y}/J_{1x}=1\) does not significantly depress the magnon thermal conductivity at large \(r\). Due to the low contrast between different FM exchange couplings, it is difficult to identify materials described by the FM ZZ geometry. Nevertheless, several cases of FM ZZ chains coupled by FM exchange interactions have been discovered: spin-1/2 Heisenberg Vanadium chains in CdVO\({}_{3}\)[38; 39], spin-3/2 Chromium chains in LaCrOS\({}_{2}\)[40], and spin-3.4/2 Manganese chains in La\({}_{3}\)MnAs\({}_{5}\)[41]. For CdVO\({}_{3}\)[38; 39], the intrachain coupling \(J_{2}\approx 90\) K is significantly stronger than the interchain coupling \(J_{1}\approx 18\) K so that \(r\approx 5\). For La\({}_{3}\)MnAs\({}_{5}\)[41], \(r\approx 7.6\). The exchange ratio \(r\) is also believed to be large in LaCrOS\({}_{2}\)[40]. As predicted by Fig. 12, observation of the magnon Hall effect sensitively depends on the ratio \(r=J_{2}/J_{1}\). With such large values of \(r\), any of the materials mentioned above will be good candidates to search for the magnon Hall effect in FM ZZ geometries. Edge currents produced by the Berry curvature [21; 22] and closely connected with the thermal conductivity and Chern number [32; 33] are only topologically protected in systems containing a gap between the magnon bands, i.e. in magnetic insulators. Therefore, the edge currents in FM ZZ lattices with \(J_{1y}/J_{1x}=1\) are not topologically protected and will decay with time due to the degenerate \(\mathbf{k}\)-space regions along the \(k_{x}-k_{y}=\pm\pi/a\) BZ boundaries where the magnon bands overlap. The topological protection afforded by the symmetry breaking \(J_{1y}/J_{1x}\neq 1\) of the exchange interaction \(J_{1}\) may then depend on the size of the resulting magnon band gap compared to the Figure 12: The normalized thermal conductivity \(\tilde{\kappa}^{xy}(\tilde{T})\) versus \(r\) for several values of \(d\) for the FM ZZ model with temperature \(\tilde{T}=0.3\) and anisotropy \(\kappa=1.5\). The dot-dash line for \(d=-0.4\) takes \(J_{1y}/J_{1x}=1.5\). Figure 13: The normalized thermal conductivity \(\tilde{\kappa}^{xy}(\tilde{T})\) of the FM HC (solid) and ZZ (dash) models versus \(\tilde{T}\). Both models take \(d=-0.4\) and \(\kappa=1.5\) while the FM ZZ model also uses \(r=10\) and the upper (dash-dot) ZZ curve sets \(J_{1y}/J_{1x}=1.5\). The inset plots the ratio of the normalized thermal conductivities of the ZZ and HC lattices versus \(\tilde{T}\) with \(J_{1y}/J_{1x}=1\). temperature. To conclude, we have studied a new class of materials associated with FM ZZ geometries where the effects of OAM are observable. Formally, results for the OAM and Berry curvature for this geometry are quite similar to well-known results for the FM HC lattice. Direct observation of the magnon OAM may be possible in both HC and ZZ lattices by coupling magnons to other quasi-particles. 
While the magnon bands are not completely gapped and the Chern numbers are not well defined for \(J_{1y}/J_{1x}=1\), those deficits do not significantly impact the magnon thermal conductivity \(\kappa^{xy}(T)\) in ZZ lattices. Indeed, opening a magnon band gap and producing well-defined Chern numbers by setting \(J_{1y}/J_{1x}\neq 1\) only modestly enhances \(\kappa^{xy}(T)\). Although only an infinitesimal difference \(J_{1y}/J_{1x}-1\neq 0\) is required to create a magnon gap, FM ZZ lattice materials with \(J_{1y}/J_{1x}=1\) are not topological insulators. Consequently, the usefulness of these materials for specific applications may depend on the lifetime of the edge modes. Nonetheless, we are hopeful that future experiments on some of the materials discussed above will demonstrate that the effects of OAM may be observed in systems that are neither topological nor magnetic insulators. Research sponsored by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division. The data that support the findings of this study are available from the authors upon reasonable request. ## Appendix A Tiling OAM For the FM ZZ lattice, the smoothness of the even part of the OAM at the zone boundaries is ensured if \(O^{\rm(even)}_{1,2}(k_{x},k_{y})=O^{\rm(even)}_{1,2}(k_{y},k_{x})\), as seen numerically in Figs. 3 and 7. For this case, a point just outside the BZ can be mapped into the corresponding point on the opposite side of the boundary just inside the BZ by using \[O^{\rm(even)}_{1,2}({\bf k})=O^{\rm(even)}_{1,2}({\bf k}+{\bf G}) \tag{10}\] and \[O^{\rm(even)}_{1,2}(k_{x},k_{y})=O^{\rm(even)}_{1,2}(-k_{y},-k_{x}), \tag{11}\] where \({\bf G}\) is a reciprocal lattice vector and we have used the even property of \(O^{\rm(even)}_{1,2}({\bf k})\). Thus, in approaching the BZ edge, the even OAM of point \({\bf k}\) and its mirror across the BZ boundary are equivalent, all the way to the limit of the boundary itself. To show that \(O^{\rm(even)}_{1,2}(k_{x},k_{y})=O^{\rm(even)}_{1,2}(k_{y},k_{x})\) for the FM ZZ square lattice, we see that \(O^{\rm(even)}_{1,2}({\bf k})\) in Eq. (48) is composed of functions that are symmetric or antisymmetric with respect to switching components: \(\Psi_{(k_{x},k_{y})}=\Psi_{(k_{y},k_{x})}\), \(\tau_{(k_{x},k_{y})}=-\tau_{(k_{y},k_{x})}\), \(\mu_{(k_{x},k_{y})}=\mu_{(k_{y},k_{x})}\), and \(\hat{l}_{z}(k_{x},k_{y})=-\hat{l}_{z}(k_{y},k_{x})\). Plugging these into Eq. (48), the negative signs from the antisymmetric terms cancel and \(O^{\rm(even)}_{1,2}(k_{x},k_{y})=O^{\rm(even)}_{1,2}(k_{y},k_{x})\). ## Appendix B AF HC Lattice This appendix considers the HC lattice sketched in Fig. 1(b) with AF exchange \(J<0\) between alternating up and down spins. We then find \[\underline{L}({\bf k})=-\frac{3JS}{2}\left(\begin{array}{cccc}A^{+}_{\bf k} &0&0&-\Gamma^{*}_{\bf k}\\ 0&A^{+}_{\bf k}&-\Gamma_{\bf k}&0\\ 0&-\Gamma^{*}_{\bf k}&A^{-}_{\bf k}&0\\ -\Gamma_{\bf k}&0&0&A^{-}_{\bf k}\end{array}\right), \tag{12}\] where \(A^{\pm}_{\bf k}=1\pm d\,\Theta_{\bf k}+\kappa\) as in the FM HC lattice but with \(\kappa=2K/3|J|\). It is then easy to show that solutions for the eigenfunctions \(X^{-1}_{rn}({\bf k})\) are independent of \(d\). The doubly degenerate magnon energies \[\hbar\omega_{1,2}({\bf k})=3|J|S\sqrt{(1+\kappa)^{2}-|\Gamma_{\bf k}|^{2}}+d \,\Theta_{\bf k}, \tag{13}\] are simply shifted by the DM interaction. As expected, \(O_{n}({\bf k})\) is an odd function of \({\bf k}\) for any gauge. 
Therefore, the AF HC lattice does not support an observable OAM and \(F_{n}(k)=0\). However, the AF HC lattice does support the magnon Nernst effect with a net spin current [42; 43]. ## Appendix C AF ZZ Lattice This appendix treats the AF ZZ lattice with AF coupling \(J_{1}<0\) between chains and FM coupling \(J_{2}>0\) within chains. As seen in Fig. 1(d), the magnetic unit cell contains 4 spins so the \(\underline{L}({\bf k})\) matrix is 8 dimensional. Fortunately, we can write \[H_{2}={\sum_{\bf k}}^{\prime}{\bf v}^{\dagger}_{\bf k}\cdot \underline{L}({\bf k})\cdot{\bf v}_{\bf k}\] \[={\sum_{\bf k}}^{\prime}\Big{\{}{\bf v}^{\dagger}_{1{\bf k}} \cdot\underline{L}_{1}({\bf k})\cdot{\bf v}_{1{\bf k}}+{\bf v}^{\dagger}_{2{ \bf k}}\cdot\underline{L}_{2}({\bf k})\cdot{\bf v}_{2{\bf k}}\Big{\}}, \tag{14}\] where \[{\bf v}_{1{\bf k}} = (a^{(1)}_{\bf k},a^{(2)}_{\bf k},a^{(3)\dagger}_{-{\bf k}},a^{(4) \dagger}_{-{\bf k}}), \tag{15}\] \[{\bf v}_{2{\bf k}} = (a^{(3)}_{\bf k},a^{(4)}_{\bf k},a^{(1)\dagger}_{-{\bf k}},a^{(2) \dagger}_{-{\bf k}}), \tag{16}\] \[\underline{L}_{1}({\bf k})=S(J_{2}-J_{1})\left(\begin{array}{cccc}A^{+}_{\bf k }&-\gamma_{2}\xi_{\bf k}&0&\gamma_{1}\xi^{*}_{\bf k}\\ -\gamma_{2}\xi^{*}_{\bf k}&A^{+}_{\bf k}&\gamma_{1}\xi_{\bf k}&0\\ 0&\gamma_{1}\xi^{*}_{\bf k}&A^{+}_{\bf k}&-\gamma_{2}\xi_{\bf k}\\ \gamma_{1}\xi_{\bf k}&0&-\gamma_{2}\xi^{*}_{\bf k}&A^{-}_{\bf k}\end{array} \right), \tag{17}\] \[\underline{L}_{2}({\bf k})=S(J_{2}-J_{1})\left(\begin{array}{cccc}A^{-}_{\bf k }&-\gamma_{2}\xi_{\bf k}&0&\gamma_{1}\xi^{*}_{\bf k}\\ -\gamma_{2}\xi^{*}_{\bf k}&A^{+}_{\bf k}&\gamma_{1}\xi_{\bf k}&0\\ 0&\gamma_{1}\xi^{*}_{\bf k}&A^{-}_{\bf k}&-\gamma_{2}\xi_{\bf k}\\ \gamma_{1}\xi_{\bf k}&0&-\gamma_{2}\xi^{*}_{\bf k}&A^{+}_{\bf k}\end{array} \right), \tag{18}\] with \(A_{\mathbf{k}}^{\pm}=1\pm d\,\mathbf{\gamma}_{\mathbf{k}}\), \(d=2D/(J_{2}-J_{1})\), and \(\gamma_{n}=J_{n}/2(J_{2}-J_{1})\). The only difference between \(\underline{L}_{1}(\mathbf{k})\) and \(\underline{L}_{2}(\mathbf{k})\) is that \(D\) changes sign. We leave it as an exercise to show that the symmetry relations of Eqs. (14) and (15) are satisfied by the full matrix \(\underline{L}(\mathbf{k})\). The AF ZZ model then contains 4 magnon bands, which are doubly degenerate with energies \[\hbar\omega_{1,3}(\mathbf{k})=2(J_{2}-J_{1})S\bigg{\{}1-(\gamma_{ 1}^{2}-\gamma_{2}^{2})|\xi_{\mathbf{k}}|^{2}+16(d\,\tau_{\mathbf{k}})^{2}\] \[\pm\sqrt{\gamma_{2}^{2}\Big{(}\gamma_{1}^{2}(\xi_{\mathbf{k}}^{2} -\xi_{\mathbf{k}}^{*2})^{2}+4|\xi_{\mathbf{k}}|^{2}\Big{)}+4(d\,\tau_{\mathbf{ k}})^{2}}\bigg{\}}^{1/2}, \tag{10}\] \(\omega_{2}(\mathbf{k})=\omega_{1}(\mathbf{k})\), and \(\omega_{4}(\mathbf{k})=\omega_{3}(\mathbf{k})\). Since the magnon energy does not depend on the sign of \(d\), lower bands 1 and 2 and upper bands 3 and 4 from \(\underline{L}_{1}(\mathbf{k})\) and \(\underline{L}_{2}(\mathbf{k})\) are degenerate. Additional exchange interactions do not affect the structure of the \(\underline{L}_{n}(\mathbf{k})\) matrices. For example, an exchange interaction \(J_{3}\) between spin pairs \(\{1,3\}\) and \(\{2,4\}\) along the \((1,1)\) diagonal does not couple the \(\underline{L}_{1}(\mathbf{k})\) and \(\underline{L}_{2}(\mathbf{k})\) matrices. Nor do any other complex set of exchange interactions or anisotropies. So the partition of \(\underline{L}(\mathbf{k})\) into two \(4\times 4\) matrices remains unaltered. 
Numerical calculation of \(F_{n}(k)\) reveals that \(F_{1}(k)=-F_{2}(k)\) and \(F_{3}(k)=-F_{4}(k)\) so that the contribution of the two \(\underline{L}_{n}(\mathbf{k})\) matrices with opposite \(D\) cancel. Hence, there is no _net_ observable OAM from the two lower or two upper magnon bands. Similarly, we find that the Berry curvatures of bands 1 and 2 cancel, as do the Berry curvatures of bands 3 and 4.
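As a quick numerical illustration of the degeneracy discussed in this appendix, the closed-form energies \(\hbar\omega_{1,3}(\mathbf{k})\) quoted above can be evaluated at any \(\mathbf{k}\) point. In the sketch below the structure factors \(\xi_{\mathbf{k}}\) and \(\tau_{\mathbf{k}}\) (defined earlier in the paper for the ZZ geometry) are replaced by simple placeholder functions and all parameter values are illustrative; the only point being made is that \(d\) enters solely through \((d\,\tau_{\mathbf{k}})^{2}\), so flipping the sign of \(D\) leaves the spectrum unchanged, consistent with the degeneracy of the bands obtained from \(\underline{L}_{1}(\mathbf{k})\) and \(\underline{L}_{2}(\mathbf{k})\).

```python
import numpy as np

# Placeholder structure factors -- assumptions for this sketch only; the actual
# xi_k and tau_k for the ZZ geometry are defined earlier in the paper.
def xi(kx, ky):
    return np.cos(kx / 2) * np.cos(ky / 2) * np.exp(0.5j * (kx + ky))

def tau(kx, ky):
    return np.sin(kx) - np.sin(ky)

def omega13(kx, ky, J1=-1.0, J2=1.0, D=0.1, S=0.5):
    """Doubly degenerate AF ZZ magnon energies from the closed form in Appendix C.
    J1 < 0 is the AF coupling between chains, J2 > 0 the FM coupling within chains."""
    g1 = J1 / (2.0 * (J2 - J1))
    g2 = J2 / (2.0 * (J2 - J1))
    d = 2.0 * D / (J2 - J1)
    x = xi(kx, ky)
    t = d * tau(kx, ky)
    inner = g2**2 * (g1**2 * (x**2 - np.conj(x)**2)**2 + 4.0 * abs(x)**2) + 4.0 * t**2
    root = np.sqrt(inner.real)           # this combination is real by construction
    base = 1.0 - (g1**2 - g2**2) * abs(x)**2 + 16.0 * t**2
    pref = 2.0 * (J2 - J1) * S
    return pref * np.sqrt(base - root), pref * np.sqrt(base + root)

kx, ky = 0.7, -0.4
print(omega13(kx, ky, D=+0.1))
print(omega13(kx, ky, D=-0.1))   # identical: only (d*tau_k)^2 enters the energies
```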
2310.20112
Well-Posedness of the Bochner Integral Form of Operator-Valued Riccati Equations
In this short paper, we prove that the Bochner integral form of the operator-valued Riccati equation has a unique solution if and only if its mild form has a unique solution. This implies that the mild and Bochner integral forms of this equation are equivalent. The result is obtained through an operator representation argument.
James Cheung
2023-10-31T01:09:24Z
http://arxiv.org/abs/2310.20112v1
# Well-posedness of the Bochner integral form of operator-valued Riccati equations ###### Abstract. In this short paper, we prove that the Bochner integral form of the operator-valued Riccati equation has a unique solution if and only if its mild form has a unique solution. This implies that the mild and Bochner integral forms of this equation are equivalent. The result is obtained through an operator representation argument. ## 1. Introduction Let \(H\) be a separable Hilbert space equipped with the inner product \((\cdot,\cdot)_{H}\). We define \(\mathcal{L}(H)\) to be the space of bounded linear operators defined on \(H\). We will then denote \(A:\mathcal{D}(A)\to H\) as the generator of a \(C_{0}\)-semigroup \(S(t)\in\mathcal{L}(H)\) for all \(t\in[0,\tau]\), where \(\tau>0\) and \(\mathcal{D}(A)\) is the domain of \(A\) defined densely in \(H\). The solution space of interest in this work is \(\mathcal{C}([0,\tau],\mathcal{L}(H))\), which defines the space of bounded operators that are norm-continuous with respect to \(t\in[0,\tau]\). In the differential form, the operator-valued Riccati equation is given by \[\left\{\begin{aligned} \frac{d}{dt}\Sigma(t)&=A \Sigma(t)+\Sigma(t)A^{*}+\Sigma(t)G\Sigma(t)-F\\ \Sigma(0)&=\Sigma_{0},\end{aligned}\right. \tag{1}\] for all \(t\in[0,\tau]\), where \(F,\Sigma_{0}\in\mathcal{L}(H)\) are self-adjoint operators, and \(G\) is an unbounded self-adjoint operator whose domain is dense in \(H\). The mild form of this equation is then given by \[\Sigma(t)\phi=S(t)\Sigma_{0}S^{*}(t)\phi+\int_{0}^{t}S(t-s)\left(F-\Sigma G \Sigma\right)S^{*}(t-s)\phi ds \tag{2}\] for all \(\phi\in H\) and \(t\in[0,\tau]\). From the results presented in [1], we know that it is generally known that (1) and (2) are equivalent, meaning that there exists a unique \(\Sigma(\cdot)\in\mathcal{C}([0,\tau],\mathcal{L}(H))\) that satisfies both equations. In this paper, we demonstrate that \(\Sigma(\cdot)\in\mathcal{C}([0,\tau],\mathcal{L}(H))\) satisfying (2) also satisfies \[\Sigma(t)=S(t)\Sigma_{0}S^{*}(t)+\int_{0}^{t}S(t-s)\left(F-\Sigma G\Sigma \right)S^{*}(t-s)ds \tag{3}\] for all \(t\in[0,\tau]\). The well-posedness of the Bochner integral form of the operator-valued Riccati equation plays an important part in determining theoretical error bounds for approximations to this equation [2, 4]. The previously known result presented in [3] indicates that the Bochner integral form of the operator-valued Riccati equation is well-posed if the operators \(F,G\) in (3) are compact. This result was derived through an approximation argument. This work extends well-posedness to cases where \(G\) is not necessarily bounded. We proceed to prove this result in the following section. ## 2. Analysis In the analysis, we will use an operator representation argument to demonstrate that the mild form and the Bochner integral form of the operator-valued Riccati equation are equivalent. To this end, we will utilize the following corollary to the Riesz Representation Theorem found in [5, Theorem A.63]. **Lemma 1**.: _If \(q(\cdot):H\to\mathbb{R}\) is a bounded quadratic form on \(H\), then there exists a unique self-adjoint operator \(Q\in\mathcal{L}(H)\) such that_ \[q(\phi)=\left(\phi,Q\phi\right)_{H}\] _for all \(\phi\in H\)._ We now move to prove the main result of this work given in the following. **Theorem 1**.: _Let \(S(t)\in\mathcal{L}(H)\) be a \(C_{0}\)-semigroup defined on \(t\in[0,\tau]\). 
Now, suppose that there exists a unique time-dependent self-adjoint operator \(\Sigma(\cdot)\in\mathcal{C}([0,\tau],\mathcal{L}(H))\) that satisfies the following mild form of the operator-valued Riccati equation_ \[\Sigma(t)\phi=S(t)\Sigma_{0}S^{*}(t)\phi+\int_{0}^{t}S(t-s)\left(F-\Sigma G\Sigma\right)S^{*}(t-s)\phi ds \tag{4}\] _for all \(\phi\in H\) and \(t\in[0,\tau]\), where \(F,\Sigma_{0}\in\mathcal{L}(H)\) are bounded self-adjoint operators and \(G\) is a generally unbounded self-adjoint operator whose domain is dense in \(H\). Then \(\Sigma(\cdot)\in\mathcal{C}([0,\tau],\mathcal{L}(H))\) satisfies (4) if and only if it also satisfies the following Bochner integral form of the operator-valued Riccati equation_ \[\Sigma(t)=S(t)\Sigma_{0}S^{*}(t)+\int_{0}^{t}S(t-s)\left(F-\Sigma G\Sigma\right)S^{*}(t-s)ds \tag{5}\] _for all \(t\in[0,\tau]\)._ Proof.: Since \(\Sigma(\cdot)\in\mathcal{C}([0,\tau],\mathcal{L}(H))\) satisfies (4), we must have \[\left(\phi,\Sigma(t)\phi\right)_{H}=\left(\phi,S(t)\Sigma_{0}S^{*}(t)\phi\right)_{H}+\left(\phi,\int_{0}^{t}S(t-s)\left(F-\Sigma G\Sigma\right)(s)S^{*}(t-s)\phi ds\right)_{H} \tag{6}\] for all \(\phi\in H\) and \(t\in[0,\tau]\). Define \[q_{t}^{1}(\phi):=\left(\phi,\Sigma(t)\phi\right)_{H}\] \[q_{t}^{2}(\phi):=\left(\phi,S(t)\Sigma_{0}S^{*}(t)\phi\right)_{H}\] \[q_{t}^{3}(\phi):=\left(\phi,\int_{0}^{t}S(t-s)\left(F-\Sigma G\Sigma\right)(s)S^{*}(t-s)\phi ds\right)_{H}\] as quadratic forms for all \(\phi\in H\) and \(t\in[0,\tau]\). The boundedness of \(q_{t}^{1}(\cdot)\) follows from the observation that \(\Sigma(t)\phi\in H\) for all \(\phi\in H\) and \(t\in[0,\tau]\). Equation (4) then requires that \(S(t)\Sigma_{0}S^{*}(t)\phi\in H\) and that \(\int_{0}^{t}S(t-s)\left(F-\Sigma G\Sigma\right)(s)S^{*}(t-s)\phi ds\in H\) for all \(\phi\in H\) and \(t\in[0,\tau]\), which implies the boundedness of \(q_{t}^{2}(\cdot),q_{t}^{3}(\cdot)\). Applying Lemma 1 then implies that there exist unique operators \(Q_{t}^{1},Q_{t}^{2},Q_{t}^{3}\in\mathcal{L}(H)\) such that \[q_{t}^{1}(\phi)=\left(\phi,Q_{t}^{1}\phi\right)_{H}\] \[q_{t}^{2}(\phi)=\left(\phi,Q_{t}^{2}\phi\right)_{H}\] \[q_{t}^{3}(\phi)=\left(\phi,Q_{t}^{3}\phi\right)_{H}\] for all \(\phi\in H\) and \(t\in[0,\tau]\). It then follows from (6) that \[q_{t}^{1}(\phi)=q_{t}^{2}(\phi)+q_{t}^{3}(\phi)\] for all \(\phi\in H\) and \(t\in[0,\tau]\). This can only be true if \[Q_{t}^{1}=Q_{t}^{2}+Q_{t}^{3}\] for all \(t\in[0,\tau]\). Then, by the definition of \(q_{t}^{1},q_{t}^{2},q_{t}^{3}\) and the uniqueness of the operators \(Q_{t}^{1},Q_{t}^{2},Q_{t}^{3}\) associated with their respective quadratic forms (implied by Lemma 1), we necessarily have \[Q_{t}^{1}=\Sigma(t)\] \[Q_{t}^{2}=S(t)\Sigma_{0}S^{*}(t)\] \[Q_{t}^{3}=\int_{0}^{t}S(t-s)\left(F-\Sigma G\Sigma\right)(s)S^{*}(t-s)ds,\] for all \(t\in[0,\tau]\). Hence, \(\Sigma(\cdot)\in\mathcal{C}\left([0,\tau],\mathcal{L}(H)\right)\) must also satisfy \[\Sigma(t)=S(t)\Sigma_{0}S^{*}(t)+\int_{0}^{t}S(t-s)\left(F-\Sigma G\Sigma\right)(s)S^{*}(t-s)ds\] for all \(t\in[0,\tau]\). This proves that (4) implies (5). The converse follows by testing (5) with any \(\phi\in H\). 
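As a sanity check of Theorem 1, the following is a minimal finite-dimensional sketch (taking \(H=\mathbb{R}^{3}\) and \(S(t)=e^{tA}\)), where the equivalence of the mild and Bochner integral forms is of course elementary. It integrates the Riccati flow consistent with the sign convention of (4), i.e. with integrand \(F-\Sigma G\Sigma\), and verifies that the solution satisfies the Bochner integral form (5) up to quadrature error. All matrices below are illustrative choices, not taken from the paper.

```python
# Minimal finite-dimensional sketch (H = R^3): integrate the Riccati flow matching the
# mild form (4) and check that the solution also satisfies the Bochner integral form (5)
# up to quadrature error. All matrices are illustrative.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

n, tau = 3, 1.0
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n)) - 2.0 * np.eye(n)   # generator of S(t) = exp(tA)
F = np.eye(n)                                       # bounded, self-adjoint
G = np.diag([1.0, 0.5, 0.25])                       # self-adjoint (bounded in this toy case)
Sigma0 = 0.1 * np.eye(n)

def rhs(t, y):
    Sig = y.reshape(n, n)
    dSig = A @ Sig + Sig @ A.T + F - Sig @ G @ Sig   # d/dt Sigma, consistent with (4)/(5)
    return dSig.ravel()

ts = np.linspace(0.0, tau, 401)
sol = solve_ivp(rhs, (0.0, tau), Sigma0.ravel(), t_eval=ts, rtol=1e-10, atol=1e-12)
Sig = sol.y.T.reshape(len(ts), n, n)

# Right-hand side of the Bochner integral form (5) at t = tau, via trapezoidal quadrature.
t = ts[-1]
St = expm(t * A)
integrand = np.array([expm((t - s) * A) @ (F - Sig[j] @ G @ Sig[j]) @ expm((t - s) * A).T
                      for j, s in enumerate(ts)])
rhs5 = St @ Sigma0 @ St.T + np.trapz(integrand, ts, axis=0)

print(np.max(np.abs(Sig[-1] - rhs5)))   # small (quadrature-level) residual
```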
**Remark 1**.: _We would like to point out that the analysis presented in the proof above implies that the operator-valued integral_ \[\int_{0}^{t}S(t-s)\left(F-\Sigma G\Sigma\right)(s)S^{*}(t-s)ds\] _is the unique representation of the bounded self-adjoint time-dependent linear operator \(Q_{t}\in\mathcal{C}([0,\tau],\mathcal{L}(H))\) that satisfies_ \[\left(\phi,Q_{t}\phi\right)_{H}:=\left(\phi,\int_{0}^{t}S(t-s)\left(F-\Sigma G\Sigma\right)(s)S^{*}(t-s)\phi ds\right)_{H}\] _for all \(\phi\in H\) and \(t\in[0,\tau]\). This indicates that the operator-valued integral used in the Bochner integral form of the operator-valued Riccati equation is well-defined._ ## 3. Discussion We have demonstrated above that the mild and Bochner integral forms of the operator-valued Riccati equation are equivalent. Instead of using an approximation argument, as done in [3], we have utilized an operator representation argument to achieve this result. This simpler proof allows us to extend the known well-posedness results for the Bochner integral form to cases where the coefficient operator \(G\) in the equation is unbounded. In future work, the author will utilize the result presented in this paper to determine error bounds for approximation methods for operator-valued Riccati equations in cases where the coefficient operator \(G\) is defined by boundary and point control/observation operators.
2309.03870
Anomalies in Particle Physics
The currently accepted mathematical description of the fundamental constituents and interactions of matter is the Standard Model of particle physics. Its last missing particle, the famous Higgs boson, was observed at the Large Hadron Collider at CERN in 2012. However, it is clear that the Standard Model cannot be the ultimate theory of Nature, and e.g. cannot account for Dark Matter or non-vanishing neutrino masses (and does not include gravity). In fact, searches for physics beyond the SM have been intensified since the Higgs boson discovery. In this article, we review the hints for new physics, called ``anomalies'', obtained in particle physics experiments within the last years. We consider both direct high-energy searches for new resonances at the LHC and indirect low-energy precision experiments. These anomalies range from the nuclear scale (approximately the mass of the proton) to the electroweak scale (i.e. the mass of the Higgs boson) to the TeV scale (the highest scale directly accessible at the LHC), therefore spanning over four orders of magnitude. After discussing the experimental and theoretical status of the anomalies, we summarize possible explanations in terms of new particles and new interactions. In particular, new Higgs bosons and leptoquarks are promising candidates. Discovery prospects and implications for future colliders are discussed.
Andreas Crivellin, Bruce Mellado
2023-09-07T17:32:05Z
http://arxiv.org/abs/2309.03870v2
# Anomalies in Particle Physics ###### Abstract The currently accepted mathematical description of the fundamental constituents and interactions of matter is the Standard Model of particle physics. Its last missing particle, the famous Higgs boson, was observed at the Large Hadron Collider at CERN in 2012. However, it is clear that the Standard Model cannot be the ultimate theory of Nature, and e.g. cannot account for Dark Matter or non-vanishing neutrino masses (and does not include gravity). In fact, searches for physics beyond the SM have been intensified since the Higgs boson discovery. In this article, we review the hints for new physics, called "anomalies", obtained in particle physics experiments within the last years. We consider both direct high-energy searches for new resonances at the LHC and indirect low-energy precision experiments. These anomalies range from the nuclear scale (approximately the mass of the proton) to the electroweak scale (i.e. the mass of the Higgs boson) to the TeV scale (the highest scale directly accessible at the LHC), therefore spanning over four orders of magnitude. After discussing the experimental and theoretical status of the anomalies, we summarize possible explanations in terms of new particles and new interactions. In particular, new Higgs bosons and leptoquarks are promising candidates. Discovery prospects and implications for future colliders are discussed. **Key points:** * The Standard Model of Particle Physics is the currently accepted mathematical theory describing the fundamental constituents of matter as well as their interactions, and its final completion was the discovery of the Higgs particle at the Large Hadron Collider at CERN in 2012. * The Standard Model cannot account for the existence of Dark Matter or non-vanishing neutrino masses and must therefore be extended, however, a plethora of viable options exist. * In recent years, several interesting deviations from the Standard Model predictions, called "anomalies", were found, both in high-energy searches at the LHC and in low-energy precision observables. * These anomalies range from precision measurements of properties of the muon to hints for new scalar bosons to the existence of heavy resonances at the TeV scale. * They can be explained by supplementing the Standard Model with new particles and new interactions, in particular, additional Higgs bosons, new fermions and new strongly interacting particles. * While the data of the third run of the Large Hadron Collider will already be able to establish the existence of such new particles, future colliders, like FCC, ILC or CEPC, as well as new precision experiments are needed for a comprehensive and precise study of their properties. **Website summary:** The Standard Model of particle physics is the currently accepted theory of the fundamental constituents of matter and their interactions. We review the status of hints for new physics, which, if confirmed, would require the extension of the Standard Model by new particles and new interactions. ## 1 Introduction The Standard Model (SM) of particle physics is the currently accepted mathematical description of the fundamental constituents of matter and their interactions (excluding gravity). More specifically, matter, i.e. quarks and leptons, are fermions (spin 1/2 particles whose wave functions are invariant under rotation of \(4\pi\)). 
A proton contains two up-quarks (with electric charge \(+2/3\)) and one down-quark (with charge \(-1/3\)), while a neutron consists of one up-quark and two down-quarks. Electrons constitute the hull of atoms. Together with the nearly mass-less and very weakly interacting neutrinos they form the class called leptons. All fermions appear in three copies, called generations or flavours, that only differ in mass. The electron is accompanied by its heavy cousins the muon and the tau. The more massive versions of the up-quark, are called charm and top while strange and bottom (sometimes also called "beauty") are the more massive versions of the down-quark. Only first-generation fermions are stable, while the heavier generations are short-lived and decay to the lighter flavours. The forces between the fermions are mediated by gauge interactions. The corresponding gauge group is \(SU(3)_{c}\times SU(2)_{L}\times U(1)_{Y}\). These are "local" symmetries, meaning that they hold independently at any point in space-time. Due to quantisation, these interactions result in force particles, the gauge bosons. The electromagnetic force is mediated by the photon, the weak force (corresponding to the \(SU(2)_{L}\) factor) by the \(W\) and \(Z\) gauge bosons and the strong force (\(SU(3)_{c}\)) by eight gluons. While quarks are charged under the strong force and thus interact with gluons, neutrinos only feel the weak force and the charged leptons (electron, muon and tau) as well have electromagnetic interactions. Importantly, all flavour violation in the SM is induced by the couplings of the \(W\) boson to up and down quarks via the Cabibbo-Kobayashi-Maskawa (CKM) matrix. Finally, we have the famous Higgs particle [1, 2], which was discovered at the Large Hadron Collider at CERN in 2012 [3, 4]. The field from which the Higgs boson originates spontaneously breaks the gauge symmetry of the weak force and \(U(1)_{Y}\) to the electromagnetic gauge group (\(U(1)_{\rm EM}\)) and gives masses to the \(W\) and \(Z\) bosons as well as to all (fundamental) fermions and the Higgs boson itself. The particle content of the SM is summarized in Fig. 1. Therefore, the SM is now complete. It has been extensively tested and verified by precision experiments within the last decades and no discovery of any particle beyond the SM ones has been announced [5]. However, it is clear that the SM cannot be the ultimate fundamental theory of Nature: In addition to many theoretical arguments for the existence of beyond the SM (BSM) physics, the SM e.g. cannot account for the observations of Dark Matter (DM) established at cosmological scales (since it does not contain a weakly interacting particle with the right relic abundance), nor for the non-vanishing neutrino masses required by neutrino oscillations, because in the SM neutrinos are necessarily massless due to the absence of a right-handed partner. Unfortunately, no right-handed neutrinos have been observed and the Dark Matter direct detection experiments did not see any signal [6]. Thus, we know that the SM cannot be the ultimate theory of Nature. However, there are many options, spanning a very large mass range (from several keV to the scale of Grand Unification at around \(10^{15}\)GeV), how one can account for Dark Matter and neutrino masses. Therefore, more experimental information on physics beyond the SM, preferably deviations from its predictions (in best case in the form of new resonances) is imperative to make progress towards a theory superseding the SM. 
The search for BSM physics was further intensified in the last decade, both in direct high-energy searches (mainly at the LHC) and in precision experiments testing the SM via quantum fluctuations of new particles. In fact, an increasing number of hints for new physics, called "anomalies", have been reported. They span over a huge energy range, from precision measurements of muon properties (the anomalous magnetic moment of the muon), over semi-leptonic \(B\) meson (quark bound states containing a \(b\) quark) decays, the measurement of the \(W\) boson mass, to direct LHC searches and even non-resonant Figure 1: Particle content of the Standard Model: Fermions consisting of quarks (gray) and leptons (green) as well as the gauge bosons (red) and the Higgs (blue). searches for particle too heavy to be produced directly at the LHC. While probability theory and statistics tell us that one cannot expect that all these anomalies will be confirmed in the future, it is also unlikely that all of them are just statistical flukes. Therefore, it is important to assess the strengths and weaknesses of these anomalies and to see to which extensions of the SM they point to predict other signals for future verification or falsification. In this article, we will review the status of these anomalies, give an overview of how they can be explained by BSM physics and give an outlook on their future implications. ## 2 Anomalies Let us now review the status of these anomalies. We will present them in increasing order of the corresponding energy scale. ### Anomalous magnetic moment of the muon (\(a_{\mu}\)) While the Dirac equation predicts that the \(g\) factor of any fundamental fermion is exactly 2, the famous prediction of Quantum Electro Dynamics (the quantum field theory of electromagnetism) by Schwinger [7] was a positive shift of \(a_{\ell}=(g-2)_{\ell}/2=\alpha/(2\pi)\) (see left diagram in Fig. 2 a)). Nowadays, the accuracy has dramatically increased. The combined value of the Brookhaven E821 result [8] and the \(g-2\) experiment at Fermilab [9, 10] deviate from the SM prediction of the \(g-2\) theory initiative [11] by \(5.0\,\sigma\). However, this SM prediction is based on measurements of \(e^{+}e^{-}\to\)hadrons [12, 13, 14] and does not include newer results for hadronic vacuum polarization from lattice simulations of Quantum Chromo Dynamics (the quantum field theory describing the strong interactions) [15] nor the latest measurement of \(e^{+}e^{-}\to\)hadrons1 by the CMD 3 collaboration [16] which would render the SM prediction closer to the measurement. Since these tensions between the different SM predictions are not understood yet, one can only say that a positive shift in \(a_{\mu}\) of the order of \(10^{-9}\) is preferred but a reliable estimate of the significance is not possible at the moment. Footnote 1: Hadrons are bound states of quarks which form due to the confining nature of QCD (i.e. because the coupling increases at low energies). ### Cabibbo Angle Anomaly (CAA) The CKM matrix \(V\) is by construction unitary [17] (since it must conserve probability) and we know from experiments that it has a hierarchical structure; while the size of the diagonal elements is close to unity, the off-diagonal elements are small. One can test the SM prediction \(\sum_{j}V_{ij}V_{jk}^{*}=\delta_{ik}\) experimentally. 
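As a rough numerical illustration of such a unitarity test, the snippet below evaluates the first-row combination \(|V_{ud}|^{2}+|V_{us}|^{2}+|V_{ub}|^{2}\) with representative input values and a naive error propagation. The numbers are stand-ins chosen for this sketch, not the precise inputs of the analyses cited below; as emphasized in the text, the actual significance depends on the treatment of radiative corrections and on which determinations are combined.

```python
# Illustrative first-row CKM unitarity test. The central values and uncertainties below
# are rough, representative numbers (an assumption of this sketch), not the precise
# inputs of the analyses cited in the text.
import math

Vud, sVud = 0.97373, 0.00031
Vus, sVus = 0.2243,  0.0008
Vub, sVub = 0.00382, 0.00020

row = Vud**2 + Vus**2 + Vub**2
# naive error propagation: sigma^2 = sum_i (2 V_i sigma_i)^2
sigma = math.sqrt((2*Vud*sVud)**2 + (2*Vus*sVus)**2 + (2*Vub*sVub)**2)
deficit = 1.0 - row

print(f"|Vud|^2 + |Vus|^2 + |Vub|^2 = {row:.5f}")
print(f"deficit from unitarity      = {deficit:.5f} +/- {sigma:.5f}  (~{deficit/sigma:.1f} sigma)")
```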
In this context, the Cabibbo angle [18], which parametrizes the mixing between the first two quark generations, is particularly interesting as it dominates the first and second row and column relations. In fact, there is a deficit in first-row and first-column CKM unitarity, which can be traced back to the fact that \(V_{ud}\) extracted from beta decays [19, 20] (see left diagram in Fig. 2 b)) does not agree with \(V_{us}\) (\(V_{cd}\)) determined from kaon [2] and tau decays (\(D\) decays) when comparing them via CKM unitarity. Furthermore, there is also a disagreement between the determinations of \(V_{us}\) from \(K\to\mu\nu\)[21] and \(K\to\pi\ell\nu\)[22] decays. The significance of these deviations crucially depends on the radiative (quantum) corrections to beta decays [20, 23, 24] and on the treatment of the tensions between kaon [25, 26, 27] and tau decays [28]. In summary, both tensions are slightly below the \(3\sigma\) level. ### Lepton flavour universality violation in tau decays (\(\tau\to\mu\nu\nu\)) Because the interactions of the \(W\) boson with leptons and neutrinos are flavour universal, the (leading terms of the) amplitudes for \(\ell\to\ell^{\prime}\nu\nu\) are predicted to be the same for all lepton generations. However, combining the ratios of branching ratios \(\mathrm{Br}(\tau\to\mu(e)\nu\nu)/\mathrm{Br}(\mu\to e\nu\nu)\) and \(\mathrm{Br}(\tau\to\mu\nu\nu)/\mathrm{Br}(\tau\to e\nu\nu)\)[28, 29] leads to an \(\approx 2\sigma\) preference for constructive new physics (NP) at the per-mille level in \(\tau\to\mu\nu\bar{\nu}\)[30]. While here the SM prediction is quite clean, the significance is limited by statistics and by the experimental difficulty of reconstructing the event due to missing energy. ### Charged current tauonic \(B\) decays (\(b\to c\tau\nu\)) These charged current transitions, mediated at tree level by a \(W\) boson in the SM (see left diagram in Fig. 2 c)), have sizable branching ratios (up to \(\mathcal{O}(10^{-2})\)). With light leptons, they are used to extract the CKM element \(V_{cb}\), and the result is consistent with the global CKM fit [31, 32]. However, the ratios (of branching ratios) \(R(D^{(*)})=\mathrm{Br}(B\to D^{(*)}\tau\nu)/\mathrm{Br}(B\to D^{(*)}\ell\nu)\) are measured to be bigger than the SM predictions by approximately 20%, resulting in a \(\gtrapprox 3\sigma\) significance [28] for NP related to tau leptons. ### Flavour changing neutral current semi-leptonic \(B\) decays (\(b\to s\ell^{+}\ell^{-}\)) Like all flavour changing neutral current processes, \(b\to s\ell^{+}\ell^{-}\) transitions are loop suppressed (i.e. only induced via quantum fluctuations) within the SM (see left diagram in Fig. 2 d) for an example), since only the couplings of the charged \(W\) boson can violate quark flavour. This results in small branching ratios, up to a few times \(10^{-6}\). While the previous hints[33] for lepton flavour universality violation in the ratios \(R(K^{(*)})={\rm Br}(B\to K^{(*)}\mu^{+}\mu^{-})/{\rm Br}(B\to K^{(*)}e^{+}e^{-})\) were not confirmed[34] and \(B_{s}\to\mu^{+}\mu^{-}\)[35, 36] now agrees quite well with the SM prediction[37, 38], there are several \(b\to s\mu^{+}\mu^{-}\) observables that significantly deviate from the SM predictions. These include the angular observable \(P_{5}^{\prime}\)[39, 40], the total branching ratio \({\rm Br}(B\to K\mu^{+}\mu^{-})\)[41, 42], \({\rm Br}(B_{s}\to\phi\mu^{+}\mu^{-})\)[43, 44] and also semi-inclusive observables[45]. 
As a result, global fits find a preference for NP at the \(5\sigma\) level[46, 47, 48]. Recently, the Belle-II collaboration reported an excess in the closely related \(B\to K^{*}\nu\nu\) decay[49]. ### Asymmetries in \(Z\) decays (\(Z\to b\bar{b}\)) Since the \(Z\) boson couplings to fermions are chiral (see left diagram in Fig. 2 e)), i.e. different for left-handed and right-handed ones, one can measure asymmetries in its decays (with respect to the beam direction). The measurement by the LEP collaboration of the forward-backward asymmetry in \(Z\to b\bar{b}\)[50] deviates from its SM prediction by \(\approx 2\sigma\). Similarly, there is an \(\approx 2\sigma\) tension in the lepton asymmetry parameter \(A_{\ell}\)[51], mainly due to the electron channel. The suggested deviations from the SM couplings are at the sub-permille level. ### \(W\) boson mass (\(m_{W}\)) In general, three parameters are sufficient to completely parameterise (at tree level) the electroweak sector of the SM. They are usually taken to be the Fermi constant \(G_{F}\), the fine-structure constant \(\alpha\) and the \(Z\) boson mass, since these are measured most precisely. In this input scheme, the \(W\) mass is not a free parameter but can be calculated as a function of \(G_{F}\), \(\alpha\) and \(m_{Z}\) (and the Higgs and top masses, which enter at the loop level). The CDF II result[52] shows a very strong \(7\sigma\) tension with the SM prediction. However, LHC[53, 54, 55, 56] and LEP results[57] are closer to the SM and thus in tension with the CDF II value. Therefore, employing a conservative error estimate that inflates the uncertainty, one finds a tension of \(3.7\,\sigma\)[51].3 Footnote 3: This average does not include the latest ATLAS result[68] superseding Ref.[53], which however has a small impact on the fit. ### LHC Multi-Lepton Anomalies (\(e\mu(+b)\)) The "multi-lepton anomalies" are LHC processes with two or more leptons in the final state (see Ref.[59] for a review), with and without \(b\)-jets4, where statistically large disagreements with the SM predictions have been observed[60, 61, 62, 63, 64, 65]. The excesses can be summarized as follows: Footnote 4: Since quarks and gluons are confined at low energies, they do not appear as free particles in a detector but rather hadronize and give signatures called jets (\(j\)). A \(b\)-jet is such an experimental signature containing a bottom quark. \begin{tabular}{c|c|c|c} Final state & Characteristics & SM backgrounds & Significance \\ \hline \(\ell^{+}\ell^{-}\)+(\(b\)-jets)[62, 65, 66] & \(m_{\ell\ell}<100\,{\rm GeV},\,(1b,2b)\) & \(t\bar{t},Wt\) & \(>5\sigma\) \\ \(\ell^{+}\ell^{-}\)+(no jet)[61, 67] & \(m_{\ell\ell}<100\,{\rm GeV}\) & \(W^{+}W^{-}\) & \(\approx 3\sigma\) \\ \(\ell^{\pm}\ell^{\pm}\), \(3\ell\) (\(b\)-jets)[68, 69, 64] & Moderate \(H_{T}\) & \(t\bar{t}W^{\pm},t\bar{t}t\bar{t}\) & \(>3\sigma\) \\ \(\ell^{\pm}\ell^{\pm},3\ell\) (no \(b\)-jet)[63, 70, 71] & In association with \(h\) & \(W^{\pm}h(125),WWW\) & \(\gtrapprox 4\sigma\) \\ \(Z(\to\ell\ell)\ell\) (no \(b\)-jet)[62, 72] & \(p_{T}^{Z}<100\,{\rm GeV}\) & \(ZW^{\pm}\) & \(>3\sigma\) \\ \end{tabular} The fact that the leptons in these channels are non-resonant, i.e. no peak in the invariant mass spectrum is observed, shows that, at least within the SM, they are related to leptonic \(W\) decays. A statistically particularly significant disagreement is observed in differential lepton distributions in \(t\bar{t}\) measurements[62, 65] (see left diagram in Fig. 2 f)). 
For all SM simulations used, ATLAS finds such a high \(\chi^{2}\) value that they conclude[66]: "No model (SM simulation) can describe all measured distributions within their uncertainties." Because excesses also appear in \(WW\) signatures without jets (where SM \(t\bar{t}\) production is strongly suppressed) and in \(Wh/3W\), \(t\bar{t}W\), \(t\bar{t}t\bar{t}\) and \(ZW\) production with low \(Z\) boson transverse momentum (\(p_{T}^{Z}\)), this indicates that the excess may not be due to the mismodelling of \(t\bar{t}\) production and decay. In addition, there is a hint for a resonant \(t\bar{t}\) excess at around \(400\,{\rm GeV}\)[73]. ### Higgs-like signals (\(\mathbf{y}=\gamma\gamma,\tau\tau,WW,ZZ\)) New particles that are directly produced at colliders show up as bumps in the otherwise continuous invariant mass spectrum of the corresponding decay products. For scalar bosons, di-photon distributions are very sensitive: even though they have in general small rates because they are loop-suppressed, the experimental signature is very clear. In fact, there are several hints for di-photon resonances at \(95\,{\rm GeV}\)[74, 75], \(\approx 152\,{\rm GeV}\)[76] and also \(\approx 680\,{\rm GeV}\)[77, 78]. The hint at \(95\,{\rm GeV}\) is supported by a di-tau excess reported by CMS[79], a \(ZH\) signal (with \(H\to b\bar{b}\)) by LEP[80] as well as the \(WW\) channel[61, 81]. The \(\gamma\gamma\) (plus missing energy) hint at \(152\,{\rm GeV}\) is supported by several signals in associated production[76, 82], including \(WW+\)missing energy[81]. Combining all channels, global significances of \(3.8\sigma\) and \(4.9\sigma\) are found for \(95\,{\rm GeV}\) and \(152\,{\rm GeV}\), respectively, if, for the latter, a simplified model with \(pp\to H\to SS^{*}\) is assumed[83]. Figure 2: Feynman diagrams showing some of the processes where anomalies are observed. The left diagrams depict the SM process, while the right-handed ones show a possible NP explanation. \(a)\) Schwinger term contribution to \(a_{\mu}\) and LQ explanation \(b)\) beta decay in the SM and modification via a vector-like quark \(c)\)\(W\) contribution to \(R(D^{(*)})\) and LQ effect \(d)\)\(W\) box contribution to \(b\to s\ell^{+}\ell^{-}\) in the SM and \(Z^{\prime}\) effect \(e)\)\(Z\to b\bar{b}\) and its modification via vector-like quarks \(f)\) top pair production and decay in the SM and new Higgses "polluting" the measurement \(g)\) di-di-jet production in the SM and NP contribution via DQs \(h)\)\(pp\to e^{+}e^{-}\) in the SM and LQ contribution. ### (di-)di-jet resonances (\(jj(-jj)\)) A particle decaying into two quarks (or two gluons) results in a di-jet event at the LHC. ATLAS[84] observed a weaker limit than expected in resonant di-jet searches slightly below 1 TeV. Furthermore, CMS[85] found hints for the (non-resonant) pair production of di-jet resonances with a mass of \(\approx 950\) GeV with a local (global) significance of 3.6\(\sigma\) (2.5\(\sigma\)). The compatibility of these masses suggests that both excesses might be due to the same new particle \(X\), once directly (resonantly) produced in proton-proton collisions (\(pp\!\to\!X\!\to jj\)) and once pair produced via a new state \(Y\) (\(pp\!\to\!Y^{(*)}\!\to\!XX\!\to jj(jj)\)). In fact, Ref.[86] finds a global 3.2\(\sigma\) significance at \(m_{\mathrm{Y}}\approx 3.6\) TeV. 
In the latest analysis, ATLAS finds a di-di-jet excesses[87] at \(\approx 3.3\) TeV with a di-jet mass of 850 GeV which could be compatible with the CMS one once the quite poor jet energy resolution is taken into account. ### Non-resonant di-electrons (\(q\bar{q}\to e^{+}e^{-}\)) If the mass of a particle exceeds the energy reach of a collider, its impact can still be seen by looking at the high-energetic end of the spectrum of a distribution where such effects are most relevant because they possess a relative enhancement w.r.t. the SM (see left-diagram in Fig. 2 h)). In such a non-resonant search for high-energetic oppositely charged leptons, CMS and ATLAS observe more electrons than expected in the SM[88, 89]. Because the number of observed muons is compatible with the SM prediction, this is a sign of lepton flavour universality violation and the ratio of muons over electrons provided by CMS has the advantage of reduced theoretical uncertainties[90]. Performing a model-independent fit, one finds the NP at a scale of 10 TeV with order one couplings can improve over the SM hypothesis by \(\approx 3\sigma\)[91]. ### Summary The anomalies observed in particle physics are summarized in Fig. 3, together with their corresponding energy scale, showing that they range over at least five orders of magnitude. While one cannot expect that all anomalies will be confirmed, it is also statistically unlikely that all will turn out as flukes. Therefore, it is important to investigate their implications for NP in order to assess possible correlations among them and identify signatures for future verification (or falsification). ## 3 New Physics Explanations For a consistent renormalizable extension of the SM, only scalars bosons (spin 0), fermions (spin 1/2) and vectors bosons (spin 1) are at our disposal, provided that in the latter case a Higgs-like mechanism of spontaneous symmetry-breaking exists to give them masses. Here we will focus on the following extensions of the SM: * Leptoquarks (LQs): Scalar or vector bosons that carry color and couple quarks directly to leptons[92, 93]. These particles were first proposed in the context of quark-lepton unification at high energies, namely the Pati-Salam model[94] and Grand Unified Theories (GUTs)[95]. Furthermore, in the R-parity violating MSSM (see e.g. Ref.[96] for a review) the supersymmetric partners of quarks can have the properties of LQs. * Diquarks (DQs): Scalar bosons which are either triplets or sextets of \(SU(3)_{c}\) and couple to a quark and an anti-quark. They are predicted by GUTs based on the \(E_{6}\) symmetry group[97] and appear in the R-parity violating MSSM. * \(Z^{\prime}\) bosons: Neutral heavy vector bosons. They can be singlets under \(SU(2)_{L}\) but also the neutral component of an \(SU(2)_{L}\) multiplet. These particles can be resonances of the SM \(Z\), e.g. Kaluza-Klein excitations of the SM \(W\) in composite[98] or extra-dimensional models[99], or originate from an abelian symmetry (like \(B-L\)[94]) or gauged flavour symmetries[100]. * \(W^{\prime}\) bosons: Electrically charged but QCD neutral vector particles. They can have similar origins as \(Z^{\prime}\) bosons but also come for a left-right symmetry[101]. Figure 3: Compilation of various anomalies ordered according to the corresponding energy scale. 
* Vector-like Quarks (VLQs): For vector-like fermions, the left-handed and right-handed fields have the same quantum numbers under the SM gauge group (unlike SM fermions) and can thus have masses independent of EW symmetry breaking, meaning that they can be arbitrarily heavy. They appear in GUTs [102], as resonances of SM fermions in composite or extra-dimensional models [103] and as the supersymmetric partners of SM vectors and scalars [104]. * Vector-like Leptons (VLLs): These particles can have similar origins as VLQs. In addition, they are involved in the type I [105, 106] and type III [107] seesaw mechanisms used for giving masses to the light active neutrinos as required by neutrino oscillations. * New scalars (\(S\)): Scalars could be supersymmetric partners of SM fermions [104], but also scalar fields in different representations of the SM gauge group can be added. Most commonly, one adds a copy of the SM Higgs, an \(SU(2)_{L}\) doublet with hypercharge 0, leading to a two-Higgs doublet model [108, 109]. Note that we do not include coloured scalars with the properties of DQs or LQs here. ### Anomalous magnetic moment of the muon (\(a_{\mu}\)) Since the anomalous magnetic moment of the electron is measured and predicted much more precisely than \(a_{\mu}\), the resulting bounds on NP are stringent [110, 111, 112]. Therefore, the effect in \(a_{\ell}\), unlike the famous Schwinger term, must violate lepton flavour universality [113]. Furthermore, because the deviation from the SM prediction is as large as its EW contribution, new physics must either be quite light, e.g. a light \(Z^{\prime}\) boson coupling only to muons and tau leptons [114], or, if it is heavy (at the TeV scale), it must possess an enhancement factor (see Ref. [115] for a recent overview on NP in \(a_{\mu}\)). This can be provided via the mechanism of chiral enhancement, meaning that the chirality flip does not originate from the small muon Yukawa coupling but from a larger coupling of other particles to the SM Higgs. In the MSSM, this factor is \(\tan\beta\), the ratio of the vacuum expectation values of the two Higgs fields [116, 117], but also models with generic new scalars and fermions can explain \(a_{\mu}\)[118, 119, 120, 121]. Furthermore, there are two scalar LQs (\(S_{1}\) and \(S_{2}\)) that address \(a_{\mu}\) via a \(m_{t}/m_{\mu}\) enhancement [113, 122, 123] (see Fig. 2 a)). This leads to interesting predictions for \(h\to\mu^{+}\mu^{-}\)[124] and \(Z\to\mu^{+}\mu^{-}\)[125] that can be measured at future colliders. ### Cabibbo Angle Anomaly (CAA) A sub-permille effect suffices to explain the CAA. The disagreement between the two determinations of \(V_{us}\) can only be explained via a right-handed quark current, pointing towards vector-like quarks (see Fig. 2 b)). The deficit in first-row and first-column CKM unitarity can be explained via left-handed (i.e. SM-like) NP in beta decays. An effect either directly in beta decays or in the Fermi constant (determined from muon decay, i.e. \(\mu\to e\nu\nu\)), which is needed to extract \(V_{ud}\), is possible. Therefore, we have four options [126]: 1) a direct (tree-level) modification of beta decays; 2) a direct (tree-level) modification of muon decay; 3) a modified \(W\)-\(\mu\)-\(\nu\) coupling entering muon decay; 4) a modified \(W\)-\(u\)-\(d\) coupling entering beta decays (the effect of a modified \(W\)-\(u\)-\(\nu\) coupling drops out). 
Option 1) could in principle be realized by a \(W^{\prime}\)[127] or an LQ [128]; however, in the latter case, stringent bounds from other flavour observables arise. Possibility 2) can be achieved by adding a singly charged \(SU(2)_{L}\) singlet scalar [129], a \(W^{\prime}\)[127] or a \(Z^{\prime}\) boson with flavour-violating couplings [130]. Options 3) and 4) can be achieved by vector-like leptons [131, 132] and vector-like quarks [133, 134, 135, 136], respectively. However, without a compensating effect [137], explaining the CAA via a modification of \(G_{F}\) increases the tension in the \(W\) mass. ### Lepton flavour universality violation in tau decays (\(\tau\to\mu\nu\nu\)) Explanations of \(\tau\to\mu\nu\nu\) are very similar to the ones of the CAA via a modified Fermi constant (with \(\tau\to\mu\nu\nu\) taking the role of \(\mu\to e\nu\nu\)). In addition to the options discussed above, a \(Z^{\prime}\) boson coupling to muons and tau leptons can generate the desired effect via box diagrams [138]. ### Charged current tauonic \(B\) decays (\(b\to c\ell\nu\)) Because this transition occurs at tree level in the SM, a tree-level NP effect is also necessary to obtain the required effect of \(O(10)\)% w.r.t. the SM (assuming heavy NP with perturbative couplings). Therefore, charged Higgses [139, 140, 141], \(W^{\prime}\) bosons [142] (with or without right-handed neutrinos) or LQs [143, 144, 145, 146, 147] are candidates. While there is a small region of parameter space left that can account for \(R(D^{(*)})\) with charged Higgses [146, 147], LHC searches constrain \(W^{\prime}\) solutions [148, 142], leaving LQs as probably the best solution (see Fig. 2 c)). However, also for LQs, constraints from \(B_{s}-\bar{B}_{s}\) mixing, \(B\to K^{(*)}\nu\nu\) and LHC searches must be respected, such that the \(SU(2)_{L}\) singlet vector LQ [149, 150, 151, 152, 153, 154, 155] or the singlet-triplet model [156, 157, 158] are particularly interesting. ### Flavour changing neutral current semi-leptonic \(B\) decays (\(b\to s\ell^{+}\ell^{-}\)) \(R(K^{(*)})\) requires dominantly lepton flavour universal NP, and \(B_{s}\to\mu\mu\) constrains the axial couplings to leptons. Such a NP effect at the required level of \(O(20\%)\) (w.r.t. the SM) can be most naturally obtained via [159]: 1) A \(Z^{\prime}\) boson with lepton flavour universal but flavour-violating couplings to bottom and strange quarks [160, 161] (see Fig. 2 d)). However, due to the bounds from \(B_{s}-\bar{B}_{s}\) mixing [162], the LHC (see e.g. [163]), and LEP [164], a full explanation requires some tuning in \(B_{s}-\bar{B}_{s}\) mixing by a right-handed \(sb\) coupling [165] or a cancellation with Higgs contributions [166]. Furthermore, \(K^{0}-\bar{K}^{0}\) and \(D^{0}-\bar{D}^{0}\) mixing require an approximate global \(U(2)\) flavour symmetry [167]. 2) A \(\tau\) or charm loop effect via an off-shell photon penguin [168]. The LQ representations which can give such a tau loop are the \(S_{2}\) LQ [169], the \(U_{1}\) LQ [170] or the combination of \(S_{1}+S_{3}\)[157]. 
The 2HDM with generic flavour structure [171] can generate the desired effect via a charm loop (\(C_{9}^{U}\)). ### Higgs-like signals (\(\gamma\gamma,\tau\tau,WW,ZZ\)) For the di-photon excesses discussed above, a significant part of the signal is in associated production. In fact, not only are the most significant excesses related to missing energy, but the \(WW\) signal can also be explained for \(m_{S}\approx 150\) GeV, i.e. the decay chain \(pp\to H\to(S\to\gamma\gamma,WW)+(S^{\prime}\to\text{invisible})\) describes the data well. ### (di-)di-jet resonances (\(jj(-jj)\)) Here, two options seem most plausible[86]: scalar DQs (see Fig. 2 g)) or new massive gluons. Concerning the latter, a specific example is based on an \(SU(3)_{1}\times SU(3)_{2}\times SU(3)_{3}\) gauge group, broken down to \(SU(3)\) colour via two bi-triplets. ### Non-resonant di-electrons (\(q\bar{q}\to e^{+}e^{-}\)) As this analysis involves non-resonant electrons that do not originate from the on-shell production of a new particle, NP must be heavier than the energy scale of the LHC (or enter in the \(t\)-channel). This can be achieved with NP at the 10 TeV scale with order-one couplings to first-generation quarks and electrons[136]. Therefore, \(Z^{\prime}\) bosons[187] or LQs[128] (see Fig. 2 h)) have the potential to explain the CMS measurement. ### Summary and connections The anomalies discussed above, together with the extensions of the SM to which they point, are shown in Fig. 4. One can see that many extensions point towards new Higgs-like scalars. In particular, the agreement between the mass of the scalar suggested by the multi-lepton anomalies and the \(\gamma\gamma\) excess around 152 GeV is striking. LQs are also interesting candidates and, in particular, allow for a combined and correlated explanation of \(b\to c\tau\nu\) and \(b\to s\ell^{+}\ell^{-}\) via the tau loop[188, 189]. Finally, \(Z\to b\bar{b}\), \(m_{W}\) and the CAA could be explained by VLQs. Of course, in a UV-complete model, many more possible connections exist, as can be seen from Fig. 4, offering interesting open research directions. ## 4 Comparison, conclusions and outlook Let us now compare the anomalies concerning their experimental and theoretical features:6 Footnote 6: Please note that even though we try to be objective here, the impact of personal opinion is unavoidable. 
* Standard Model prediction plagued by hadronic uncertainties - Tensions within hadronic light-by-light measurements and with lattice QCD - Quite large NP effect needed; model building is challenging * Only one (competitive) measurement of \(K\to\mu\nu\) available - Beta decays need hadronic theory input to extract \(V_{ud}\) * Challenging measurement - Small statistical significance * Sensitive to form factors and other hadronic input * Difficult measurement - Limited significance - Large effect needed; challenging model building * Limited significance * Tensions among the measurements * SM prediction often difficult - Complex SM extension needed * Possible look-elsewhere effect * Poor mass resolution - Challenging theory explanation * Limited statistics - Electrons are difficult LHC signatures The anomalies are also compared in Table 1 w.r.t. several criteria which try to answer the following questions: * Experimental signature: Is the experimental environment clean? Is the signal well separated from background? * Experimental consistency: Do multiple independent measurements exist? Are they in agreement with each other? * Standard Model prediction: How accurate and reliable is the SM prediction? Are the conflicting results? * Statistical significance: How sizable are the deviations from the SM predictions? * NP explanation: Are there models that can naturally account for the excess? Are they in conflict with other observables? * Consistent connection: Are the connections to other anomalies via the same new particle or model? How direct is this connection? Here, + reflects a positive assessment, - a negative one and \(0\) means neutral, i.e. positive and negative aspects compensate to a good approximation. Finally, let us discuss the future implications of these anomalies. * \(a_{\mu}\): While the experimental situation concerning the direct measurement seems settled, there will be updates on HVP e.g. by Belle-II [190] and MUonE [191] aims at an independent determination with a completely different method. Furthermore, lattice QCD simulations will deliver improved results within the next years. * CAA: Here improved measurements and theory calculations of beta decays will be available within the next years [192]. Furthermore, NA62 could measure \((K\to\mu\nu)/(K\to\pi\mu\nu)\) to asses the possibility of right-handed currents [27] and the PIONEER experiment will measure pion beta decay [193] to determine \(V_{us}\), which is theoretically accurately predicted. * \(\tau\to\mu\nu\nu\): Because a clear experimental environment is necessary, this measurement can be done at the Belle-II [190] or at the FCC-ee [194] or CEPC [195] using tau leptons from \(Z\) decays. * \(b\to c\ell\nu\): \(R(D^{(*)})\) and related ratios can be measured at Belle-II, by LHCb with run 3 data and the parked \(B\) data from CMS [196]. * \(b\to s\ell^{+}\ell^{-}\): The best hope to solve the bottleneck concerning the SM predictions are lattice calculations in combinations with other non-perturbative methods like dispersion relations [197]. * \(m_{W}\): Since with increasing luminosity (like at the high-luminosity LHC) the measurement of the \(W\) mass at a hadron collider becomes even more difficult, very precise results would be possible with a future electron-positron collider like ILC [198], CLIC [199], FCC-ee or CEPC. However, also LHCb could help to solve the puzzle since it does not use the full LHC luminosity. 
* \(e\mu(+b)\): Full next-to-next-to-leading order calculation [200] of all processes including off-shell effects [201] will help to determine the SM background. On the experimental side, already run 3 of the LHC should provide enough events such that statistics is not the limiting factor anymore. * \(y\): Given the current strength of the excesses, LHC run 3, but at the latest the high-luminosity LHC [202], should suffice to verify or falsify them. However, to fully explore their properties, a Higgs factory would be desirable. * \(jj(-jj)\) & \(q\bar{q}\to e^{+}e^{-}\): LHC run 3 should suffice to determine the validity of these excesses. Clearly, particle physics is currently a very exciting area of research. While the SM has been consolidated over the last five decades, hints of new particles and new interactions are emerging. Despite originating from very different experiments and ranging over five orders of magnitude in energy, the task is to find combined explanations to verify or falsify their predictions in the future. However, one has to take into account that most likely not all anomalies will be confirmed by ongoing and forthcoming experimental efforts. However, already establishing one of these hints beyond a reasonable doubt would lead particle physics into a new era, the BSM age. \begin{table} \begin{tabular}{c|c c c c c c} & Exp. & Exp. & SM & statistical & NP & consistent \\ & signature & consistency & prediction & significance & explanation & connection \\ \hline \(a_{\mu}\) & **+** & **–** & **+** & **–** & **–** \\ CAA & **+** & \(0\) & \(0\) & **–** & **+** & **+** \\ \(\tau\to\mu\nu\nu\) & **–** & \(0\) & **+** & **–** & **+** & **+** \\ \(b\to s\ell\ell\) & **+** & **+** & \(0\) & **+** & \(0\) & **+** \\ \(b\to c\tau\nu\) & **–** & **+** & **+** & \(0\) & **–** & **+** \\ \(Z\to b\bar{b}\) & **+** & \(0\) & **+** & **–** & \(0\) & \(0\) \\ \(m_{W}\) & \(0\) & **–** & **+** & **+** & **+** & **+** \\ \(e\mu\,(+b)\) & \(0\) & **+** & \(0\) & **+** & \(0\) & **+** \\ \(y\) & **+** & **+** & \(0\) & **+** & \(0\) & **+** \\ \(jj(jj)\) & \(0\) & **+** & **+** & \(0\) & \(0\) & **–** \\ \(pp\to ee\) & \(0\) & **+** & **+** & **–** & \(0\) & **–** \\ \end{tabular} \end{table} Table 1: Comparison of the different anomalies in particle physics in terms of various features. See main text for details.
2309.12969
Detect Everything with Few Examples
Few-shot object detection aims at detecting novel categories given only a few example images. It is a basic skill for a robot to perform tasks in open environments. Recent methods focus on finetuning strategies, with complicated procedures that prohibit a wider application. In this paper, we introduce DE-ViT, a few-shot object detector without the need for finetuning. DE-ViT's novel architecture is based on a new region-propagation mechanism for localization. The propagated region masks are transformed into bounding boxes through a learnable spatial integral layer. Instead of training prototype classifiers, we propose to use prototypes to project ViT features into a subspace that is robust to overfitting on base classes. We evaluate DE-ViT on few-shot, and one-shot object detection benchmarks with Pascal VOC, COCO, and LVIS. DE-ViT establishes new state-of-the-art results on all benchmarks. Notably, for COCO, DE-ViT surpasses the few-shot SoTA by 15 mAP on 10-shot and 7.2 mAP on 30-shot and one-shot SoTA by 2.8 AP50. For LVIS, DE-ViT outperforms few-shot SoTA by 17 box APr. Further, we evaluate DE-ViT with a real robot by building a pick-and-place system for sorting novel objects based on example images. The videos of our robot demonstrations, the source code and the models of DE-ViT can be found at https://mlzxy.github.io/devit.
Xinyu Zhang, Yuhan Liu, Yuting Wang, Abdeslam Boularias
2023-09-22T16:07:16Z
http://arxiv.org/abs/2309.12969v4
# Detect Every Thing with Few Examples ###### Abstract Few-shot object detection aims at detecting novel categories given a few example images. Recent methods focus on finetuning-based strategies to learn features representing novel classes, whose complicated procedures prohibit a wider application. In this paper, we introduce DE-ViT, a few-shot object detector without the need for finetuning. We transform the multi-class classification into multiple binary classifications, so a binary classifier can be trained and used for all classes without finetuning. We propose a novel propagation-based localization mechanism upon frozen DINov2. We evaluate DE-ViT on few-shot, and one-shot object detection benchmarks with COCO and LVIS. For COCO, DE-ViT surpasses the few-shot SoTA by 15 mAP on 10-shot and 7.2 mAP on 30-shot and one-shot SoTA by 2.8 AP50. For LVIS, DE-ViT outperforms few-shot SoTA by 20 box AP. When compared to open-vocabulary detectors, DE-ViT outperforms the COCO SoTA by 6.9 AP50 and achieves 50 AP50 in novel classes, and surpasses LVIS SoTA by 1.5 mask APr and reaches 34.3 mask APr. Code is available at [https://github.com/mlzxy/devit](https://github.com/mlzxy/devit). ## 1 Introduction Object recognition and localization are two of the core tasks in computer vision. _Few-shot object detection_ provides a promising paradigm for generic object detector by representing novel categories with a set of support images (Antonelli et al., 2022). However, despite the promising practicality in principle, there still exist fundamental limitations in previous work on few-shot object detection. First, most recent few-shot methods rely on finetuning to adapt to novel classes (Kohler et al., 2023), a complicated and tedious procedure that restricts the practical use of these methods (Zhao et al., 2022). Second, the accuracy of existing few-shot methods does not keep up with other alternative solutions. Specifically, few-shot detectors fall behind open-vocabulary zero-shot detectors, especially in challenging datasets such as COCO and LVIS (Ma et al., 2023; Wu et al., 2023). To alleviate the above limitations, we observe that most recent work represents novel categories with features produced from few-shot training (Kohler et al., 2023). However, without representation learning, few-shot training may not produce strong enough features. This could hinder few-shot performance. Motivated by this, we propose to build a few-shot detector on top of DINov2 (Oquab et al., 2023), a strong pretrained vision model. To avoid finetuning over novel classes, we transform the multi-class classification into multiple binary classifications. Thus, a single binary classifier can be trained and used for all classes without any finetuning on novel classes. To accurately localize objects from frozen DINov2 features, we design a novel propagation-based mechanism that localizes each object by propagating a region on the similarity map between DINov2 features and class prototypes. Prototypes are class representatives built from support image features. With these proposed techniques, we introduce DE-ViT, a few-shot object detector that uses example images to detect novel objects without the need for finetuning. The overall architecture is illustrated in Fig. 2. A demonstration of detecting YCB objects is shown in Fig. 1. We evaluate DE-ViT on few-shot, and one-shot object detection benchmarks with COCO (Lin et al., 2014) and LVIS (Gupta et al., 2019) datasets. 
Our method establishes new state-of-the-art (SoTA) results on all benchmarks. For COCO, DE-ViT surpasses the few-shot SoTA LVC (Kaul et al., 2022) by 15 mAP on 10-shot and 7.2 mAP on 30-shot and one-shot SoTA BHRL (Yang et al., 2022) by 2.8 AP50. For LVIS, which has been regarded as a highly challenging dataset for few-shot object detection (Wang et al., 2020), DE-ViT outperforms the SoTA DiGeo (Ma et al., 2023) by 20 box APr. When compared to open-vocabulary detectors, DE-ViT outperforms the COCO SoTA CORA+ (Wu et al., 2023) by 6.9 AP50 and reaches 50 AP50 and LVIS SoTA Ro-ViT (Kim et al., 2023) by 1.5 mask APr, reaching 34.3 mask AP in novel categories. Notably, our method only trains on the corresponding dataset, i.e., COCO or LVIS, without leveraging extra datasets for training, or distillation. Our contributions are summarized as follows: (1) To the best of our knowledge, we are the first to incorporate DINOV2 into solving the few-shot object detection problem. (2) Built upon the techniques motivated above, we introduce DE-ViT, which detects novel objects without any finetuning. (3) We demonstrate that DE-ViT establishes state-of-the-art performance over few-shot and one-shot benchmarks on COCO and LVIS, and outperforms open-vocabulary detectors as well. ## 2 Related work **Few-shot Object Detection (FSOD)** aims at detecting objects of novel classes by utilizing a few support images from novel classes as training samples (Kohler et al., 2023). Existing approaches can be broadly classified into finetuning-based (Wang et al., 2020; Fan et al., 2021; Sun et al., 2021; Xiao et al., 2022; Guirguis et al., 2023) and meta-learning-based strategies (Yan et al., 2019; Kang et al., 2019). Finetuning-based methods, despite their prevalence, suffer from a large accuracy gap between the base and the novel classes, as well as practical limitations due to redundant multi-stage procedures (Zhao et al., 2022). Meta-learning methods avoid finetuning by online adaptation, but exhibit inferior accuracy (Kohler et al., 2023). One-shot Object Detection (OSOD) is an extreme case Figure 1: Demonstration of the proposed method on YCB objects (Calli et al., 2015). DE-ViT with ViT-L/14 is used for prediction. Note that our model is trained on only the base categories of LVIS. Example images of YCB objects are provided only during inference to represent novel categories. Figure 2: Overview of the proposed method. Our approach uses DINOV2 ViT to encode the image into a feature map, from which proposal features are extracted using ROIAlign. Proposals are generated via an off-the-shelf RPN. Prototype projection transforms proposal features into similarity maps based on prototypes derived from ViT features of support images. Multi-class classification of proposals is recast as a series of one-vs-rest binary classification tasks without the need for costly per-class inference. Refined localization is accomplished by our novel region propagation module. Both classification and refined localization rely exclusively on the computed similarity maps. of FSOD with only one exemplar per novel class and simplifies the setting to single-class detection without finetuning (Yang et al., 2022). Prior approaches primarily focus on designing interaction mechanisms of dense spatial features between support and target images (Antonelli et al., 2022). However, the OSOD formulation restricts the use of additional support, if available, and requires a separate inference per class. 
Compared with existing work, our method does not use finetuning or per-class inference, and only utilizes class-level prototypes without dense feature interactions, while outperforming SoTA methods in both few-shot and one-shot settings. **Open-Vocabulary Object Detection (OVD)** aims at detecting objects of novel classes with only their category names and without support images (Zareian et al., 2021; Yu et al., 2023). However, despite this more challenging zero-shot setting, with the recent advances in vision-language models (Radford et al., 2021), open-vocabulary detectors outperform few-shot ones on COCO and LVIS (Ma et al., 2023; Wu et al., 2023). Our few-shot detector outperforms the SoTAs on both the few-shot and open-vocabulary benchmarks, which has not been achieved before by existing few-shot work. ## 3 Method The major challenge in both FSOD and OSOD is to generalize to classes that are unseen during training. However, despite numerous attempts to address this issue, _e.g._, margin-based regularization (Ma et al., 2023), there persists a considerable accuracy gap between base and novel classes. This disparity indicates that a network trained with base classes would inevitably fixate on patterns that are only present among a few base classes, which does not align with the objective of detecting arbitrary classes. To prevent overfitting base class patterns, we propose to use the maps of similarities between the features and prototypes as the detector input. Thus, the network can only make decisions based on the relevant information projected by the class-representative features. Specifically, let \(\mathbf{f}\in\mathbb{R}^{H\times W\times D}\) be the features of an image where \(D\) represents the channel dimension and \((H,W)\) represent spatial dimensions, and let \(\mathbf{p}\in\mathbb{R}^{(C+B)\times D}\) be the prototypes where \(C\) is the number of classes and \(B\) is the number of class-agnostic background prototypes. The similarity map \(\mathbf{s}\in\mathbb{R}^{H\times W\times(C+B)}\) is calculated using Eq. 1. This procedure is referred to in Fig. 3 as prototype projection. \[\mathbf{s}=\mathbf{f}\cdot\mathbf{p}^{\top} \tag{1}\] We adopt a standard two-stage object detection framework, _e.g._, Mask R-CNN (He et al., 2017), which detects objects through RPN and RCNN stages. Existing literature has demonstrated that class-agnostic RPN proposals generalize well to novel classes (Gu et al., 2021). We use off-the-shelf RPNs to propose object regions and extract proposal features from DINOv2 ViT backbones. Similarity maps are computed between proposal features and prototypes, which are then fed to our architectures for classification and refined localization. Prototypes are constructed offline from support images with the procedures detailed in Sec. 3.3. The ViT backbones, prototypes, and the resulting similarity maps are kept frozen during detector training.
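For concreteness, the prototype projection in Eq. 1 amounts to a single batched dot product between proposal features and the prototype matrix. The following is a minimal sketch (assuming PyTorch; tensor names, shapes, and the random stand-in values are illustrative and not taken from the authors' code):

```python
import torch

H, W, D = 16, 16, 768      # spatial size of a proposal feature map, ViT channel dimension
C, B = 80, 10              # number of classes and class-agnostic background prototypes

f = torch.randn(H, W, D)   # frozen ViT features of one proposal (random stand-in)
p = torch.randn(C + B, D)  # prototypes built offline from support images (random stand-in)

# Eq. 1: s[i, j, k] = <f[i, j], p[k]>, i.e. the similarity of every spatial
# location to every class and background prototype.
s = torch.einsum("hwd,kd->hwk", f, p)
print(s.shape)             # torch.Size([16, 16, 90]) = H x W x (C + B)
```

Only this similarity map, and never the raw ViT features, is passed on to the classification and localization heads.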
### Classification with an Unknown Number of Classes Unlike supervised learning, the number of classes in FSOD is indeterminate. The common strategy in FSOD is to extend the final linear layer with finetuning. In contrast with these existing approaches, we transform the multi-class classification of \(C\) classes into \(C\) one-vs-rest binary classification tasks. In doing so, a single binary classifier could be trained and used for all classes without finetuning. However, applying a binary classifier for multi-class classification requires separate inference for each class (Zang et al., 2022). To avoid the costly per-class inference, we apply a class pre-selection to only predict the probabilities for a small selection of classes. Let \(\bar{\mathbf{f}}=\frac{1}{HW}\sum_{i=1}^{H}\sum_{j=1}^{W}\mathbf{f}_{i,j}\in\mathbb{R}^{D}\) be the average feature of a proposal. The class pre-selection procedure returns the top-\(K\) most likely classes \(\mathcal{C}_{K}=\text{top\_index}_{K}(\mathbf{h})\), where \(\mathbf{h}\in\mathbb{R}^{C}\) is the dot-product similarity between \(\bar{\mathbf{f}}\) and the class prototypes, defined as \(\mathbf{h}=\bar{\mathbf{f}}\cdot\mathbf{p}^{\top}\). Our method only predicts the probabilities for \(\mathcal{C}_{K}\) and sets the others to 0. As shown in Sec. 4.2, our method surpasses SoTA even when \(K=3\) on both COCO (80 classes) and LVIS (1203 classes), eliminating the need for costly per-class inference. For each class \(c_{k}\) in \(\mathcal{C}_{K}\), the similarity map \(\mathbf{s}\in\mathbb{R}^{H\times W\times(C+B)}\) (computed in Eq. 1) is rearranged into a class-specific map \(\mathbf{s}_{c_{k}}\in\mathbb{R}^{H\times W\times(1+T+B)}\), as shown in Eq. 2, where \([C]\backslash c_{k}\) is defined as \(\{1,..,c_{k}-1,c_{k}+1,...,C\}\). \[\mathbf{s}_{c_{k}}=\text{concat}(\mathbf{s}[:,:,c_{k}],F_{\text{rearrange}}(\mathbf{s}_{[C]\backslash c_{k}}),\mathbf{s}[:,:,C:C+B]) \tag{2}\] where \(\mathbf{s}_{[C]\backslash c_{k}}=\text{concat}(\mathbf{s}[:,:,:c_{k}-1],\mathbf{s}[:,:,c_{k}+1:])\), and \(F_{\text{rearrange}}(\mathbf{x})\) is defined in Eq. 3. \(T\) is a constant hyper-parameter. In Eq. 2, \(\mathbf{s}[:,:,c_{k}]\), \(\mathbf{s}_{[C]\backslash c_{k}}\), and \(\mathbf{s}[:,:,C:C+B]\) represent the similarity map for the current class \(c_{k}\), the other classes \([C]\backslash c_{k}\), and the background, respectively. \[F_{\text{rearrange}}(\mathbf{x})=\begin{cases}\text{upsample}(\text{sort}(\mathbf{x}),T)&\text{if }T\geq C-1\\ \text{sort}(\mathbf{x})[:,:,:T]&\text{otherwise}\end{cases} \tag{3}\] However, it is difficult to input \(\mathbf{s}_{[C]\backslash c_{k}}\) directly to the network due to the lack of inherent order in classes and the indeterminate number of classes in the few-shot setting. For the first issue, we decide to use magnitude order by sorting \(\mathbf{s}_{[C]\backslash c_{k}}\) along the channel dimension, as in Eq. 3. For the second issue, we standardize the input size by either keeping the similarity of the top \(T\) classes or linearly upsampling the similarity map of all \(C-1\) classes to \(T\). In doing so, \(\mathbf{s}_{[C]\backslash c_{k}}\) is transformed from \(H\times W\times(C-1)\) into a fixed size \(H\times W\times T\). Note that the listed functions, _i.e._, sort, concat and upsample, are applied along the channel dimension, and sort works in descending order. Finally, the class-specific similarity map \(\mathbf{s}_{c_{k}}\) is given as input to a binary classification network that returns the probability for \(c_{k}\). The overall classification architecture is illustrated in Fig. 3. Figure 3: Overview of our classification architecture. Class pre-selection chooses the top-\(K\) classes based on the dot product similarity between the average feature of each proposal and class-level prototypes. The probability of each selected class \(c_{k}\) is predicted through a binary classification network, shared by all the classes, in a one-vs-rest manner. The input to this classification network is the similarity map that results from the prototype projection, after rearranging it for each class.
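A compact sketch of this classification input construction (class pre-selection and Eqs. 2 and 3) is given below, assuming PyTorch; the hyper-parameters \(K\) and \(T\) and all tensor names are illustrative rather than the authors' exact implementation:

```python
import torch
import torch.nn.functional as F

H, W, D, C, B = 16, 16, 768, 80, 10
K, T = 3, 10

f = torch.randn(H, W, D)                  # proposal features (stand-in values)
p = torch.randn(C + B, D)                 # class (first C rows) + background prototypes
s = torch.einsum("hwd,kd->hwk", f, p)     # similarity map from Eq. 1

# Class pre-selection: dot product of the average proposal feature with the class prototypes.
f_bar = f.mean(dim=(0, 1))                # R^D
h = f_bar @ p[:C].T                       # R^C
C_K = h.topk(K).indices.tolist()          # indices of the K most likely classes

def rearrange(other, T):
    """Eq. 3: sort the C-1 'other class' channels in descending order and fix their number to T."""
    Hh, Ww, Cm1 = other.shape
    other = other.sort(dim=-1, descending=True).values
    if T >= Cm1:  # fewer channels than T: linearly upsample the sorted profile to length T
        flat = F.interpolate(other.reshape(Hh * Ww, 1, Cm1), size=T,
                             mode="linear", align_corners=False)
        return flat.reshape(Hh, Ww, T)
    return other[:, :, :T]                # otherwise keep the top-T channels

# Eq. 2: one class-specific map per pre-selected class, each of size H x W x (1 + T + B).
class_maps = {}
for c in C_K:
    own = s[:, :, c:c + 1]
    rest = torch.cat([s[:, :, :c], s[:, :, c + 1:C]], dim=-1)
    class_maps[c] = torch.cat([own, rearrange(rest, T), s[:, :, C:]], dim=-1)
```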
### Localization with Region Propagation. Despite their rich semantic information, ViT features lack the coordinate information required for bounding box regression. As shown in Sec. 4.3, naively applying a conventional regression on ViT features yields poor localization results. A natural solution is to learn this localization capability by finetuning the ViT backbone during detector training. Similar strategies have demonstrated success in OVD for vision-language backbones, _e.g._, CLIP (Zhong et al., 2022). However, we observed that finetuning results in an accuracy collapse on novel classes when text embeddings are replaced with prototypes, which indicates a loss of generalization power. While this phenomenon is intriguing, the practical question is how to produce accurate localization using frozen visual features. Our intuition is that only proposals that overlap with objects are important because others would be rejected by the classification network. If we expand such proposals, they would possibly cover the entire region of the underlying objects. Therefore, original proposals can be refined by predicting object regions within the expanded proposals. We model this procedure as propagating original proposals to object areas in the form of heatmaps. The heatmaps can be projected into box coordinates through an integral over the spatial dimensions. The overall refined localization architecture is illustrated in Fig. 4. As shown in Fig. 4, the propagation procedure is implemented through binary segmentation. We convert groundtruth bounding boxes to heatmaps that we use to train this segmentation network. Inspired by unsupervised keypoint estimation, particularly the works of IMM (Jakab et al., 2018) and Transporter (Kulkarni et al., 2019), we devise a spatial integral layer to project the propagated heatmap to a box. The idea is to learn a transformation that translates the heatmap to a box \((c_{w}^{\text{rel}},c_{h}^{\text{rel}},w^{\text{rel}},h^{\text{rel}})\in[0,1]^{4}\) in coordinates that are relative to the expanded proposal. Relative coordinates can be simply mapped back to absolute coordinates. Let \(\mathbf{g}\in\mathbb{R}^{H\times W}\) denote the logits of the propagated heatmap; the relative box \((c_{w}^{\text{rel}},c_{h}^{\text{rel}},w^{\text{rel}},h^{\text{rel}})\) can be estimated by our spatial integral layer as explained in Eq. 4 and 5. An illustrative example of our spatial integral layer can be found in Figure 5. \[(c_{w}^{\text{rel}},c_{h}^{\text{rel}})=\sum_{i,j=1}^{H,W}\left(\frac{i}{H},\frac{j}{W}\right)\cdot\text{softmax}(\mathbf{g})_{ij} \tag{4}\] \[(w^{\text{rel}},h^{\text{rel}})=\left(\sum_{i=1}^{H}\theta^{\mathbf{w}}_{i}\sum_{j=1}^{W}\frac{\sigma(\mathbf{g})_{(i)j}}{W},\ \sum_{j=1}^{W}\theta^{\mathbf{h}}_{j}\sum_{i=1}^{H}\frac{\sigma(\mathbf{g})_{i(j)}}{H}\right) \tag{5}\] To motivate Eq. 4 and 5, consider the toy example of converting a binary mask to a bounding box in Fig. 5. A reasonable approach is to compute the mask center as the bounding box center and pick the maximum row and column sum as width and height. Following the same spirit, we compute the expected position under the spatial distribution \(\text{softmax}(\mathbf{g})\) as the bounding box center. We compute the row and column sums of the sigmoid activation as \(\sum_{j=1}^{W}\sigma(\mathbf{g})_{ij}\) and \(\sum_{i=1}^{H}\sigma(\mathbf{g})_{ij}\). Instead of picking the maximum, we aggregate all row or column sums in terms of magnitude. The rationale of this aggregation is that a larger estimation is more likely to over-cover the entire object and a small estimation may produce a box too tight. The aggregation is done by sorting the estimations and then weighted averaging. This explains the use of the order statistics notation \((i)\) and \((j)\); \(\theta^{\mathbf{h}}\in\mathbb{R}^{W}\) and \(\theta^{\mathbf{w}}\in\mathbb{R}^{H}\) are learnable aggregation weights.
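The spatial integral layer of Eqs. 4 and 5 can be written in a few lines. The sketch below (assuming PyTorch) uses random placeholders for the heatmap logits and for the learnable weights \(\theta^{\mathbf{w}}\) and \(\theta^{\mathbf{h}}\), so it only illustrates the computation, not trained behavior:

```python
import torch

H, W = 28, 28
g = torch.randn(H, W)                          # logits of the propagated heatmap (stand-in)

# Eq. 4: box center as the expected position under the spatial distribution softmax(g).
prob = torch.softmax(g.reshape(-1), dim=0).reshape(H, W)
rows = torch.arange(1, H + 1, dtype=torch.float32) / H      # i/H
cols = torch.arange(1, W + 1, dtype=torch.float32) / W      # j/W
c_rel = (torch.einsum("i,ij->", rows, prob), torch.einsum("j,ij->", cols, prob))

# Eq. 5: per-row and per-column coverage of the sigmoid activation give H width
# estimates and W height estimates, which are sorted and aggregated with learned weights.
sig = torch.sigmoid(g)
width_estimates = sig.sum(dim=1) / W           # one relative-width estimate per row
height_estimates = sig.sum(dim=0) / H          # one relative-height estimate per column

theta_w = torch.softmax(torch.randn(H), dim=0) # placeholders for weights learned in training
theta_h = torch.softmax(torch.randn(W), dim=0)

w_rel = (width_estimates.sort(descending=True).values * theta_w).sum()
h_rel = (height_estimates.sort(descending=True).values * theta_h).sum()
```

The relative box \((c_{w}^{\text{rel}},c_{h}^{\text{rel}},w^{\text{rel}},h^{\text{rel}})\) obtained this way is then mapped back to absolute coordinates as described next (Eq. 6).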
Finally, the relative coordinates are mapped to absolute ones by Eq. 6, where \((c_{w}^{\text{exp}},c_{h}^{\text{exp}},w^{\text{exp}},h^{\text{exp}})\) is the expanded proposal. \[(w^{\text{out}},h^{\text{out}})=(w^{\text{exp}}\,w^{\text{rel}},h^{\text{exp}}\,h^{\text{rel}}) \tag{6}\] \[(c_{w}^{\text{out}},c_{h}^{\text{out}})=(c_{w}^{\text{exp}}-0.5w^{\text{exp}},c_{h}^{\text{exp}}-0.5h^{\text{exp}})+(c_{w}^{\text{rel}}\,w^{\text{exp}},c_{h}^{\text{rel}}\,h^{\text{exp}})\] During training, we use groundtruth bounding boxes as regression targets for the output of the spatial integral layer. Similar to our classification pipeline, the localization procedure is applied for each class in \(\mathcal{C}_{K}\) to produce class-specific boxes. For each class \(c_{k}\), only the similarity map of \(c_{k}\) and the background will be selected as input. We omit this detail in Fig. 4 for visual clarity. Figure 4: Overview of our refined localization architecture. Proposal expansion enlarges each proposal by a fixed ratio to cover more object area. The spatial relationship between the original and expanded proposal is described via a heatmap. The segmentation network navigates the initial heatmap toward accurate object regions. The propagated heatmap is converted into bounding box coordinates through our spatial integral layer. Figure 5: To compute the bounding box of the triangle object from the binary mask, the maximum row and column sum can be used as the box width and height. ### Building Prototypes Similar to the pioneering FSOD work Meta R-CNN (Yan et al., 2019), our method represents classes with prototype vectors constructed from visual features of given support images. The main difference in building prototypes is that we use the features from pretrained DINOv2 while Meta R-CNN uses region features from its detection network. Fig. 6 shows the process of building instance-level prototypes. For each object instance, its prototype is computed as the mean ViT feature from corresponding regions defined by either a segmentation mask or bounding box. Next, class-representing prototypes are obtained by averaging the cluster centroids of instance-level prototypes for each class. We use the online clustering method proposed by SwAV (Caron et al., 2020), though we find that simply averaging all instance prototypes for each class achieves similar results. Prototypes of base classes are built from the entire training set instead of support images, which are used only for novel classes. To build background prototypes, we start from the observation that backgrounds typically share similar visual attributes, such as uniform motion, smooth texture, static color tone, _etc._ Moreover, the ability to capture and thereby separate background from foreground is crucial. Because of the lack of visual diversity yet the importance of background semantics, we use masks for a fixed list of background classes, _e.g._, sky, wall, road, and apply a similar prototype-building procedure. The idea of using background information for few-shot performance is similarly explored in IPRNet (Okazawa, 2022).
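The prototype-building procedure of this section reduces to masked feature averaging. The sketch below (assuming PyTorch, with random stand-ins for the frozen DINOv2 features and plain per-class averaging in place of SwAV clustering) illustrates it for both object classes and a background class such as sky:

```python
import torch
from collections import defaultdict

D = 768                                        # ViT channel dimension

def region_prototype(feat, mask):
    """Mean ViT feature over the region given by a binary mask (H x W)."""
    mask = mask.float()
    return (feat * mask.unsqueeze(-1)).sum(dim=(0, 1)) / mask.sum().clamp(min=1.0)

# Toy support set: (class_id, feature map, instance mask) triples with random features.
support = []
for cls in [0, 0, 1]:
    feat = torch.randn(32, 32, D)
    mask = torch.zeros(32, 32)
    mask[8:24, 8:24] = 1.0
    support.append((cls, feat, mask))

# Instance-level prototypes, then class prototypes by averaging (the paper averages
# cluster centroids instead, but reports that plain averaging performs similarly).
per_class = defaultdict(list)
for cls, feat, mask in support:
    per_class[cls].append(region_prototype(feat, mask))
class_prototypes = torch.stack([torch.stack(v).mean(dim=0) for _, v in sorted(per_class.items())])

# A background prototype (e.g. "sky") is built the same way from a semantic mask.
sky_feat, sky_mask = torch.randn(32, 32, D), torch.ones(32, 32)
background_prototype = region_prototype(sky_feat, sky_mask)
```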
In contrast with class-level prototypes that change upon class configurations, background prototypes are always fixed as a part of the network's parameters. All the prototypes are built offline and frozen during training and inference. ## 4 Experiments We comprehensively evaluate our method on few-shot, one-shot benchmarks, and compare it to open-vocabulary detectors. Furthermore, we compare the efficiency of our method against SoTA solutions, study few-shot performance on different numbers of shots, and provide qualitative results. We conduct ablations to show that a naive combination of DINOv2 and Meta R-CNN leads to unsatisfying results, and every proposed component is important to the performance of our method. **Evaluation Metrics and Datasets.** Few-shot, one-shot, and open-vocabulary evaluations split classes into base and novel classes. Base classes are seen during training and novel classes are unseen. The performance on novel classes is more important. For COCO, nAP, nAP50, and nAP75 represent mAP, AP50, and AP75 in novel classes. bAP and bAP50 represent mAP and AP50 in base classes. One-shot evaluation conventionally divides 80 classes of COCO into four even partitions, and alternatively takes three as base classes and one partition as novel classes (Michaelis et al., \begin{table} \begin{tabular}{l|c|c|c c c|c c c} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Finetune} & \multicolumn{4}{c}{10-shot} & \multicolumn{4}{c}{30-shot} \\ \cline{3-10} & & on Novel & bAP & nAP & nAP50 & nAP75 & bAP & nAP & nAP50 & nAP75 \\ \hline FSRW (Kang et al., 2019) & ✗ & - & 5.6 & 12.3 & 4.6 & - & 9.1 & 19 & 7.6 \\ Meta R-CNN (Yan et al., 2019) & ✗ & 5.2 & 6.1 & 19.1 & 6.6 & 7.1 & 9.9 & 25.3 & 10.8 \\ TFAN (Wang et al., 2020) & ✓ & 33.9 & 10 & 19.2 & 9.2 & 34.5 & 13.5 & 24.9 & 13.2 \\ FSCI (Sun et al., 2021) & ✓ & - & 11.9 & - & 10.5 & - & 16.4 & - & 16.2 \\ Retentive RCNN (Yan et al., 2021) & ✓ & 39.2 & 10.5 & 19.5 & 9.3 & 39.3 & 13.8 & 22.9 & 13.8 \\ Heterograph (Liu et al., 2012) & ✓ & 11.6 & 23.9 & 9.8 & 16.5 & 31.9 & 15.5 \\ PNeVdev (Xiao et al., 2022) & ✓ & 6.4 & 7.6 & - & 9.3 & 12 & - & - \\ Meta Faster RCNN (Jian et al., 2022) & ✓ & - & 12.7 & 25.7 & 10.8 & - & 16.6 & 31.8 & 15.8 \\ LVC (Gall et al., 2022) & ✓ & 28.7 & 19 & 34.1 & 19 & 34.8 & 26.8 & 45.8 & 27.5 \\ CrossTransformer (Iban et al., 2022) & ✓ & - & 17.1 & 30.2 & 17 & - & 21.4 & 35.5 & 22.1 \\ NIPF (Gonggins et al., 2023) & ✓ & 39 & 18.8 & - & - & 39 & 20.9 & - & - \\ DGeon (Ma et al., 2023) & ✓ & 39.2 & 10.3 & 18.7 & 9.9 & 39.4 & 14.2 & 26.2 & 14.8 \\ \hline \multirow{2}{*}{DE-ViT (Our)} & ViT-814 & ✗ & 24 & **27.1** & **43.1** & **28.5** & 24.2 & **26.9** & **43.1** & **28.4** \\ & ViT-814 & ✗ & 28.3 & **33.2** & **51.4** & **35.5** & 28.5 & **33.4** & **51.4** & **38.7** \\ & ViT-L/14 & ✗ & 29.4 & **34.0** & **53.0** & **37.0** & 29.5 & **34.0** & **52.9** & **37.2** \\ \hline \end{tabular} \end{table} Table 1: Results on COCO 2014 few-shot benchmark. Our method outperforms existing work in detecting novel classes by a significant margin and does not require finetuning on novel classes. Figure 6: Overview of building instance prototypes. 2018). There are 4 base/novel splits in total, named Split-1/2/3/4. For LVIS, Apr, APC, and Apr represent AP on rare, common, and frequent categories. Rare categories are used as novel classes. Metrics on LVIS are computed separated on bounding boxes, _e.g._, box APr, or instance segmentation masks, _e.g._, mask APr. 
We evaluate our method on COCO 2014, COCO 2017 (Lin et al., 2014), and LVIS-v1 (Gupta et al., 2019). We follow the conventional base/novel classes split with existing work (Wang et al., 2020; Yang et al., 2022; Zhong et al., 2022). Note that the few-shot benchmark uses COCO 2014, while the one-shot and open-vocabulary benchmarks use COCO 2017. **Model Specifications.** We use DINOv2 (Oquab et al., 2023) ViT as the feature extractor, and report results in ViT-S/B/L (small, base, large) model sizes. We train a ResNet50 RPN separately for each dataset using only base classes and use 1000 proposals in all settings. We detail the design of binary classification and segmentation networks in Sec. A.2. Class prototypes are built upon instance masks in support images unless specified. We use the same support images as sampled by previous work (Wang et al., 2020; Yang et al., 2022) for few-shot and one-shot settings. When comparing with open-vocabulary detectors, we sample 30 instances per class (30-shot) using the protocol of Wang et al. (2020) for the support set. Background prototypes are extracted from background classes, _e.g._, sky, road, by the semantic masks in COCOStuff (Caesar et al., 2018). ### Main Results Tab. 1 shows our results on few-shot COCO benchmark. DE-ViT outperforms the previous SoTA LVC by a significant margin (+15 nAP on 10-shot, +7.2 nAP on 30-shot). It is worth noting that LVC requires over ten stages for self-training and pseudo-labeling procedures (Kaul et al., 2022). A pretrained model for LVC has never been released. Other recent few-shot works also include multiple pretraining and finetuning stages (Han et al., 2022;b). Our proposed method DE-ViT can be trained in a single stage and used on novel objects directly without any fine-tuning. The difference between 30-shot (nAP50=52.9) and 10-shot (nAP50=53.0) is smaller than 0.1% and could be interpreted as statistically insignificant. LVIS has been regarded as a highly challenging dataset in FSOD (Wang et al., 2020) and only DiGeo (Ma et al., 2023) reports few-shot results on LVIS v1. Tab. 3 shows that our method outperforms DiGeo in all metrics and a significant boost in the accuracy of detecting novel objects (+20 box APr). Tab. 2 shows our results on one-shot COCO. DE-ViT outperforms the previous SoTA BHRL by 6 bAP50 and 2.8 nAP50. One-shot methods follow a single-class detection setting. We adapt DE-ViT by detecting each class separately during evaluation. It is worth noting that our proposed DE-ViT can perform both one-shot and few-shot detection using the same model. OWL-ViT (Minderer et al., 2022) also reports one-shot results on COCO. However, OWL-ViT's results are obtained with an ensemble of open-vocabulary and one-shot pipelines without providing implementation or isolated measurements. Therefore, we only compare against OWL-ViT with our few-shot setting in Tab. 4. 
\begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline & \multicolumn{4}{c}{bAP50} & \multicolumn{4}{c}{nAP50} \\ \cline{2-11} & Split-1 & Split-2 & Split-3 & Split-4 & Avg & Split-1 & Split-2 & Split-3 & Split-4 & Avg \\ \hline SimMask (Michaelis et al., 2018) & 38.9 & 37.1 & 37.8 & 36.6 & 37.6 & 15.3 & 17.6 & 17.4 & 17 & 16.8 \\ CoAE (Hsieh et al., 2019) & 42.2 & 40.2 & 39.9 & 41.3 & 40.9 & 23.4 & 23.6 & 20.5 & 20.4 & 22 \\ AIT (Chen et al., 2021) & 50.1 & 47.2 & 45.8 & 46.9 & 47.5 & 26 & 26.4 & 22.3 & 22.6 & 24.3 \\ SaFT (Zhao et al., 2022) & 49.2 & 47.2 & 47.9 & 49.8 & 48.3 & 27.8 & 27.6 & 21 & 23 & 24.9 \\ BHRL (Yang et al., 2022) & 56 & 52.1 & 52.6 & 53.4 & 53.6 & 26.1 & 29 & 22.7 & 24.5 & 25.6 \\ \hline DE-ViT (Ours, ViT-L/14) & **59.4** & **57.0** & **61.3** & **60.7** & **59.6** & 27.4 & **33.2** & **27.1** & **26.1** & **28.4** \\ \hline \hline \end{tabular} \end{table} Table 2: Results on COCO 2017 one-shot benchmark. DE-ViT outperforms existing work and is not limited to single class detection and single support image as other one-shot methods. We compare DE-ViT against open-vocabulary detectors in Tab. 4. DE-ViT outperforms the previous SoTA CORA+ by 6.9 AP50. Our method only trains on COCO while CORA+ uses ImageNet-21K (Krizhevsky et al., 2012) and COCO Captions (Chen et al., 2015) as additional training data. When only using COCO, DE-ViT outperforms CORA by 8.3 AP50. In LVIS, DE-ViT outperforms the previous SoTA on mask APr (+1.5 over F-VLM) and box APr (+2.4 over OWL-ViT). We follow a multi-scale instance segmentation head design, as detailed in Sec. A.2. We observe a high variance in the performance of OVD detectors in Tab. 4. F-VLM achieves a mask APr of 32.8 on LVIS but only has 28 nAP50 on COCO. While CORA+ has 43.1 nAP50 on COCO but only a box APr of 28.1 on LVIS. On the contrary, our DE-ViT outperforms existing solutions on both LVIS (34.3 mask APr) and COCO (50 nAP50). We acknowledge the task setting difference of this comparison. Our few-shot method DE-ViT has access to support images that language-based detectors do not. However, we emphasize that DE-ViT outperforms the SoTAs on both few-shot and open-vocabulary, which has not been achieved before by existing few-shot work. all methods is measured under the same machine. We conduct inference time comparisons under different values of \(K\) and detail the efficiency discussion in Sec. A.1. **More shots.** To study the model performance with different numbers of shots, we plot the nAP50 with different shots in Fig. 7 and Fig. A2 for COCO 2014 and COCO 2017, correspondingly. It can be seen that performance generally increases with the number of shots, and there exists an inflection point after which more samples do not help. The inflection point is located around 50 to 75 shots for COCO 2017 and 6 shots for COCO 2014. We detail the shot sampling setup in Sec. A.1.2. **Qualitative Results.** We provide qualitative comparisons of our proposed DE-ViT and Meta Faster RCNN in Fig. 8 and Fig. A10. DE-ViT detects more novel objects while having fewer false positives. Note that Meta Faster RCNN can only detect novel objects after finetuning, while DE-ViT can detect both base and novel classes without finetuning. We provide more visualizations in Fig. A9 and A11. ## 5 Conclusion In this work, we propose DE-ViT, a few-shot detector that uses example images to detect novel classes without any finetuning. We demonstrate that DE-ViT establishes new state-of-the-art in few-shot and one-shot benchmarks. 
DE-ViT also outperforms open-vocabulary detectors, which has not been achieved before by existing few-shot work. DE-ViT detects objects using only frozen DINOv2 features. Therefore, it is straightforward to integrate DE-ViT with backbones other than DINOv2. One limitation is that our current architecture is a mix of ViT and RCNN, while a full transformer network would clearly be more scalable and unlock more capabilities and integrations with other modalities. Another limitation is that our current model relies on an external region proposal network (RPN). It is possible to train an RPN on top of frozen DINOv2 with some engineering effort. We hope that our work will be useful in downstream tasks such as robotic manipulation, and help other researchers develop better methods for few-shot object detection.
2309.03864
The Early History of Moment Problems and Non-Negative Polynomials with Gaps: Sparse Moment Problems, Sparse Positivstellensätze, and Sparse Nichtnegativstellensätze from a T-System Point of View
We deal with and investigate sparse univariate Positivstellensätze, Nichtnegativstellensätze, and solutions to sparse moment problems. The paper relies heavily on results on T-systems by Karlin in 1963 and by Karlin and Studden in 1966. We gain complete descriptions of all sparse strictly positive and sparse non-negative algebraic polynomials on $[a,b]$ with $a\geq 0$ and $[0,\infty)$. We extend, simplify, and solve the sparse Hausdorff and Stieltjes moment problem with these results and the methods of adapted spaces and T-systems.
Philipp J. di Dio
2023-09-07T17:26:00Z
http://arxiv.org/abs/2309.03864v3
# The Early History of Moment Problems and Non-Negative Polynomials with Gaps: Sparse Moment Problems, Sparse Positivstellensatze, and Sparse Nichtnegativstellensatze from a T-System Point of View ###### Abstract We deal with and investigate sparse univariate Positivstellensatze, Nichtnegativstellensatze, and solutions to sparse moment problems. The paper relies heavily on results on T-systems by Karlin in 1963 and by Karlin and Studden in 1966. We gain complete descriptions of all sparse strictly positive and sparse non-negative algebraic polynomials on \([a,b]\) with \(a\geq 0\) and on \([0,\infty)\). We extend, simplify, and solve the sparse Hausdorff and Stieltjes moment problem with these results and the methods of adapted spaces and T-systems. keywords: moment problem, Positivstellensatz, Nichtnegativstellensatz, sparse, gap, T-system Msc: [2020] Primary 44A60, 14P99; Secondary 41A10, 12E10. + Footnote †: journal: arXiv ###### Contents * 1 Introduction * 2 Preliminaries * 3 The Beginning of the Moment Problem * 3.1 The Usual Suspects: Well-known Classical Results without Gaps * 3.2 Early Results with Gaps * 3.3 Finitely Atomic Representing Measures: The Richter Theorem * 3.4 Signed Representing Measures: Boas' Theorem * 4 T-Systems * 4.1 Definition and Basic Properties of T-Systems * 4.2 Examples of T-systems * 4.3 Non-Negativity, Zeros, and Determinantal Representations of Polynomials in T-Systems * 4.4 ET-Systems * 5 Sparse Positivstellensatze and Nichtnegativstellensatze * 5.1 Sparse Positivstellensatze and Nichtnegativstellensatze on \([a,b]\) for general T-Systems * 5.2 Sparse Positivstellensatze and Nichtnegativstellensatze on \([a,b]\) for Algebraic Polynomials * 5.3 Sparse Positivstellensatze and Nichtnegativstellensatze on \([0,\infty)\) * 5.4 Sparse Positivstellensatze and Nichtnegativstellensatze on \(\mathds{R}\) * 6 Summary ## 1 Introduction The theory of moments (or the moment problem) has been connected to non-negative polynomials for a long time and this connection has been well known since Haviland [10], or even dates back further. The classical moment problem is the following: Given a closed set \(K\subseteq\mathds{R}^{n}\) and a real sequence \(s=(s_{\alpha})_{\alpha\in I}\) with \(I\subseteq\mathds{N}_{0}^{n}\). When does there exist a measure \(\mu\) on \(K\) such that \[s_{\alpha}=\int_{K}x^{\alpha}\ \mathrm{d}\mu(x)\] holds for all \(\alpha\in I\)? For more on the theory of moments see e.g. [11, 1, 12, 13, 14, 15, 16] and references therein. In modern times the theory of moments and the theory of non-negative polynomials were revived by [13] and then put to useful applications, see e.g. [17]. Driven by applications, and especially by the need for efficient and fast algorithms, the focus has turned more and more over the last years (and decades) to sparse systems, i.e., the index set \(I\subsetneq\mathds{N}_{0}^{n}\) is not all of \(\mathds{N}_{0}^{n}\) and especially not all \(\alpha\) with \(|\alpha|\leq d\) for some \(d\in\mathds{N}_{0}\) are present. It should not be surprising that these sparse systems were studied theoretically. It is more surprising that the early results in this field are not well-known or even completely forgotten. Unfortunately, it recently came to our attention that several known results are being reproved in weaker versions [1].
The main purpose of this article is to review the early results in the theory of sparse moment problems and to show how important results and especially sparse Positivstellensatze, sparse Nichtnegativstellensatze, and sparse moment problems follow from these early results. All results presented here are not contained in the modern literature about the theory of moments [15, 16], about real algebraic geometry [1], or about (sparse) polynomial optimization [18, 19]. We hope that this treatment will also be useful for the emerging works of moment problems and polynomials on curves since these often reduce to the univariate polynomial case [11]. By the title we only look at early results (and their applications). By "early" we mean everything up to and including 1966. Everything between 1967 and up to 1991 we consider "modern" and everything after "contemporary". Modern and contemporary results are not considered here since they deserve more space than this article can give. The year 1966 is chosen since in 1966 the extensive research monograph by Samuel Karlin and William J. Studden about T-systems appeared [13]. This monograph is an extensive follow up of the work by Karlin in [14]. Both works solve important problems in the theory of T-systems. The theory of T-systems is the theoretical framework where e.g. sparse univariate algebraic polynomial systems were investigated in. The year 1991 is chosen since then the first denominator free description of strictly positive polynomials appeared [15] reviving a large part in real algebraic geometry. The article is structured as follows. In the next Section 2 we shortly introduce the notations in this article. In Section 3 we shortly present the "usual suspects" (classical results without gaps) and the two first explicit studies of problems with gaps. We will meet there also Richter's Theorem and Boas' Theorem. In Section 4 we will introduce the theory of T-systems and show their basic properties with a special emphasis on zeros and non-negativity. By far the most important part is Section 5 where we look at the results in [14, 15] and apply them to get sparse algebraic Positivstellensatze and Nichtnegativstellensatze. Additionally, they are used to solve and even extend the early sparse moment problems from Section 3. In Section 6 we sum up the results. All results are presented with proofs as far as possible. Several are collected from the literature but translated to nowadays mathematical language. Also some missing steps are filled in and errors are corrected. ## 2 Preliminaries Let \(n\in\mathds{N}\), \(K\subseteq\mathds{R}^{n}\) be closed, and \(s=(s_{\alpha})_{\alpha\in\mathds{N}_{0}^{n}}\) be a real sequence. We say that \(s\) is a \(K\)-moment sequence if there exists a measure \(\mu\) on \(K\) such that \[s_{\alpha}=\int_{K}x^{\alpha}\ \mathrm{d}\mu(x).\] The measure \(\mu\) is called a representing measure. Unless otherwise denoted as signed measure all measures are positive. The moment sequence \(s\) is called determined if \(\mu\) is unique. We call \(s\) a truncated moment sequence when only finitely many \(s_{\alpha}\) with \(\alpha\in\mathds{N}_{0}\) are known. For a real sequence \(s=(s_{\alpha})_{\alpha\in\mathds{N}_{0}}\) we call the linear functional \(L_{s}:\mathds{R}[x_{1},\ldots,x_{n}]\to\mathds{R}\) defined by \(L_{s}(x^{\alpha})=s_{\alpha}\) the Riesz functional. For \(\beta\in\mathds{N}_{0}\) we define \(X^{\beta}s=(s_{\alpha+\beta})_{\alpha\in\mathds{N}_{0}^{n}}\) the shifted sequence. 
For a sequence \(s=(s_{\alpha})_{\alpha\in\mathds{N}_{0}^{n}}\) we define the Hankel matrix \(\mathcal{H}(s):=(s_{\alpha+\beta})_{\alpha,\beta\in\mathds{N}_{0}^{n}}\). For \(K\subseteq\mathds{R}^{n}\) we set \(\mathrm{Pos}(K):=\{f\in\mathds{R}[x_{1},\ldots,x_{n}]\,|\,f\geq 0\text{ on }K\}\). For any set \(\mathcal{X}\) we denote by \(|\mathcal{X}|\) the cardinality of \(\mathcal{X}\). ## 3 The Beginning of the Moment Problem ### The Usual Suspects: Well-known Classical Results without Gaps The first moment problem that was solved is the following. **Stieltjes' Theorem 3.1.1** ([11]).: _Let \(s=(s_{i})_{i\in\mathds{N}_{0}}\) be a real sequence. The following are equivalent:_ 1. \(s\) _is a_ \([0,\infty)\)_-moment sequence (Stieltjes moment sequence)._ 2. \(L_{s}(p)\geq 0\) _for all_ \(p\in\mathrm{Pos}([0,\infty))\)_._ 3. \(L_{s}(p^{2})\geq 0\) _and_ \(L_{Xs}(p^{2})=L_{s}(x\cdot p^{2})\geq 0\) _for all_ \(p\in\mathds{R}[x]\)_._ 4. \(s\) _and_ \(Xs=(s_{i+1})_{i\in\mathds{N}_{0}}\) _are positive semidefinite._ 5. \(\mathcal{H}(s)\succeq 0\) _and_ \(\mathcal{H}(Xs)\succeq 0\)_._ The original proof [11] of Stieltjes' Theorem 3.1.1 does not use non-negative polynomials. Stieltjes uses continued fractions and introduces new sequences which we (nowadays) denote by \(s\) and \(Xs\). Stieltjes only proves (i) \(\Leftrightarrow\) (iv). The equivalence (i) \(\Leftrightarrow\) (ii) is Haviland's Theorem 3.1.4, (ii) \(\Leftrightarrow\) (iii) is the description of \(\mathrm{Pos}([0,\infty))\), and (iv) \(\Leftrightarrow\) (v) is a reformulation of \(s\) and \(Xs\) being positive semi-definite. The next moment problem that was solved is the following. **Hamburger's Theorem 3.1.2** ([14, Satz X and Existenztheorem (§8, p. 289)]).: _Let \(s=(s_{i})_{i\in\mathds{N}_{0}}\) be a real sequence. The following are equivalent:_ 1. \(s\) _is an_ \(\mathds{R}\)_-moment sequence (Hamburger moment sequence, or moment sequence for short)._ 2. \(L_{s}(p)\geq 0\) _for all_ \(p\in\mathrm{Pos}(\mathds{R})\)_._ 3. \(L_{s}(p^{2})\geq 0\) _for all_ \(p\in\mathds{R}[x]\)_._ 4. \(s\) _is positive semidefinite._ 5. \(\mathcal{H}(s)\succeq 0\)_._ Similar to Stieltjes, Hamburger proves the equivalence (i) \(\Leftrightarrow\) (iv) in [14] via continued fractions. In [10, Satz XIII] Hamburger solves the full moment problem by approximation with truncated moment problems. This was later reproved in a slightly more general framework in [13]. Hamburger needed to assume that the sequence of measures \(\mu_{k}\) (which he called "Belegungen" and denoted by \(\mathrm{d}\Phi^{(k)}(u)\)) converges to some measure \(\mu\) (condition 2 of [10, Satz XIII]). Hamburger's additional condition 2 is nowadays replaced by vague convergence and the fact that the solution set of representing measures is vaguely compact [12, Thm. 1.19], i.e., it assures the existence of a \(\mu\) as required by Hamburger in the additional condition 2. Shortly after Hamburger the moment problem on \([0,1]\) was solved. **Hausdorff's Theorem 3.1.3** ([10, Satz II and III]).: _Let \(s=(s_{i})_{i\in\mathds{N}_{0}}\) be a real sequence. The following are equivalent:_ 1. \(s\) _is a_ \([0,1]\)_-moment sequence (Hausdorff moment sequence)._ 2. \(L_{s}(p)\geq 0\) _for all_ \(p\in\mathrm{Pos}([0,1])\)_._ 3. \(L_{s}(p^{2})\geq 0\)_,_ \(L_{Xs}(p^{2})\geq 0\)_, and_ \(L_{(1-X)s}(p^{2})\geq 0\) _for all_ \(p\in\mathds{R}[x]\)_._ 4. \(s\)_,_ \(Xs\)_, and_ \((1-X)s\) _are positive semidefinite._ 5.
\(\mathcal{H}(s)\succeq 0\)_,_ \(\mathcal{H}(Xs)\succeq 0\)_, and_ \(\mathcal{H}((1-X)s)\succeq 0\)_._ Hausdorff proves the equivalence (i) \(\Leftrightarrow\) (iii) via so called C-sequences. In [13] Toeplitz treats general linear averaging methods. In [10] Hausdorff uses these. Let the infinite dimensional matrix \(\lambda=(\lambda_{i,j})_{i,j\in\mathds{N}_{0}}\) be row-finite, i.e., for every row \(i\) only finitely many \(\lambda_{i,j}\) are non-zero. Then the averaging method \[A_{i}=\sum_{j\in\mathds{N}_{0}}\lambda_{i,j}a_{j}\] shall be consistent: If \(a_{j}\to\alpha\) converges then \(A_{i}\to\alpha\) converges to the same limit. Toeplitz proved a necessary and sufficient condition on \(\lambda\) for this property. Hausdorff uses only part of this property. He calls a matrix \((\lambda_{i,j})_{i,j\in\mathds{N}_{0}}\) with the property that a convergent sequence \((a_{j})_{j\in\mathds{N}_{0}}\) is mapped to a convergent sequence \((A_{j})_{j\in\mathds{N}_{0}}\) (the limit does not need to be preserved) a C-matrix (convergence preserving matrix). Hausdorff gives the characterization of C-matrices [10, p. 75, conditions (A) - (C)]. Additionally, if \(\lambda\) is a C-matrix and a diagonal matrix with diagonal entries \(\lambda_{i,i}=s_{i}\) then \(s=(s_{i})_{i\in\mathds{N}_{0}}\) is called a C-sequence. The equivalence (i) \(\Leftrightarrow\) (iii) is then shown by Hausdorff in the result that a sequence is a \([0,1]\)-moment sequence if and only it is a C-sequence [10, p. 102]. A much simpler approach to solve the \(K\)-moment problem for any closed \(K\subseteq\mathds{R}^{n}\), \(n\in\mathds{N}\), was presented by Haviland. He no longer used continued fractions but employed the Riesz(-Markov-Kakutani) representation theorem, i.e., representing a linear functional by integration. The present Riesz-Markov-Kakutani representation theorem was developed in several stages. A first version for continuous functions on the unit interval \([0,1]\) is by F. Riesz [13]. It was extended by Markov to some non-compact spaces [11] and then by Kakutani to locally compact Hausdorff spaces [12]. Interestingly, it already follows from Daniell's Representation Theorem [10, 11] with Urysohn's Lemma [14]. Haviland proved the following. **Haviland's Theorem 3.1.4** ([10, Theorem], see also [10, Theorem] for \(K=\mathds{R}^{n}\)).: _Let \(n\in\mathds{N}\), \(K\subseteq\mathds{R}^{n}\) be closed, and \(s=(s_{\alpha})_{\alpha\in\mathds{N}_{0}^{n}}\) be a real sequence. The following are equivalent:_ 1. \(s\) _is a_ \(K\)_-moment sequence._ 2. \(L_{s}(p)\geq 0\) _for all_ \(p\in\mathrm{Pos}(K)\) In [11, Theorem] Haviland proves "only" the case \(K=\mathds{R}^{n}\) with the extension method by M. Riesz. In [11, Theorem] this is extended to any closed \(K\subseteq\mathds{R}^{n}\). The idea to do so is attributed by Haviland to Aurel Wintner [11, p. 164]: "A. Wintner has subsequently suggested that it should be possible to extend this result [[11, Theorem]] by requiring that the distribution function [measure] solving the problem have a spectrum [support] contained in a preassigned set, a result which would show the well-known criteria for the various standard special momentum problems (Stieltjes, Herglotz [trigonometric], Hamburger, Hausdorff in one or more dimensions) to be put particular cases of the general \(n\)-dimensional momentum problem mentioned above. The purpose of this note is to carry out this extension." 
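In practice, the positive-semidefiniteness conditions of Theorems 3.1.1 to 3.1.3 are easy to test numerically on truncated data. The following minimal sketch (assuming Python with NumPy; it is only an illustration and not part of the original sources) checks the Hankel conditions for the moments of the Lebesgue measure on \([0,1]\):

```python
import numpy as np

def hankel(seq, d):
    """Truncated Hankel matrix (seq_{i+j})_{i,j = 0,...,d}."""
    return np.array([[seq[i + j] for j in range(d + 1)] for i in range(d + 1)])

def is_psd(M, tol=1e-12):
    return bool(np.all(np.linalg.eigvalsh(M) >= -tol))

s = [1.0 / (i + 1) for i in range(10)]          # moments of Lebesgue measure on [0,1]: s_i = 1/(i+1)
Xs = s[1:]                                      # shifted sequence (s_{i+1})_i
one_minus_Xs = [a - b for a, b in zip(s, Xs)]   # ((1-X)s)_i = s_i - s_{i+1}

d = 4
print(is_psd(hankel(s, d)))                     # True  (Hamburger condition)
print(is_psd(hankel(Xs, d)))                    # True  (together with the above: Stieltjes)
print(is_psd(hankel(one_minus_Xs, d)))          # True  (together with the above: Hausdorff)
```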
In [11] after the general Theorem 3.1.4 Haviland then goes through all the classical results (Theorems 3.1.1 to 3.1.3, and the Herglotz (trigonometric) moment problem on the unit circle which we did not included here) and shows how all these results (i.e., conditions on the sequences) are recovered from the at this point known representations of non-negative polynomials. For the Hamburger moment problem Haviland uses \[\operatorname{Pos}(\mathds{R})=\left\{f^{2}+g^{2}\,\big{|}\,f,g\in\mathds{R }[x]\right\}\] which was already known to Hilbert [12]. For the Stieltjes moment problem he uses \[\operatorname{Pos}([0,\infty))=\left\{f_{1}^{2}+f_{2}^{2}+x\cdot(g_{1}^{2}+g_ {2}^{2})\,\big{|}\,f_{1},f_{2},g_{1},g_{2}\in\mathds{R}[x]\right\} \tag{1}\] with the reference to Polya and Szego (previous editions of [13, 14]). In [13, p. 82, ex. 45] the representation (1) is still included while it was already known before, see [12, p. 6, footnote], that \[\operatorname{Pos}([0,\infty))=\left\{f^{2}+x\cdot g^{2}\,\big{|}\,f,g\in \mathds{R}[x]\right\} \tag{2}\] is sufficient. Also in [1, Prop. 3.2] the representation (1) is used, not the representation (2). For the \([-1,1]\)-moment problem Haviland uses \[\operatorname{Pos}([-1,1])=\left\{f^{2}+(1-x^{2})\cdot g^{2}\,\big{|}\,f,g\in \mathds{R}[x]\right\}.\] For the Hausdorff moment problem he uses that any non-negative polynomial on \([0,1]\) is a linear combination of \(x^{m}\cdot(1-x)^{p-m}\), \(m,p\in\mathds{N}_{0}\), \(p\geq m\), with non-negative coefficients. For the two-dimensional Hausdorff moment problem he uses that any non-negative polynomial on \([0,1]^{2}\) is a linear combination of \(x^{m}\cdot y^{n}\cdot(1-x)^{p-m}\cdot(1-y)^{q-n}\), \(n,m,q,p\in\mathds{N}_{0}\), \(p\geq m\), \(q\geq n\), with non-negative coefficients [10]. Hildebrandt and Schoenberg [10] already solved the moment problem on \([0,1]^{2}\) (and more generally on \([0,1]^{n}\) for all \(n\in\mathds{N}\)) getting the same result as Haviland. The idea of using \(\operatorname{Pos}(K)\)-descriptions to solve the moment problem was therefore already used by Hildebrandt and Schoenberg in 1933 [10] before Haviland uses this in [11] and generalized this in [11] as suggested to him by Wintner. With these broader historical remarks we see that of course more people are connected to Theorem 3.1.4. It might also be appropriate to call Theorem 3.1.4 the _Haviland-Wintner_ or _Haviland-Hildebrand-Schoenberg-Wintner Theorem_. But as so often, the list of contributors is long (and maybe even longer) and hence the main contribution (the general proof) is rewarded by calling it just Haviland Theorem. As one other solved moment problem of the long list (our list here is far from complete) is the following. **Svenco's Theorem 3.1.5** ([14]).: _Let \(s=(s_{i})_{i\in\mathds{N}_{0}}\) be a real sequence. The following are equivalent:_ 1. \(s\) _is a_ \((-\infty,0]\cup[1,\infty)\)_-moment sequence._ 2. \(L_{s}(p)\geq 0\) _for all_ \(p\in\operatorname{Pos}((-\infty,0]\cup[1,\infty))\)_._ 3. \(L_{s}(p^{2})\geq 0\)_,_ \(L_{(X^{2}-X)s}(p^{2})\geq 0\) _for all_ \(p\in\mathds{R}[x]\) _._ * \(s\) _and_ \((X^{2}-X)s\) _are positive semi-definite._ * \(\mathcal{H}(s)\succeq 0\) _and_ \(\mathcal{H}((X^{2}-X)s)\succeq 0\)_._ All moment problems on closed and semi-algebraic sets \(K\subseteq\mathds{R}\) follow nowadays easily from Haviland's Theorem 3.1.4 and the fact that any preodering from a natural description of \(K\) is saturated, see e.g. [11, Prop. 2.7.3]. 
The higher dimensional moment problem is much harder than the one-dimensional moment problem and in general it is not solved. The reason is that a description of \(\operatorname{Pos}(K)\) is in general unknown. A huge progress in this field was done by Konrad Schmudgen in 1991 [10] where he solved the \(K\)-moment problem for compact semi-algebraic sets \(K\subset\mathds{R}^{n}\), \(n\geq 2\). As a corollary he gained a complete description of strictly positive \(f\in\operatorname{Pos}(K)\). These and subsequence results are discussed elsewhere [11, 12]. ### Early Results with Gaps The early history of moment problems with gaps is very thin. We discuss only [12] and [1]. Hausdorff just solved Hausdorff's Theorem 3.1.3 in [12] (submitted 11th February 1920) and in [12] (submitted 8th September 1920) he treats \[s_{n}=\int_{0}^{1}x^{k_{n}}\ \mathrm{d}\mu(x)\] with \[k_{0}=0<k_{1}<k_{2}<\dots<k_{n}<\dots\] for a sequence of real numbers, i.e., not necessarily in \(\mathds{N}_{0}\). See also [10, p. 104]. Since Hausdorff in [12] did not have access to Haviland's Theorem 3.1.4 [12] or the description of all non-negative linear combinations of \(1,x^{k_{1}},\dots,x^{k_{n}},\dots\) the results in [12] need complicated formulations and are not very strong. Only with the description of non-negative linear combinations by Karlin [10] an easy formulation of the result is possible. We will therefore postpone the exact formulation to Theorem 5.2.3, 5.2.5, and 5.2.6 where we present easy proofs using also the theory of adapted spaces [1, 20, 10]. In [1] Boas investigates the Stieltjes moment problem (\(K=[0,\infty)\)) with gaps. Similar to [12] the results are difficult to read and they are unfortunately incomplete since Boas (like Hausdorff) did not have access to the description of all non-negative or strictly positive polynomials with gaps (or more general exponents). We will give the complete solution of the \([0,\infty)\)-moment problem with gaps and more general exponents in Theorem 5.3.4. ### Finitely Atomic Representing Measures: The Richter Theorem When working with a truncated moment sequence it is often useful in theory and applications to replace a representing measure with a finitely atomic measure without changing the moments. That this is always possible for truncated moment sequences was first proved in full generality by Richter [13]. **Richter's Theorem 3.3.1** ([13, 14]).: _Let \(n\in\mathds{N}\), \((\mathcal{X},\mathfrak{A})\) be a measurable space, and \(\{f_{i}\}_{i=1}^{n}\) be a family of real measurable functions \(f_{i}:\mathcal{X}\to\mathds{R}\). Then for every measure \(\mu\) on \(\mathcal{X}\) such that all \(f_{i}\) are \(\mu\)-integrable, i.e.,_ \[s_{i}:=\int_{\mathcal{X}}f_{i}(x)\ \mathrm{d}\mu(x)\quad<\infty\] _for all \(i=1,\dots,n\), there exists a \(K\in\mathds{N}\) with \(K\leq n\), points \(x_{1},\dots,x_{K}\in\mathcal{X}\) pairwise different, and \(c_{1},\dots,c_{K}\in(0,\infty)\) such that_ \[s_{i}=\sum_{j=1}^{K}c_{j}\cdot f_{i}(x_{j})\] _holds for all \(i=1,\dots,n\)._ The history of this result is often still misrepresented in the literature, even after K. Schmudgen and the present author compared the different contributions and publication dates in detail in [1]. With these historical remarks it also is appropriate to call Theorem 3.3.1 the _Richter-Rogosinski-Rosenbloom Theorem_[13, 14, 15]. Every other result before or after [13] is only a special case and can easily be recovered from Richter's Theorem 3.3.1, especially [1]. 
Since Richter's Theorem 3.3.1 only needs a family of finitely many measurable functions it also includes all cases of gaps in the truncated moment theory. ### Signed Representing Measures: Boas' Theorem In the theory of moments almost exclusively the representation by non-negative measures is treated. The reason is the following. **Boas' Theorem 3.4.1** ([1] or e.g. [12, p. 103, Thm. 3.11]).: _Let \(s=(s_{i})_{i\in\mathds{N}_{0}}\) be a real sequence. Then there exist infinitely many signed measures \(\mu\) on \(\mathds{R}\) and infinitely many signed measures \(\nu\) on \([0,\infty)\) such that_ \[s_{i}=\int_{\mathds{R}}x^{i}\ \mathrm{d}\mu(x)=\int_{0}^{\infty}x^{i}\ \mathrm{d}\nu(x)\] _holds for all \(i\in\mathds{N}_{0}\)._ Boas' Theorem 3.4.1 also holds in the \(n\)-dimensional case on \(\mathds{R}^{n}\) and \([0,\infty)^{n}\) for any \(n\in\mathds{N}\). See [12, p. 104] for an extension which kinds of measures can be chosen. Boas' Theorem 3.4.1 also covers the case with gaps. If any gaps in the real sequence \(s\) are present then fill them with any real number you like. ## 4 T-Systems We have seen the early attempts to deal with gaps in the moment problems. A sufficient solution was at these times not possible. Only with the introduction of so called T-systems and their rigorous investigation significant progress and finally complete solutions were possible. For more on the early development and history of T-systems see [13, 14, 15, 16]. ### Definition and Basic Properties of T-Systems **Definition 4.1.1**.: Let \(n\in\mathds{N}\), \(\mathcal{X}\) be a set with \(|\mathcal{X}|\geq n+1\), and let \(\mathcal{F}=\{f_{i}\}_{i=0}^{n}\) be a family of real functions \(f_{i}:\mathcal{X}\to\mathds{R}\). We call any linear combination \[f=\sum_{i=0}^{n}a_{i}\cdot f_{i}\quad\in\operatorname{lin}\mathcal{F}\] with \(a_{1},\dots,a_{n}\in\mathds{R}\) a _polynomial_. The family \(\mathcal{F}\) on \(\mathcal{X}\) is called a _Tchebycheff system_ (_T-system_) _of order \(n\) on \(\mathcal{X}\)_ if any polynomial \(f\in\operatorname{lin}\mathcal{F}\) with \(\sum_{i=0}^{n}a_{i}^{2}>0\) has at most \(n\) zeros on \(\mathcal{X}\)._ If \(\mathcal{X}\) is a topological space and \(\mathcal{F}\) is a family of continuous functions then we call \(\mathcal{F}\) a _continuous T-system_. If additionally \(\mathcal{X}\) is the unit circle then we call \(\mathcal{F}\) a _periodic T-system_. **Corollary 4.1.2**.: _Let \(n\in\mathds{N}\) and \(\mathcal{F}=\{f_{i}\}_{i=0}^{n}\) be a T-system of order \(n\) on some \(\mathcal{X}\) with \(|\mathcal{X}|\geq n+1\). Let \(\mathcal{Y}\subset\mathcal{X}\) with \(|\mathcal{Y}|\geq n+1\). Then \(\mathcal{G}:=\{f_{i}|_{\mathcal{Y}}\}_{i=0}^{n}\) is a T-system of order \(n\) on \(\mathcal{Y}\)._ Proof.: Let \(f\in\operatorname{lin}\mathcal{F}\). Then \(f\) has at most \(n\) zeros in \(\mathcal{X}\) and hence \(f|_{\mathcal{Y}}\) has at most \(n\) zeros in \(\mathcal{Y}\subset\mathcal{X}\). Since for any \(g\in\operatorname{lin}\mathcal{G}\) there is a \(f\in\operatorname{lin}\mathcal{F}\) such that \(g=f|_{\mathcal{Y}}\) we have the assertion. The set \(\mathcal{X}\) does not require any structure or property except \(|\mathcal{X}|\geq n+1\). In the theory of T-systems we often deal with one special matrix. We use the following abbreviation. **Definition 4.1.3**.: Let \(n\in\mathds{N}\), \(\{f_{i}\}_{i=0}^{n}\) be a family of real functions on a set \(\mathcal{X}\) with \(|\mathcal{X}|\geq n+1\). 
We define the matrix \[\begin{pmatrix}f_{0}&f_{1}&\dots&f_{n}\\ x_{0}&x_{1}&\dots&x_{n}\end{pmatrix}:=\begin{pmatrix}f_{0}(x_{0})&f_{1}(x_{0})& \dots&f_{n}(x_{0})\\ f_{0}(x_{1})&f_{1}(x_{1})&\dots&f_{n}(x_{1})\\ \vdots&\vdots&&\vdots\\ f_{0}(x_{n})&f_{1}(x_{n})&\dots&f_{n}(x_{n})\end{pmatrix}=(f_{i}(x_{j}))_{i,j=0}^ {n}\] for any \(x_{0},\dots,x_{n}\in\mathcal{X}\). **Lemma 4.1.4** (see e.g. [10, p. 31]).: _Let \(n\in\mathds{N}\), \(\mathcal{X}\) be a set with \(|\mathcal{X}|\geq n+1\), and \(\mathcal{F}=\{f_{i}\}_{i=0}^{n}\) be a family of real functions \(f_{i}:\mathcal{X}\to\mathds{R}\). The following are equivalent:_ 1. \(\mathcal{F}\) _is a T-system of order_ \(n\) _on_ \(\mathcal{X}\)_._ 2. _The determinant_ \[\det\begin{pmatrix}f_{0}&f_{1}&\dots&f_{n}\\ x_{0}&x_{1}&\dots&x_{n}\end{pmatrix}\] _does not vanish for any pairwise distinct points_ \(x_{0},\dots,x_{n}\in\mathcal{X}\)_._ Proof.: (i) \(\Rightarrow\) (ii): Let \(x_{0},\dots,x_{n}\in\mathcal{X}\) be pairwise distinct. Since \(\mathcal{F}\) is a T-system we have that any non-trivial polynomial \(f\) has at most \(n\) zeros, i.e., the matrix \[\begin{pmatrix}f_{0}&f_{1}&\dots&f_{n}\\ x_{0}&x_{1}&\dots&x_{n}\end{pmatrix}\] has trivial kernel and hence its determinant is non-zero. Since \(x_{0},\dots,x_{n}\in\mathcal{X}\) are arbitrary pairwise distinct we have (ii). (ii) \(\Rightarrow\) (i): Assume there is a polynomial \(f\) with \(\sum_{i=0}^{n}a_{i}^{2}>0\) which has the \(n+1\) pairwise distinct zeros \(z_{0},\dots,z_{n}\in\mathcal{X}\). Then the matrix \[Z=\begin{pmatrix}f_{0}&f_{1}&\dots&f_{n}\\ z_{0}&z_{1}&\dots&z_{n}\end{pmatrix}\] has non-trivial kernel since \(0\neq(a_{0},a_{1},\dots,a_{n})^{T}\in\ker Z\) and hence \(\det Z=0\) in contradiction to (ii). **Corollary 4.1.5**.: _Let \(n\in\mathds{N}\), and \(\mathcal{F}=\{f_{i}\}_{i=0}^{n}\) be a T-system of order \(n\) on some \(\mathcal{X}\subseteq\mathds{R}\) with \(|\mathcal{X}|\geq n+1\). The following hold:_ 1. _The functions_ \(f_{0},\dots,f_{n}\) _are linearly independent over_ \(\mathcal{X}\)_._ 2. _For any_ \(f=\sum_{i=0}^{n}a_{i}\cdot f_{i}\in\ln\mathcal{F}\) _the coefficients_ \(a_{i}\) _are unique._ Proof.: Follows immediately from Lemma 4.1.4 (i) \(\Rightarrow\) (ii). We even have the following. **Theorem 4.1.6** (see e.g. [10, p. 33]).: _Let \(n\in\mathds{N}\), \(\mathcal{F}\) be a T-system on some set \(\mathcal{X}\) with \(|\mathcal{X}|\geq n+1\), and let \(x_{0},\dots,x_{n}\in\mathcal{X}\) be pairwise different points. The following hold:_ 1. _Any_ \(f\in\ln\mathcal{F}\) _is uniquely determined by its values_ \(f(x_{0}),\dots,f(x_{n})\)_._ 2. _For any_ \(y_{0},\dots,y_{n}\in\mathds{R}\) _there exists a unique_ \(f\in\ln\mathcal{F}\) _with_ \(f(x_{i})=y_{i}\) _for all_ \(i=0,\dots,n\) Proof.: (i): Since \(f\in\operatorname{lin}\mathcal{F}\) we have \(f=\sum_{i=0}^{n}a_{i}\cdot f_{i}\). Let \(x_{1},\dots,x_{n}\in\mathcal{F}\) be pairwise distinct. Then by Lemma4.1.4 (i) \(\Rightarrow\) (ii) we have that \[\begin{pmatrix}f(x_{0})\\ \vdots\\ f(x_{n})\end{pmatrix}=\begin{pmatrix}f_{0}&f_{1}&\dots&f_{n}\\ x_{0}&x_{1}&\dots&x_{n}\end{pmatrix}\cdot\begin{pmatrix}\alpha_{0}\\ \vdots\\ \alpha_{n}\end{pmatrix}\] has the unique solution \(\alpha_{0}=a_{0}\),..., \(\alpha_{n}=a_{n}\). 
(ii): By the same argument as in (i) the system \[\begin{pmatrix}y_{0}\\ \vdots\\ y_{n}\end{pmatrix}=\begin{pmatrix}f_{0}&f_{1}&\dots&f_{n}\\ x_{0}&x_{1}&\dots&x_{n}\end{pmatrix}\cdot\begin{pmatrix}\alpha_{0}\\ \vdots\\ \alpha_{n}\end{pmatrix}\] has the unique solution \(\alpha_{0}=a_{0}\),..., \(\alpha_{n}=a_{n}\). So far we imposed no structure on \(\mathcal{X}\). We now impose structure on \(\mathcal{X}\). The following structural result was proved in [14] for compact subsets \(\mathcal{X}\) of \(\mathds{R}^{n}\) and for arbitrary compact sets \(\mathcal{X}\) in [10, 11]. **Theorem 4.1.7** ([14, Thm. 2], [10], [11, Thm. 8 and Cor.]).: _Let \(n\in\mathds{N}\) and \(\mathcal{F}\) be a continuous T-system of order \(n\) on a topological space \(\mathcal{X}\). If \(\mathcal{X}\) is a compact metrizable space then \(\mathcal{X}\) can be homeomorphically embedded in the unit circle \(\{(x,y)\in\mathds{R}^{2}\,|\,x^{2}+y^{2}=1\}\)._ **Corollary 4.1.8** ([11, Thm. 8]).: _The order \(n\) of a periodic T-system is even._ Proof.: Let \(\varphi:[0,2\pi]\to S=\{(x,y)\in\mathds{R}^{2}\,|\,x^{2}+y^{2}\}\) with \(\varphi(\alpha)=(\sin\alpha,\cos\alpha)\) and \(\mathcal{F}=\{f_{i}\}_{i=0}^{n}\) be a periodic T-system. Then the \(f_{i}\) are continuous and hence also \[\det\begin{pmatrix}f_{0}&f_{1}&\dots&f_{n}\\ t_{0}&t_{1}&\dots&t_{n}\end{pmatrix}\] is continuous in \(t_{0},\dots,t_{n}\in S\). If \(\mathcal{F}\) is a T-system we have that \[d(\alpha):=\det\begin{pmatrix}f_{0}&f_{1}&\dots&f_{n}\\ \varphi(\alpha)&\varphi(\alpha+2\pi/(n+1))&\dots&\varphi(\alpha+2n\pi/(n+1)) \end{pmatrix}\] in non-zero for all \(\alpha\in[0,2\pi]\) and never changes singes. If \(n\) is odd then \(d(0)=-d(2\pi/(n+1))\) which is a contradiction. Hence, \(n\) must be even. ### Examples of T-systems **Examples 4.2.1** (algebraic polynomials, see e.g. [12, 13]).: **(a)** Let \(n\in\mathds{N}\) and \(\mathcal{X}\subseteq\mathds{R}\) with \(|\mathcal{X}|\geq n+1\). Then the family \(\mathcal{F}=\{x^{i}\}_{i=0}^{n}\) is a T-system. This follows immediately from the Vandermonde determinant \[\det\begin{pmatrix}1&x&\dots&x^{n}\\ x_{0}&x_{1}&\dots&x_{n}\end{pmatrix}=\prod_{0\leq i<j\leq n}(x_{j}-x_{i}) \tag{3}\] for any \(x_{0},\dots,x_{n}\in\mathcal{X}\). Note that we abuse the notation for the algebraic polynomial cases. The functions \(f_{0},\dots,f_{n}\) should not be denoted by \(x^{i}\) but by \[\cdot^{i}:\mathds{R}\to\mathds{R},\ x\mapsto x^{i}.\] However, then we have the notation \[\begin{pmatrix}.^{0}&.^{1}&\dots&.^{n}\\ x_{0}&x_{1}&\dots&x_{n}\end{pmatrix}\qquad\text{or more general}\qquad\begin{pmatrix}. ^{\alpha_{0}}&.^{\alpha_{1}}&\dots&.^{\alpha_{n}}\\ x_{0}&x_{1}&\dots&x_{n}\end{pmatrix}\] which seems hard to read. For convenience we will therefore abuse the notation and use \(x^{i}\) and (3). **(b)**: Let \(n\in\mathds{N}\), \(\mathcal{X}\subseteq[0,\infty)\) with \(|\mathcal{X}|\geq n+1\), and \(\alpha_{0}=0<\alpha_{1}<\dots<\alpha_{n}\) be real numbers. Then \(\mathcal{F}=\{x^{\alpha_{i}}\}_{i=0}^{n}\) is a T-system of order \(n\) on \(\mathcal{X}\). **(c)**: Let \(n\in\mathds{N}\), \(\mathcal{X}\subseteq(0,\infty)\) with \(|\mathcal{X}|\geq n+1\), and \(0<\alpha_{0}<\dots<\alpha_{n}\) be real numbers. Then \(\mathcal{F}=\{x^{\alpha_{i}}\}_{i=0}^{n}\) is a T-system of order \(n\) on \(\mathcal{X}\). \(\circ\) **Example 4.2.2** (see e.g. [10, p. 38]).: Let \(n\in\mathds{N}\) and \(\alpha_{0}<\alpha_{1}<\dots<\alpha_{n}\) be reals. 
Then \[\mathcal{F}=\{e^{\alpha_{0}x},e^{\alpha_{1}x},\dots,e^{\alpha_{n}x}\}\] is a T-system on any \(\mathcal{X}\subseteq\mathds{R}\) with \(|\mathcal{X}|\geq n+1\). \(\circ\) **Example 4.2.3** (see e.g. [10, p. 37-38]).: Let \(n\in\mathds{N}\) and \(\alpha_{0}<\alpha_{1}<\dots<\alpha_{n}\) be reals. Then \[\mathcal{F}=\left\{\frac{1}{x+\alpha_{0}},\frac{1}{x+\alpha_{1}},\dots,\frac{ 1}{x+\alpha_{n}}\right\}\] is a continuous T-system on any \([a,b]\) or \([a,\infty)\) with \(-\alpha_{0}<a<b\). \(\circ\) **Example 4.2.4** (see e.g. [10, p. 38]).: Let \(n\in\mathds{N}\) and let \(f\in C^{n}(\mathcal{X})\) with \(\mathcal{X}=[a,b]\), \(a<b\), and \(f^{(n)}>0\) on \(\mathcal{X}\). Then \[\mathcal{F}=\{1,x,x^{2},\dots,x^{n-1},f\}\] is a continuous T-system of order \(n\) on \(\mathcal{X}=[a,b]\). We can also allow \(\mathcal{X}=(a,b)\), \([a,\infty)\), \((-\infty,b)\), \(\dots\). \(\circ\) **Example 4.2.5** (see e.g. [10, p. 10]).: Let \(n\in\mathds{N}\) and \(\mathcal{F}=\{f_{i}\}_{i=0}^{n}\) be a (continuous) T-systems on \(\mathcal{X}\subseteq\mathds{R}\) with \(|\mathcal{X}|\geq n+1\). Then for any (continuous) function \(r:\mathcal{X}\to(0,\infty)\) the family \(\{r\cdot f_{i}\}_{i=0}^{n}\) is a (continuous) T-system. \(\circ\) **Example 4.2.6**.: Let \(n\in\mathds{N}\), \(\{f_{i}\}_{i=0}^{n}\) be a (continuous) T-system of order \(n\) on \(\mathcal{X}\subseteq\mathds{R}\) and let \(g:\mathcal{Y}\subseteq\mathds{R}\to\mathcal{X}\) be a strictly increasing (continuous) function. Then \(\{f_{i}\circ g\}_{i=0}^{n}\) is a (continuous) T-systems of order \(n\) on \(\mathcal{Y}\). \(\circ\) ### Non-Negativity, Zeros, and Determinantal Representations of Polynomials in T-Systems **Theorem 4.3.1** (see e.g. [10, p. 20] or [10, p. 33]).: _Let \(n\in\mathds{N}\), \(\mathcal{F}=\{f_{i}\}_{i=0}^{n}\) be a T-system on some set \(\mathcal{X}\) with \(|\mathcal{X}|\geq n+1\), \(x_{1},\dots,x_{n}\in\mathcal{X}\) be \(n\) distinct points, and let \(f\in\operatorname{lin}\mathcal{F}\) be a polynomial. The following are equivalent:_ 1. \(f(x_{i})=0\) _holds for all_ \(i=1,\dots,n\)_._ 2. _There exists a constant_ \(c\in\mathds{R}\) _such that_ \[f(x)=c\cdot\det\begin{pmatrix}f_{0}&f_{1}&\dots&f_{n}\\ x&x_{1}&\dots&x_{n}\end{pmatrix}.\] Proof.: (ii) \(\Rightarrow\) (i): Clear. (i) \(\Rightarrow\) (ii): If \(f=0\) then \(c=0\) so the assertion holds. If \(f\neq 0\) then there is a \(x_{0}\in\mathcal{X}\setminus\{x_{1},\dots,x_{n}\}\) such that \(f(x)\neq 0\). Then also the determinant in (ii) is non-zero and we can choose \(c\) such that both \(f\) and the scaled determinant coincide also in \(x_{0}\). By Corollary 4.1.5 a polynomial is uniquely determined by \(x_{0},\dots,x_{n}\) which shows that (ii) is one and hence the only possible polynomial which fulfills (i). So far we treated general T-systems. For further properties we go to continuous T-systems. **Definition 4.3.2**.: Let \(n\in\mathds{N}\), \(\mathcal{F}\) be a continuous T-system on \(\mathcal{X}\subseteq\mathds{R}\) an interval, \(f\in\ln\mathcal{F}\), and let \(x_{0}\) be a zero of \(f\). Then \(x_{0}\in\operatorname{int}\mathcal{X}\) is called a _non-nodal_ zero if \(f\) does not change sign at \(x_{0}\). Otherwise the zero \(x_{0}\) is called _nodal_, i.e., either \(f\) changes signs at \(x_{0}\) or \(x_{0}\) is a boundary point of \(\mathcal{X}\). The following result bounds the number of nodal and non-nodal zeros. **Theorem 4.3.3** (see [12] or e.g. [13, p. 34, Thm. 
1.1]).: _Let \(n\in\mathds{N}\), \(\mathcal{F}\) be a continuous T-system of order \(n\) on \(\mathcal{X}=[a,b]\) with \(a<b\)._ _If \(f\in\operatorname{lin}\mathcal{F}\) has \(k\in\mathds{N}_{0}\) non-nodal zeros and \(l\in\mathds{N}_{0}\) nodal zeros in \(\mathcal{X}\) then \(2k+l\leq n\)._ The proof is adapted from [13, Thm. 1.1]. Proof.: \(\mathcal{X}=[a,b]\) and \(k=0\): If \(f\in\operatorname{lin}\mathcal{F}\) has \(l\) zeros then \(n\geq l\) by Definition 4.1.1. \(\underline{\mathcal{X}=[a,b]}\) and \(k\geq 1\): Let \(x_{1},\ldots,x_{k+l}\in\mathcal{X}\) be the zeros of \(f\). Set \[M_{i}:=\max_{x_{i-1}\leq x\leq x_{i}}|f(x)|\] for \(i=1,\ldots,k+l+1\) with \(x_{0}=a\) and \(x_{k+l+1}=b\). Additionally, set \[m:=\frac{1}{2}\min_{i=1,\ldots,k+l+1}M_{i}>0.\] We construct a polynomial \(g_{1}\in\operatorname{lin}\mathcal{F}\) such that (a) \(g_{1}\) has the value \(m\) at the non-nodal zeros \(x_{i}\) of \(f\) with \(f\geq 0\) in a neighborhood of \(x_{i}\), (b) \(g_{1}\) has the value \(-m\) at the non-nodal zeros \(x_{i}\) of \(f\) with \(f\leq 0\) in a neighborhood of \(x_{i}\), and (c) \(g_{1}\) vanishes at all nodal zeros \(x_{i}\). After renumbering the \(x_{i}\)'s we can assume \(x_{1},\ldots,x_{k_{1}}\) fulfill (a), \(x_{k_{1}+1},\ldots,x_{k_{1}+k_{2}}\) fulfill (b), and \(x_{k_{1}+k_{2}+1},\ldots,x_{k_{1}+k_{2}+l}\) fulfill (c) with \(k_{1}+k_{2}=k\). By Definition 4.1.1 we have \(k+l\leq n\) and hence by Lemma 4.1.4 we have that \[\begin{pmatrix}m\\ \vdots\\ m\\ -m\\ \vdots\\ -m\\ 0\\ \vdots\\ 0\end{pmatrix}=\begin{pmatrix}f_{0}(x_{1})&f_{1}(x_{1})&\ldots&f_{n}(x_{1})\\ \vdots&\vdots&&\vdots\\ f_{0}(x_{k+l})&f_{1}(x_{k+l})&\ldots&f_{n}(x_{k+l})\end{pmatrix}\cdot\begin{pmatrix} \beta_{0}\\ \vdots\\ \beta_{n}\end{pmatrix}\] has at least one solution, say \(\beta_{0}=b_{0}\), \(\ldots\), \(\beta_{n}=b_{n}\). Then \(g_{1}=\sum_{i=0}^{n}b_{i}\cdot f_{i}\in\operatorname{lin}\mathcal{F}\) fulfills (a) to (c). Set \[\rho:=\frac{m}{2\cdot\|g_{1}\|_{\infty}}\] and let \(g_{2}:=f-\rho\cdot g_{1}\). We show that to each non-nodal zero \(x_{i}\) of \(f\) there correspond two zeros of \(g_{2}\): Let \(x_{i}\) be a non-nodal zero of \(f\) with \(f\geq 0\) in a neighborhood of \(x_{i}\), say. We can find a point \(y_{i}\in(x_{i-1},x_{i})\) and a point \(y_{i+1}\in(x_{i},x_{i+1})\) such that \[f(y_{i})=M_{i}>m\qquad\text{and}\qquad f(y_{i+1})=M_{i+1}>m.\] Therefore, \(g_{2}(y_{i})>0\) and \(g_{2}(y_{i+1})>0\). Since \(g_{2}(x_{i})=-\rho\cdot m<0\) it follows that \(g_{2}\) has a zero both in \((y_{i},x_{i})\) and \((x_{i},y_{i+1})\). Additionally, \(g_{2}\) also vanishes at all nodal zeros of \(f\) and so has at least \(2k+l\) distinct zeros. Therefore, by Definition 4.1.1 we have \(2k+l\leq n\). **Corollary 4.3.4**.: _Theorem 4.3.3 also holds for sets \(\mathcal{X}\subseteq\mathds{R}\) of the form_ 1. \(\mathcal{X}=(a,b)\)_,_ \([a,b)\)_,_ \((a,b]\) _with_ \(a<b\)_,_ 2. \(\mathcal{X}=(a,\infty)\)_,_ \([a,\infty)\)_,_ \((-\infty,b)\)_,_ \((-\infty,b]\)_,_ 3. \(\mathcal{X}=\{x_{1},\ldots,x_{k}\}\subseteq\mathds{R}\) _with_ \(k\geq n+1\) _and_ \(x_{1}<\cdots<x_{k}\)_, and_ 4. _countable unions of (i) to (iii)._ Proof.: \(\mathcal{X}=[0,\infty)\): Let \(0\leq x_{1}<\cdots<x_{k}\) be the zeros of \(f\) in \([0,\infty)\). Since every T-system on \([0,\infty)\) is also a T-system on \([0,b]\) for any \(b>0\) the claim follows from Theorem 4.3.3 with \(b=x_{k}+1\). For the other assertions adapt (if necessary) the proof of Theorem 4.3.3. **Definition 4.3.5**.: Let \(x\in[a,b]\) with \(a\leq b\).
We define the _index_\(\varepsilon(x)\) by \[\varepsilon(x):=\begin{cases}2&\text{if }x\in(a,b),\\ 1&\text{if }x=a\text{ or }b.\end{cases}\] The same definition holds for sets \(\mathcal{X}\) as in Corollary 4.3.4. **Definition 4.3.6**.: Let \(n\in\mathds{N}\) and \(\mathcal{F}\) be a T-system of order \(n\) on some set \(\mathcal{X}\). We define \[(\operatorname{lin}\mathcal{F})^{e} :=\left\{\sum_{i=0}^{n}a_{i}\cdot f_{i}\,\middle|\,\sum_{i=0}^{n} a_{i}^{2}=1\right\},\] \[(\operatorname{lin}\mathcal{F})_{+} :=\left\{f\in\mathcal{F}\,\middle|\,f\geq 0\text{ on }\mathcal{X}\right\},\] and \[(\operatorname{lin}\mathcal{F})^{e}_{+} :=(\operatorname{lin}\mathcal{F})^{e}\cap(\operatorname{lin} \mathcal{F})_{+}.\] With these definitions we can prove the following existence criteria for non-negative polynomials in a T-systems on \([a,b]\). **Theorem 4.3.7**.: _Let \(n\in\mathds{N}\), \(\mathcal{F}\) be a continuous T-system on \(\mathcal{X}=[a,b]\), and \(x_{1},\ldots,x_{m}\in\mathcal{X}\). The following are equivalent:_ 1. _The points_ \(x_{1},\ldots,x_{m}\) _are zeros of a non-negative polynomial_ \(f\in\operatorname{lin}\mathcal{F}\)_._ 2. \(\sum_{i=1}^{m}\varepsilon(x_{i})\leq n\)_._ The proof is adapted from [11, p. 35, Thm. 1.2]. Proof.: "(i) \(\Rightarrow\) (ii)" is Theorem 4.3.3 and we therefore only have to prove "(ii) \(\Rightarrow\) (i)". Case I: At first assume that \(a<x_{1}<\cdots<x_{m}<b\) and \(\sum_{i=0}^{m}\varepsilon(x_{i})=2m=n\). If \(2m<n\) then add \(k\) additional points \(x_{m+1},\ldots,x_{m+k}\) such that \(2m+2k=n\) and \(x_{m}<x_{m+1}<\cdots<x_{m+k}<b\). Select a sequence of points \((x_{1}^{(j)},\ldots,x_{m}^{(j)})\in\mathds{R}^{m}\), \(j\in\mathds{N}\), such that \[a<x_{1}<x_{1}^{(j)}<\cdots<x_{m}<x_{m}^{(j)}<b\] for all \(j\in\mathds{N}\) and \(\lim_{j\to\infty}x_{i}^{(j)}=x_{i}\) for all \(i=1,\ldots,m\). Set \[g_{j}(x):=c_{j}\cdot\det\begin{pmatrix}f_{0}&f_{1}&f_{2}&\ldots&f_{2m-1}&f_{2m }\\ x&x_{1}&x_{1}^{(j)}&\ldots&x_{m}&x_{m}^{(j)}\end{pmatrix}\quad\in(\ln\mathcal{ F})^{e} \tag{4}\] for some \(c_{j}>0\). Since \((\ln\mathcal{F})^{e}\) is compact we can assume that \(g_{j}\) converges to some \(g_{0}\in(\ln\mathcal{F})^{e}\). Then \(g_{0}\) has \(x_{1},\ldots,x_{m}\) as zeros with \(\varepsilon(x_{i})=2\) and \(g_{0}\) is non-negative since \(g_{j}>0\) on \([a,x_{1})\), \((x_{1}^{(j)},x_{2})\),..., \((x_{m-1}^{(j)},x_{m})\), and \((x_{m}^{(j)},b]\) as well as \(g_{j}<0\) on \((x_{1},x_{1}^{(j)})\), \((x_{2},x_{2}^{(j)})\),..., \((x_{m},x_{m}^{(j)})\). Case II: If \(a=x_{1}<x_{2}<\cdots<x_{m}<b\) with \(\sum_{i=1}^{m}\varepsilon(x_{i})=2m-1=n\) the only modification required in case I is to replace (4) by \[g_{j}(x):=-c_{j}\cdot\det\begin{pmatrix}f_{0}&f_{1}&f_{2}&f_{3}&\ldots&f_{2m- 2}&f_{2m-1}\\ x&a&x_{2}&x_{2}^{(j)}&\ldots&x_{m}&x_{m}^{(j)}\end{pmatrix}\quad\in(\ln\mathcal{ F})^{e}\] with some normalizing factor \(c_{j}>0\). Case III: The procedure is similar if \(x_{m}=b\) and \(\sum_{i=1}^{m}\varepsilon(x_{i})=n\). _Remark 4.3.8_.: Theorem 4.3.7 appears in [11, p. 35, Thm. 1.2] in a stronger version. In [11, p. 35, Thm. 1.2] Krein claims that the \(x_{1},\ldots,x_{m}\) are the only zeros of some non-negative \(f\in\ln\mathcal{F}\). This holds when \(n=2m+2p\) for some \(p>0\). To see this add to \(x_{1},\ldots,x_{m}\) in (4) points \(x_{m+1},\ldots,x_{m+p}\in\operatorname{int}\mathcal{X}\setminus\{x_{1},\ldots,x_{m}\}\) and get \(g_{0}\). Hence, \(g_{0}\geq 0\) has exactly the zeros \(x_{1},\ldots,x_{m+p}\). 
Then construct in a similar way \(g_{0}^{\prime}\) with the zeros \(x_{1},\ldots,x_{m},x_{m+1}^{\prime},\ldots,x_{m+p}^{\prime}\) with \(x_{m+1}^{\prime},\ldots,x_{m+p}^{\prime}\in\operatorname{int}\mathcal{X} \setminus\{x_{1},\ldots,x_{m+p}\}\). Hence, \(g_{0}+g_{0}^{\prime}\geq 0\) has only the zeros \(x_{1},\ldots,x_{m}\). A similar construction works for \(n=2m+1\) with or without end points \(a\) or \(b\). However, Krein misses that for \(n=2m\) and when one end point is contained in \(x_{1},\ldots,x_{m}\) then it might happen that also the other end point must appear. In [11, p. 28, Thm. 5.1] additional conditions are given which ensure that \(x_{1},\ldots,x_{m}\) are the only zeros of some \(f\geq 0\). For example if also \(\{f_{i}\}_{i=0}^{n-1}\) is a T-system then is can be ensured that \(x_{1},\ldots,x_{m}\) are the only zeros of some non-negative polynomial \(f\in\ln\mathcal{F}\), see [11, p. 28, Thm. 5.1 (b-i)]. For our main example(s), the algebraic polynomials with gaps, this holds. The same problem appears in [11, p. 36, Thm. 1.3]. A weaker but correct version is given in Theorem 4.3.11 below. \(\circ\) _Remark 4.3.9_.: Assume that in Theorem 4.3.7 we have additionally that \(f_{0},\ldots,f_{n}\in C^{1}([a,b])\). Then in (4) we can set \(x_{i}^{(j)}=x_{i}+j^{-1}\) for all \(i=0,\ldots,m\) and \(j\gg 1\). For \(j\to\infty\) with \(c_{j}=j^{m}\) we then get \[g_{0}(x) =\lim_{j\to\infty}j^{m}\cdot\det\begin{pmatrix}f_{0}&f_{1}&f_{2}& \ldots&f_{2m-1}&f_{2m}\\ x&x_{1}&x_{1}+j^{-1}&\ldots&x_{m}&x_{m}+j^{-1}\end{pmatrix}\] \[=\lim_{j\to\infty}j^{m}\cdot\det\begin{pmatrix}f_{0}(x)&\ldots&f_{ 2m}(x)\\ f_{0}(x_{1})&\ldots&f_{2m}(x_{1})\\ f_{0}(x_{1}+j^{-1})&\ldots&f_{2m}(x_{1}+j^{-1})\\ \vdots&&\vdots\\ f_{0}(x_{m})&\ldots&f_{2m}(x_{m})\\ f_{0}(x_{m}+j^{-1})&\ldots&f_{2m}(x_{m}+j^{-1})\end{pmatrix}\] \[=\lim_{j\to\infty}\det\begin{pmatrix}f_{0}(x)&\dots&f_{2m}(x)\\ f_{0}(x_{1})&\dots&f_{2m}(x_{1})\\ \frac{f_{0}(x_{1}+j^{-1})-f_{0}(x_{1})}{j^{-1}}&\dots&\frac{f_{2m}(x_{1}+j^{-1}) -f_{2m}(x_{1})}{j^{-1}}\\ \vdots&&\vdots\\ f_{0}(x_{m})&\dots&f_{2m}(x_{m})\\ \frac{f_{0}(x_{m}+j^{-1})-f_{0}(x_{m})}{j^{-1}}&\dots&\frac{f_{2m}(x_{m}+j^{-1} )-f_{2m}(x_{m})}{j^{-1}}\end{pmatrix} \tag{5}\] \[=\begin{pmatrix}f_{0}(x)&\dots&f_{2m}(x)\\ f_{0}(x_{1})&\dots&f_{2m}(x_{1})\\ f_{0}^{\prime}(x_{1})&\dots&f_{2m}^{\prime}(x_{1})\\ \vdots&&\vdots\\ f_{0}(x_{m})&\dots&f_{2m}(x_{m})\\ f_{0}^{\prime}(x_{m})&\dots&f_{2m}^{\prime}(x_{m})\end{pmatrix},\] i.e., double zeros are included by including the values \(f_{i}^{\prime}(x_{j})\). Therefore, whenever we have \(C^{1}\)-functions in \(\mathcal{F}=\{f_{i}\}_{i=0}^{n}\) and \(x_{i}=x_{i+1}\) we define \[\begin{pmatrix}f_{0}&\dots&f_{i-1}&f_{i}&f_{i+1}&f_{i+2}&\dots&f_{n}\\ x_{0}&\dots&x_{i-1}&(x_{i}&x_{i})&x_{i+2}&\dots&x_{n}\end{pmatrix}:=\begin{pmatrix} f_{0}(x_{0})&\dots&f_{n}(x_{0})\\ \vdots&&\vdots\\ f_{0}(x_{i-1})&\dots&f_{n}(x_{i-1})\\ f_{0}(x_{i})&\dots&f_{n}(x_{i})\\ f_{0}(x_{i})&\dots&f_{n}^{\prime}(x_{i})\\ f_{0}(x_{i+2})&\dots&f_{n}(x_{i+2})\\ \vdots&&\vdots\\ f_{0}(x_{n})&\dots&f_{n}(x_{n})\end{pmatrix} \tag{6}\] and equivalently when \(x_{j}=x_{j+1}\), \(x_{k}=x_{k+1}\),... for additional entries. We use the additional brackets "(" and ")" to indicate that \(x_{i}\) is inserted in the \(f_{0},\dots,f_{n}\) and then also into \(f_{0}^{\prime},\dots,f_{n}^{\prime}\) to distinguish (6) from Definition 4.1.3 to avoid confusion. 
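As a quick sanity check of the limit computation in (5), the following SymPy sketch treats the simplest instance with a single doubled point and the monomials \(1,x,x^{2}\) (an arbitrary illustrative choice, not the general setting of this remark): the scaled finite-difference determinant indeed equals the determinant with a derivative row.

```python
# Minimal symbolic check of the limit (5): one doubled point, f_i = 1, x, x^2.
# The choice of functions and the point name x1 are illustrative assumptions only.
import sympy as sp

x, x1, h = sp.symbols('x x1 h')
funcs = [sp.Integer(1), x, x**2]

def row(point, order=0):
    # row of the matrix: the order-th derivatives of the f_i evaluated at `point`
    return [sp.diff(f, x, order).subs(x, point) for f in funcs]

# finite-difference version: rows at x, x1 and x1 + h, scaled by h^{-1}
M_h = sp.Matrix([row(x), row(x1), row(x1 + h)])
lhs = sp.cancel(M_h.det() / h).subs(h, 0)

# derivative-row version as in (6): rows at x, x1 and the derivative row at x1
M_star = sp.Matrix([row(x), row(x1), row(x1, order=1)])
rhs = M_star.det()

print(sp.expand(lhs - rhs))   # prints 0, i.e. the two determinants agree
```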
Hence, \[\det\begin{pmatrix}f_{0}&\dots&f_{i-1}&f_{i}&f_{i+1}&f_{i+2}&\dots&f_{n}\\ x_{0}&\dots&x_{i-1}&x_{i}&x_{i}&x_{i+2}&\dots&x_{n}\end{pmatrix}=0\] since in two rows \(x_{i}\) is inserted into \(f_{0},\dots,f_{n}\), while in \[\begin{pmatrix}f_{0}&\dots&f_{i-1}&f_{i}&f_{i+1}&f_{i+2}&\dots&f_{n}\\ x_{0}&\dots&x_{i-1}&(x_{i}&x_{i})&x_{i+2}&\dots&x_{n}\end{pmatrix}\] indicates that \(x_{i}\) is inserted in \(f_{0},\dots,f_{n}\) and then also into \(f_{0}^{\prime},\dots,f_{n}^{\prime}\). Extending this to zeros of order \(k\) for \(C^{k+1}\)-functions is straight forward and we leave it to the reader to write down the formulas and their proofs. Similar to (6) we write for any \(a\leq x_{0}\leq x_{1}\leq\dots\leq x_{n}\leq b\) the matrix as \[\begin{pmatrix}f_{0}&f_{1}&\dots&f_{n}\\ x_{0}&x_{1}&\dots&x_{n}\end{pmatrix}^{*}\] when \(f_{0},\dots,f_{n}\) are sufficiently differentiable. We often want to express polynomials \(f\in\ln\mathcal{F}\) as determinants (4) only by knowing their zeros \(x_{1},\dots,x_{k}\). If arbitrary multiplicities appear we only have \(x_{1}\leq x_{2}\leq\dots\leq x_{n}\) where we include zeros multiple times according to their multiplicities. Hence, for \[x_{0}=\dots=x_{i_{1}}<x_{i_{1}+1}=\dots=x_{i_{2}}<\dots<x_{i_{k}+1}=\dots=x_{n}\] we introduce a simpler notation to write down (6): \[\left(\begin{array}{c|ccccccccc}f_{0}&f_{1}&f_{2}&\dots&f_{n}\\ x&x_{1}&x_{2}&\dots&x_{n}\end{array}\right):=\begin{pmatrix}f_{0}&f_{1}&\dots&f_{i _{1}}&f_{i_{1}+1}&\dots&f_{i_{2}}&\dots&f_{i_{k}+1}&\dots&f_{i_{k}+1}\\ x&(x_{1}&\dots&x_{i_{1}})&(x_{i_{1}+1}&\dots&x_{i_{2}})&\dots&(x_{i_{k}+1}&\dots&x _{n})\end{pmatrix}. \tag{7}\] Clearly \((7)\in\operatorname{lin}\mathcal{F}\). For (7) to be well-defined we need \(\mathcal{F}\subset C^{m-1}\) where \(m\) is the largest multiplicity of any zero. However, the procedure (5) can lead to the zero polynomial. We have to introduce ET-systems, see Section 4.4 and Definition 4.4.1. In Theorem 4.3.7 we did not need the condition \(\mathcal{F}\subset C^{m}\) for some \(m\geq 1\). The limit \(g_{0}\) of the \(g_{j}\) in (4) does not need the unique \(f_{0}^{\prime},\dots,f_{n}^{\prime}\) and therefore the limit needs not to be unique. \(\circ\) **Corollary 4.3.10**.: _Theorem 4.3.7 also holds for intervals \(\mathcal{X}\subseteq\mathds{R}\), i.e.,_ \[\mathcal{X}=(a,b),\ (a,b),\ [a,b),\ [a,b],\ (a,\infty),\ [a,\infty),\ (- \infty,b),\ (-\infty,b],\text{ and }\mathds{R}\qquad\text{with }a<b. \tag{8}\] Proof.: We have that "(i) \(\Rightarrow\) (ii)" follows from Corollary 4.3.4. For "(ii) \(\Rightarrow\) (i)" we apply Theorem 4.3.7 on \([\min_{i}x_{i},\max_{i}x_{i}]\) and Corollary 4.3.4 assures that no additional zeros appear in \(\mathcal{X}\). We will give a sharper version of Theorem 4.3.3, see also Remark 4.3.8. **Theorem 4.3.11**.: _Let \(n\in\mathds{N}\) and \(\mathcal{F}\) be a continuous T-system on \(\mathcal{X}=[a,b]\). Additionally, let \(x_{1},\dots,x_{k}\in\mathcal{X}\) and \(y_{1},\dots,y_{l}\in\mathcal{X}\) be pairwise distinct points. The following are equivalent:_ 1. _There exists a polynomial_ \(f\in\operatorname{lin}\mathcal{F}\) _such that_ 1. \(x_{1},\dots,x_{k}\) _are the non-nodal zeros of_ \(f\) _and_ 2. \(y_{1},\dots,y_{l}\) _are the nodal zeros of_ \(f\)_._ 2. \(2k+l\leq n\)_._ Proof.: (i) \(\Rightarrow\) (ii): That is Theorem 4.3.3. (ii) \(\Rightarrow\) (i): Adapt the proof and especially the \(g_{j}\)'s in (4) of Theorem 4.3.7 accordingly. 
Let \(z_{1}<\dots<z_{k+l}\) be the \(x_{i}\)'s and \(y_{i}\)'s together ordered by size. Then in \(g_{j}\) treat every nodal \(z_{i}\) like the endpoint \(a\) or \(b\), i.e., include it only once in the determinant, and insert for every non-nodal point \(z_{i}\) the point \(z_{i}\) and the sequence \(z_{i}^{(j)}\in(z_{i},z_{i+1})\) with \(\lim_{j\to\infty}z_{i}^{(j)}=z_{i}\). **Corollary 4.3.12**.: _Theorem 4.3.11 also holds for sets \(\mathcal{X}\subseteq\mathds{R}\) of the form_ 1. \(\mathcal{X}=(a,b)\)_,_ \([a,b)\)_,_ \((a,b]\) _with_ \(a<b\)_,_ 2. \(\mathcal{X}=(a,\infty)\)_,_ \([a,\infty)\)_,_ \((-\infty,b)\)_,_ \((-\infty,b]\)_,_ 3. \(\mathcal{X}=\{x_{1},\dots,x_{k}\}\subseteq\mathds{R}\) _with_ \(k\geq n+1\) _and_ \(x_{1}<\dots<x_{k}\)_, and_ 4. _countable unions of (i) to (iii)._ Proof.: In the adapted proof and the \(g_{j}\)'s in (4) of Theorem 4.3.7 we do not need to have non-negativity, i.e., in the \(g_{j}\)'s sign changes at the \(y_{i}\)'s are allowed (and even required). ### ET-Systems **Definition 4.4.1**.: Let \(n\in\mathds{N}\) and let \(\mathcal{F}=\{f_{i}\}_{i=0}^{n}\subset C^{n}([a,b])\) be a T-system of order \(n\) on \([a,b]\) with \(a<b\). \(\mathcal{F}\) is called an _extended Tchebycheff system (ET-system) of order \(n\)_ if any polynomial \(f\in\operatorname{lin}\mathcal{F}\setminus\{0\}\) has at most \(n\) zeros counting algebraic multiplicities. For notation of the matrices \[\begin{pmatrix}f_{0}&f_{1}&\dots&f_{n}\\ x_{0}&x_{1}&\dots&x_{n}\end{pmatrix}^{*}\] for \(a\leq x_{0}\leq x_{1}\leq\dots x_{n}\leq b\) see the previous Remark 4.3.9. **Corollary 4.4.2** ([11] or e.g. [12, p. 37, p.1.1]).: _Let \(n\in\mathds{N}\) and \(\mathcal{F}=\{f_{i}\}_{i=0}^{n}\subset C^{n}([a,b])\). The following are equivalent:_ 1. \(\mathcal{F}\) _is an ET-system._ 2. _We have_ \[\det\begin{pmatrix}f_{0}&f_{1}&\dots&f_{n}\\ x_{0}&x_{1}&\dots&x_{n}\end{pmatrix}^{*}\neq 0\] _for every_ \(a\leq x_{0}\leq x_{1}\leq\dots\leq x_{n}\leq b\)_._ Proof.: Follows immediately from Remark 4.3.9. **Example 4.4.3** (see e.g. [12, p. 19, Exm. 12]).: Let \(n\in\mathds{N}\) and \(g_{0},\dots,g_{n}\in C^{n}([a,b])\) such that \(g_{0},\dots,g_{n}>0\) on \([a,b]\) with \(a<b\). Define \[f_{0}(x) :=g_{0}(x)\] \[f_{1}(x) :=g_{0}(x)\cdot\int_{a}^{x}g_{1}(y_{1})\ \mathrm{d}y_{1}\] \[f_{2}(x) :=g_{0}(x)\cdot\int_{a}^{x}g_{1}(y_{1})\cdot\int_{a}^{y_{1}}g_{2 }(y_{2})\ \mathrm{d}y_{2}\ \mathrm{d}y_{1}\] \[\quad\vdots\] \[f_{n}(x) :=g_{0}(x)\cdot\int_{a}^{x}g_{1}(y_{1})\cdot\int_{a}^{y_{1}}g_{2 }(y_{2})\ \dots\int_{a}^{y_{n-1}}g_{n}(y_{n})\ \mathrm{d}y_{n}\ \dots\ \mathrm{d}y_{2}\ \mathrm{d}y_{1}.\] Then \(\{f_{i}\}_{i=0}^{n}\) is an ET-system on \([a,b]\). \(\circ\) **Example 4.4.4**.: Let \(\mathcal{F}=\{1,x,x^{3}\}\) on \([0,b]\), \(b>0\). Then \(\mathcal{F}\) is a T-system (Example 4.2.1(b)) but not an ET-system. To see this let \(x_{0}=x_{1}=x_{2}=0\), then \[\begin{pmatrix}f_{0}&f_{1}&f_{2}\\ 0&0&0\end{pmatrix}^{*}=\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&0&0\end{pmatrix}.\] This shows that \(\mathcal{F}\) is not an ET-system. \(\circ\) In the previous example the position \(x=0\) prevents the T-system to be a ET-system. If \(x=0\) is removed then it is an ET-system. **Example 4.4.5**.: Let \(\alpha_{0},\ldots,\alpha_{n}\in\mathds{N}_{0}\) with \(\alpha_{0}<\alpha_{1}<\cdots<\alpha_{n}\). Then \(\mathcal{F}=\{x^{\alpha_{i}}\}_{i=0}^{n}\) on \([a,\infty)\) with \(a>0\) is an ET-system. 
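Before the determinant is written out explicitly in (9) below, here is a small SymPy sketch checking this claim and the preceding Example 4.4.4; the exponents and the evaluation points are arbitrary sample values, not data from the text.

```python
# Small SymPy check of Examples 4.4.4 and 4.4.5; the exponents and the sample
# points below are arbitrary illustrative choices.
import sympy as sp

x = sp.Symbol('x')

def starred_det(funcs, points):
    # Starred matrix as in Remark 4.3.9: a point repeated r times contributes
    # the rows of its 0th, 1st, ..., (r-1)th derivative evaluations.
    rows, prev, order = [], None, 0
    for p in points:
        order = order + 1 if p == prev else 0
        rows.append([sp.diff(f, x, order).subs(x, p) for f in funcs])
        prev = p
    return sp.Matrix(rows).det()

# Example 4.4.4: {1, x, x^3} on [0, b] is a T-system but not an ET-system,
# since the starred determinant at the triple point 0 vanishes.
print(starred_det([sp.Integer(1), x, x**3], [0, 0, 0]))                 # 0

# Example 4.4.5: monomials x^{alpha_i} on [a, oo) with a > 0 form an ET-system;
# the starred determinant at positive points with double entries is non-zero.
alphas = [0, 2, 3, 5, 7]                     # assumed exponents alpha_0 < ... < alpha_4
funcs = [x**a for a in alphas]
print(starred_det(funcs, [sp.Rational(1, 2), 1, 1, 2, 2]))              # non-zero
```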
For \(n=2m\) and \(a<x_{1}<x_{2}<\cdots<x_{m}\) we often encounter a specific polynomial structure and hence we write it down explicitly once: \[\det\begin{pmatrix}x^{\alpha_{0}}&x^{\alpha_{1}}&x^{\alpha_{2}}& \ldots&x^{\alpha_{2m-1}}&x^{\alpha_{2m}}\\ x&(x_{1}&x_{1})&\ldots&(x_{m}&x_{m})\end{pmatrix}\] \[=\lim_{\varepsilon\to 0}\varepsilon^{-m}\cdot\det\begin{pmatrix}x^{ \alpha_{0}}&x^{\alpha_{1}}&x^{\alpha_{2}}&\ldots&x^{\alpha_{2m-1}}&x^{\alpha_ {2m}}\\ x&x_{1}&x_{1}+\varepsilon&\ldots&x_{m}&x_{m}+\varepsilon\end{pmatrix}\] \[=\lim_{\varepsilon\to 0}\left[\prod_{i=1}^{m}(x_{i}-x)(x_{i}+ \varepsilon-x)\right]\cdot\left[\prod_{1\leq i<j\leq m}(x_{j}-x_{i})^{2}(x_{j }-x_{i}-\varepsilon)(x_{j}+\varepsilon-x_{i})\right] \tag{9}\] \[\qquad\times s_{\alpha}(x,x_{1},x_{1}+\varepsilon,\ldots,x_{m},x_ {m}+\varepsilon)\] \[=\prod_{i=1}^{m}(x_{i}-x)^{2}\cdot\prod_{1\leq i<j\leq m}(x_{j}-x _{i})^{4}\cdot s_{\alpha}(x,x_{1},x_{1},\ldots,x_{m},x_{m})\] where \(s_{\alpha}\) is the Schur polynomial of \(\alpha=(\alpha_{0},\ldots,\alpha_{n})\)[10]. Hence, \(s_{\alpha}(x,x_{1},x_{1},\ldots,x_{m},x_{m})\) is not divisible by some \((x_{i}-x)\). In fact, this is a special case of Example 4.4.3. With Example 4.4.3 we can even allow \(-\infty<\alpha_{0}<\alpha_{1}<\cdots<\alpha_{n}<\infty\) to be reals since \(a>0\). \(\circ\) Proof.: Combine the induction \[f^{(m+1)}(x)=\lim_{h\to 0}\frac{f^{(m)}(x+h)-f^{(m)}(x)}{h}\] and \[\det\begin{pmatrix}x^{\alpha_{0}}&\ldots&x^{\alpha_{n}}\\ x_{0}&\ldots&x_{n}\end{pmatrix}=\prod_{0\leq i<j\leq n}(x_{j}-x_{i})\cdot s_{ \alpha}(x_{0},\ldots,x_{n})\] where \(s_{\alpha}\) is the Schur polynomial of \(\alpha=(\alpha_{0},\ldots,\alpha_{n})\). **Example 4.4.6**.: Let \(n\in\mathds{N}\). Then the T-system \(\mathcal{F}=\{x^{i}\}_{i=0}^{n}\) on \(\mathds{R}\) is an ET-system. \(\circ\) ## 5 Sparse Positivstellensatze and Nichtnegativstellensatze In this section we present the Positivestellensatz for T-systems of Karlin (Karlin's Theorem 5.1.2 and Karlin's Corollary 5.1.3). We show their application to gain algebraic sparse Positivestellensatze and Nichtnegativstellensatze. They will be used to solve sparse moment problems. ### Sparse Positivstellensatze and Nichtnegativstellensatze on \([a,b]\) for general T-Systems For what follows we want to remind the reader of the index \(\varepsilon(x)\) of a point \(x\), see Definition 4.3.5. **Definition 5.1.1**.: Let \(\mathcal{Z}\subset[a,b]\). We say \(\mathcal{Z}\)_has index \(n\)_ if \(\sum_{x\in\mathcal{Z}}\varepsilon(x)=n\). The same definition holds for sets \(\mathcal{X}\) as in Corollary 4.3.4. Because of its importance and since it was first proved in full generality by Karlin in [11] we call the following result Karlin's Theorem. **Karlin's Theorem 5.1.2** ([11] or e.g. [12, p. 66, Thm. 10.1]).: _Let \(n\in\mathds{N}\), \(\mathcal{F}=\{f_{i}\}_{i=0}^{n}\) be a continuous T-system of order \(n\) on \([a,b]\) with \(a<b\), and let \(f\in C([a,b])\) with \(f>0\) on \([a,b]\). The following hold:_ 1. _There exists a unique polynomial_ \(f_{*}\in\operatorname{lin}\mathcal{F}\) _such that_ 1. \(f(x)\geq f_{*}(x)\geq 0\) _for all_ \(x\in[a,b]\)_,_ 2. \(f_{*}\) _vanishes on a set with index_ \(n\)_,_ 3. _the function_ \(f-f_{*}\) _vanishes at least once between each pair of adjacent zeros of_ \(f_{*}\)_,_ 4. _the function_ \(f-f_{*}\) _vanishes at least once between the largest zero of_ \(f_{*}\) _and the end point_ \(b\)_,_ 5. \(f_{*}(b)>0\)_._ 2. 
_There exists a unique polynomial_ \(f^{*}\in\operatorname{lin}\mathcal{F}\) _which satisfies the conditions (a)-(d) of (i), and_ 1. \(f^{*}(b)=0\)_._ Proof.: See e.g. [11, p. 68-71]. Note, in the previous result we do not need to have \(f\in\operatorname{lin}\mathcal{F}\). The function \(f\) only needs to be continuous and strictly positive on \([a,b]\). An earlier version of Karlin's Theorem 5.1.2 is a lemma by Markov [13], see also [10, p. 80]. For the same reason as for Karlin's Theorem 5.1.2 we call the following immediate consequence Karlin's Corollary. It is the T-system Positivstellensatz by Karlin and will be used to generate (algebraic) Positivestellensatze. **Karlin's Corollary 5.1.3** ([10] or e.g. [11, p. 71, Cor. 10.1(a)]).: _Let \(n\in\mathds{N}\), \(\mathcal{F}\) be a continuous T-system of order \(n\) on \([a,b]\) with \(a<b\), and let \(f\in\operatorname{lin}\mathcal{F}\) with \(f>0\) on \([a,b]\). Then there exists a unique representation_ \[f=f_{*}+f^{*}\] _with \(f_{*},f^{*}\in\operatorname{lin}\mathcal{F}\) such that_ 1. \(f_{*},f^{*}\geq 0\) _on_ \([a,b]\)_,_ 2. _the zeros of_ \(f_{*}\) _and_ \(f^{*}\) _each are sets of index_ \(n\)_,_ 3. _the zeros of_ \(f_{*}\) _and_ \(f^{*}\) _strictly interlace, and_ 4. \(f^{*}(b)=0\)_._ Proof.: Let \(f_{*}\) be the unique \(f_{*}\) from Karlin's Theorem 5.1.2(i). Then \(f-f_{*}\in\operatorname{lin}\mathcal{F}\) is a polynomial and fulfills (a)-(d), and (e') of \(f^{*}\) in Karlin's Theorem 5.1.2. But since \(f^{*}\) is unique we have \(f-f_{*}=f^{*}\). **Corollary 5.1.4** ([10] or e.g. [11, Cor. 10.1(b)]).: _Let \(n\in\mathds{N}\), \(\{f_{i}\}_{i=0}^{n}\) and \(\{f_{i}\}_{i=0}^{n+1}\) be continuous T-systems of order \(n\) and \(n+1\) on \([a,b]\) with \(a<b\). Then \(f_{n+1}-(f_{n+1})_{*}\) and \(f_{n+1}-(f_{n+1})^{*}\) both vanish on sets of index \(n+1\)._ **Theorem 5.1.5** ([10] or e.g. [11, Thm. 10.2]).: _Let \(n\in\mathds{N}\), \(\mathcal{F}=\{f_{i}\}_{i=0}^{n}\) be a continuous T-system of order \(n\) on \([a,b]\) with \(a<b\), and let \(g_{1},g_{2}\) be two continuous functions on \([a,b]\) such that there exists a \(g^{\prime}\in\operatorname{lin}\mathcal{F}\) with_ \[g_{1}(x)<g^{\prime}(x)<g_{2}(x)\] _for all \(x\in[a,b]\). The following hold:_ 1. _There exists a unique polynomial_ \(f_{*}\in\operatorname{lin}\mathcal{F}\) _such that_ 1. \(g_{1}(x)\leq f_{*}(x)\leq g_{2}(x)\) _for all_ \(x\in[a,b]\)_, and_ _,_ 2. _there exist_ \(n+1\) _points_ \(x_{1}<\dots<x_{n+1}\) _in_ \([a,b]\) _such that_ \[f_{*}(x_{n+1-i})=\begin{cases}g_{1}(x_{n+1-i})&\text{for $i=1,3,5,\dots$},\\ g_{2}(x_{n+1-i})&\text{for $i=0,2,4,\dots$}.\end{cases}\] 3. _There exists a unique polynomial_ \(f^{*}\in\operatorname{lin}\mathcal{F}\) _such that_ 1. \(g_{1}(x)\leq f^{*}(x)\leq g_{2}(x)\) _for all_ \(x\in[a,b]\)_, and_ 2. _there exist_ \(n+1\) _points_ \(y_{1}<\dots<y_{n+1}\) _in_ \([a,b]\) _such that_ \[f^{*}(y_{n+1-i})=\begin{cases}g_{2}(y_{n+1-i})&\text{for $i=1,3,5,\dots$},\\ g_{1}(y_{n+1-i})&\text{for $i=0,2,4,\dots$}.\end{cases}\] Proof.: See [11, p. 73]. In Karlin's Theorem 5.1.2 and Karlin's Corollary 5.1.3 we dealt with \(f\in\operatorname{lin}\mathcal{F}\) with \(f>0\), i.e., they are the Positivstellensatz. The following result allows for \(f\geq 0\) and is therefore together with Corollary 5.1.7 the T-system Nichtnegativstellensatz of Karlin. We get from Corollary 5.1.7 sparse algebraic Nichtnegativstellensatze (Theorem 5.2.7). **Theorem 5.1.6** ([11] or e.g. [11, p. 74, Thm. 
10.3]).: _Let \(n\in\mathds{N}\), \(\mathcal{F}=\{f_{i}\}_{i=0}^{n}\) be a continuous ET-system of order \(n\) on \([a,b]\) with \(a<b\), and let \(f\in C^{n}([a,b])\) be such that \(f\geq 0\) on \([a,b]\) and \(f\) has \(r<n\) zeros (counting multiplicities). The following hold:_ 1. _There exists a unique polynomial_ \(f_{*}\in\operatorname{lin}\mathcal{F}\) _such that_ 1. \(f(x)\geq f_{*}(x)\geq 0\) _for all_ \(x\in[a,b]\)_,_ 2. \(f_{*}\) _has_ \(n\) _zeros counting multiplicities,_ 3. _if_ \(x_{1}<\dots<x_{n-r}\) _in_ \((a,b)\) _are the zeros of_ \(f_{*}\) _which remain after removing the_ \(r\) _zeros of_ \(f\) _then_ \(f-f_{*}\) _vanishes at least twice more (counting multiplicities) in each open interval_ \((x_{i},x_{i+1})\)_,_ \(i=1,\dots,n-r-1\)_, and at least once more in each of the intervals_ \([a,x_{1})\) _and_ \((x_{n-r},b]\)_,_ 4. _the zeros_ \(x_{1},\dots,x_{n-r}\) _of (c) are a set of index_ \(n-r\)_, and_ 5. \(x_{n-r}<b\)_._ 2. _There exists a unique polynomial_ \(f^{*}\in\operatorname{lin}\mathcal{F}\) _satisfying the conditions (a) to (d) and (e')_ \(x_{n-r}=b\)_._ Proof.: See [11, p. 74-75]. **Corollary 5.1.7** ([11] or e.g. [11, p. 76, Cor. 10.3]).: _Let \(n\in\mathds{N}\), \(\mathcal{F}=\{f_{i}\}_{i=0}^{n}\) be an ET-system of order \(n\) on \([a,b]\) with \(a<b\), and let \(f\in\operatorname{lin}\mathcal{F}\) be such that \(f\geq 0\) on \([a,b]\) and \(f\) has \(r<n\) zeros (counting multiplicities). Then there exists a unique representation_ \[f=f_{*}+f^{*}\] _with \(f_{*},f^{*}\in\operatorname{lin}\mathcal{F}\) such that_ 1. \(f_{*},f^{*}\geq 0\) _on_ \([a,b]\)_,_ 2. \(f_{*}\) _and_ \(f^{*}\) _have_ \(n\) _zeros (counting multiplicity) which strictly interlace if the zeros of_ \(f\) _are removed,_ 3. \(f^{*}(b)=0\) ### Sparse Positivstellensatze and Nichtnegativstellensatze on \([a,b]\) for Algebraic Polynomials **Theorem 5.2.1** (Sparse Algebraic Positivstellensatze on \([a,b]\) with \(0<a<b\)).: _Let \(n\in\mathds{N}\), \(\alpha_{0},\ldots,\alpha_{n}\in\mathds{R}\) be real numbers with \(\alpha_{0}<\alpha_{1}<\cdots<\alpha_{n}\), and let \(\mathcal{F}=\{x^{\alpha_{i}}\}_{i=0}^{n}\). Then for any \(f=\sum_{i=0}^{n}a_{i}f_{i}\in\ln\mathcal{F}\) with \(f>0\) on \([a,b]\) and \(a_{n}>0\) there exists a unique decomposition_ \[f=f_{*}+f^{*}\] _with \(f_{*},f^{*}\in\ln\mathcal{F}\) such that_ 1. _for_ \(n=2m\) _there exist points_ \(x_{1},\ldots,x_{m},y_{1},\ldots,y_{m-1}\in[a,b]\) _with_ \[a<x_{1}<y_{1}<\cdots<x_{m}<b\] _and constants_ \(c_{*},c^{*}>0\) _with_ \[f_{*}(x)=c_{*}\cdot\det\begin{pmatrix}x^{\alpha_{0}}&x^{\alpha_{1}}&x^{\alpha _{2}}&\ldots&x^{\alpha_{2m-1}}&x^{\alpha_{2m}}\\ x&(x_{1}&x_{1})&\ldots&(x_{m}&x_{m})\end{pmatrix}\geq 0\] _and_ \[f^{*}(x)=-c^{*}\cdot\det\begin{pmatrix}x^{\alpha_{0}}&x^{\alpha_{1}}&x^{ \alpha_{2}}&x^{\alpha_{3}}&\ldots&x^{\alpha_{2m-2}}&x^{\alpha_{2m-1}}&x^{ \alpha_{2m}}\\ x&a&(y_{1}&y_{1})&\ldots&(y_{m-1}&y_{m-1})&b\end{pmatrix}\geq 0\] _for all_ \(x\in[a,b]\)_, or_ 2. 
_for_ \(n=2m+1\) _there exist points_ \(x_{1},\ldots,x_{m},y_{1},\ldots,y_{m}\in[a,b]\) _with_ \[a<y_{1}<x_{1}<\cdots<y_{m}<x_{m}<b\] _and_ \(c_{*},c^{*}>0\) _with_ \[f_{*}(x)=-c_{*}\cdot\det\begin{pmatrix}x^{\alpha_{0}}&x^{\alpha_{1}}&x^{\alpha _{2}}&x^{\alpha_{3}}&\ldots&x^{\alpha_{2m}}&x^{\alpha_{2m+1}}\\ x&a&(x_{1}&x_{1})&\ldots&(x_{m}&x_{m})\end{pmatrix}\geq 0\] _and_ \[f^{*}(x)=c^{*}\cdot\det\begin{pmatrix}x^{\alpha_{0}}&x^{\alpha_{1}}&x^{\alpha _{2}}&\ldots&x^{\alpha_{2m-1}}&x^{\alpha_{2m}}&x^{\alpha_{2m+1}}\\ x&(y_{1}&y_{1})&\ldots&(y_{m}&y_{m})&b\end{pmatrix}\geq 0\] _for all_ \(x\in[a,b]\)_._ Proof.: By Example 4.4.5 we have that \(\mathcal{F}\) on \([a,b]\) is an ET-system. Hence, Karlin's Corollary 5.1.3 applies. We check both cases \(n=2m\) and \(n=2m+1\) separately. \(\underline{n=2m}\): By Karlin's Corollary 5.1.3 we have that the zero set \(\mathcal{Z}(f^{*})\) of \(f^{*}\) has index \(2m\) and contains \(b\) with index \(1\), i.e., \(a\in\mathcal{Z}(f^{*})\) and all other zeros have index \(2\). Hence, \(\mathcal{Z}(f^{*})=\{a=y_{0}<y_{1}<\cdots<y_{m-1}<y_{m}=b\}\). By Karlin's Corollary 5.1.3 we have that \(\mathcal{Z}(f_{*})\) also has index \(2m\) and the zeros of \(f_{*}\) and \(f^{*}\) interlace. Then the determinantal representations of \(f_{*}\) and \(f^{*}\) follow from Remark 4.3.9. \(\underline{n=2m+1}\): By Karlin's Corollary 5.1.3 we have that \(b\in\mathcal{Z}(f^{*})\) and since the index of \(\mathcal{Z}(f^{*})\) is \(2m+1\) we have that there are only double zeros \(y_{1},\ldots,y_{m}\in(a,b)\) in \(\mathcal{Z}(f^{*})\). Similar we find that \(a\in\mathcal{Z}(f_{*})\) since its index is odd and only double zeros \(x_{1},\ldots,x_{m}\in(a,b)\) in \(\mathcal{Z}(f_{*})\) remain. By Karlin's Corollary 5.1.3(iii) the zeros \(x_{i}\) and \(y_{i}\) strictly interlace and the determinantal representation of \(f_{*}\) and \(f^{*}\) follow again from Remark 4.3.9. Note, if \(\alpha_{0},\ldots,\alpha_{n}\in\mathds{N}_{0}\) then by Example 4.4.5 equation (9) the algebraic polynomials \(f_{*}\) and \(f^{*}\) can also be written down with Schur polynomials. Theorem 5.2.1 does not to hold for \(a=0\) and \(\alpha_{0}>0\) or \(\alpha_{0},\ldots,\alpha_{k}<0\). In case \(\alpha_{0}>0\) the determinantal representations of \(f^{*}\) for \(n=2m\) and \(f_{*}\) for \(n=2m+1\) are the zero polynomial. In fact, in this case \(\mathcal{F}\) is not even a T-system since in Lemma 4.1.4 the determinant contains a zero column if \(x_{0}=0\). We need to have \(\alpha_{0}=0\) (\(x^{\alpha_{0}}=1\)) to let \(a=0\). For \(\alpha_{0},\ldots,\alpha_{k}<0\) we have singularities at \(x=0\) and hence no T-system. **Corollary 5.2.2**.: _If \(\alpha_{0}=0\) in Theorem 5.2.1 then Theorem 5.2.1 also holds with \(a=0\)._ Proof.: The determinantal representations of \(f_{*}\) for \(n=2m+1\) and \(f^{*}\) for \(n=2m\) in Theorem 5.2.1 continuously depend on \(a\). It is sufficient to show that these representations are non-trivial (not the zero polynomial) for \(a=0\). We show this for \(f_{*}\) in case (ii) \(n=2m+1\). The other cases are equivalent. 
For \(\varepsilon>0\) small enough we set \[g_{\varepsilon}(x) =-\varepsilon^{-m}\cdot\det\begin{pmatrix}1&x^{\alpha_{1}}&x^{ \alpha_{2}}&x^{\alpha_{3}}&\ldots&x^{\alpha_{2m}}&x^{\alpha_{2m+1}}\\ x&0&x_{1}&x_{1}+\varepsilon&\ldots&x_{m}&x_{m}+\varepsilon\end{pmatrix}\] \[=-\varepsilon^{-m}\cdot\det\begin{pmatrix}1&x^{\alpha_{1}}&x^{ \alpha_{2}}&\ldots&x^{\alpha_{2m+1}}\\ 1&0&0&\ldots&0\\ 1&x_{1}^{\alpha_{1}}&x_{1}^{\alpha_{2}}&\ldots&x_{1}^{\alpha_{2m+1}}\\ \vdots&\vdots&\vdots&\vdots\\ 1&(x_{m}+\varepsilon)^{\alpha_{1}}&(x_{m}+\varepsilon)^{\alpha_{2}}&\ldots &(x_{m}+\varepsilon)^{\alpha_{2m+1}}\end{pmatrix}\] develop with respect to the second row \[=\varepsilon^{-m}\cdot\det\begin{pmatrix}x^{\alpha_{1}}&x^{ \alpha_{2}}&\ldots&x^{\alpha_{2m-1}}\\ x_{1}^{\alpha_{1}}&x_{1}^{\alpha_{2}}&\ldots&x_{1}^{\alpha_{2m-1}}\\ \vdots&\vdots&&\vdots\\ (x_{m}+\varepsilon)^{\alpha_{1}}&(x_{m}+\varepsilon)^{\alpha_{2}}&\ldots&(x_ {m}+\varepsilon)^{\alpha_{2m+1}}\end{pmatrix}\] \[=\varepsilon^{-m}\cdot\det\begin{pmatrix}x^{\alpha_{1}}&x^{ \alpha_{2}}&x^{\alpha_{3}}&\ldots&x^{\alpha_{2m}}&x^{\alpha_{2m+1}}\\ x&x_{1}&x_{1}+\varepsilon&\ldots&x_{m}&x_{m}+\varepsilon\end{pmatrix}.\] Then \(x_{1},x_{1}+\varepsilon,\ldots,x_{m},x_{m}+\varepsilon\in(0,b]\), i.e., \(\{x^{\alpha_{i}}\}_{i=1}^{n}\) is an ET-system on \([a^{\prime},b]\) with \(0=a<a^{\prime}<x_{1}\), see Example 4.4.5. By Remark 4.3.9 the representation is not the zero polynomial which ends the proof. The Theorem 5.2.1 is a complete description of \(\operatorname{int}\left(\operatorname{lin}\mathcal{F}\right)_{+}\). Since \(\mathcal{F}\) is continuous on the compact interval \([a,b]\) and \(x^{\alpha_{0}}>0\) on \([a,b]\), we have that the truncated moment cone is closed and hence \((\operatorname{lin}\mathcal{F})_{+}\) and the moment cone are dual to each other. With Theorem 5.2.1 we can now write down the conditions for the sparse truncated Hausdorff moment problem on \([a,b]\) with \(a>0\). We are not aware of a reference for the following result. **Theorem 5.2.3** (Sparse Truncated Hausdorff Moment Problem on \([a,b]\) with \(a>0\)).: _Let \(n\in\mathds{N}\), \(\alpha_{0},\ldots,\alpha_{n}\in[0,\infty)\) with \(\alpha_{0}<\cdots<\alpha_{n}\), and \(a,b\) with \(0<a<b\). Set \(\mathcal{F}=\{x^{\alpha_{i}}\}_{i=0}^{n}\). Then the following are equivalent:_ 1. \(L:\operatorname{lin}\mathcal{F}\to\mathds{R}\) _is a truncated_ \([a,b]\)_-moment functional._ 2. \(L(p)\geq 0\) _holds for all_ \[p(x):=\begin{cases}\det\begin{pmatrix}x^{\alpha_{0}}&x^{\alpha_{1}}&x^{\alpha _{2}}&\ldots&x^{\alpha_{2m-1}}&x^{\alpha_{2m}}\\ x&(x_{1}&x_{1})&\ldots&(x_{m}&x_{m})\end{pmatrix}\\ -\det\begin{pmatrix}x^{\alpha_{0}}&x^{\alpha_{1}}&x^{\alpha_{2}}&x^{\alpha_{3} }&\ldots&x^{\alpha_{2m-2}}&x^{\alpha_{2m-1}}&x^{\alpha_{2m}}\\ x&a&(x_{1}&x_{1})&\ldots&(x_{m-1}&x_{m-1})&b\end{pmatrix}\end{cases}\] if \(n=2m\)__ _or_ \[p(x):=\begin{cases}-\det\begin{pmatrix}x^{\alpha_{0}}&x^{\alpha_{1}}&x^{ \alpha_{2}}&x^{\alpha_{3}}&\ldots&x^{\alpha_{2m}}&x^{\alpha_{2m+1}}\\ x&a&(x_{1}&x_{1})&\ldots&(x_{m}&x_{m})\end{cases}\\ \det\begin{pmatrix}x^{\alpha_{0}}&x^{\alpha_{1}}&x^{\alpha_{2}}&\ldots&x^{ \alpha_{2m-1}}&x^{\alpha_{2m}}&x^{\alpha_{2m+1}}\\ x&(x_{1}&x_{1})&\ldots&(x_{m}&x_{m})\end{pmatrix}\end{cases}\] if \(n=2m+1\) _and all \(x_{1},\dots,x_{m}\) with \(a<x_{1}<\dots<x_{m}<b\)._ Proof.: The implication (i) \(\Rightarrow\) (ii) is clear since all given polynomials \(p\) are non-negative on \([a,b]\). It is therefore sufficient to prove (ii) \(\Rightarrow\) (i). 
Since \(a>0\) we have that \(x^{\alpha_{0}}>0\) on \([a,b]\) and since \([a,b]\) is compact we have that the moment cone \(((\ln\mathcal{F})_{+})^{*}\) as the dual of the cone of non-negative (sparse) polynomials \((\ln\mathcal{F})_{+}\) is a closed pointed cone. To establish \(L\in((\ln\mathcal{F})_{+})^{*}\) it is sufficient to have \(L(f)\geq 0\) for all \(f\in(\ln\mathcal{F})_{+}\). Let \(f\in(\ln\mathcal{F})_{+}\). Then for all \(\varepsilon>0\) we have \(f_{\varepsilon}:=f+\varepsilon\cdot x^{\alpha_{0}}>0\) on \([a,b]\), i.e., by Theorem 5.2.1\(f_{\varepsilon}\) is a conic combination of the polynomials \(p\) in (ii) and hence \(L(f)+\varepsilon\cdot L(x^{\alpha_{0}})=L(f_{\varepsilon})\geq 0\) for all \(\varepsilon>0\). Since \(x^{\alpha_{0}}>0\) on \([a,b]\) we also have that \(x^{\alpha_{0}}\) is a conic combination of the polynomials \(p\) in (ii) and therefore \(L(x^{\alpha_{0}})\geq 0\). Then \(L(f)\geq 0\) follows from \(\varepsilon\to 0\) which proves (i). **Corollary 5.2.4**.: _If \(\alpha_{0}=0\) in Theorem 5.2.3 then Theorem 5.2.3 also holds with \(a=0\), i.e., the following are equivalent:_ 1. \(L:\operatorname{lin}\mathcal{F}\to\mathds{R}\) _is a truncated_ \([0,b]\)_-moment functional._ 2. \(L(p)\geq 0\) _holds for all_ \[p(x):=\begin{cases}\det\begin{pmatrix}1&x^{\alpha_{1}}&x^{\alpha_{2}}&\dots&x ^{\alpha_{2m-1}}&x^{\alpha_{2m}}\\ x&(x_{1}&x_{1})&\dots&(x_{m}&x_{m})\end{pmatrix}\\ \det\begin{pmatrix}x^{\alpha_{1}}&x^{\alpha_{2}}&x^{\alpha_{3}}&\dots&x^{\alpha _{2m-2}}&x^{\alpha_{2m-1}}&x^{\alpha_{2m}}\\ x&(x_{1}&x_{1})&\dots&(x_{m-1}&x_{m-1})&b\end{pmatrix}\end{cases}\) _if_ \(n=2m\)__ _or_ \[p(x):=\begin{cases}\det\begin{pmatrix}x^{\alpha_{1}}&x^{\alpha_{2}}&x^{\alpha_ {3}}&\dots&x^{\alpha_{2m}}&x^{\alpha_{2m+1}}\\ x&(x_{1}&x_{1})&\dots&(x_{m}&x_{m})\end{pmatrix}\\ \det\begin{pmatrix}1&x^{\alpha_{1}}&x^{\alpha_{2}}&\dots&x^{\alpha_{2m-1}}&x^{ \alpha_{2m}}&x^{\alpha_{2m+1}}\\ x&(x_{1}&x_{1})&\dots&(x_{m}&x_{m})&b\end{pmatrix}\end{cases}\] _if_ \(n=2m+1\)__ _and all \(x_{1},\dots,x_{m}\) with \(a<x_{1}<\dots<x_{m}<b\)._ Proof.: Follows from Corollary 5.2.2. For the following we want to remind the reader of the Muntz-Szasz Theorem [13, 14]. It states that for real exponents \(\alpha_{0}=0<\alpha_{1}<\alpha_{2}<\dots\) the vector space \(\operatorname{lin}\{x^{\alpha_{i}}\}_{i\in\mathds{N}_{0}}\) of finite linear combinations is dense in \(C([0,1])\) if and only if \(\sum_{i\in\mathds{N}}\frac{1}{\alpha_{i}}=\infty\). We state the following only for the classical case of the interval \([0,1]\). Other cases \([a,b]\subset[0,\infty)\) are equivalent. We are not aware of a reference for the following result. Hausdorff required \(\alpha_{i}\to\infty\). The Muntz-Szasz Theorem does not require \(\alpha_{i}\to\infty\). The conditions \(\alpha_{0}=0\) and \(\sum_{i\in\mathds{N}}\frac{1}{\alpha_{i}}=\infty\) already appear in [14, eq. (17)]. **Theorem 5.2.5** (Sparse Hausdorff Moment Problem).: _Let \(\{\alpha_{i}\}_{i\in\mathds{N}_{0}}\subset[0,\infty)\) with \(0=\alpha_{0}<\alpha_{1}<\dots\) and \(\sum_{i\in\mathds{N}}\frac{1}{\alpha_{i}}=\infty\). Let \(\mathcal{F}=\{x^{\alpha_{i}}\}_{i\in\mathds{N}_{0}}\). The following are equivalent:_ 1. \(L:\operatorname{lin}\mathcal{F}\to\mathds{R}\) _is a_ \([0,1]\)_-moment functional._ 2. \(L(p)\geq 0\) _holds for all_ \(p\in(\operatorname{lin}\mathcal{F})_{+}\)_._ 3. 
\(L(p)\geq 0\) _holds for all_ \(p\in\operatorname{lin}\mathcal{F}\) _with_ \(p>0\) _._ * \(L(p)\geq 0\) _holds for all_ \[p(x)=\begin{cases}\det\begin{pmatrix}1&x^{\alpha_{1}}&x^{\alpha_{2}}&\ldots&x^{ \alpha_{2m-1}}&x^{\alpha_{2m}}\\ x&(x_{1}&x_{1})&\ldots&(x_{m}&x_{m})\end{pmatrix},\\ \det\begin{pmatrix}x^{\alpha_{1}}&x^{\alpha_{2}}&x^{\alpha_{3}}&\ldots&x^{ \alpha_{2m-2}}&x^{\alpha_{2m-1}}&x^{\alpha_{2m}}\\ x&(x_{1}&x_{1})&\ldots&(x_{m-1}&x_{m-1})&1\end{pmatrix},\\ \det\begin{pmatrix}x^{\alpha_{1}}&x^{\alpha_{2}}&x^{\alpha_{3}}&\ldots&x^{ \alpha_{2m}}&x^{\alpha_{2m+1}}\\ x&(x_{1}&x_{1})&\ldots&(x_{m}&x_{m})\end{pmatrix},\text{ and}\\ \det\begin{pmatrix}1&x^{\alpha_{1}}&x^{\alpha_{2}}&\ldots&x^{\alpha_{2m-1}}&x^ {\alpha_{2m}}&x^{\alpha_{2m+1}}\\ x&(x_{1}&x_{1})&\ldots&(x_{m}&x_{m})&1\end{pmatrix}\end{cases}\] _for all_ \(m\in\mathds{N}\) _and all_ \(0<x_{1}<x_{2}<\cdots<x_{m}<1\)_._ Proof.: The implications "(i) \(\Rightarrow\) (ii) \(\Leftrightarrow\) (iii)" are clear and "(iii) \(\Leftrightarrow\) (iv)" follows from Theorem 5.2.1. It is therefore sufficient to show "(ii) \(\Rightarrow\) (i)". Let \(f\in C([0,1])\) with \(f>0\). Since \(\lim\mathcal{F}\) is dense in \(C([0,1])\) by the Muntz-Szasz Theorem there are sequences \(\{g_{i}\}_{i\in\mathds{N}_{0}}\) and \(\{h_{i}\}_{i\in\mathds{N}_{0}}\) with \(0<g_{i}<f<h_{i}\) and \(\|g_{i}-h_{i}\|_{\infty}\to 0\) as \(i\to\infty\). Hence, \(L(f)\geq 0\). Since \(f\in C([0,1])\) with \(f>0\) was arbitrary we have that \(L(f)\geq 0\) for all \(f\in C([0,1])\) with \(f\geq 0\). Then by the Riesz-Markov-Kakutani Representation Theorem we have that \(L\) has a unique representing measure. The previous proof can be simplified by using Choquet's theory of adapted spaces, see [10] or for a more modern formulation [11] or [12, Ch. 1]. With that we can even remove the use of the Muntz-Szasz Theorem and therefore the condition \(\sum_{i\in\mathds{N}}\frac{1}{\alpha_{i}}=\infty\). Additionally, we can allow for negative exponents. We will use this approach below and also in all other proofs from here on. The following theorem has to our knowledge not been presented before. **Theorem 5.2.6** (General Sparse Hausdorff Moment Problem on \([a,b]\) with \(0\leq a<b\)).: _Let \(I\subset\mathds{N}_{0}\) be an index set (finite or infinite), let \(\{\alpha_{i}\}_{i\in I}\) be such that \(\alpha_{i}\neq\alpha_{j}\) for all \(i\neq j\) and_ 1. _if_ \(a=0\) _then_ \(\{\alpha_{i}\}_{i\in I}\subset[0,\infty)\) _with_ \(\alpha_{i}=0\) _for an_ \(i\in I\)_, or_ 2. _if_ \(a>0\) _then_ \(\{\alpha_{i}\}_{i\in I}\subset\mathds{R}\)_._ _Let \(\mathcal{F}=\{x^{\alpha_{i}}\}_{i\in I}\). Then the following are equivalent:_ 1. \(L:\lim\mathcal{F}\to\mathds{R}\) _is a Hausdorff moment functional._ 2. \(L(p)\geq 0\) _holds for all_ \(p\in(\lim\mathcal{F})_{+}\)_._ 3. \(L(p)\geq 0\) _holds for all_ \(p\in\lim\mathcal{F}\) _with_ \(p>0\)_._ 4. 
\(L(p)\geq 0\) _holds for all_ \[p(x)=\begin{cases}\det\begin{pmatrix}x^{\alpha_{i_{0}}}&x^{\alpha_{i_{1}}}&x^{ \alpha_{i_{2}}}&\ldots&x^{\alpha_{i_{2m-1}}}&x^{\alpha_{2m}}\\ x&(x_{1}&x_{1})&\ldots&(x_{m}&x_{m})\end{pmatrix},&\text{if $|I|=2m$ or $\infty$,}\\ \det\begin{pmatrix}x^{\alpha_{i_{1}}}&x^{\alpha_{i_{2}}}&x^{\alpha_{i_{3}}}& \ldots&x^{\alpha_{i_{2m-2}}}&x^{\alpha_{i_{2m-1}}}&x^{\alpha_{i_{2m}}}\\ x&(x_{1}&x_{1})&\ldots&(x_{m-1}&x_{m-1})&b\end{pmatrix},&\text{if $|I|=2m$ or $\infty$,}\\ \det\begin{pmatrix}x^{\alpha_{i_{1}}}&x^{\alpha_{i_{2}}}&x^{\alpha_{i_{3}}}& \ldots&x^{\alpha_{i_{2m}}}&x^{\alpha_{i_{2m+1}}}\\ x&(x_{1}&x_{1})&\ldots&(x_{m}&x_{m})\end{pmatrix},&\text{if $|I|=2m+1$ or $\infty$, and}\\ \det\begin{pmatrix}x^{\alpha_{i_{0}}}&x^{\alpha_{i_{1}}}&x^{\alpha_{i_{2}}}& \ldots&x^{\alpha_{i_{2m-1}}}&x^{\alpha_{i_{2m}}}&x^{\alpha_{i_{2m+1}}}\\ x&(x_{1}&x_{1})&\ldots&(x_{m}&x_{m})&b\end{pmatrix},&\text{if $|I|=2m+1$ or $\infty$,}\end{cases}\] _for all_ \(m\in\mathds{N}\) _if_ \(|I|=\infty\)_, all_ \(0<x_{1}<x_{2}<\cdots<x_{m}<b\)_, and all_ \(\alpha_{i_{0}}<\alpha_{i_{1}}<\cdots<\alpha_{i_{m}}\) _with_ \(\alpha_{i_{0}}=0\) _if_ \(a=0\)_._ _If additionally \(\sum_{i:\alpha_{i}\neq 0}\frac{1}{|\alpha_{i}|}=\infty\) then \(L\) is determinate._ Proof.: The case \(|I|<\infty\) is Theorem 5.2.3. We therefore prove the case \(|I|=\infty\). The choice \(\alpha_{i_{0}}<\alpha_{i_{1}}<\cdots<\alpha_{i_{m}}\) with \(\alpha_{i_{0}}=0\) if \(a=0\) makes \(\{x^{\alpha_{i_{j}}}\}_{j=0}^{m}\) a T-system. The implications "(i) \(\Rightarrow\) (ii) \(\Leftrightarrow\) (iii)" are clear and "(iii) \(\Leftrightarrow\) (iv)" is Theorem 5.2.1. It is therefore sufficient to show "(ii) \(\Rightarrow\) (i)". The space \(\ln\mathcal{F}\) is an adapted space and the assertion follows therefore from [22, Thm. 1.8]. For the determinacy of \(L\) split \(\{\alpha_{i}\}_{i\in I}\) into positive and negative exponents. If \(\sum_{i:\alpha_{i}\neq 0}\frac{1}{|\alpha_{i}|}=\infty\) then the corresponding sum over at least one group is infinite. If the sum over the positive exponents is infinite apply the Muntz-Szasz Theorem. If the sum over the negative exponents is infinite apply the Muntz-Szasz Theorem to \(\{(x^{-1})^{-\alpha_{i}}\}_{i\in I:\alpha_{i}<0}\) since \(a>0\). Note, since \([a,b]\) is compact the fact that \(\{x^{\alpha_{i}}\}_{i\in I}\) is an adapted space is trivial. In the previous results we only needed the description of all strictly positive polynomials. The non-negative polynomials are described in the following result. Again, we are not aware of a reference. **Theorem 5.2.7** (Sparse Algebraic Nichtnegativstellensatz on \([a,b]\) with \(0<a<b\)).: _Let \(n\in\mathds{N}\), \(\alpha_{0},\ldots,\alpha_{n}\in\mathds{R}\) be real numbers with \(\alpha_{0}<\alpha_{1}<\cdots<\alpha_{n}\), and let \(\mathcal{F}=\{x^{\alpha_{i}}\}_{i=0}^{n}\). Let \(f\in\ln\mathcal{F}\) with \(f\geq 0\) on \([a,b]\). 
Then there exist points \(x_{1},\ldots,x_{n},y_{1},\ldots,y_{n}\in[a,b]\) (not necessarily distinct) with \(y_{n}=b\) which include the zeros of \(f\) with multiplicities and there exist constants \(c_{*},c^{*}\in\mathds{R}\) such that_ \[f=f_{*}+f^{*}\] _with \(f_{*},f^{*}\in\operatorname{lin}\mathcal{F}\), \(f_{*},f^{*}\geq 0\) on \([a,b]\), and the polynomials \(f_{*}\) and \(f^{*}\) are given by_ \[f_{*}(x)=c_{*}\cdot\det\left(\begin{array}{c|ccc}f_{0}&f_{1}&\ldots&f_{n}\\ x&x_{1}&\ldots&x_{n}\end{array}\right)\qquad\text{and}\qquad f^{*}(x)=c^{*}\cdot\det\left(\begin{array}{c|ccc}f_{0}&f_{1}&\ldots&f_{n}\\ x&y_{1}&\ldots&y_{n}\end{array}\right)\] _for all \(x\in[a,b]\)._ _Removing the zeros of \(f\) from \(x_{1},\ldots,x_{n},y_{1},\ldots,y_{n}\) we can assume that the remaining \(x_{i}\) and \(y_{i}\) are disjoint and when grouped by size the groups strictly interlace:_ \[a\ \leq\ x_{i_{1}}=\cdots=x_{i_{k}}\ <\ y_{j_{1}}=\cdots=y_{j_{l}}\ <\ \ldots\ <x_{i_{p}}=\cdots=x_{i_{q}}\ <\ y_{j_{r}}=\cdots=y_{j_{s}}=b.\] _Each such group in \((a,b)\) has an even number of members._ Proof.: By Example 4.4.5 we have that \(\mathcal{F}\) on \([a,b]\) is an ET-system. We then apply Corollary 5.1.7 similarly to the proof of Theorem 5.2.1. The signs of \(c_{*}\) and \(c^{*}\) are determined by \(x_{1}\) and \(y_{1}\) and their multiplicity. If \(x_{1}=\cdots=x_{k}<x_{k+1}\leq\cdots\leq x_{n}\) then \(\operatorname{sgn}c_{*}=(-1)^{k}\). The same holds for \(c^{*}\) from the \(y_{i}\). **Corollary 5.2.8**.: _If \(\alpha_{0}=0\) in Theorem 5.2.7 then Theorem 5.2.7 also holds with \(a=0\)._ **Example 5.2.9**.: Let \(\alpha\in(0,\infty)\) and let \(\mathcal{F}=\{1,x^{\alpha}\}\) on \([0,1]\). Then we have \(1=1_{*}+1^{*}\) with \(1_{*}=x^{\alpha}\) and \(1^{*}=1-x^{\alpha}\). \(\diamond\) ### Sparse Positivstellensatze and Nichtnegativstellensatze on \([0,\infty)\) In Section 5.1 we have seen the general Positivstellen- and Nichtnegativstellensatze for T-systems and then applied these to the algebraic cases on \([a,b]\). We now show how the results from Section 5.1 on \([a,b]\) can be transferred to \([0,\infty)\). **Theorem 5.3.1** ([11] or e.g. [12, Thm. 8.1]).: _Let \(n\in\mathds{N}\) and \(\mathcal{F}=\{f_{i}\}_{i=0}^{n}\) be a continuous T-system of order \(n\) on \([0,\infty)\) such that_ 1. _there exists a_ \(C>0\) _such that_ \(f_{n}(x)>0\) _for all_ \(x\geq C\)_,_ 2. \(\lim_{x\to\infty}\frac{f_{i}(x)}{f_{n}(x)}=0\) _for all_ \(i=0,\ldots,n-1\)_, and_ 3. \(\{f_{i}\}_{i=0}^{n-1}\) _is a continuous T-system on_ \([0,\infty)\)_._ _Then for any \(f=\sum_{i=0}^{n}a_{i}f_{i}\in\operatorname{lin}\mathcal{F}\) with \(f>0\) on \([0,\infty)\) and \(a_{n}>0\) there exists a unique representation_ \[f=f_{*}+f^{*}\] _with \(f_{*},f^{*}\in\operatorname{lin}\mathcal{F}\) and \(f_{*},f^{*}\geq 0\) on \([0,\infty)\) such that the following hold:_ 1. _If_ \(n=2m\) _the polynomials_ \(f_{*}\) _and_ \(f^{*}\) _each possess_ \(m\) _distinct zeros_ \(\{x_{i}\}_{i=1}^{m}\) _and_ \(\{y_{i}\}_{i=0}^{m-1}\) _satisfying_ \[0=y_{0}<x_{1}<y_{1}<\cdots<y_{m-1}<x_{m}<\infty.\] _All zeros except_ \(y_{0}\) _are double zeros._ 2. _If_ \(n=2m+1\) _the polynomials_ \(f_{*}\) _and_ \(f^{*}\) _each possess the zeros_ \(\{x_{i}\}_{i=1}^{m+1}\) _and_ \(\{y_{i}\}_{i=1}^{m}\) _satisfying_ \[0=x_{1}<y_{1}<x_{2}<\cdots<y_{m}<x_{m+1}<\infty.\] _All zeros except_ \(x_{1}\) _are double zeros._ 3.
_The coefficient of_ \(f_{n}\) _in_ \(f_{*}\) _is equal to_ \(a_{n}\)_._ Proof.: By (a) there exists a function \(w\in C([0,\infty))\) such that \(w>0\) on \([0,\infty)\) and \(\lim_{x\to\infty}\frac{f_{n}(x)}{w(x)}=1\). By (b) we define \[v_{i}(x):=\begin{cases}\frac{f_{i}(x)}{w(x)}&\text{if $x\in[0,\infty)$,}\\ \delta_{i,n}&\text{if $x=\infty$}\end{cases}\] for all \(i=0,1,\ldots,n\). Then by (c) \(\{v_{i}\}_{i=0}^{n}\) is a T-system on \([0,\infty]\) by Example 4.2.5. With \(t(x):=\tan(\pi x/2)\) we define \(g_{i}(x):=v_{i}\circ t\) for all \(i=0,1,\ldots,n\). Hence, \(\mathcal{G}=\{g_{i}\}_{i=0}^{n}\) is a T-system on \([0,1]\). We now apply Karlin's Corollary 5.1.3 to \(\mathcal{G}\). Set \(g:=(\frac{f}{w})\circ t\). (i): Let \(n=2m\). Then by Karlin's Corollary 5.1.3 there exits points \[0=y_{0}<x_{1}<y_{1}<\cdots<x_{m}<y_{m}=1\] and unique functions \(g_{*}\) and \(g^{*}\) such that \(g=g_{*}+g^{*}\), \(g_{*},g^{*}\geq 0\) on \([0,1]\), \(x_{1},\ldots,x_{m}\) are the zeros of \(g_{*}\), and \(y_{0},\ldots,y_{m}\) are the zeros of \(g^{*}\). Then \(f_{*}:=(g_{*}\circ t^{-1})\cdot w\) and \(f^{*}:=(g^{*}\circ t^{-1})\cdot w\) are the unique components in the decomposition \(f=f_{*}+f^{*}\). (ii): Similar to (i). (iii): From (i) (and (ii) in a similar way) we have \(g_{i}(1)=0\) for \(i=0,\ldots,n-1\) and \(g_{n}(1)=1\). Hence, we get with \(g^{*}(y_{m}=1)=0\) that \(g_{n}\) is not contained in \(g^{*}\), i.e., \(g_{*}\) has the only \(g_{n}\) contribution because \(\mathcal{G}\) is linearly independent. This is inherited by \(f_{*}\) and \(f^{*}\) which proves (iii). If \(\mathcal{F}\) is an ET-system then the \(f_{*}\) and \(f^{*}\) can be written down explicitly. **Corollary 5.3.2**.: _If in Theorem 5.3.1 we have additionally that \(\mathcal{F}\) is an ET-system on \([0,\infty)\) then the unique \(f_{*}\) and \(f^{*}\) are given_ 1. _for_ \(n=2m\) _by_ \[f_{*}(x)=a_{2m}\cdot\det\begin{pmatrix}f_{0}&f_{1}&f_{2}&\ldots&f_{2m-1}&f_{2m }\\ x&(x_{1}&x_{1})&\ldots&(x_{m}&x_{m})\end{pmatrix}\] _and_ \[f^{*}(x)=-c\cdot\det\begin{pmatrix}f_{0}&f_{1}&f_{2}&f_{3}&\ldots&f_{2m-2}&f_{ 2m-1}\\ x&y_{0}&(y_{1}&y_{1})&\ldots&(y_{m-1}&y_{m-1})\end{pmatrix},\] 2. _and for_ \(n=2m+1\) _by_ \[f_{*}(x)=-a_{2m+1}\cdot\det\begin{pmatrix}f_{0}&f_{1}&f_{2}&f_{3}&\ldots&f_{2m }&f_{2m+1}\\ x&x_{1}&(x_{2}&x_{2})&\ldots&(x_{m+1}&x_{m+1})\end{pmatrix}\] _and_ \[f^{*}(x)=c\cdot\det\begin{pmatrix}f_{0}&f_{1}&f_{2}&\ldots&f_{2m-1}&f_{2m}\\ x&(y_{1}&y_{1})&\ldots&(y_{m}&y_{m})\end{pmatrix}\] _for some \(c>0\)._ Proof.: Combine Theorem 5.3.1 with Remark 4.3.9. If we now plug Examples 4.2.1 into Theorem 5.3.1 we get the following. **Theorem 5.3.3** (Sparse Algebraic Positivstellensatz on \([0,\infty)\)).: _Let \(n\in\mathds{N}\), \(\alpha_{0},\ldots,\alpha_{n}\in[0,\infty)\) be real numbers with \(\alpha_{0}=0<\alpha_{1}<\cdots<\alpha_{n}\), and let \(\mathcal{F}=\{x^{\alpha_{i}}\}_{i=0}^{n}\) on \([0,\infty)\). Then for any \(f=\sum_{i=0}^{n}a_{i}f_{i}\in\ln\mathcal{F}\) with \(f>0\) on \([0,\infty)\) and \(a_{n}>0\) there exists a unique decomposition_ \[f=f_{*}+f^{*}\] _with \(f_{*},f^{*}\in\ln\mathcal{F}\) and \(f_{*},f^{*}\geq 0\) on \([0,\infty)\) such that the following hold:_ 1. 
_If_ \(n=2m\) _then the polynomials_ \(f_{*}\) _and_ \(f^{*}\) _each possess_ \(m\) _distinct zeros_ \(\{x_{i}\}_{i=1}^{m}\) _and_ \(\{y_{i}\}_{i=0}^{m-1}\) _satisfying_ \[0=y_{0}<x_{1}<y_{1}<\cdots<y_{m-1}<x_{m}<\infty.\] _The polynomials_ \(f_{*}\) _and_ \(f^{*}\) _are given by_ \[f_{*}(x)=a_{2m}\cdot\det\begin{pmatrix}1&x^{\alpha_{1}}&x^{\alpha_{2}}&\ldots&x ^{\alpha_{2m-1}}&x^{\alpha_{2m}}\\ x&(x_{1}&x_{1})&\ldots&(x_{m}&x_{m})\end{pmatrix}\] _and_ \[f^{*}(x)=c\cdot\det\begin{pmatrix}x^{\alpha_{1}}&x^{\alpha_{2}}&x^{\alpha_{3} }&\ldots&x^{\alpha_{2m-2}}&x^{\alpha_{2m-1}}\\ x&(y_{1}&y_{1})&\ldots&(y_{m-1}&y_{m-1})\end{pmatrix}\] _for some_ \(c>0\)_._ 2. _If_ \(n=2m+1\) _then_ \(f_{*}\) _and_ \(f^{*}\) _have zeros_ \(\{x_{i}\}_{i=1}^{m+1}\) _and_ \(\{y_{i}\}_{i=1}^{m}\) _respectively which satisfy_ \[0=x_{1}<y_{1}<x_{2}<\cdots<y_{m}<x_{m+1}<\infty.\] _The polynomials_ \(f_{*}\) _and_ \(f^{*}\) _are given by_ \[f_{*}(x)=a_{2m+1}\cdot\det\begin{pmatrix}x^{\alpha_{1}}&x^{\alpha_{2}}&x^{\alpha _{3}}&\dots&x^{\alpha_{2m}}&x^{\alpha_{2m+1}}\\ x&(x_{2}&x_{2})&\dots&(x_{m+1}&x_{m+1})\end{pmatrix}\] _and_ \[f^{*}(x)=c\cdot\det\begin{pmatrix}1&x^{\alpha_{1}}&x^{\alpha_{2}}&\dots&x^{ \alpha_{2m-1}}&x^{\alpha_{2m}}\\ x&(y_{1}&y_{1})&\dots&(y_{m}&y_{m})\end{pmatrix}\] _for some \(c>0\)._ Proof.: We have that \(\mathcal{F}\) clearly fulfill condition (a) and (b) of Theorem 5.3.1 and by Examples 4.2.1 we known that \(\mathcal{F}\) on \([0,\infty)\) is also a T-system, i.e., (c) in Theorem 5.3.1 is fulfilled. We can therefore apply Theorem 5.3.1. (i) \(n=2m\): By Theorem 5.3.1(i) the unique \(f_{*}\) and \(f^{*}\) each possess \(m\) distinct zeros \(\{x_{i}\}_{i=1}^{m}\) and \(\{y_{i}\}_{i=0}^{m-1}\) with \(0\leq y_{0}<x_{1}<\dots<y_{m-1}<x_{m}<\infty\). Since \(x_{1},\dots,x_{m}\in(0,\infty)\) and \(\mathcal{F}\) on \([x_{1}/2,\infty)\) is an ET-system we immediately get the determinantal representation of \(f_{*}\) by Corollary 5.3.2 (combine Theorem 5.3.1 with Remark 4.3.9). For \(f^{*}\) we have \(y_{0}=0\) and by Example 4.4.4 this is no ET-system. Hence, we prove the representation of \(f^{*}\) by hand. Let \(\varepsilon>0\) be such that \(0=y_{0}<y_{1}<y_{1}+\varepsilon<\dots<y_{m-1}<y_{m-1}+\varepsilon\) holds. Then \[g_{\varepsilon}(x) =-\varepsilon^{-m+1}\cdot\det\begin{pmatrix}1&x^{\alpha_{1}}&x^{ \alpha_{2}}&x^{\alpha_{3}}&\dots&x^{\alpha_{2m-2}}&x^{\alpha_{2m-1}}\\ x&0&y_{1}&y_{1}+\varepsilon&\dots&y_{m-1}&y_{m-1}+\varepsilon\end{pmatrix}\] \[=-\varepsilon^{-m+1}\cdot\det\begin{pmatrix}1&x^{\alpha_{1}}&x^{ \alpha_{2}}&\dots&x^{\alpha_{2m-1}}\\ 1&0&0&\dots&0\\ 1&y_{1}^{\alpha_{1}}&y_{1}^{\alpha_{2}}&\dots&y_{1}^{\alpha_{2m-1}}\\ \vdots&\vdots&\vdots&&\vdots\\ 1&(y_{m-1}+\varepsilon)^{\alpha_{1}}&(y_{m-1}+\varepsilon)^{\alpha_{2}}&\dots &(y_{m-1}+\varepsilon)^{\alpha_{2m-1}}\end{pmatrix}\] expand by the second row \[=\varepsilon^{-m+1}\cdot\det\begin{pmatrix}x^{\alpha_{1}}&x^{ \alpha_{2}}&\dots&x^{\alpha_{2m-1}}\\ y_{1}^{\alpha_{1}}&y_{1}^{\alpha_{2}}&\dots&y_{1}^{\alpha_{2m-1}}\\ \vdots&\vdots&&\vdots\\ (y_{m-1}+\varepsilon)^{\alpha_{1}}&(y_{m-1}+\varepsilon)^{\alpha_{2}}&\dots &(y_{m-1}+\varepsilon)^{\alpha_{2m-1}}\end{pmatrix}\] \[=\varepsilon^{-m+1}\cdot\det\begin{pmatrix}x^{\alpha_{1}}&x^{ \alpha_{2}}&\dots&x^{\alpha_{2m-2}}&x^{\alpha_{2m-1}}\\ x&y_{1}&y_{1}+\varepsilon&\dots&y_{m-1}&y_{m-1}+\varepsilon\end{pmatrix}\] is non-negative on \([0,y_{1}]\) and every \([y_{i}+\varepsilon,y_{i+1}]\). Now \(y_{0}=0\) is removed and all \(y_{i},y_{i}+\varepsilon>0\). 
Hence, we can work on \([y_{1}/2,\infty)\) where \(\{x^{\alpha_{i}}\}_{i=1}^{2m-1}\) is an ET-system and we can go to the limit \(\varepsilon\to 0\) as in Remark 4.3.9. Then Corollary 5.3.2 proves the representation of \(f^{*}\). (ii) \(n=2m+1\): Similar to the case (i) with \(n=2m\). The previous result was reproved in [1]. Additionally, since the authors of [1] were not aware of [13, 14] their statement is much weaker and the proof is unnecessarily long and complicated. In [1] several other results are reproved which already appeared in [14]. It is left to the reader to use Corollary 5.1.7 to gain the corresponding sparse Nichtnegativstellensatz on \([0,\infty)\) for general T-systems and for \(\{x^{\alpha_{i}}\}_{i=0}^{n}\) with \(0=\alpha_{0}<\alpha_{1}<\dots<\alpha_{n}\) real numbers. The proofs follow the same line of thought as the proof of Theorem 5.2.7. If all \(\alpha_{i}\in\mathds{N}_{0}\) then we can express the \(f_{*}\) and \(f^{*}\) in Theorem 5.3.3 also with Schur polynomials, see (9) in Example 4.4.5. We have seen that Boas already investigated the sparse Stieltjes moment problem [1]. However, since Boas did not have access to Theorem 5.3.1 by Karlin and therefore Theorem 5.3.3, the description was complicated and incomplete. We get the following complete and simple description. To our knowledge this result has not appeared anywhere else. **Theorem 5.3.4** (Sparse Stieltjes Moment Problem).: _Let \(\{\alpha_{i}\}_{i\in\mathds{N}_{0}}\subset[0,\infty)\) such that \(\alpha_{0}=0<\alpha_{1}<\alpha_{2}<\dots\) and let \(\mathcal{F}=\{x^{\alpha_{i}}\}_{i\in\mathds{N}_{0}}\). Then the following are equivalent:_ 1. \(L:\operatorname{lin}\mathcal{F}\to\mathds{R}\) _is a_ \([0,\infty)\)_-moment functional._ 2. \(L(p)\geq 0\) _for all_ \(p\in\operatorname{lin}\mathcal{F}\) _with_ \(p\geq 0\)_._ 3. \(L(p)\geq 0\) _for all_ \(p\in\operatorname{lin}\mathcal{F}\) _with_ \(p>0\)_._ 4. \(L(p)\geq 0\) _for all_ \[p(x)=\begin{cases}\det\begin{pmatrix}1&x^{\alpha_{1}}&x^{\alpha_{2}}&\dots&x^{\alpha_{2m-1}}&x^{\alpha_{2m}}\\ x&(x_{1}&x_{1})&\dots&(x_{m}&x_{m})\end{pmatrix},\\ \det\begin{pmatrix}x^{\alpha_{1}}&x^{\alpha_{2}}&x^{\alpha_{3}}&\dots&x^{\alpha_{2m-2}}&x^{\alpha_{2m-1}}\\ x&(x_{1}&x_{1})&\dots&(x_{m-1}&x_{m-1})\end{pmatrix},\\ \det\begin{pmatrix}x^{\alpha_{1}}&x^{\alpha_{2}}&x^{\alpha_{3}}&\dots&x^{\alpha_{2m}}&x^{\alpha_{2m+1}}\\ x&(x_{2}&x_{2})&\dots&(x_{m+1}&x_{m+1})\end{pmatrix},\text{ and }\\ \det\begin{pmatrix}1&x^{\alpha_{1}}&x^{\alpha_{2}}&\dots&x^{\alpha_{2m-1}}&x^{\alpha_{2m}}\\ x&(x_{1}&x_{1})&\dots&(x_{m}&x_{m})\end{pmatrix}\end{cases}\] _for all_ \(m\in\mathds{N}_{0}\) _and_ \(0<x_{1}<\dots<x_{m}\)_._ Proof.: The implications "(i) \(\Rightarrow\) (ii) \(\Leftrightarrow\) (iii)" are clear and "(iii) \(\Leftrightarrow\) (iv)" is Theorem 5.3.3. It is therefore sufficient to prove "(ii) \(\Rightarrow\) (i)". We have \(\operatorname{lin}\mathcal{F}=(\operatorname{lin}\mathcal{F})_{+}-(\operatorname{lin}\mathcal{F})_{+}\), we have \(1=x^{\alpha_{0}}\in\operatorname{lin}\mathcal{F}\), and for any \(g=\sum_{i=0}^{m}a_{i}\cdot x^{\alpha_{i}}\in(\operatorname{lin}\mathcal{F})_{+}\) we have \(\lim_{x\to\infty}\frac{g(x)}{x^{\alpha_{m+1}}}=0\), i.e., there exists an \(f\in(\operatorname{lin}\mathcal{F})_{+}\) which dominates \(g\). Hence, \(\operatorname{lin}\mathcal{F}\) is an adapted space and the assertion follows from [1, Thm. 1.8]. Note that in the previous result we did need \(0=\alpha_{0}<\alpha_{1}<\alpha_{2}<\dots\) but we did not need \(\alpha_{i}\to\infty\).
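To make condition (iv) of Theorem 5.3.4 concrete, here is a minimal SymPy sketch of one test polynomial from the first determinantal family (with \(m=1\)); the exponents and the double-zero location \(x_{1}\) are arbitrary sample values, not prescribed by the theorem. The sketch confirms, for this sample, that the determinant has a double zero at \(x_{1}\) and is non-negative at a few sample points of \([0,\infty)\).

```python
# Illustrative SymPy sketch of one determinantal test polynomial from
# Theorem 5.3.4(iv) (first family, m = 1); the exponents and the double-zero
# location x1 below are arbitrary sample values.
import sympy as sp

x = sp.Symbol('x')
alphas = [0, 2, 5]          # assumed sparse exponents alpha_0 = 0 < alpha_1 < alpha_2
x1 = sp.Integer(1)          # assumed location of the double zero

funcs = [x**a for a in alphas]
M = sp.Matrix([
    funcs,                                           # row of the x^{alpha_i} in the variable x
    [f.subs(x, x1) for f in funcs],                  # evaluation row at x1
    [sp.diff(f, x).subs(x, x1) for f in funcs],      # derivative row encoding the double zero
])
p = sp.expand(M.det())      # here p = 2*x**5 - 5*x**2 + 3

print(p)
print(p.subs(x, x1), sp.diff(p, x).subs(x, x1))      # both 0: double zero at x1
print(all(p.subs(x, t) >= 0 for t in [0, sp.Rational(1, 2), 1, 2, 5]))   # sample non-negativity
```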
In particular, Theorem 5.3.4 also includes the case \(\sup_{i\in\mathds{N}_{0}}\alpha_{i}<\infty\), since \(\alpha_{i}\to\infty\) is not required. ### Sparse Positivstellensatze and Nichtnegativstellensatze on \(\mathds{R}\) **Theorem 5.4.1** ([10] or e.g. [11, Thm. 8.1]).: _Let \(m\in\mathds{N}_{0}\) and \(\mathcal{F}=\{f_{i}\}_{i=0}^{2m}\) be a continuous T-system of order \(2m\) on \(\mathds{R}\) such that_ 1. _there exists a_ \(C>0\) _such that_ \(f_{2m}(x)>0\) _for all_ \(x\in(-\infty,-C]\cup[C,\infty)\)_,_ 2. \(\lim_{|x|\to\infty}\frac{f_{i}(x)}{f_{2m}(x)}=0\) _for all_ \(i=0,\dots,2m-1\)_, and_ 3. \(\{f_{i}\}_{i=0}^{2m-1}\) _is a continuous T-system of order_ \(2m-1\) _on_ \(\mathds{R}\)_._ _Let \(f=\sum_{i=0}^{2m}a_{i}f_{i}\) be such that \(f>0\) on \(\mathds{R}\) and \(a_{2m}>0\). Then there exists a unique representation_ \[f=f_{*}+f^{*}\] _with \(f_{*},f^{*}\in\operatorname{lin}\mathcal{F}\) and \(f_{*},f^{*}\geq 0\) on \(\mathds{R}\) such that_ * _the coefficient of_ \(f_{2m}\) _in_ \(f_{*}\) _is_ \(a_{2m}\)_, and_ * \(f_{*}\) _and_ \(f^{*}\) _are non-negative polynomials having zeros_ \(\{x_{i}\}_{i=1}^{m}\) _and_ \(\{y_{i}\}_{i=1}^{m-1}\) _with_ \[-\infty<x_{1}<y_{1}<x_{2}<\cdots<y_{m-1}<x_{m}<\infty.\] Proof.: Adapt the proof of Theorem 5.3.1 such that both interval ends of \([a,b]\) are mapped to \(-\infty\) and \(+\infty\), respectively. We have already seen how from Karlin's Theorem 5.1.2 and Karlin's Corollary 5.1.3 we gained Theorem 5.2.1 (sparse algebraic Positivstellensatz on \([a,b]\)), Theorem 5.2.7 (sparse algebraic Nichtnegativstellensatz on \([a,b]\)), and Theorems 5.2.5 and 5.2.6 (sparse Hausdorff moment problems). We have seen how from Karlin's Theorem 5.1.2 and Karlin's Corollary 5.1.3 we gained Theorem 5.3.1 (sparse Positivstellensatz for T-systems on \([0,\infty)\)), Theorem 5.3.3 (sparse algebraic Positivstellensatz on \([0,\infty)\)), and Theorem 5.3.4 (sparse Stieltjes moment problem). We will therefore not repeat this procedure for the case \(K=\mathds{R}\) from Theorem 5.4.1 but summarize the procedure in the following "cooking recipe". _Remark 5.4.2_ (A General Cooking Recipe).: We have the following general _cooking recipe_ for generating sparse Positivstellensatze and Nichtnegativstellensatze, and for generating and solving sparse moment problems: * (A) Use Karlin's Theorem 5.1.2 or Karlin's Corollary 5.1.3, or extend these to sets \(K=[a,b]\cup[c,d],\ldots\) (for extensions see e.g. [10] and later literature on T-systems we did not discuss here). * (B) Prove that your family \(\mathcal{F}=\{f_{i}\}_{i\in I}\) is a T-system (or even an ET-system). * (C) Plug \(\mathcal{F}\) into (A) to get the sparse Positivstellensatz or sparse Nichtnegativstellensatz on \(K\). * (D) Show that \(\operatorname{lin}\mathcal{F}\) is an adapted space. * (E) Combine (C) and (D) into a sparse moment problem (use [16, Thm. 1.8] for an efficient proof). With this cooking recipe a large class of (sparse) moment problems, Nichtnegativstellensatze, and Positivstellensatze can be generated, solved, and efficiently proved. We think this makes it very useful for applications and further theoretical investigations. \(\circ\) ## Summary In this work we review and deal with univariate sparse moment problems, Positivstellensatze, and Nichtnegativstellensatze. We look at earlier results and then move to the theory of T-systems. At the center are the works of Karlin [14] and Karlin and Studden [10].
From Karlin's Theorem 5.1.2 and Karlin's Corollary 5.1.3 on \([a,b]\) we deduce a complete description of all strictly positive sparse algebraic polynomials in Theorem 5.2.1. We also give the sparse algebraic Nichtnegativstellensatz on \([a,b]\) in Theorem 5.2.7. With these results we completely solve the sparse Hausdorff moment problem in Theorem 5.2.3, Theorem 5.2.5, and in its most general form in Theorem 5.2.6. Following the extension by Karlin and Studden of Karlin's Theorem 5.1.2 and Karlin's Corollary 5.1.3 from \([a,b]\) to \([0,\infty)\), we formulate the corresponding sparse algebraic Positivstellensatz on \([0,\infty)\) in Theorem 5.3.3. Only the sparse algebraic Positivstellensatz on \([0,\infty)\) is given since it already solves the sparse Stieltjes moment problem in Theorem 5.3.4. The sparse algebraic Nichtnegativstellensatz on \([0,\infty)\) can easily be derived like the sparse Nichtnegativstellensatz on \([a,b]\) in Theorem 5.2.7. We also give the general T-system Positivstellensatz on \(\mathds{R}\) by Karlin in Theorem 5.4.1. We give a general "cooking recipe" for how other results in [10] (and later literature) can be used to generate additional sparse algebraic Positivstellensatze and Nichtnegativstellensatze, or to formulate and solve sparse moment problems. In this treatment we see the high value of the results in [10], which are rarely used today. Especially the analytic treatment of the algebraic questions seems unusual at first. However, we hope we have convinced the reader that this approach has, at least in the univariate case (Theorem 4.1.7), great value for gaining sparse Positivstellensatze, sparse Nichtnegativstellensatze, and solutions to sparse moment problems. ## Funding The author and this project are supported by the Deutsche Forschungsgemeinschaft DFG with the grant DI-2780/2-1 and by his research fellowship at the Zukunftskolleg of the University of Konstanz, funded as part of the Excellence Strategy of the German Federal and State Governments.
2308.16826
Visual Orbits & Alignments of Planet Hosting Binary Systems
Roughly half of Solar-type planet hosts have stellar companions, so understanding how these binary companions affect the formation and evolution of planets is an important component of understanding planetary systems overall. Measuring the dynamical properties of planet host binaries enables a valuable test of planet formation in multi-star systems and requires knowledge of the binary orbital parameters. Using high resolution imaging, we have measured the relative astrometry and visual orbits of 13 binary systems where one of the stars is known to host a transiting exoplanet. Our results indicate that the orbits of the binary hosts and their transiting planets are well aligned, i.e., the mutual inclinations are small. Our results for close binary systems (a<100 AU) complement past work for wide planet host binaries from Gaia.
Kathryn Lester, Steve Howell, Rachel Matson, Elise Furlan, Crystal Gnilka, Colin Littlefield, David Ciardi, Mark Everett, Sergio Fajardo-Acosta, Catherine Clark
2023-08-31T15:56:39Z
http://arxiv.org/abs/2308.16826v1
# Visual Orbits & Alignments of Planet Hosting Binary Systems ###### Abstract Roughly half of Solar-type planet hosts have stellar companions, so understanding how these binary companions affect the formation and evolution of planets is an important component to understanding planetary systems overall. Measuring the dynamical properties of planet host binaries enables a valuable test of planet formation in multi-star systems and requires knowledge of the binary orbital parameters. Using high resolution imaging, we have measured the relative astrometry and visual orbits of 13 binary systems where one of the stars is known to host a transiting exoplanet. Our results indicate that the mutual inclination between the orbits of the binary hosts and the transiting planets are well aligned. Our results for close binary systems (\(a<100\) AU) complement past work for wide planet host binaries from Gaia. ## 1 Introduction Multi-star systems make up about 50% of Solar-type stars (Raghavan et al., 2010) and 25% of M-type stars (Winters et al., 2019) in the Solar neighborhood. Recent work has shown that the fraction of planet hosting stars with stellar companions is similar to that of field binaries (Horch et al., 2014; Matson et al., 2018; Clark et al., 2022), so understanding how stellar companions affect the formation and evolution of exoplanets is an important component to understanding planetary systems overall. Observational radial velocity surveys and transit detections for exoplanets both have biases against the study of binary star systems and their planets; radial velocity studies often avoid known binaries due to contamination from the companion's spectral lines (e.g., Chontos et al., 2022), while transit studies often miss terrestrial-size planets when flux dilution from stellar companions causes a transit to become shallower than the detectability of the survey (Lester et al., 2021). Therefore, our knowledge of planetary architectures, characteristics, and occurrence rates is biased toward single-star systems, despite the fact that a significant fraction of binary systems are likely to host exoplanets. Theoretical studies show that a close stellar companion can impact planets through the truncation or misalignment of the protoplanetary disk (Artymowicz & Lubow, 1994; Kraus et al., 2012; Martin et al., 2014), the formation and migration of gas giant planets (Dawson & Johnson, 2018; Fontanive et al., 2019), and the scattering of planets in unstable triple star systems (Thebault & Haghighipour, 2015). For example, recent simulations of protoplanetary disks around the primary star in wide binary systems (with separations \(a=100-400\) AU) often result in the disk fragmentation needed to form giant planets (Cadman et al., 2022). Modeling also predicts that the shape and size of the companion's orbit can play a significant role in planet formation, such that close, eccentric, or highly inclined companions could hinder planet formation (Holman & Wiegert, 1999; Quintana et al., 2002; Jang-Condell, 2015; Cadman et al., 2022). Over the past decade, observational evidence has accumulated to indicate that planet formation is suppressed in close (\(a<100\) AU) binary systems. First, high resolution imaging surveys of known transiting planet host stars from Kepler, K2, and TESS have found a dearth of close stellar companions (Bergfors et al., 2013; Wang et al., 2014; Kraus et al., 2016; Fontanive et al., 2019; Moe & Kratter, 2021; Lester et al., 2021; Fontanive et al., 2021). 
& Bardalez Gagliuffi, 2021). Next, when searching for planets in binary systems, the frequency of giant planets in close binaries was found to be significantly less than the frequency in wide (\(a>\)100 AU) binaries (Wang et al., 2014; Hirsch et al., 2021). Su et al. (2021) also found that multi-planet systems are more often found in wide binaries. Furthermore, observations of young binaries show that protoplanetary disks are smaller and less massive in binaries than around single stars (Zurlo et al., 2021), suggesting that stellar companions within about 300 AU often truncate the protoplanetary disks (Harris et al., 2012). However, the detection of planets in systems with close companions (e.g., Hatzes et al., 2003; Dupuy et al., 2016; Winters et al., 2019) demonstrates it is possible for planets to form in such systems, so it is currently unclear why some close binaries are able to host planets and which factors influence the survival of the planet. Little observational evidence exists to test how the other binary orbital parameters (such as inclination and eccentricity) affect planet formation, primarily due to the high angular resolution and long time baselines required to measure the binary orbits. Several recent papers (Dupuy et al., 2022; Behmard et al., 2022; Christian et al., 2022) began probing the mutual inclination between transiting planets and stellar companions and found that the orbital planes of the host binaries are often well aligned with the planetary orbits. For example, Christian et al. (2022) studied wide binaries from Gaia that likely formed through turbulent fragmentation, which results in protoplanetary disks randomly aligned with the stellar companion (Offner et al., 2010). They concluded that subsequent gravitational interactions with a close companion could re-align the protoplanetary disk and produce the observed alignments. Long term observational monitoring of planet host binaries is necessary to determine how host multiplicity and binary orbital properties influence planet formation. For this purpose, we present the first results from our astrometric monitoring campaign of planet host binaries. In this paper, we explore the mutual orbital alignment of close binary systems (\(a<100\) AU) known to host at least one transiting planet, in order to help characterize the architectures of binary sytems with planets and help place constraints on the formation and evolutionary models. We present orbital inclinations and preliminary visual orbits of 13 binaries hosting circumstellar (S-type) planets to test if these systems also show planet-binary alignment. We describe our sample and observations in Sections 2 and 3, our visual orbit analysis in Section 4, planet-binary alignment results in Section 5, and our conclusions in Section 6. ## 2 Sample We started building our sample from transiting planet host stars from Kepler, K2, and TESS for which close stellar companions were previously detected using speckle interferometry (Furlan et al., 2017; Matson et al., 2018; Ziegler et al., 2020; Howell et al., 2021; Lester et al., 2021). For each binary, we estimated the projected physical separation using the projected angular separation from the most recent speckle epoch and the Gaia DR3 parallax (Gaia Collaboration et al., 2016, 2022). We then kept only those with projected separations less than 100 AU, where stellar companions are most likely to impact planet formation. 
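For reference, the separation screening just described reduces to the small-angle relation: the projected separation in AU is the angular separation divided by the parallax when both are expressed in the same angular unit. A minimal sketch is shown below; the numbers are taken from Tables 1 and 3 (KOI 4252) purely as an illustration, and the 100 AU threshold encodes the selection cut described above.

```python
def projected_separation_au(sep_mas, parallax_mas):
    """Projected physical separation in AU: (sep/1000 arcsec) * (1000/parallax pc)."""
    return sep_mas / parallax_mas

# Illustration with KOI 4252 (Table 1: parallax 5.08 mas; Table 3: rho ~ 67.7 mas)
sep_au = projected_separation_au(67.7, 5.08)
print(f"{sep_au:.1f} AU, kept: {sep_au < 100.0}")   # ~13.3 AU, inside the <100 AU cut
```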
Transit false positive systems identified by follow-up photometry as listed on the Exoplanet Follow-up Observing Program (ExoFOP) website were also removed. Our full sample contains 40 binaries, for which we are conducting an on-going astrometric and spectroscopic monitoring campaign with the Gemini, WIYN, and Keck telescopes. With angular separations less than 1 arcsec, these systems are expected to be gravitationally bound (Everett et al., 2015; Hirsch et al., 2017; Matson et al., 2018) but we confirm the bound nature of each system in Section 4. We present preliminary visual orbit solutions for 13 of the exoplanet host binaries in our sample, for which orbital motion can already be seen. These systems are listed in Table 1 with their TIC ID, primary star effective temperature (\(T_{\rm eff~{}A}\)) from the TIC catalog, estimates of the secondary star effective temperature (\(T_{\rm eff~{}B}\)), binary mass ratio (\(q\)) and total system mass (\(M_{tot}\)) from the magnitude difference (see Section 4), Gaia DR3 parallax (\(\pi\)) and proper motion (\(\mu\)), and companion detection reference. We list the planet properties in Table 2, including the planet name, period (\(P_{pl}\)), radius (\(R_{pl}\), uncorrected for flux dilution) and semi-major axis (\(a_{pl}\)) from the KOI/EPIC/TOI catalogs, whether each planet is designated on the NASA Exoplanet Archive as a planet candidate or confirmed planet (CP), and literature reference. Most of the binaries discussed herein are Kepler targets due to the long observational baselines available, and therefore Table 1 contains mainly Solar-type stars with Earth-sized planets. Four systems have multiple planets/planet candidates, so we list the estimated properties of each one. ## 3 Observations We observed our binary sample using the 'Alopeke and Zorro speckle cameras (Scott et al., 2021) on the Gemini 8.1 m North and South telescopes from June 2021 to September 2022 and using the NN-EXPLORE Exoplanet and Stellar Speckle Imager (NESSI) speckle camera on the WIYN 3.5 m telescope (Scott et al., 2018) \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline \multicolumn{1}{c}{ Planet} & \(P_{pl}\) (days) & \(R_{pl}\) (\(R_{\earth}\)) & \(a_{pl}\) (AU) & Designation & Reference \\ \hline KOI 270.01 & 12.6 & 1.53 & 0.10 & CP (Kepler-449 b) & 1,2 \\ KOI 270.02 & 33.7 & 1.86 & 0.20 & CP (Kepler-449 c) & 1,2 \\ KOI 307.01 & 19.7 & 1.78 & 0.14 & CP (Kepler-520 b) & 1,3 \\ KOI 307.02 & 5.2 & 1.18 & 0.06 & CP (Kepler-520 c) & 1,3 \\ KOI 1613.01 & 15.9 & 1.31 & 0.12 & CP (Kepler-907 b) & 1,3 \\ KOI 1613.02 & 20.6 & 0.85 & 0.15 & Candidate & 4 \\ KOI 1613.03 & 94.1 & 0.90 & 0.40 & Candidate & 4 \\ KOI 1961.01 & 1.9 & 0.91 & 0.03 & CP (Kepler-1027 b) & 1,3 \\ KOI 2124.01 & 42.3 & 1.45 & 0.20 & Candidate & 4 \\ KOI 3234.01 & 2.4 & 0.83 & 0.04 & CP (Kepler-1443 b) & 3,4 \\ KOI 3456.01 & 30.9 & 1.08 & 0.19 & CP (Kepler-1505 b) & 3,4 \\ KOI 3456.02 & 486.1 & 1.18 & 1.20 & Candidate & 5 \\ KOI 4252.01 & 15.6 & 0.72 & 0.10 & CP (Kepler-1948 b) & 4,6 \\ KOI 5971.01 & 493.3 & 1.08 & 1.00 & Candidate & 5 \\ TOI 271.01 & 2.5 & 2.81 & 0.04 & Candidate & 7 \\ TOI 1287.01 & 9.6 & 2.52 & 0.09 & Candidate & 7 \\ EPIC 212303338.01 & 0.6 & 0.58 & 0.01 & Candidate & 8,9 \\ EPIC 220555384.01 & 4.3 & 1.20 & 0.05 & Candidate & 8,10 \\ \hline \end{tabular} * \end{table} Table 2: Estimated Planet Properties \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline \multicolumn{1}{c}{ Target} & TIC ID & \(T_{\rm eff\ A}\) (K) & \(T_{\rm eff\ B}\) (K) & \(q\) & \(M_{tot}\) 
(\(M_{\odot}\)) & \(\pi\) (mas) & \(\mu_{RA}\) (mas/yr) & \(\mu_{DEC}\) (mas/yr) & Reference \\ \hline KOI 270 & 270779644 & 5650 & 5340 & 0.90 & 1.90 & \(3.84\pm 0.05\) & \(-9.40\pm 0.01\) & \(-44.40\pm 0.01\) & 1,3 \\ KOI 307 & 138097531 & 6000 & 5800 & 0.95 & 2.16 & \(1.27\pm 0.11\) & \(-4.10\pm 0.12\) & \(-3.88\pm 0.12\) & 1,2 \\ KOI 1613 & 120576846 & 6080 & 5340 & 0.75 & 2.10 & \(2.03\pm 0.50\) & \(-18.78\pm 0.54\) & \(-20.46\pm 0.58\) & 1,2,3 \\ KOI 1961 & 158552426 & 5350 & 5140 & 0.89 & 1.70 & \(2.47\pm 0.03\) & \(1.13\pm 0.03\) & \(-22.97\pm 0.03\) & 3 \\ KOI 2124 & 2774135 & 4060 & 3300 & 0.50 & 0.90 & \(3.33\pm 0.04\) & \(-12.85\pm 0.06\) & \(-18.33\pm 0.06\) & 1,3 \\ KOI 3234 & 164525743 & 6350 & 6000 & 0.83 & 2.44 & \(1.55\pm 0.06\) & \(-3.24\pm 0.07\) & \(-10.63\pm 0.08\) & 1 \\ KOI 3456 & 137408775 & 5600 & 5500 & 0.98 & 1.92 & \(2.05\pm 0.04\) & \(6.32\pm 0.04\) & \(0.41\pm 0.05\) & 1 \\ KOI 4252 & 158489110 & 3930 & 4000 & 0.83 & 1.10 & \(5.08\pm 0.02\) & \(4.41\pm 0.03\) & \(25.69\pm 0.03\) & 3 \\ KOI 5971 & 27778479 & 4620 & 4300 & 0.94 & 1.36 & \(2.52\pm 0.02\) & \(9.27\pm 0.03\) & \(28.23\pm 0.03\) & 1 \\ TOI 271 & 259511357 & 6110 & 3800 & 0.47 & 1.68 & \(10.01\pm 0.13\) & \(46.72\pm 0.15\) & \(49.46\pm 0.17\) & 4 \\ TOI 1287 & 352764091 & 5890 & 4500 & 0.71 & 1.80 & \(10.76\pm 0.03\) & \(33.87\pm 0.02\) & \(-88.51\pm 0.02\) & 4,5 \\ EPIC 212303338 & 422290347 & 5100 & 4410 & 0.79 & 1.54 & \(12.48\pm 0.09\) & \(31.54\pm 0.10\) & \(10.40\pm 0.06\) & 6 \\ EPIC 220555384 & 406410648 & 4160 & 4330 & 0.93 & 1.37 & \(6.85\pm 0.51\) & \(28.90\pm 1.45\) & \(-24.37\pm 1.22\) & 6 \\ \hline \end{tabular} * \end{table} Table 1: Sample of Planet Host Binaries Figure 1: Power spectra and relative astrometric solution for KOI 4252 on 2021 Oct 24. The top panel shows the observed binary power spectrum (left), the best-fit model (center), and the residuals (right). The bottom left plot shows the full reconstructed image from the speckle pipeline, which often has a reflected image of the companion. The bottom right plot shows a close-in view of the \(\chi^{2}\) values around the best-fit solution, where the best-fit position of the companion is marked with an X. The 1-, 2-, and 3-\(\sigma\)\(\chi^{2}\) contour levels (corresponding to \(\chi^{2}_{min}+1,\ \chi^{2}_{min}+4,\ \chi^{2}_{min}+9\)) are shown in black. from October 2022 to January 2023. At least three image sets were obtained for each target, where one set consists of 1000 60 ms (Gemini) or 40 ms (WIYN) exposures taken simultaneously in two filters. The 2021 data were taken using 562 nm and 832 nm narrow-band filters, while some of the 2022-2023 data were taken using the SDSS \(r^{\prime}\), \(i^{\prime}\), or \(z^{\prime}\) broad-band filters to increase the signal-to-noise ratio. Additional image sets were taken for fainter targets (\(V>9\) mag), and a point source standard star was observed immediately before or after each target for calibration. We reduced the data using the pipeline developed by the speckle team (Howell et al., 2011; Horch et al., 2011) to calculate the power spectrum of each target, divide the mean power spectrum of the target by that of the standard star, and fit the fringes for initial estimates of the binary parameters. For solutions with a 180 deg position angle ambiguity, we selected the solution consistent with other speckle or adaptive optics observations. 
We then determined the final relative positions and uncertainties from the binary power spectra by performing a grid search in relative separation and position angle based on the gridfit code of Schaefer et al. (2016). We first calibrated the \(uv\)-plane with the power spectra of known binary stars and the predicted relative positions from literature orbital solutions: HD 214850 and HIP 46454 (Muterspaugh et al., 2010), HIP 84949 (Muterspaugh et al., 2006), and HIP 4849 (Tokovinin et al., 2015). Once the \(uv\)-plane was calibrated for each observing run, we tested a range of separations in right ascension (\(\Delta\)RA) and declination (\(\Delta\)DEC) around the solution found by the speckle pipeline in steps of 1 mas. At each grid point, we created a model power spectrum for these binary parameters, fit for the magnitude difference of the binary, and calculated the \(\chi^{2}\) goodness-of-fit \begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multicolumn{1}{c}{ Target} & UT Date & MJD & \(\rho\) (mas) & \(\theta\) (deg) & \(\Delta m\) (mag) & Filter & Telescope \\ \hline KOI 270 & 2022-09-16 & 59836.30 & \(186.0\pm 1.6\) & \(65.0\pm 0.6\) & \(0.6\pm 0.5\) & \(i^{\prime}\) & WIYN \\ KOI 1613 & 2022-09-19 & 59836.21 & \(191.5\pm 4.0\) & \(185.3\pm 0.6\) & \(1.2\pm 0.5\) & \(i^{\prime}\) & WIYN \\ KOI 1961 & 2021-06-24 & 59389.47 & \(46.8\pm 2.0\) & \(275.2\pm 2.4\) & \(0.2\pm 0.4\) & 832 nm & Gemini \\ KOI 1961 & 2021-10-15 & 59502.25 & \(44.9\pm 2.5\) & \(276.7\pm 3.8\) & \(0.0\pm 0.5\) & 832 nm & Gemini \\ KOI 1961 & 2022-05-10 & 59709.55 & \(47.0\pm 3.5\) & \(277.6\pm 3.7\) & \(0.4\pm 0.3\) & 832 nm & Gemini \\ KOI 2124 & 2021-06-26 & 59391.53 & \(79.5\pm 3.3\) & \(53.3\pm 2.3\) & \(0.3\pm 0.4\) & 832 nm & Gemini \\ KOI 2124 & 2021-10-19 & 59506.26 & \(78.1\pm 2.5\) & \(53.5\pm 1.8\) & \(0.2\pm 0.3\) & 832 nm & Gemini \\ KOI 2124 & 2022-05-09 & 59708.61 & \(80.9\pm 5.2\) & \(53.2\pm 3.8\) & \(0.5\pm 0.4\) & 832 nm & Gemini \\ KOI 3234 & 2022-09-14 & 59836.77 & \(70.5\pm 5.0\) & \(158.6\pm 3.0\) & \(0.9\pm 0.5\) & \(z^{\prime}\) & WIYN \\ KOI 3456 & 2022-09-12 & 59834.28 & \(50.8\pm 3.5\) & \(11.9\pm 3.9\) & \(0.0\pm 1.3\) & 832 nm & Gemini \\ KOI 4252 & 2021-06-25 & 59390.50 & \(67.7\pm 2.5\) & \(325.3\pm 2.1\) & \(0.6\pm 0.2\) & 832 nm & Gemini \\ KOI 4252 & 2021-10-24 & 59511.21 & \(69.1\pm 4.2\) & \(325.0\pm 3.6\) & \(0.8\pm 0.2\) & 832 nm & Gemini \\ KOI 4252 & 2022-05-09 & 59708.55 & \(70.3\pm 4.5\) & \(323.7\pm 3.7\) & \(0.8\pm 0.2\) & 832 nm & Gemini \\ KOI 4252 & 2022-09-12 & 59834.28 & \(72.9\pm 3.5\) & \(323.2\pm 2.7\) & \(0.5\pm 0.2\) & 832 nm & Gemini \\ KOI 5971 & 2021-06-28 & 59393.50 & \(29.9\pm 4.5\) & \(128.0\pm 8.6\) & \(1.0\pm 0.9\) & 832 nm & Gemini \\ KOI 5971 & 2021-10-21 & 59508.25 & \(29.9\pm 3.3\) & \(128.0\pm 6.1\) & \(0.8\pm 0.5\) & 832 nm & Gemini \\ KOI 5971 & 2022-05-11 & 59710.56 & \(26.9\pm 5.7\) & \(130.2\pm 12.3\) & \(0.8\pm 1.0\) & 832 nm & Gemini \\ TOI 271 & 2021-09-18 & 59840.77 & \(153.0\pm 5.0\) & \(226.8\pm 2.0\) & \(5.1\pm 1.0\) & 832 nm & Gemini \\ TOI 1287 & 2021-06-24 & 59389.54 & \(131.5\pm 9.0\) & \(346.4\pm 3.9\) & \(3.2\pm 0.5\) & 832 nm & Gemini \\ TOI 1287 & 2021-10-23 & 59510.24 & \(135.8\pm 10.1\) & \(346.0\pm 4.6\) & \(3.3\pm 0.6\) & 832 nm & Gemini \\ TOI 1287 & 2022-05-11 & 59710.58 & \(144.7\pm 11.1\) & \(346.4\pm 4.7\) & \(3.3\pm 0.7\) & 832 nm & Gemini \\ TOI 1287 & 2022-09-18 & 59840.51 & \(147.0\pm 10.0\) & \(349.4\pm 5.0\) & \(2.7\pm 1.0\) & \(z^{\prime}\) & WIYN \\ EPIC 212303338 & 2023-01-28 & 59971.50 & 
\(124.0\pm 5.0\) & \(100.4\pm 2.0\) & \(1.8\pm 0.5\) & \(z^{\prime}\) & WIYN \\ EPIC 220555384 & 2021-10-16 & 59503.40 & \(210.9\pm 2.0\) & \(276.9\pm 0.5\) & \(0.7\pm 0.1\) & 832 nm & Gemini \\ EPIC 220555384 & 2021-12-09 & 59557.25 & \(204.9\pm 5.0\) & \(277.1\pm 2.6\) & \(0.7\pm 0.1\) & 832 nm & Gemini \\ EPIC 220555384 & 2022-09-14 & 59837.51 & \(211.5\pm 2.4\) & \(278.3\pm 0.5\) & \(0.7\pm 0.3\) & \(i^{\prime}\) & WIYN \\ EPIC 220555384 & 2022-09-15 & 59837.51 & \(211.9\pm 2.5\) & \(276.5\pm 0.5\) & \(0.7\pm 0.1\) & \(i^{\prime}\) & Gemini \\ \hline \end{tabular} \end{table} Table 3: New Relative Astrometry from Speckle Interferometry statistic between the observed and model fringes. We then mapped out the \(1\sigma\)\(\chi^{2}\) contour, fit for the uncertainties in \(\Delta\)RA and \(\Delta\)DEC, and converted these values & uncertainties to relative separation (\(\rho\)) and position angle (\(\theta\), measured East of North). An example power spectrum, reconstructed image, and \(\chi^{2}\) map are shown in Figure 1. Table 3 lists the UT date, Modified Julian Date (MJD), separation, position angle, magnitude difference, filter, and telescope for each observation. ## 4 Visual Orbits We combined our new relative astrometry with past measurements from Keck NIRC2 adaptive optics observations (Furlan et al., 2017; Dupuy et al., 2022), WIYN speckle observations (Matson et al., 2018; Colton et al., 2021; Howell et al., 2021), and Gemini speckle observations (Furlan et al., 2017; Lester et al., 2021). If uncertainties were not listed in the literature, we adopted values of 5 mas and 2 deg for the relative separation and position angle, repsectively (Howell et al., 2021). We first used the compiled relative astrometry data to test the bound nature of each binary system and confirm that the observed on-sky motion is actually orbital motion. Because our binary stars are unresolved by Gaia, we could not do a typical common proper motion analysis (e.g., Colton et al., 2021). Instead, we compared the proper motion (\(\mu\)) of the primary star from Gaia DR3 (listed in Table 1) to the observed relative motion of the secondary star. If the binary companion is unbound, i.e. a background line-of-sight companion, then the companion's observed motion would be equal in magnitude to the proper motion of the primary star. Figure 2 shows the ratio of the total proper motion to the mean angular speed of the companion, which was calculated in RA and DEC separately from the first to last observations then added in quadrature. We found that the proper motion was \(3-140\) times larger compared to the observed motion for all our binaries. Therefore, the observed motion is likely true orbital motion and can be fit with a Keplerian orbit. We also show the direction of the primary star's proper motion in the orbit plots in the Appendix to compare with the orbital motion. We fit for the visual orbits using the orbitize! package (Blunt et al., 2020) and Orbits For The Impatient (Blunt et al., 2017) module, which was built specifically for long period systems. For each binary, we estimated the primary star's mass from the effective temperature and the Modern Mean Dwarf Stellar Color and Effective Temperature Sequence (Pecaut and Mamajek, 2013), then used the speckle magnitude difference to estimate the secondary's mass (see Matson et al., 2018). 
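The bound-companion test described earlier in this section can be sketched in a few lines: convert each (\(\rho\), \(\theta\)) epoch to offsets in RA and DEC, take the mean relative motion between the first and last epochs, and compare it with the Gaia proper motion. This is only an illustration (it ignores the astrometric uncertainties); the epochs and proper motion below are the KOI 4252 entries from Tables 3 and 1, and a ratio well above unity is what the quoted 3–140 range corresponds to.

```python
import numpy as np

def relative_speed_mas_yr(mjd, rho_mas, theta_deg):
    """Mean angular speed of the companion between first and last epochs,
    computed separately in RA and DEC and then added in quadrature."""
    th = np.radians(np.asarray(theta_deg, dtype=float))
    d_ra = np.asarray(rho_mas, dtype=float) * np.sin(th)    # offset towards East
    d_dec = np.asarray(rho_mas, dtype=float) * np.cos(th)   # offset towards North
    dt_yr = (mjd[-1] - mjd[0]) / 365.25
    return np.hypot(d_ra[-1] - d_ra[0], d_dec[-1] - d_dec[0]) / dt_yr

# KOI 4252: first and last speckle epochs (Table 3) and Gaia proper motion (Table 1)
speed = relative_speed_mas_yr([59390.50, 59834.28], [67.7, 72.9], [325.3, 323.2])
pm_total = np.hypot(4.41, 25.69)                             # mas/yr
print(f"proper motion / relative motion = {pm_total / speed:.1f}")   # >> 1: likely bound
```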
We used the resulting total mass (with uncertainties of 30%) and the Gaia DR3 parallaxes & uncertainties (Gaia Collaboration et al., 2016) as priors, which are listed in Table 1. The free parameters were then the semi-major axis (\(a\)), inclination (\(i\)), eccentricity (\(e\)), argument of periastron of the companion (\(\omega_{B}\)) 1, longitude of the ascending node of the companion (\(\Omega_{B}\)), and epoch of periastron. Orbitize! uses a parameter \(\tau\) to represent the epoch of periastron as a fraction of the orbital period past the reference epoch MJD 58849. We ran orbitize! until \(10^{5}\) orbits were accepted, created histograms for each orbital parameter, and fit asymmetrical Gaussians to each distribution to find the best-fit values and uncertainties. Table 4 lists the orbital solutions for each binary. An example corner plot and visual orbit are shown in Figures 3 and 4, respectively, while the visual orbits for all systems are shown in Figures 7-19 in the Appendix. Next, we used the total system mass and the semi-major axis to estimate the orbital period (\(P\)) for each system. Our observations cover roughly 1-25% of the orbits so the orbital periods are not yet well constrained, but orbital coverage of a few percent is sufficient to reliably measure the orbital inclination (Dupuy et al., 2022). Figure 2: Ratio of the total Gaia DR3 proper motion to the total observed motion for each binary in our sample. The proper motion is at least three times larger compared to the observed relative motion for all systems, so this motion is likely true orbital motion of a bound companion rather than motion of an unbound, line-of-sight companion. As a consistency check, we also fit for the visual orbits using a custom code. We created \(10^{6}\) sets of random orbital parameters, calculated the predicted binary positions, and determined the \(\chi^{2}\) value of each solution. Orbital parameters for each iteration were drawn from uniform distributions. We then found parameters with the lowest reduced \(\chi^{2}\) value, fit a parabola to the bottom of the \(\chi^{2}\) distribution, and found the \(1\sigma\) uncertainties where \(\chi^{2}\leq\chi^{2}_{min}+1\). The inclinations from orbitize! are consistent with those found by our fitting method to within the uncertainties. However, our code could not converge on a full orbital solution as well as orbitize! due to the orbital period as a free parameter, so we used the orbitize! solutions in the rest of this paper. ## 5 Results ### Planet-Binary Orbital Alignment We compared the orbital inclinations of the stellar companions (\(i\)) and of the transiting planets (assumed to be \(90^{\circ}\), i.e. edge-on to our line of sight) to determine the planet-binary orbital alignment (\(\sin|90-i|\)) in each system. Note that this is only the minimum alignment, because we do not know the longitude of the ascending node of the transiting planet. Figure 5 shows a histogram of the planet-binary orbital alignment for our 13 binary host systems. The uncertainties for each histogram bin were found by varying each binary inclination within it's Gaussian uncertainty over \(10^{5}\) iterations, then taking the standard deviation of the values in each histogram bin from all iterations. In the case of asymmetric uncertainties in inclination, the larger uncertainty value was used. We found that our binary host orbits are more often aligned with the planetary orbits, with all mutual inclinations less than \(60^{\circ}\). 
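The alignment statistic and its bin-by-bin uncertainties can be reproduced with a short Monte Carlo, sketched below (this is not the authors' code). Each inclination is drawn from a Gaussian with the larger of its two uncertainties, \(\sin|90-i|\) is recomputed for every draw, and the per-bin standard deviation over the draws gives the error bars; the inclinations and uncertainties are the values listed in Table 4, while the bin count is a placeholder.

```python
import numpy as np

def alignment_histogram(inc_deg, inc_err_deg, bins=6, n_draws=100_000, seed=0):
    """Histogram of sin|90 - i| (minimum mutual inclination proxy) with
    Monte Carlo bin uncertainties propagated from the inclination errors."""
    rng = np.random.default_rng(seed)
    inc, err = np.asarray(inc_deg, float), np.asarray(inc_err_deg, float)
    edges = np.linspace(0.0, 1.0, bins + 1)
    counts, _ = np.histogram(np.sin(np.radians(np.abs(90.0 - inc))), bins=edges)
    draws = np.sin(np.radians(np.abs(90.0 - rng.normal(inc, err, (n_draws, inc.size)))))
    mc = np.stack([np.histogram(d, bins=edges)[0] for d in draws])
    return counts, mc.std(axis=0), edges

# Inclinations (deg) and larger-side uncertainties from Table 4, one entry per binary
counts, sigma, edges = alignment_histogram(
    [86.8, 129.0, 86.5, 64.5, 89.9, 33.0, 97.0, 99.5, 93.0, 98.5, 86.8, 103.5, 77.0],
    [1.9, 24.5, 4.5, 6.6, 1.2, 31.7, 10.3, 5.5, 6.2, 11.0, 4.3, 4.0, 16.1])
```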
Our result is consistent with the results of Dupuy et al. (2022) using linear orbital motion estimates and of Christian et al. (2022) using Gaia astrometric parameters. Specifically, Christian et al. (2022) found that only systems with \(a<700\) AU were preferentially aligned, so our sample confirms this result down to systems with much smaller separations. Furthermore, low mutual inclination between the planetary and binary orbits is consistent with theories of binary star formation and with planet formation in multi-star systems. Close binaries (such as those in this study), that formed in-situ via disk fragmentation or via turbulent fragmentation and migration, are expected to have binary orbits aligned with the primary stars' protoplanetary disks. From a planet formation perspective, all of the binaries in our sample have mutual inclinations less than \(60^{\circ}\), which is consistent with theoretical predictions. For example, (Quintana et al., 2002) simulated planet formation around each star in the \(\alpha\) Cen AB system; they found that planets could form more easily when the protoplanetary disk was inclined by \(30-45^{\circ}\) compared to the binary orbit, but were unstable when the disk was inclined by \(60^{\circ}\). Because most of our systems are well aligned, they likely did not undergo strong tidal interactions that would have torqued the protoplanetary disk and resulted in either non-transiting planets or poor binary-planet alignment. ### Planet Stability We next tested the binary and planet configurations of our sample against dynamical stability predictions from numerical simulations. We calculated the \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \multicolumn{1}{c}{ Target} & \(P\) (yr) & \(\tau\) & \(i\) (deg) & \(e\) & \(\omega_{B}\) (deg) & \(\Omega_{B}\) (deg) & \(a\) (AU) \\ \hline KOI 270 & \(290.0^{+251.0}_{-200.0}\) & \(0.65^{+0.10}_{-0.10}\) & \(86.8^{+1.3}_{-1.9}\) & \(0.10^{+0.18}_{-0.10}\) & \(323.5^{+8.7}_{-7.2}\) & \(248.8^{+3.5}_{-3.5}\) & \(55.0^{+11.1}_{-16.0}\) \\ KOI 307 & \(290.0^{+212.8}_{-75.5}\) & \(0.03^{+0.08}_{-0.08}\) & \(129.0^{+24.5}_{-23.7}\) & \(0.67^{+0.25}_{-0.25}\) & \(165.0^{+49.3}_{-54.4}\) & \(267.5^{+25.0}_{-25.0}\) & \(61.0^{+20.5}_{-14.9}\) \\ KOI 1613 & \(675.0^{+557.4}_{-218.3}\) & \(0.15^{+0.14}_{-0.14}\) & \(86.5^{+3.9}_{-4.5}\) & \(0.23^{+0.9}_{-0.19}\) & \(152.5^{+47.6}_{-53.8}\) & \(182.5^{+3.3}_{-27.0}\) & \(102.5^{+38.6}_{-27.0}\) \\ KOI 1961 & \(27.5^{+7.9}_{-9.5}\) & \(0.55^{+0.08}_{-0.08}\) & \(64.5^{+6.6}_{-1.8}\) & \(0.79^{+0.22}_{-0.22}\) & \(325.0^{+27.7}_{-21.9}\) & \(101.0^{+22.1}_{-2.2}\) & \(11.5^{+3.1}_{-2.3}\) \\ KOI 2124 & \(150.0^{+243.4}_{-163.5}\) & \(0.77^{+0.13}_{-0.13}\) & \(89.9^{+1.2}_{-1.2}\) & \(0.04^{+0.32}_{-0.04}\) & \(341.0^{+8.3}_{-11.9}\) & \(233.8^{+1.8}_{-1.8}\) & \(27.0^{+12.0}_{-3.3}\) \\ KOI 3234 & \(175.0^{+362.5}_{-54.7}\) & \(0.97^{+0.29}_{-0.29}\) & \(33.0^{+23.0}_{-31.7}\) & \(0.01^{+0.53}_{-0.01}\) & \(135.0^{+51.1}_{-59.9}\) & \(196.4^{+29.0}_{-42.9}\) & \(43.0^{+16.6}_{-9.0}\) \\ KOI 3456 & \(37.5^{+71.7}_{-10.0}\) & \(0.55^{+1.15}_{-0.15}\) & \(97.0^{+10.3}_{-10.1}\) & \(0.97^{+0.03}_{-0.63}\) & \(312.5^{+17.9}_{-28.6}\) & \(192.5^{+42.9}_{-4.2}\) & \(15.0^{+4.7}_{-4.3}\) \\ KOI 4252 & \(70.0^{+75.0}_{-40.0}\) & \(0.73^{+0.16}_{-0.16}\) & \(99.5^{+5.5}_{-3.5}\) & \(0.33^{+0.10}_{-0.10}\) & \(307.5^{+12.2}_{-9.6}\) & \(117.0^{+4.1}_{-4.1}\) & \(19.0^{+12.5}_{-6.3}\) \\ KOI 5971 & \(50.0^{+46.8}_{-40.0}\) & \(0.29^{+0.12}_{-0.12}\) & \(93.0^{+6.2}_{-6.2}\) & 
\(0.39^{+0.28}_{-0.28}\) & \(37.5^{+51.3}_{-32.3}\) & \(313.0^{+4.8}_{-4.8}\) & \(13.0^{+4.5}_{-4.7}\) \\ TOI 271 & \(22.5^{+47.2}_{-12.3}\) & \(0.73^{+0.11}_{-0.11}\) & \(98.5^{+11.0}_{-6.2}\) & \(0.95^{+0.05}_{-0.08}\) & \(327.5^{+59.6}_{-24.9}\) & \(49.8^{+4.2}_{-4.2}\) & \(11.0^{+4.3}_{-3.4}\) \\ TOI 1287 & \(27.5^{+39.6}_{-13.3}\) & \(0.75^{+0.11}_{-0.11}\) & \(86.8^{+1.3}_{-4.3}\) & \(0.29^{+0.49}_{-0.29}\) & \(341.0^{+49.9}_{-20.3}\) & \(169.0^{+48.3}_{-4.3}\) & \(11.5^{+3.9}_{-2.7}\) \\ EPIC 212303338 & \(57.5^{+18.4}_{-25.8}\) & \(0.63^{+0.27}_{-0.27}\) & \(103.5^{+4.0}_{-4.0}\) & \(0.47^{+0.14}_{-0.14}\) & \(102.5^{+34.0}_{-32.6}\) & \(44.5^{+5.8}_{-5.8}\) & \(17.5^{+14.3}_{-5.1}\) \\ EPIC 220555384 & \(67.5^{+29.10}_{-16.2}\) & \(0.59^{+0.12}_{-0.12}\) & \(77.0^{+6.6}_{-16.1}\) & \(0.91^{+0.09}_{-0.33}\) & \(293.0^{+14.3}_{-22.2}\) & \(99.0^{+3.4}_{-3.4}\) & \(18.5^{+11.3}_{-3.2}\) \\ \hline \end{tabular} \end{table} Table 4: Visual Orbit Solutions critical planet semi-major axis (\(a_{crit}\)) using Equation 1 in Holman & Wiegert (1999), for which planets with semi-major axes (\(a_{pl}\)) less than \(a_{crit}\) would be stable orbiting one star in a binary system over thousands of binary orbital cycles. For the multi-planet systems, we evaluated each planet separately. The critical value depends on the binary's semi-major axis, eccentricity, and mass ratio, so we used the mass ratios estimated from the speckle magnitude difference in Section 4. The uncertainties in \(a_{crit}\) were estimated by varying the binary parameters within their uncertainties over \(10^{5}\) iterations and taking the standard deviation of the results. Figure 6 compares the planet separations to the critical separations for the binaries in our sample. We found that all planets have separations less than \(a_{crit}\) and therefore would be dynamically stable. The only systems with planet separations near the critical separation are KOI 5971.01, with \(a_{pl}\approx 1.0\) AU and \(a_{crit}=1.3\pm 0.6\) AU, and KOI 3456.02, with \(a_{pl}\approx 1.2\) AU and \(a_{crit}=2.7\pm 2.1\) AU. These systems would benefit from continued speckle monitoring to better constrain the binary orbits and confirm \(a_{crit}\), as well as additional transit follow-up to confirm the planetary nature of these planet candidates. Increasing the number of binary planet hosts in our sample and extending to longer period planets would provide additional tests of these dynamical stability models. Figure 3: Example corner plot of the orbital solution for KOI 1961. The diagonal frames show posterior histograms for each orbital parameter, and the off-diagonal frames show the covariance between different pairs of parameters. ## 6 Conclusions We presented new relative astrometry of 13 planet host binary systems and measured preliminary visual orbits using the orbitize! code. We investigated the mutual orbital inclination between the binary orbits and the transiting planets, and found that our binary host stars have orbital inclinations similar to those of the planets. Our result for close (\(a<100\) AU) binaries is consistent with past work for wide planet host binaries (e.g., Christian et al., 2022), and supports the predictions of planet formation simulations that binary companions highly inclined with respect to the protoplanetary disk will hinder planet formation.
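As a compact reference for the stability criterion applied in Section 5.2, the critical semi-major axis of Holman & Wiegert (1999, their Equation 1, for S-type circumstellar orbits) can be written as below. The coefficients are quoted from that paper and the example inputs are placeholders rather than values fitted in this work, so treat this as a sketch of the criterion, not a reproduction of our calculation (which also propagates the binary parameter uncertainties).

```python
def a_crit_au(a_bin_au, e_bin, mu):
    """Critical semi-major axis for S-type planetary orbits, Holman & Wiegert (1999) Eq. 1.
    mu = M_B / (M_A + M_B) is the companion's fractional mass."""
    return a_bin_au * (0.464 - 0.380 * mu - 0.631 * e_bin
                       + 0.586 * mu * e_bin
                       + 0.150 * e_bin**2 - 0.198 * mu * e_bin**2)

# Placeholder binary: a = 20 AU, e = 0.3, equal-mass components (mu = 0.5)
print(f"a_crit = {a_crit_au(20.0, 0.3, 0.5):.1f} AU")   # planets well inside this are stable
```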
We plan to continue monitoring our full sample of 40 planet host binaries in order to increase our orbital coverage and sample size and better constrain all of the orbital parameters. Eccentric companions cause increased torque on the protoplanetary disks and could cause the planets to become misaligned relative to the stellar companion, so investigating planet-binary orbital alignment as a function of binary separation and eccentricity would be a valuable test of planet formation theory. Continued astrometric monitoring will better constrain the binary orbital parameters (e.g., \(i\) and \(e\)) and enable such investigations. We also started spectroscopic monitoring of these systems to measure the radial velocity trends and help break the degeneracy between the binary orbital inclination and eccentricity. Over Figure 4: Example visual orbit for planet hosting binary KOI 1961. The primary star is positioned at the origin (black cross), and the relative positions of the secondary component are marked with colored points. A random subset of the orbital solutions from orbitize! are shown in grey. Figure 5: Alignment between the planetary and binary orbital inclinations (\(\sin|90-i|\)). A random inclination distribution would result in uniform histograms, but instead our binary host star orbits are often well aligned with the planets’ orbits. This is consistent with numerical simulations of planet formation, which found it more difficult to form planets in protoplanetary disks with a highly misaligned stellar companion (Quintana et al., 2002). Figure 6: Critical semi-major axis (\(a_{crit}\)) versus planet semi-major axis (\(a_{pl}\)) for the planet candidates in our sample. Holman & Wiegert (1999) predict that planets with \(a_{crit}>a_{pl}\) would be dynamically stable in binary systems. All systems in our sample lie above the 1:1 line (dotted) and therefore are expected to be dynamically stable, though two planets (labeled) are close to their critical separation. all, we are working to build the orbital demographics of planet host binaries to better understand how planets form in multi-star systems. Future work could also investigate the alignment of all components of the binary and planetary system, such as the spin-orbital alignment of the host star compared to the planetary and stellar companions. This would complement past work that typically studied planet-star alignment in single star systems (e.g., Winn et al., 2010; Triaud et al., 2010; Morton and Winn, 2014) and star-companion alignment in non-planet hosting binaries (e.g., Albrecht et al., 2007; Justesen and Albrecht, 2020), as well as theoretical modeling of misaligned disks in binary systems (Lai, 2014; Martin et al., 2014). Such an investigation would require the Rossiter-McLaughlin technique to measure the orientation of the planet's orbit with respect to the host star's rotation (Albrecht et al., 2022), as well as measurement of the binary orbital inclination with respect to the stellar rotation. One could estimate the stellar rotation angle based on the rotation period, projected rotational velocity, and radius for the stars with rotational spot modulation in the light curve (Justesen and Albrecht, 2020). The spin-orbit alignment of planetary and binary systems is a useful probe of formation and dynamical history (Winn and Fabrycky, 2015), so this work provides the binary orbital inclinations necessary for future studies. 
## Acknowledgments The authors would like to thank the anonymous referee for their thorough review and helpful comments. We also thank the staff at Gemini and WIYN for their invaluable help conducting observations, as well as Josh Winn for useful conversations. KVL is supported by an appointment to the NASA Postdoctoral Program at the NASA Ames Research Center, administered by Oak Ridge Associated Universities under contract with NASA. This work made use of the High-Resolution Imaging instruments NESSI, 'Alopeke, and Zorro, which were funded by the NASA Exoplanet Exploration Program and built at the NASA Ames Research Center by Steve B. Howell, Nic Scott, Elliott P. Horch, and Emmett Quigley. Gemini Observatory is a program of NSF's NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation on behalf of the Gemini partnership: the National Science Foundation (United States), National Research Council (Canada), Agencia Nacional de Investigacion y Desarrollo (Chile), Ministerio de Ciencia, Tecnologia e Innovacion (Argentina), Ministerio da Ciencia, Tecnologia, Inovacoes e Comunicacoes (Brazil), and Korea Astronomy and Space Science Institute (Republic of Korea). The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. Data presented herein were obtained at the WIYN Observatory from telescope time allocated to NN-EXPLORE through the scientific partnership of the National Aeronautics and Space Administration, the National Science Foundation, and the NSF's National Optical-Infrared Astronomy Research Laboratory. The WIYN Observatory is a joint facility of the NSF's NOIRLab, Indiana University, the University of Wisconsin-Madison, Pennsylvania State University, the University of Missouri, the University of California-Irvine, and Purdue University. DRC and CAC acknowledge partial support from NASA Grant 18-2XRP18_2-0007, and CAC acknowledges that this research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004). This research has made use of the Exoplanet Follow-up Observing Program (2022) website and NASA Exoplanet Archive (2019), which are operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. Gemini North ('Alopeke), Gemini South (Zorro), WIYN (NESSI) Figure 8: _Left:_ Visual orbit solutions for KOI 307. The primary star is positioned at the origin (black cross), and the relative positions of the secondary component are marked with colored points. A random subset of the accepted orbital solutions from orbitize! are shown in grey. The green arrow shows the Gaia proper motion of the primary star (in a single year), which is inconsistent with the observed motion of the secondary. _Right:_ The observed changes in position angle and relative separation over time, plotted against the possible orbital solutions. Figure 7: _Left:_ Visual orbit solutions for KOI 270. The primary star is positioned at the origin (black cross), and the relative positions of the secondary component are marked with colored points. 
A random subset of the accepted orbital solutions from orbitize! are shown in grey. The green arrow shows the Gaia proper motion of the primary star (in a single year), which is inconsistent with the observed motion of the secondary. _Right:_ The observed changes in position angle and relative separation over time, plotted against the possible orbital solutions. Figure 10: _Left:_ Visual orbit solutions for KOI 1961. The primary star is positioned at the origin (black cross), and the relative positions of the secondary component are marked with colored points. A random subset of the accepted orbital solutions from orbitize! are shown in grey. The green arrow shows the Gaia proper motion of the primary star (in a single year), which is inconsistent with the observed motion of the secondary. _Right:_ The observed changes in position angle and relative separation over time, plotted against the possible orbital solutions. Figure 9: _Left:_ Visual orbit solutions for KOI 1613. The primary star is positioned at the origin (black cross), and the relative positions of the secondary component are marked with colored points. A random subset of the accepted orbital solutions from orbitize! are shown in grey. The green arrow shows the Gaia proper motion of the primary star (in a single year), which is inconsistent with the observed motion of the secondary. _Right:_ The observed changes in position angle and relative separation over time, plotted against the possible orbital solutions. Figure 11: _Left:_ Visual orbit solutions for KOI 2124. The primary star is positioned at the origin (black cross), and the relative positions of the secondary component are marked with colored points. A random subset of the accepted orbital solutions from orbitize! are shown in grey. The green arrow shows the Gaia proper motion of the primary star (in a single year), which is inconsistent with the observed motion of the secondary. _Right:_ The observed changes in position angle and relative separation over time, plotted against the possible orbital solutions. Figure 12: _Left:_ Visual orbit solutions for KOI 3234. The primary star is positioned at the origin (black cross), and the relative positions of the secondary component are marked with colored points. A random subset of the accepted orbital solutions from orbitize! are shown in grey. The green arrow shows the Gaia proper motion of the primary star (in a single year), which is inconsistent with the observed motion of the secondary. _Right:_ The observed changes in position angle and relative separation over time, plotted against the possible orbital solutions. Figure 14: _Left:_ Visual orbit solutions for KOI 4252. The primary star is positioned at the origin (black cross), and the relative positions of the secondary component are marked with colored points. A random subset of the accepted orbital solutions from orbitize! are shown in grey. The green arrow shows the Gaia proper motion of the primary star (in a single year), which is inconsistent with the observed motion of the secondary. _Right:_ The observed changes in position angle and relative separation over time, plotted against the possible orbital solutions. Figure 13: _Left:_ Visual orbit solutions for KOI 3456. The primary star is positioned at the origin (black cross), and the relative positions of the secondary component are marked with colored points. A random subset of the accepted orbital solutions from orbitize! are shown in grey. 
The green arrow shows the Gaia proper motion of the primary star (in a single year), which is inconsistent with the observed motion of the secondary. _Right:_ The observed changes in position angle and relative separation over time, plotted against the possible orbital solutions. Figure 16: _Left:_ Visual orbit solutions for TOI 271. The primary star is positioned at the origin (black cross), and the relative positions of the secondary component are marked with colored points. A random subset of the accepted orbital solutions from orbitize! are shown in grey. The green arrow shows the Gaia proper motion of the primary star (in a single year), which is inconsistent with the observed motion of the secondary. _Right:_ The observed changes in position angle and relative separation over time, plotted against the possible orbital solutions. Figure 15: _Left:_ Visual orbit solutions for KOI 5971. The primary star is positioned at the origin (black cross), and the relative positions of the secondary component are marked with colored points. A random subset of the accepted orbital solutions from orbitize! are shown in grey. The green arrow shows the Gaia proper motion of the primary star (in a single year), which is inconsistent with the observed motion of the secondary. _Right:_ The observed changes in position angle and relative separation over time, plotted against the possible orbital solutions. Figure 17: _Left:_ Visual orbit solutions for TOI 1287. The primary star is positioned at the origin (black cross), and the relative positions of the secondary component are marked with colored points. A random subset of the accepted orbital solutions from orbitize! are shown in grey. The green arrow shows the Gaia proper motion of the primary star (in a single year), which is inconsistent with the observed motion of the secondary. _Right:_ The observed changes in position angle and relative separation over time, plotted against the possible orbital solutions. Figure 18: _Left:_ Visual orbit solutions for EPIC 212303338. The primary star is positioned at the origin (black cross), and the relative positions of the secondary component are marked with colored points. A random subset of the accepted orbital solutions from orbitize! are shown in grey. The green arrow shows the Gaia proper motion of the primary star (in a single year), which is inconsistent with the observed motion of the secondary. _Right:_ The observed changes in position angle and relative separation over time, plotted against the possible orbital solutions.
2309.10818
SlimPajama-DC: Understanding Data Combinations for LLM Training
This paper aims to understand the impacts of various data combinations (e.g., web text, Wikipedia, GitHub, books) on the pretraining of large language models using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source dataset, which has been refined and further deduplicated to 627B tokens from the extensive 1.2T token RedPajama dataset contributed by Together. We have termed our research as SlimPajama-DC, an empirical analysis designed to uncover fundamental characteristics and best practices associated with employing SlimPajama in the training of large language models. During our research with SlimPajama, two pivotal observations emerged: (1) Global deduplication vs. local deduplication. We analyze and discuss how global (across different sources of datasets) and local (within the single source of dataset) deduplications affect the performance of trained models. (2) Proportions of highly-deduplicated multi-source datasets in the combination. To study this, we construct six configurations on SlimPajama dataset and train individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best configuration outperforms the 1.3B model trained on RedPajama using the same number of training tokens by a significant margin. All our 1.3B models are trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16 mixed precision. We further extend our discoveries (such as increasing data diversity is crucial after global deduplication) on a 7B model with large batch-size training. Our SlimPajama-DC models are available at: https://huggingface.co/MBZUAI-LLM/SlimPajama-DC and the separate SlimPajama-DC datasets are available at: https://huggingface.co/datasets/MBZUAI-LLM/SlimPajama-627B-DC.
Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing
2023-09-19T17:59:54Z
http://arxiv.org/abs/2309.10818v3
# **SlimPajama-DC**: Understanding Data Combinations for LLM Training ###### Abstract This paper aims to understand the impacts of various data combinations (e.g., web text, wikipedia, github, books) on the training of large language models using SlimPajama. SlimPajama [33] is a rigorously deduplicated, multi-source dataset, which has been refined and further deduplicated to 627B tokens from the extensive 1.2T tokens RedPajama dataset [7] contributed by Together. We've termed our research as **SlimPajama-DC**, an empirical analysis designed to uncover fundamental characteristics and best practices associated with employing SlimPajama in the training of large language models. During our research with SlimPajama, two pivotal observations emerged: **(1)** Global deduplication vs. local deduplication. We analyze and discuss how global (across different sources of datasets) and local (within the single source of dataset) deduplications affect the performance of trained models. **(2)** Proportions of high-quality/highly-deduplicated multi-source datasets in the combination. To study this, we construct six configurations of SlimPajama dataset and train individual ones using 1.3B Cerebras-GPT [11] model with Alibi [28] and SwiGLU [32]. Our best configuration outperforms the 1.3B model trained on RedPajama using the same number of training tokens by a significant margin. All our 1.3B models are trained on Cerebras 16\(\times\) CS-2 cluster with a total of 80 PFLOP/s in bf16 mixed precision. We further extend our discoveries (such as _increasing data diversity is crucial after global deduplication_) on a 7B model with large batch-size training. Our models and the separate SlimPajama-DC datasets are available at: link1 and original SlimPajama is at: link2. ###### Contents * 1 Introduction Dataset Overview * 2.1 Number of Tokens * 2.2 Dataset Token Frequency Statistics * 2.3 Dataset Processing Procedure * 2.3.1 Low-length Document Filtering * 2.3.2 Global Deduplication * 3 Dataset Combination Configurations * 3.1 SlimPajama * 3.2 RefinedWeb * 4 Network Architecture and Training Details * 4.1 Network Architecture * 4.2 Training Details * 5 Results and Analysis * 5.1 Huggingface Leaderboard Evaluation with Harness * 5.2 More Evaluations * 5.3 Training Loss * 6 Application: Large Batch-size Training on 7B * 6.1 7B Training Data Combination * 6.2 7B Model Training Configurations * 6.3 Fast Training with Large Batch-size * 6.4 Progressive Training on Weight Decay * 6.5 Results of Pre-training and Instruction Tuning * 7 Related Work * 7.1 RedPajama, SlimPajama and Others. * 7.2 Data Processing and Optimization Approaches * 7.3 Data Combination for Training Large Language Models * 7.4 Large Batch Training for Large Language Models * 8 Conclusion * A Data Proportion Details * B MMLU ## 1 Introduction The success of modern large-scale models is deeply rooted in their training data. For large language models, the emphasis is not merely on generic text but on "diverse text". To guarantee the model's linguistic expertise and its comprehensive understanding of the world, this text must span a broad spectrum of domains, genres, languages, and more. Consequently, the composition of the pretraining data domains, such as Github, Wikipedia, books, and web text like CommonCrawl, plays a critical role in the performance of large language models. In our research, we delve into the domain/source weightings of training data. 
Leveraging **SlimPajama-DC**, we investigate two primary areas: (1) global-level and local-level deduplication, and (2) the efficacy of various combinations of thoroughly deduplicated datasets. The first emphasis basically encourages the model to be trained on all sources as no cross-domain overlaps inside, and the second helps us understand how to manage the integration and proportions of diverse domains, especially as datasets for LLM training continue to expand in variety. **Generic Deduplication.** Multi-source datasets often combine data from various origins, each with its unique distribution of information. When training large language models, handling data redundancy is critical to ensure that the model generalizes well and does not exhibit undue biases, making training faster and more efficient. Highly deduplicated datasets ensure that the model isn't repeatedly exposed to the same or very similar data points, making the training more efficient. Redundant data can slow down convergence and might make the model overfit to frequently seen patterns. Deduplication helps in efficient utilization of the model's capacity. In general, deduplication is the process of removing duplicate data to address this redundancy. **Global Deduplication _vs._ Local Deduplication.** The global deduplication process removes duplicates from the entire combined datasets. When we're using data from multiple sources, there might be overlaps across sources. Global deduplication identifies and removes these overlapping instances irrespective of their source. In local deduplication, duplicates are removed within each individual source dataset before merging them. However, if two source datasets have overlapping data, those duplicates will still be present in the final combined dataset since deduplication was only done locally within each dataset. In most current open-source LLM training data [7, 36, 38], only local deduplication is performed within each data source, which neglects the redundancy across the different sources. Given the effects, global deduplication performed in SlimPajama is generally preferable for training large language models, especially when using multi-source datasets. It ensures a balanced representation of information and prevents the pitfalls associated with data redundancy. However, more hardware memory is naturally required by this strategy. **Different Combinations of Highly-deduplicated Datasets.** A model trained on diverse data is more likely to generalize well across various tasks. It's exposed to a wider range of vocabulary, syntax, and semantics, enabling it to handle a broad scope of queries. If diverse sources are chosen such that they represent different cultures, beliefs, and demographics, the model might be more balanced and less prone to biases. However, if many sources share common biases, the final dataset might amplify them. Different sources can provide both a breadth and depth of knowledge on various topics. Combining a technical dataset with a general news dataset, for example, would allow the model to understand both in-depth technical details and broad general knowledge. It's crucial to note that data quality often outweighs the quantity. In this work, we aim to shed light on this fascinating perspective of comprehensive data combination on SlimPajama. **Specialization vs. 
Generalization Trade-off.** In general, combining many specialized datasets can lead to a jack-of-all-trades model, which might not be as adept at specific tasks as a model trained on a specialized dataset. While the model can tackle a wide range of tasks, it might not have the depth of understanding that a specialized model might have for a particular domain. In this study, we also explore specialization and generalization ability using both individual and combined data sources. The remainder of this paper is organized as follows. In Section 2, we elaborate the details of dataset statistics, token distributions, and data processing procedure. Section 3 describes dataset combination configurations for this SlimPajama-DC study. Our model architecture and training details are provided in Section 4, followed by the results and analysis in Section 5 on the range of various tasks in the zero- and few-shot settings. Section 6 presents an application of efficient Large Batch-size (LBS) training on a 7B model. Section 7 reviews related work and Section 8 concludes this study. ## 2 Dataset Overview ### Number of Tokens SlimPajama has a total of 627B tokens across different domains, as shown in Table 1. It includes validation and test sets with 500M tokens each, and these have been cleaned to ensure no overlap with the training data. For the **SlimPajama-DC** study, our entire training dataset for each configuration contains 330B tokens after tokenization which is carefully selected from the original SlimPajama dataset. We tested different sampling strategies for different domains of our training data: (1) each token is trained only once during training, such as Commoncrawl, and (2) we perform more than one epoch for training on particular sources, such as the Wikipedia and Github domains. The detailed domain source proportions of various combinations are shown in Table 3. \begin{table} \begin{tabular}{l|c c c c c c} **Dataset** & SlimPaj. & RedPaj. & LLaMA-1 & RefinedWeb & GPT3 & MassiveText \\ \hline Commoncrawl & 52.2\% & 72.6\% & 67.0\% & 100\% & 60.0\% & 0.0\% \\ C4 & 26.7\% & 14.4\% & 15.0\% & 0.0\% & 0.0\% & 10.0\% \\ GitHub & 5.2\% & 4.9\% & 4.5\% & 0.0\% & 0.0\% & 3.0\% \\ Books & 4.2\% & 2.1\% & 4.5\% & 0.0\% & 16.0\% & 27.0\% \\ ArXiv & 4.6\% & 2.3\% & 2.5\% & 0.0\% & 0.0\% & 0.0\% \\ Wikipedia & 3.8\% & 2.0\% & 4.5\% & 0.0\% & 3.0\% & 2.0\% \\ StackExchange & 3.3\% & 1.7\% & 2.0\% & 0.0\% & 0.0\% & 0.0\% \\ WebText2 & 0.0\% & 0.0\% & 0.0\% & 0.0\% & 22.0\% & 0.0\% \\ MassiveWeb & 0.0\% & 0.0\% & 0.0\% & 0.0\% & 0.0\% & 48.0\% \\ News & 0.0\% & 0.0\% & 0.0\% & 0.0\% & 0.0\% & 10.0\% \\ \hline Total tokens & 637B & 1.2T & 1.0/1.4T & 600B & 300B & 300B \\ \hline \end{tabular} \end{table} Table 1: Data source proportions for various datasets. ### Dataset Token Frequency Statistics To examine the similarity between various datasets in SlimPajama, we calculate the KL divergence between two domain distributions of token counts from different datasets, as shown in Fig. 1a. Given that distinct datasets may emphasize dissimilar token types, we subsequently delve into the differences in the distribution of these datasets across token subsets exhibiting distinct characteristics: (1) Tokens exclusively comprising letters (Fig. 1b); (2) The union set of tokens with the top 1000 frequencies on each dataset (Fig. 1c); (3) Numbers and commonly used operators, like '30', '+' and '=' (Fig. 1d); (4) Whitespace Tokens, like '\(\backslash\)n\(\backslash\)n' and '\(\backslash\)t' (Fig. 
1e); (5) Non-alphanumeric tokens, like '#' and '====' (Fig. 1f). There exists a degree of similarity in the distribution of different token subsets among RefinedWeb, Book, C4, and CommonCrawl, as well as between Github and StackExchange. Notably, when it comes to the distribution of non-alphanumeric tokens, Arxiv differs significantly from most datasets. On the distribution of whitespace tokens, RefinedWeb shows notable distinctions in comparison to Github and StackExchange. Among numbers and commonly used operators, the distribution of all datasets is relatively consistent. ### Dataset Processing Procedure SlimPajama was created by filtering low-length documents and applying MinHashLSH deduplication to the 1.2T token RedPajama dataset to reduce it to 627B tokens. RefinedWeb [27] shows that training on deduplicated data improves training compute efficiency and decreases the chance of LLMs generating memorized text from the dataset. By removing duplicate and low-length examples, it ultimately improves the training compute efficiency and model performance. The overview of the SlimPajama preprocessing pipeline is shown in Fig. 2 and the preprocessing code is under [https://github.com/Cerebras/modelzo](https://github.com/Cerebras/modelzo). Figure 1: Confusion matrix using KL divergence between the distributions of token statistics for different datasets. #### 2.3.1 Low-length Document Filtering Additional global filtering is performed to remove short, low-quality documents. After removing punctuation, consecutive spaces, newlines, tabs, and leading or trailing escape characters, documents with fewer than 200 characters were filtered out. These documents typically contain only metadata and no useful information. The low-length filter was applied to every corpus other than Books and GitHub, where short documents were found to be useful. The percentage of documents filtered out from each corpus within the SlimPajama dataset is detailed in Table 2. In total, this additional step removed 1.86% of the documents. #### 2.3.2 Global Deduplication When building SlimPajama, it was observed that every corpus included in it contained duplicates, with the most significant duplication found in CommonCrawl and GitHub. RefinedWeb [27] also found similar rates of deduplication in the CommonCrawl data. It is most common to perform deduplication within each dataset source separately [36, 7, 42, 13] to reduce implementation complexity and meet resource constraints. This local deduplication approach does not have the ability to remove overlap between data sources, which can be significant for web-scraped data. Instead, global deduplication removes duplication within and between each data source. Following [4, 27, 1, 31], global-level deduplication is performed using the MinHashLSH algorithm. To facilitate global deduplication efforts and reproducibility for other researchers, a tool designed for scalable performance is offered under the above link. Specifically, global MinHashLSH deduplication is performed using a Jaccard similarity threshold of 0.8, document signatures constructed with preprocessed lowercase 13-grams, and the schema following [22]. To unify the representation of the same content, punctuation, consecutive spaces, newlines, tabs, and leading or trailing escape characters are removed. The level of deduplication performed per data source is presented in Table 2. Figure 2: SlimPajama preprocessing pipeline.
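The global deduplication step just described can be sketched in a few lines. The following is a minimal illustration, assuming the `datasketch` library; the Jaccard threshold of 0.8 and lowercase 13-gram signatures follow the description above, while `num_perm=128` and the normalization details are assumed values for the example, not the actual SlimPajama implementation.

```python
# Minimal sketch of global MinHashLSH deduplication across mixed sources.
# Assumes the `datasketch` library; num_perm=128 is an assumed setting.
import re
from datasketch import MinHash, MinHashLSH

def normalize(text: str) -> str:
    # Unify representation: lowercase, drop punctuation, collapse whitespace.
    text = text.lower()
    text = re.sub(r"[^\w\s]", " ", text)
    return re.sub(r"\s+", " ", text).strip()

def shingles(text: str, n: int = 13):
    # Preprocessed lowercase character 13-grams, as described above.
    text = normalize(text)
    return {text[i:i + n] for i in range(max(len(text) - n + 1, 1))}

def minhash(text: str, num_perm: int = 128) -> MinHash:
    m = MinHash(num_perm=num_perm)
    for g in shingles(text):
        m.update(g.encode("utf-8"))
    return m

def global_dedup(docs):
    """docs: iterable of (doc_id, source, text) drawn from *all* sources."""
    lsh = MinHashLSH(threshold=0.8, num_perm=128)  # Jaccard threshold 0.8
    kept = []
    for doc_id, source, text in docs:
        m = minhash(text)
        if lsh.query(m):       # near-duplicate of an already-kept document,
            continue           # possibly from a *different* source
        lsh.insert(doc_id, m)
        kept.append((doc_id, source))
    return kept
```

Because a single LSH index is shared by every source, overlaps between, say, CommonCrawl and C4 are caught, which is exactly what per-source (local) deduplication misses.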
The initial implementation of MinHashLSH did not scale to trillion-token datasets like RedPajama without running out of memory. This is overcome by optimizing the memory usage and parallelization to perform deduplication on 64 CPU cores with 1.4TB peak memory usage, which can be easily decreased by creating multiple MinHashLSH objects to query. ## 3 Dataset Combination Configurations ### SlimPajama **Combination Strategies.** As shown in Table 3, the adjusted domain weights establish a new training distribution. Using this distribution, we adopt a standard training approach to learn a consistent model architecture. This architecture remains unchanged across various domain weights and is trained using data from diverse combination distributions. Across different setups, we maintain the total training tokens to be the same. Our examination of domain weights in large language model training focuses on three main areas: 1) Incrementally increasing the diversity of source combinations, as seen in configurations 1, 2, and 3. 2) With consistent data sources, we explore varying domain proportions as presented in configurations 2, 4, and 5. 3) We assess the significance of individual domain sources concerning the final model's performance. Note that given the minimal impact of ArXiv and StackExchange, we have opted to omit them from the ablations in configuration 3 to conserve training resources and keep relatively sufficient training tokens for CommonCrawl. The detailed configurations are as follows: * Configuration-1: 330B CommonCrawl * Configuration-2: 300B CommonCrawl + 30B Github * Configuration-3: 250B CommonCrawl + 30B Github + 26B Books + 24B Wikipedia * Configuration-4: 250B CommonCrawl + 80B Github (adjust sampling proportion) * Configuration-5: 250B CommonCrawl + 80B Wikipedia (adjust sampling proportion) * Configuration-6: 330B RefinedWeb CommonCrawl ### RefinedWeb RefinedWeb [27] is a massive English web dataset that is constructed using rigorous filtering and extensive deduplication of CommonCrawl. We use it as the comparison to our SlimPajama-DC CommonCrawl-only training. ## 4 Network Architecture and Training Details ### Network Architecture **Cerebras-GPT Architecture**[11]. The Cerebras-GPT architecture shares similarities with those built on GPT-3 [4], particularly in the use of an autoregressive transformer decoder. However, a key difference lies in the attention mechanism employed. While GPT-3 utilizes a mix of dense and sparse-banded attention, Cerebras-GPT consistently uses dense attention across all decoder blocks. In terms of model dimensions, we either adhere to an aspect ratio of approximately 80 (\(\text{d}_{\text{model}}/\text{n}_{\text{layers}}\)) or maintain dimensions that are congruent with GPT-3 models. Additionally, all of our models are trained to handle a maximum sequence length of 2,048 tokens. The detailed architecture is shown in Table 4. **ALiBi**[28]. ALiBi introduces a more streamlined and efficient positional approach called _Attention with Linear Biases_. Rather than adding positional embeddings to word embeddings, ALiBi applies a bias to query-key attention scores, penalizing them based on their distance. **SwiGLU**[32]. SwiGLU is an activation function which is a variant of GLU [9]. The formulation is as follows: \[\text{SwiGLU}(x,W,V,b,c,\beta)=\text{Swish}_{\beta}(xW+b)\otimes(xV+c) \tag{1}\] where \(x\) is a vector of the hidden representation at a particular position in the sequence, \(W,V\) are weight matrices, and \(b,c\) are bias vectors.
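As an illustration of Eq. (1), a minimal PyTorch sketch of a SwiGLU feed-forward block is given below, taking \(\beta=1\) so that Swish reduces to SiLU. The module layout, bias terms, and output projection are our own choices for the example and are not taken from the actual training code.

```python
# Minimal sketch of a SwiGLU feed-forward block following Eq. (1), beta = 1.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLU(nn.Module):
    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.w = nn.Linear(d_model, d_ff)    # x W + b
        self.v = nn.Linear(d_model, d_ff)    # x V + c
        self.out = nn.Linear(d_ff, d_model)  # projection back to d_model

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # SwiGLU(x) = Swish(xW + b) * (xV + c), elementwise product
        return self.out(F.silu(self.w(x)) * self.v(x))

# e.g. filter size 5,461 on a 2,048-wide model, matching the hyperparameters
# reported in the training details below
block = SwiGLU(d_model=2048, d_ff=5461)
y = block(torch.randn(2, 16, 2048))   # (batch, sequence, d_model)
```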
\begin{table} \begin{tabular}{l|c|c|c|c|c|c} Model & n\_params & n\_layers & d\_model & n\_heads & d\_heads & batch size & learning rate \\ \hline GPT-3 XL & 1.3B & 24 & 2,048 & 24 & 128 & 1M & 2.0\(\times\)10-4 \\ **Our DC** & 1.3B & 24 & 2,048 & 24 & 128 & 2M & 1.2\(\times\)10-2 \\ GPT-3 & 6.7B & 32 & 4,096 & 32 & 128 & 2M & 1.2\(\times\)10-4 \\ LLaMA & 6.7B & 32 & 4,096 & 32 & 128 & 4M & 3.0\(\times\)10-4 \\ **Our LBS** & 6.7B & 32 & 4,096 & 32 & 128 & **14.3M** & 1.8\(\times\)10-4 \\ \end{tabular} \end{table} Table 4: Detailed model sizes, architectures, and optimization hyperparameters. Our LBS model details are presented in Sec. 6. \begin{table} \begin{tabular}{l|c|c|c|c|c|c|c} & sub dataset & DC-1 & DC-2 & DC-3 & DC-4 & DC-5 & DC-6 \\ \hline \multirow{8}{*}{SlimPajama} & Commoncrawl & 100.0\% & 90.9\% & 75.8\% & 75.8\% & 75.8\% & 0.0\% \\ & C4 & 0.0\% & 0.0\% & 0.0\% & 0.0\% & 0.0\% & 0.0\% \\ & GitHub & 0.0\% & 9.1\% & 9.1\% & 24.2\% & 0.0\% & 0.0\% \\ & Books & 0.0\% & 0.0\% & 7.9\% & 0.0\% & 0.0\% & 0.0\% \\ & ArXiv & 0.0\% & 0.0\% & 0.0\% & 0.0\% & 0.0\% & 0.0\% \\ & Wikipedia & 0.0\% & 0.0\% & 7.3\% & 0.0\% & 24.2\% & 0.0\% \\ & StackExchange & 0.0\% & 0.0\% & 0.0\% & 0.0\% & 0.0\% & 0.0\% \\ \hline RefinedWeb & Commoncrawl & 0.0\% & 0.0\% & 0.0\% & 0.0\% & 0.0\% & 100.0\% \\ \hline Total (Tokens) & & 330B & 330B & 330B & 330B & 330B & 330B \\ \end{tabular} \end{table} Table 3: Six configurations of sub-dataset combinations in SlimPajama. ### Training Details **Tokenizer.** We use an adapted GPT-NeoX [2] BPE-based tokenizer similar to that used in GPT-2 for all of our experiments, which has a vocabulary size of 50,277. Our entire training dataset for each configuration contains 330B tokens after tokenization, and each model takes about 2.5 days on Cerebras 16\(\times\) CS-2S cluster. **Optimizer.** We employ the AdamW optimizer [26] to train our models, adopting these specific hyper-parameters: \(\beta_{1}\) = 0.9, \(\beta_{2}\) = 0.95, and eps = 1.0e-08. Our chosen learning rate follows a linear scheduler, culminating in a final learning rate that's 10% of its peak value. Additionally, we apply a weight decay of 0.1, limit the gradient using a clip value of 1.0, and implement a 150-step warmup. **Other Hyperparameters.** In our model, the filter size is 5,461, hidden size is 2,048 and attention dropout rate is 0. _SwiGLU_ is used as the nonlinearity and _alibi_ is used for position embedding. _Mixed precision_ and _bfloat16_ are employed during model training. More hyperparameters are shown in Table 4. ## 5 Results and Analysis This section presents the analytical experiments and results on different combinations of SlimPajama. We first discuss the results following Huggingface Leaderboard Evaluation. Then, we demonstrate the importance of global deduplication and a diverse range of data sources in enhancing LLM's performance by conducting additional comprehensive evaluations across various topics. Finally, we visualize the training loss curves of different data domain combinations and provide insights on how they connect to the models' performance. ### Huggingface Leaderboard Evaluation with Harness Following the Huggingface Leaderboard Evaluation [12], we also assess our models on four key benchmarks using the Eleuther AI Language Model Evaluation Harness [14]. This unified framework facilitates the evaluation of generative language models across a broad scope of tasks. 
Specifically, our tests comprised: 1) **AI2 Reasoning Challenge (25-shot)**[6]: This entails a series of grade-school level science questions. 2) **HellaSwag (10-shot)**[41]: This benchmark gauges commonsense inference. While straightforward for humans, with an average accuracy of 95%, it poses challenges for state-of-the-art models. 3) **MMLU (5-shot)**[16]: Designed to assess a text model's multitask proficiency, this test spans 57 diverse tasks, including elementary mathematics, US history, computer science, law, among others. 4) **TruthfulQA (0-shot)**[23]: This evaluates a model's inclination to echo inaccurate information frequently encountered online. However, it's pertinent to note that within the Harness, TruthfulQA is essentially a 6-shot task, as it consistently commences with six examples, even when initialized with zero for the number of few-shot examples. As shown in Table 5, with the exception of DC-5, our average results are all better than RedPajama-1.3B which is also trained on 330B tokens. Among our combinations, the DC-1 (which relies solely on SlimPajama Commoncrawl) achieves the highest scores for ARC and MMLU among all tested configurations. Yet, its performance on TruthfulQA ranks at the bottom. On the other hand, DC-3 obtains the top average accuracy across all SlimPajama data combinations, while DC-6 stands out with the best results on HellaSwag and superior average performance across the board. A potential strategy to harness the strengths of each configuration might involve a sequential training process on DC-1, DC-3, and DC-6. Furthermore, SlimPajama is built using global deduplication across all sources. This suggests that merging all domains typically yields better results than selective combinations, given the absence of overlaps among different domain datasets. This also highlights the importance of global deduplication and a diverse range of data sources in enhancing LLM overall performance. ### More Evaluations As shown in Table 6, we present additional evaluations across various domains to investigate the fine-grained capabilities offered by different data combinations. Except for DC-6 (model trained on RefinedWeb data), incorporating more sources, such as DC-3, typically leads to improved average performance. Upon analysis, we find that specific mixtures excel in particular evaluation benchmarks. For example, DC-1 obtains the highest accuracy in the arc challenge and race. Meanwhile, DC-3 outperforms others in the wsc273, swag, and pawsx, and DC-5 emerges as the top performance in the xstory cloze evaluation. Moreover, all of our configurations are superior in the average performance over the comparisons of GPT-neo-1.3B [3] and RedPajama-1.3B [7]. \begin{table} \begin{tabular}{l|c|c c c c} **Model** & **Average** & **ARC** & **HellaSwag** & **MMLU** & **TruthfulQA** \\ \hline Cerebras-GPT-1.3B [11] & 33.5 & 26.3 & 38.5 & 26.6 & 42.7 \\ GPT-neo-1.3B [3] & 36.0 & 31.2 & 48.5 & 24.8 & 39.6 \\ RedPajama-1.3B [7] & 38.0 & 37.2 & 55.8 & 24.9 & 34.3 \\ \hline DC-1-1.3B & 38.5 & 36.3 & 56.0 & 27.0 & 34.8 \\ DC-2-1.3B & 38.4 & 33.9 & 55.5 & 25.7 & 38.6 \\ DC-3-1.3B & **38.6** & 34.7 & 56.0 & 25.6 & 38.0 \\ DC-4-1.3B & 38.5 & 35.2 & 54.7 & 25.7 & 38.3 \\ DC-5-1.3B & 37.6 & 33.4 & 53.3 & 26.0 & 37.6 \\ \hline DC-6-1.3B & **41.0** & 35.1 & 64.7 & 26.2 & 37.9 \\ \end{tabular} \end{table} Table 5: Results of six dataset combination configurations following Hugging-face Leaderboard Evaluation [12] with Harness [14]. 
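The "Average" column in Table 5 appears to be the unweighted mean of the four benchmark scores; as a quick worked check (a sketch under that assumption), the DC-3 and DC-6 rows reproduce their reported averages:

```python
# Worked check of the Table 5 "Average" column, assuming an unweighted mean.
dc3 = [34.7, 56.0, 25.6, 38.0]   # ARC, HellaSwag, MMLU, TruthfulQA
dc6 = [35.1, 64.7, 26.2, 37.9]
print(round(sum(dc3) / 4, 1))    # 38.6, matching the DC-3 row
print(round(sum(dc6) / 4, 1))    # 41.0, matching the DC-6 row
```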
**Risk of random guessing score on 1.3B models.** It is widely recognized that small models, such as the 1.3B variant, may struggle to achieve satisfactory predictions on specific benchmarks like MMLU. Their results could resemble random choices, not truly capturing the model's actual capabilities. To more accurately showcase a model's true potential and reflect the ability of different data combinations, we introduce a novel metric RRGS (risk of random guessing score) to evaluate the degree of random guessing. Since 25% in MMLU represents the baseline score for a guess, this metric evaluates the variance using average \(\ell_{1}\) distance around this base value across all sub-items. A larger variance would suggest a reduced likelihood of predictions resulting from mere chance. Given a MMLU score vector \(X\) of length \(N\) with sub-item scores \(s_{1},s_{2},\ldots,s_{n}\), RRGS can be formulated as: \[\text{RRGS}=1-\frac{1}{N}\sum_{i=1}^{N}(|s_{i}-0.25|) \tag{2}\] where \(i\) is the index of sub-item in MMLU and \(N\) is the number of items of MMLU. This metric utilizes the probabilities of variance to baseline 25%, aiming to assess the extent to which a model's prediction resembles random guessing on the MMLU benchmark. The metric has three variations: (1) Consider only items with scores exceeding 25%, i.e., \(i\in\{\text{positive item set}\}\). (2) Focus solely on items with scores less than 25%, i.e., \(i\in\{\text{negative item set}\}\). (3) Include all items and sum them up. The results are shown in Table 7. Generally, a model with a higher MMLU average score will have a low risk of random guessing. \begin{table} \begin{tabular}{l|c c|c c c c c c|c} **Eval** & **Neo [3]** & **RedPaj. [7]** & **DC-1** & **DC-2** & **DC-3** & **DC-4** & **DC-5** & **DC-6** & **LBS** \\ & \multicolumn{3}{c|}{**1.3B**} & \multicolumn{3}{c}{**1.3B**} & \multicolumn{3}{c|}{**1.3B**} & \multicolumn{3}{c|}{**7B**} \\ \hline humaneval (p@1) & - & - & - & - & - & - & - & - & 9.5 \\ bigbench* & 32.4 & 33.1 & 33.8 & 32.0 & 34.0 & **34.5** & 33.0 & 33.8 & 35.0 \\ arc\_easy & 61.1 & 66.7 & 66.1 & **66.9** & 66.5 & 66.4 & 65.5 & 66.8 & 74.7 \\ arc\_challenge & 25.9 & 33.5 & **36.3** & 33.9 & 34.7 & 35.2 & 33.4 & 35.1 & 44.3 \\ bool & 62.0 & 55.6 & 63.4 & **65.6** & 62.5 & 64.2 & 50.6 & 61.7 & 66.9 \\ PIQA & 71.1 & 72.4 & 70.8 & 69.2 & 70.7 & 68.6 & 67.8 & **75.7** & 77.4 \\ race & 34.1 & 34.4 & **37.3** & 36.7 & **37.3** & 36.5 & 34.6 & 36.6 & 38.2 \\ winogrande & 54.9 & 60.5 & 60.3 & 59.7 & 59.8 & 60.1 & 60.5 & **61.2** & 64.4 \\ openbookqa & 33.6 & 33.0 & 35.6 & 34.8 & 34.0 & 34.0 & 34.4 & **37.4** & 39.8 \\ copa & 69.0 & 77.0 & 70.0 & 73.0 & 75.0 & 74.0 & 70.0 & **81.0** & 86.0 \\ wsc273 & 75.1 & 78.0 & 76.2 & 78.0 & **81.0** & 76.9 & 76.6 & 79.5 & 85.0 \\ swag & 67.8 & 68.8 & 69.2 & 68.5 & **70.1** & 67.8 & 68.3 & 70.0 & 73.8 \\ pawsx & 50.6 & 51.5 & 51.4 & 52.3 & **53.1** & 52.2 & 50.5 & 50.8 & 54.7 \\ xstory\_cloze* & 51.1 & 51.5 & 51.0 & 51.3 & 52.0 & 51.5 & **52.2** & 51.6 & 55.3 \\ \hline Average & 53.0 & 55.1 & 55.5 & 55.5 & 56.2 & 55.5 & 53.6 & **57.0** & 61.2 \\ \end{tabular} \end{table} Table 6: Results of six dataset combination configurations of 1.3B models and our LBS-7B model details are presented in Sec. 6. Bigbench is evaluated under 3-shot using the average of multiple choice grade. Arc_easy and arc_challenge are evaluated using 5-shot, 25-shot, and 25-shot, respectively. All other evaluation benchmarks are tested on 0-shot. 
* represents the results are averaged across multiple sub-items inside each benchmark dataset. guessing probability. It is also crucial to employ a broader and more diverse set of benchmarks, such as in Table 6. Additionally, for a detailed understanding, we have cataloged the complete MMLU results for every sub-item in Table 12. This offers a lens into the knowledge assimilated by the pretrained models within each sub-domain on this comprehensive benchmark. ### Training Loss Fig. 3 presents the training loss curves for various data combinations, from which several insights can be observed: 1) While DC-6 demonstrated the highest average accuracy in our quantitative evaluations, its training loss was also the most substantial. This suggests that a lower training loss doesn't necessarily correlate directly with superior model performance. 2) DC-4, with a considerable portion of its data coming from code domain, exhibited the lowest training loss. This implies that as the amount of code in training increases, the training loss diminishes. 3) The training loss values for other combinations appeared to be relatively consistent with one another. \begin{table} \begin{tabular}{l|c c c c c|c} & DC-1 & DC-2 & DC-3 & DC-4 & DC-5 & DC-6 \\ \hline MMLU & 0.27 & 0.257 & 0.256 & 0.257 & 0.260 & 0.262 \\ \hline RRGSpos & **0.964** & **0.964** & 0.968 & 0.965 & 0.970 & 0.963 \\ RRGSneg & 0.974 & 0.973 & 0.975 & 0.974 & **0.969** & 0.973 \\ RRGSall & **0.968** & **0.968** & 0.971 & 0.969 & 0.970 & 0.967 \\ \end{tabular} \end{table} Table 7: Evaluation of random guessing probability on sub-items of MMLU. Figure 3: Illustration of training loss curves. DC-2’s curve closely resembles those of DC-3 and 5, so it has been excluded from the figure for clarity. Application: Large Batch-size Training on 7B ### 7B Training Data Combination Our 7B large batch size (LBS) training dataset is primarily based on Slimpajama, however, to obtain a sufficient proportion of web text, we have incorporated additional web data from the Commoncrawl corpus in RedPajama. We have also adjusted the proportions of various data sources in line with our 1.3B model training. For instance, we elevate the sampling frequency of Github and Wikipedia and increase the diversity of data sources by adding S2orc [25] and Stack-Markdown [21] following [38], as detailed in Table 8. It's crucial to understand that our primary focus is not solely on achieving the best performance. Instead, we place a higher emphasis on optimizing data combinations and ensuring the convergence of training large language models with large batch sizes. Consequently, we continue to utilize the SlimPajama/RedPajama Commoncrawl instead of higher-quality RefinedWeb. ### 7B Model Training Configurations **Architecture.** For the 7B model training, we adopt MPT architecture [38], the max sequence length is 2,048. We use Triton [35] with Flash Attention [8] as the self-attention implementation. Alibi is enabled to make model more flexible for input length extrapolation. The model's total number of parameters is 6.7B. **Tokenizer.** The tokenizer used for 7B training is adapted GPT-NeoX-20b. Following [38], the model's vocabulary size is adjusted to 50,432 for improved mfu and leaving a few tokens available that can be used in subsequent training. **Optimizer.** We employ the AdamW optimizer to train our models, adopting these specific hyper-parameters: \(\beta_{1}\) set at 0.9 and \(\beta_{2}\) at 0.95. 
We adopt a learning rate schedule that traces a cosine pattern, concluding with a learning rate that is 10% of its maximum value. Along with this, we use a multi-stage weight \begin{table} \begin{tabular}{l|l} dataset & proportion \\ \hline Slimpj.Arxiv & 4\% (54B) \\ Slimpj.StackExchanges & 3.2\% (43B) \\ Slimpj.Github & 4.9\% (66B) \\ Slimpj.Wikipedia & 7.5\% (101B) \\ Slimpj.Books & 4.3\% (57B) \\ Slimpj.C4 & 17.6\% (236B) \\ S2orc & 3\% (40B) \\ Markdown & 3\% (40B) \\ Slimpj.CC & 34.5\% (462B) \\ Redpaj.CC (ext.) & 18\% (241B) \\ \hline Total & 1.34T \\ \end{tabular} \end{table} Table 8: Data combination of 7B model training in large batch size style. decay scheduler as described in Sec. 6.4, cap the gradient with a clipping value of 1.0, and use a warmup spanning 2,000 steps. **System and platform.** For our 7B model training with a large batch size, we use 232 NVIDIA A100 GPUs (80G). We employ llm-foundry [37] as the training platform. We use FSDP with activation checkpointing enabled to save memory consumption. We also use the automatic mixed precision of bf16 in training. ### Fast Training with Large Batch-size Large batch training allows a larger learning rate, leading to a faster convergence of large models. Also, utilizing a larger batch size can optimize hardware resource usage to make training procedures more efficient. Additionally, fewer batches are required, which further accelerates the training process. As shown in Table 9, our large batch training scheme achieves much higher throughput and mfu than LLaMA [36] and MPT [38] with fewer total training GPU hours. Overall, in a convex optimization framework, leveraging a larger portion of the dataset typically leads to enhanced results. However, for most large deep models that involve non-convex optimizations, the precise nature of the loss landscape remains elusive, making the scenario more intricate. Many prior works [17, 19] have noticed that training with larger batches often results in overfitting compared to those using smaller batch sizes for the same network. When utilizing large batch training, there is a propensity for the model to become stuck or even gravitate towards potential saddle points within the loss landscape. While large batch training methods often focus on the nearest relative minima they encounter, networks trained with smaller batches usually navigate the loss landscape more thoroughly before committing to an optimal minimum. The minima reached through large batch training can be distinctly different from those achieved with smaller batch training methods. In the following, we introduce an approach to mitigate overfitting when training large language models in a large batch-size scheme. ### Progressive Training on Weight Decay Prior work [24] observed that dropout operation is utilized only in the early stages of training and is deactivated in subsequent phases. Models that incorporate this early dropout strategy tend to exhibit reduced final training loss compared to models that do not use dropout. In contrast to this, our approach \begin{table} \begin{tabular}{l|c|c|c|c|c} model & batch size & \# GPUs (A100-80G) & throughput & mfu & GPU-hours \\ \hline LLaMA-7B & 4M & – & – & – & 82,432 \\ MPT-7B & 4M & 232 & 3,310 & 0.4575 & 84.351 \\ LBS-7B (ours) & **14M** & 232 & **3,626** & **0.5011** & **76,999** \\ \end{tabular} \end{table} Table 9: Training speed of throughput (tokens per sec on each GPU), _model FLOPs utilization_ (mfu) [5] and total GPU-hours (per trillion training tokens). 
e a novel training strategy for large language models, wherein the training process is segmented into various stages. Within each stage, a distinct weight decay is applied to the model to serve specific objectives. We've termed this approach _Progressive Training on Weight Decay_ (PTWD). Owing to this methodology, our model, even when trained with a large batch size and extremely small iterations, achieves smooth convergence. As illustrated in Fig. 4, our training strategy consists of three distinct phases. Initially, we negate weight decay by setting it to zero and allow the model to train until full convergence is achieved. It usually can reach a lower loss level within this stage compared to using weight decay, even if it slightly overfits. Following this, in the second phase, we introduce a substantial weight decay, with a value of 0.5 in our experiments, to suppress the overfitting. Once the loss values stabilize, we transition to the third phase, wherein a standard weight decay of 0.1 is implemented, a value consistent with many other LLMs training. Intriguing, each phase spontaneously converges to roughly 1/3 of the total training budget, ensuring effective allocation of training budget throughout the process. ### Results of Pre-training and Instruction Tuning The results from our pretraining and subsequent instruction tuning on ShareGPT dataset are presented in Table 10. Notably, after instruction tuning, there is a significant enhancement in MMLU and TruthfulQA metrics. In contrast, the performance on ARC and HellaSwag has a slight decrease. On the whole, the average accuracy witnessed a substantial boost following instruction tuning. More evaluation results on the pretrained LBS model are provided in Table 6. Figure 4: Loss curve of our LBS-7B training. ## 7 Related Work ### RedPajama, SlimPajama and Others. RedPajama [7] aims to develop open-source large language models and begins by replicating the LLaMA training dataset [36], which boasts over 1.2 trillion tokens. This collaborative effort involves entities such as Together, Onto-cord.ai, ETH DS3Lab, Stanford CRFM, Hazy Research, and the MILA Quebec AI Institute. SlimPajama [33] stands as the highly deduplicated, multi-source, open-source dataset tailored for training large language models. This dataset emerged by refining and eliminating duplicates from the whole 1.2T token RedPajama dataset. Through meticulous filtering of subpar data and repetitive content, it reduced the dataset size by 49.6%, scaling it down from 1.2T to 627B tokens. SlimPajama provides superior quality and computational efficiency for training tasks than the original RedPajama dataset. Other efforts also have been made in this direction to construct diverse datasets, such as Pile [13]. It is an English text corpus of 825 GiB, which is designed for the training of large-scale language models with increased training dataset diversity to improve general cross-domain knowledge and downstream generalization capability. It contains a combination of 22 distinct, high-quality subsets. These subsets incorporate both pre-existing and freshly curated data, with a significant portion sourced from scholarly or professional domains. ### Data Processing and Optimization Approaches There have been several advancements in data processing and optimization. 
The seminal method of importance sampling [20] stands out as a Monte Carlo approach designed to evaluate attributes of a particular distribution, even when the samples are drawn from a distribution that differs from the one under exploration. SlimPajama's deduplication mechanism is an adaptation of importance sampling, incorporating a heuristic that values unique data points. Recently, several data selection frameworks [18, 15, 34, 40] have been introduced, inspired by the concept of importance sampling. Among them, DSIR [40] presents a framework for the data selection challenge by aiming to choose a subset from a large, unlabeled raw dataset that aligns with a specific target distribution, given a set of unlabeled target examples. It builds upon the traditional importance resampling method, adapting it for data selection in large-scale models. DSIR operates as a scalable algorithm, determining importance weights within a reduced feature space and then selecting data based on these \begin{table} \begin{tabular}{l|c|c c c c} **Model** & **Average** & **ARC** & **HellaSwag** & **MMLU** & **TruthfulQA** \\ \hline Ours-LBS-7B-Base & 44.1 & 44.3 & 69.8 & 26.1 & 36.1 \\ Ours-LBS-7B-Instruct & 46.4 & 43.5 & 68.0 & 32.1 & 42.1 \\ \end{tabular} \end{table} Table 10: Results of our large batch-size (LBS) trained 7B models following Huggingface Leaderboard Evaluation [12] using Harness [14]. importance resampling weights. In [34], the authors delve into the relationship between error scaling and dataset size. Their theoretical exploration suggests that by using a robust data pruning metric, which prioritizes which training examples to remove, the proposed method can suppress traditional power law scaling, potentially reaching exponential scaling for pruned dataset sizes. ### Data Combination for Training Large Language Models The training of large language models, such as GPT [29, 30, 4] and BERT [10], requires significant amounts of data to capture and generalize over the vast intricacies of human language. As a result, researchers often combine data from various sources, such as web text, Github, Books, ArXiv, Wikipedia, etc. There are some related work and difficulties that have been explored in the context of data combination for training large language models. (1) Concatenation of diverse datasets: One of the simplest methods for combining data is to concatenate various corpora, covering diverse topics, styles, and sources. This ensures that the model gets a broad view of the language. (2) WebText and similar corpora: For OpenAI's GPT-2, a dataset called WebText [30] was curated by scraping content from the internet. This kind of data provides a rich mix of formal, informal, factual, and opinionated text, thus offering diverse training material. (3) Balancing and weighting: Simply combining data may lead to issues if one source is overrepresented. Prior studies have applied weights to different data portions or ensure that the combined dataset is balanced in terms of sources, styles, and other criteria. For instance, DoReMi [39] first trains a small proxy model using group distributionally robust optimization across domains, generating domain weights (or mixture proportions) without relying on information from subsequent tasks. Following this, they utilize these domain weights to resample a dataset, on which then train a full-size model. 
(4) Multimodal Training: Combining text with other data forms, like images or sounds, can also enhance language model training, especially for tasks that require understanding across modalities. ### Large Batch Training for Large Language Models Large language models inherently possess a structure that supports parallelization, especially when optimized using techniques that allow for batch training. When computational resources permit, large batch sizes are favored to expedite the training of large models containing potentially millions or billions of parameters. At a fundamental level, larger batch sizes enhance the quality of each gradient update since they consider a more considerable chunk of the dataset. Conversely, a smaller batch size means that model parameter updates are based on gradients derived from a limited dataset portion. This smaller dataset slice might not comprehensively capture the intricate relationships between features and labels. Therefore, it might seem that larger batch sizes consistently offer advantages in training. However, [19] pointed out that this perspective does not factor in the model's capacity to generalize to new, unseen data, nor the intricate, non-convex optimization landscape of contemporary large models. In practice, multiple studies [17, 19] have demonstrated that while larger batch sizes might hasten convergence, they can impair a model's generalization to new datasets, irrespective of the deep network type. This observed disparity has been named as the _Generalization Gap_. A method [17] to address this gap involves starting from a smaller batch size and gradually enlarging it as training advances. In our study, we explore this problem through a new and unique angle of progressive weight decay training. ## 8 Conclusion We have presented **SlimPajama-DC**, a comprehensive study on understanding the data domain weights and combinations for training large language models. Notably, SlimPajama-DC can operate on compact models, and its advantages can be seamlessly transferred to models that are several times larger. This leads to a remarkable acceleration in training on the SlimPajama with the optimal sampling probabilities across domains for larger models. Through this, we aim to spark further exploration into data-centric methods to enhance the efficiency of large language model training.
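Referring back to the progressive weight-decay scheme of Sec. 6.4 mentioned just above, the schedule can be summarized in a few lines. The sketch below assumes three equal-length stages with the weight-decay values 0.0, 0.5 and 0.1 quoted in that section; in practice the stage boundaries are driven by convergence of the loss rather than fixed fractions of the budget.

```python
# Sketch of the Progressive Training on Weight Decay (PTWD) schedule (Sec. 6.4).
# Equal one-third stages are an assumption based on the observation that each
# phase takes roughly a third of the budget; real usage switches on convergence.
def ptwd_weight_decay(step: int, total_steps: int) -> float:
    frac = step / total_steps
    if frac < 1 / 3:
        return 0.0   # stage 1: no weight decay, train to full convergence
    elif frac < 2 / 3:
        return 0.5   # stage 2: strong decay to suppress overfitting
    return 0.1       # stage 3: standard decay, as in most LLM training

# e.g. inside a PyTorch-style training loop:
# for group in optimizer.param_groups:
#     group["weight_decay"] = ptwd_weight_decay(step, total_steps)
```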
2309.09391
Towards a dual formulation of quantum gravity via metric-curvature bijections
We prove that Riemannian metrics in General Relativity in the \emph{`normal-coordinates'} gauge are in one-to-one correspondence with curvature 2-forms. We discuss how this can be used as a change of variables in the operator formalism to construct a dual formulation of quantum gravity pertinent in the context of asymptotic safety-like approaches to quantum gravity.
Praveen Dennis Xavier
2023-09-17T22:44:48Z
http://arxiv.org/abs/2309.09391v2
# Towards a dual formulation of quantum gravity via metric-curvature bijections ###### Abstract We prove that Riemannian metrics in General Relativity in the _'normal-coordinates'_ gauge are in one-to-one correspondence with curvature 2-forms. We discuss how this can be used as a change of variables in the operator formalism to construct a dual formulation of quantum gravity pertinent in the context of asymptotic safety-like approaches to quantum gravity. Keywords:dual formulations, quantum gravity, curvature, non-perturbative methods, general relativity, asymptotic safety-like approaches, bijections, operator formalism + Footnote †: institutetext: \({}^{*}\)Université de Paris, CNRS, 91105 Orsay, France ###### Contents * 1 Introduction * 1.1 Revisiting a classical problem... * 1.2...and its quantum application * 1.3 Plan * 2 Conventions and Basics * 3 The Durand-Mendel result in Yang-Mills * 4 The Muller-Schubert-van de Ven result in gravity * 4.1 Vielbein formalism * 4.2 Recovering the spin-connection from the curvature * 4.3 Recovering the vielbein from the spin-connection * 5 Bijection between the curvature and spin-connection * 6 Bijection between the spin-connection and vielbein * 7 Bijection between the vielbein and metric * 8 Conclusions ## 1 Introduction ### Revisiting a classical problem... In Yang-Mills (YM) theory, Wu and Yang have shown that there exists connections, unrelated by gauge transformation, that have the same curvature [1]. This is known as the _field copy problem_[2]. Halpern pointed out that the curvature map restricted to connections in the _axial gauge_ is, however, injective [3]. Later, Durand and Mendel showed this also in the _Fock-Shwinger gauge_[4]. (The Fock-Schwinger gauge is preferred to the axial gauge because of its relative simplicity.) To be precise, Durand and Mendel showed that connections in the Fock-Shwinger gauge are mapped, by the curvature map, _bijectively_ to curvature 2-forms, \(F\), satisfying the 'YM Bianchi identity for curvature': \[\begin{split}& dF+ig[A\wedge F]=0,\\ \text{where}& A_{\mu}(x)=\int_{0}^{1}tx^{\nu}F_{\nu \mu}(tx)dt.\end{split} \tag{1}\] Just as in YM, in General Relativity, Riemannian metrics unrelated by coordinate transformation may have the same curvature. Muller, Schubert and van de Ven have shown, however, that the curvature map restricted to metrics in the _'normal-coordinates'_ gauge is _injective_[5]. So that we can set up a bijection, we pose the following question: what is the image of this map? In this paper we prove that the image (of the curvature map restricted to metrics in the 'normal coordinates' gauge) is the restricted set of curvature 2-forms, \(R^{a}_{b}\), satisfying what we call the _'1st and 2nd Bianchi identities for curvature'_: \[\begin{split}& R^{a}_{b}\wedge e^{b}=0,\\ \text{where}& e^{a}_{\mu}(x)=\delta^{a}_{\mu}+\int_{0}^{1}t(1-t )x^{b}x^{\nu}R^{a}_{b\nu\mu}(tx)dt\end{split} \tag{2}\] and \[\begin{split}& dR^{a}_{b}+\omega^{a}_{c}\wedge R^{c}_{b}-R^{a}_{c} \wedge\omega^{c}_{b}=0,\\ \text{where}&\omega^{a}_{b\mu}(x)=\int_{0}^{1}tx^{ \nu}R^{a}_{b\nu\mu}(tx)dt\end{split} \tag{3}\] respectively. This result proves that there is a bijection between metrics in normal coordinates and curvature 2-forms satisfying the 1st and 2nd Bianchi identities for curvature. We discuss, below, how this result can be used to derive a dual formulation of quantum gravity. 
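As a quick sanity check of the reconstruction formula appearing in (1) (not part of the original argument), the following sketch verifies symbolically, for an abelian example in two dimensions with a connection already in the radial gauge \(x^{\mu}A_{\mu}=0\), that \(A_{\mu}(x)=\int_{0}^{1}tx^{\nu}F_{\nu\mu}(tx)\,dt\) recovers the connection from its curvature. The particular gauge-field profile is an arbitrary test case of our own choosing, assuming the sympy library.

```python
# Check A_mu(x) = \int_0^1 t x^nu F_{nu mu}(t x) dt in the abelian case,
# for a 2d connection satisfying the radial (Fock-Schwinger) gauge x.A = 0.
import sympy as sp

x1, x2, t = sp.symbols("x1 x2 t", real=True)

f = 1 + x1**2 + x2**2             # arbitrary smooth test profile
A1, A2 = -x2 * f, x1 * f          # then x1*A1 + x2*A2 = 0 identically

F12 = sp.diff(A2, x1) - sp.diff(A1, x2)       # abelian curvature F = dA
F12_t = F12.subs({x1: t * x1, x2: t * x2})    # F_12 evaluated at t*x

# A_mu(x) = \int_0^1 t x^nu F_{nu mu}(t x) dt, with F_21 = -F_12
A1_rec = sp.integrate(t * x2 * (-F12_t), (t, 0, 1))   # nu = 2, mu = 1
A2_rec = sp.integrate(t * x1 * F12_t, (t, 0, 1))      # nu = 1, mu = 2

print(sp.simplify(A1_rec - A1), sp.simplify(A2_rec - A2))   # prints: 0 0
```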
###...and its quantum application A quantum theory is usually formulated in terms of operators \(\vec{q}\) (which denotes the collection \(\{q^{i}_{\alpha}\}\) - where \(\alpha\) may be a continuous index and \(i\) may be a discrete index) and their canonical conjugates \(\vec{p}\), with dynamics governed by a Hamiltonian \(H(\vec{q},\vec{p})\). If one makes a change of variables \(\vec{q}\to\vec{Q}(\vec{q})\) (these are known as point canonical transformations in Hamiltonian mechanics), one can reformulate the quantum theory in terms of \(\vec{Q}\) and its canonical conjugate \(\vec{P}\). The theory is governed by the _dual_ Hamiltonian: \(H(\vec{q},\vec{p})\to H_{\text{dual}}(\vec{Q},\vec{P})\). The corresponding phase-space path integral formulation will involve integrations over paths \(\vec{Q}(t)\) and \(\vec{P}(t)\). (If either \(\vec{Q}(t)\) or \(\vec{P}(t)\) appears at most quadratically in the phase-space action, then it can be integrated out of the path integral - as it happens in certain cases of interest.) In this way one can arrive at a _dual_ path integral formulation of the theory. In soliton physics, a change of variables has been used to calculate transition amplitudes involving solitons [6; 7; 8; 9; 10]. In the statistical mechanics problem of the Coulomb gas, a quantum change of variables has been used to prove the occurrence of a mass gap [11] (see also the discussion in [12] SS6.6). As an application of this method to the non-perturbative dynamics of Yang-Mills (YM), consider the Durand-Mendel result of SS1.1. Using this result, the connection on a spatial slice in the Fock-Shwinger gauge can be mapped bijectively to the magnetic field subject to the YM Bianchi identity for curvature. The field that is canonically conjugate to the magnetic field is the _dual connection_[13] on the spatial slice [3] (see Fig. 1). So the change of variables allows one to reformulate the theory in terms of the magnetic field and spatial dual connection. The magnetic field can be integrated out of the phase-space path integral completely (because it appears only quadratically in the phase-space action), allowing the path integral to be expressed solely in terms of the dual connection. Now, a formulation of YM in terms of the dual connection has long been a candidate for solving the non-perturbative dynamics of YM [14]: it is established that if the dual connection acquires a v.e.v. [15; 16] or a mass [17] (SS8.3.2), [12], then confinement follows. In quantum gravity, the effective field theory (EFT) formalism arises naturally because counter-terms to the Einstein action generate all interactions allowed by symmetry. In the EFT of gravity there are an infinite number of allowed interactions. The space of couplings associated with these interactions is, obviously, infinite-dimensional. The question of the high energy behaviour of the theory comes down to the renormalization group flow of the couplings with the energy scale - the initial conditions for this flow are determined by the requirement that at low-energies the theory reproduces general relativity, i.e., at low energies, all but Newton's constant and the cosmological constant are zero (these themselves being determined by low-energy, classical experiments). Since Newton's constant, \(G\), has negative mass dimension (which is the source of gravity's nonrenormalizability), the free-fixed-point (with all couplings equal to zero) is ultraviolet repulsive in the direction of \(G\). 
Therefore the couplings grow as the energy increases, and perturbation theory can no longer describe the flow accurately. Nonrenormalizability is in itself not disastrous (in fact, from this point of view, nonrenormalizability is inconsequential for the UV finiteness of the theory), but what would be disastrous for the theory (in the sense that the theory cannot be physical) is if reaction rates develop singularities at finite, but very high, energies (otherwise reaction rates are finite at all energies and everything is dandy). It was Wienberg [18] who first proposed this more general criteria. Notice that it is vastly more accommodating than the criteria of renormalizability. We will say that a theory satisfying Weinberg's criteria is _UV finite_. In addition, Weinberg [19; 20; 21] suggested that one way of, fairly surely, guaranteeing that this condition is met is to require that the couplings asymptotically approach fixed values at high energies - in this case, reaction rates can, reasonably, be expected to be finite at all energies. This goes under the name of _asymptotic safety_. Asymptotic safety appears to be a _sufficient_ condition for UV finiteness but it is certainly not a _necessary_ condition. In fact, even if the couplings diverged at finite, but high, energy (let alone asymptotically), it is not clear whether reactions rates would too. So we learn that 'nonrenormalizability' cannot be conflated with the theory being unphysical; it is simply an indication that we must look beyond perturbation theory. What Figure 1: The chain of relations. is more important than the pronouncement that gravity is nonrenormalizable is to determine the flow of the coupling constants and the reaction rates as functions of the couplings beyond perturbation theory. What is required, then, is a formulation of quantum gravity in which non-perturbative calculations are made possible. The result of this paper can be used to derive a dual formulation of quantum gravity as follows: one can bijectively map the _spatial metric_ in the normal-coordinates gauge to the spatial curvature 2-form subject to the 1\({}^{\text{st}}\) and 2\({}^{\text{nd}}\) Bianchi identities for curvature. In line with the discussions above, this change of variables allows one to reformulate quantum gravity in terms of the spatial curvature and its canonical conjugate. The change of variables from the metric to the curvature is in analogy with the change of variables from the connection to the curvature in YM. (We should also point out that, as in YM, it may happen that the spatial curvature can be integrated out of the phase-space path integral in the dual formulation of quantum gravity, allowing a complete formulation in terms of its canonical conjugate.) The expectation that this dual formulation of quantum gravity will be non-perturbative appeals to the fact that, as we have discussed, the analogous variables change in YM provides a formulation which has strong reasons to be non-perturbative. ### Plan The plan of the paper is as follows. In SS3 we review the Durand-Mendel result in YM. In SS4 we review the Muller-Schubert-van de Ven result in gravity. In SS5-SS7 - which are our original contributions - we prove the following statements respectively: * Curvatures satisfying the 1\({}^{\text{st}}\) and 2\({}^{\text{nd}}\) Bianchi identities for curvature are bijective with spin-connections satisfying the Fock-Shwinger gauge and torsionless conditions. 
* Spin-connections satisfying the Fock-Shwinger gauge and torsionless conditions are bijective with vielbeins satisfying (6.1)-(6.3). * Vielbeins satisfying (6.1)-(6.3) are bijective with metrics in normal coordinates. These statements, taken together, allow us to prove that: _curvatures satisfying the 1\({}^{\text{st}}\) and 2\({}^{\text{nd}}\) Bianchi identities for curvature are bijective with metrics in normal coordinates._ Fig. 2 below summarizes the proof. ## 2 Conventions and Basics Let \(x^{\mu}\) denote the coordinates. We adopt Greek indices for the components of tensors in the coordinate basis: \[T^{\alpha_{1}...\alpha_{p}}_{\beta_{1}...\beta_{q}}\equiv T(dx^{\alpha_{1}},...,dx^{\alpha_{p}},\partial_{\beta_{1}},...,\partial_{\beta_{q}}) \tag{2.1}\] Figure 2: A summary of the proof, with references to equation numbers. We adopt Wald's [22] (appendix B) conventions for the standard operations on differential forms: \[\text{ exterior derivative:}\quad(d\omega)_{\mu_{1}...\mu_{p+1}} =(p+1)\partial_{[\mu_{1}}\omega_{\mu_{2}...\mu_{p+1}]} \tag{2}\] \[\text{wedge product:}\quad(\omega\wedge\sigma)_{\mu_{1}...\mu_{p+ q}} =\frac{(p+q)!}{p!q!}\omega_{[\mu_{1}...\mu_{p}}\sigma_{\mu_{p+1}...\mu_{p+q}]}\] (3) \[\text{interior product:}\quad(i_{X}\omega)_{\mu_{2}...\mu_{p}} =X^{\mu_{1}}\omega_{\mu_{1}\mu_{2}...\mu_{p}} \tag{4}\] where antisymmetrization is defined in such a way that \(\omega_{\mu_{1}...\mu_{n}}=\omega_{[\mu_{1}...\mu_{n}]}\). The Lie derivative is defined as \[(\mathcal{L}_{X}T)_{\mu_{1}...\mu_{p}}=X(T_{\mu_{1}...\mu_{p}})-\sum_{i=1}^{p} T(\partial_{\mu_{1}},...,\partial_{\mu_{i-1}},[X,\partial_{\mu_{i}}], \partial_{\mu_{i+1}},...,\partial_{\mu_{p}}) \tag{5}\] and satisfies \[\mathcal{L}_{X}=\{d,i_{X}\}. \tag{6}\] As a result, \[[\mathcal{L}_{X},d]=0. \tag{7}\] We define the _radial vector field_ \[\mathfrak{r}\equiv x^{\mu}\partial_{\mu}. \tag{8}\] Since \([\mathfrak{r},\partial_{\mu}]=-\partial_{\mu}\) \[(\mathcal{L}_{\mathfrak{r}}\sigma)_{\mu_{1}...\mu_{p}}=(\mathfrak{r}+p)\sigma _{\mu_{1}...\mu_{p}} \tag{9}\] for any \(p\)-form \(\sigma\). If \(\eta\) is a \(p\)-form, less singular than \(|x|^{-p}\) near \(x=0\), then \[\mathcal{L}_{\mathfrak{r}}\eta=0\Longleftrightarrow\eta=0. \tag{10}\] This follows from the fact that the l.h.s. of (10) is equivalent to (using (9)) \[\frac{d}{dt}(t^{p}\eta_{\mu_{1}...\mu_{p}}(tx))=0 \tag{11}\] which integrates to \(\eta=0\) assuming \(\eta\) is less singular than \(|x|^{-p}\) near \(x=0\). ## 3 The Durand-Mendel result in Yang-Mills In this section we review the results of [4] which introduces several of the notions relevant for the latter discussions on gravity. Connections in Yang-Mills (YM) theory are Lie algebra valued 1-forms \(A\). Without loss of generality we may assume that the Lie algebra is a matrix algebra. The wedge product of matrix valued forms, of degree \(p\) and \(q\), is defined as \[[\omega\wedge\eta]^{a}_{b}:=\omega^{a}_{c}\wedge\eta^{c}_{b}+(-)^{pq+1}\eta^{ a}_{c}\wedge\omega^{c}_{b} \tag{12}\] (where \(a,b\) and \(c\) are the'matrix' indices). Consider the _Fock-Schwinger gauge_[23; 24] \[i_{\mathfrak{r}}A=0 \tag{3.2}\] where \(\mathfrak{r}\) is the radial vector field (2.8). This is a completely fixed gauge1. When we map connections in this gauge to field strengths via Footnote 1: Which means that no infinitesimal gauge transformation preserves the gauge condition. 
To see this, note that an infinitesimal gauge transformation \(A\to A+i[A,\omega]-\frac{1}{g}d\omega\) (where the matrix \(\omega(x)\) is the infinitesimal gauge transformation parameter) preserves the gauge condition iff \(\frac{d}{dt}\omega(tx)=0\). But since \(\omega(x)=0\) when \(|x|=\infty\) (this is because bona fide gauge transformations must approach the identity at infinity), we must have \(\omega(x)=0\). \[F=dA+\frac{ig}{2}[A\wedge A] \tag{3.3}\] there are two questions which arise: 1. is the map injective? 2. what is the image of the map? The answer to Q1 is yes. To see this, apply \(i_{\mathfrak{r}}\) to (3.3) and make use of (2.6) and (3.2): \[i_{\mathfrak{r}}F=\mathcal{L}_{\mathfrak{r}}A. \tag{3.4}\] Using (2.9) this becomes \[x^{\mu}F_{\mu\nu}=(\mathfrak{r}+1)A_{v} \tag{3.5}\] which is easily integrated, assuming the connection is not singular at \(x=0\), to give \[A_{\mu}(x)=\int_{0}^{1}tx^{\nu}F_{\nu\mu}(tx)dt. \tag{3.6}\] (Note that \(i_{\mathfrak{r}}A=0\) automatically from above.) To answer Q2 we first note that the Bianchi identity for curvature \[\begin{split}& dF+ig[A\wedge F]=0,\\ \text{where}& A_{\mu}(x)=\int_{0}^{1}tx^{\nu}F_{\nu \mu}(tx)dt\end{split} \tag{3.7}\] is a necessary condition for a field strength to lie in the image. But it is also a sufficient condition, which is seen by applying \(i_{\mathfrak{r}}\) to (3.7) and using (2.7) and (3.4): \[\begin{split}&\mathcal{L}_{\mathfrak{r}}(F-dA-\frac{ig}{2}[A \wedge A])=0,\\ \text{where}& A_{\mu}(x)=\int_{0}^{1}tx^{\nu}F_{\nu \mu}(tx)dt.\end{split} \tag{3.8}\] From (2.10), we find that this is equivalent to \[\begin{split}& F=dA+\frac{ig}{2}[A\wedge A]\\ \text{where}& A_{\mu}(x)=\int_{0}^{1}tx^{\nu}F_{\nu \mu}(tx)dt,\end{split} \tag{3.9}\] This shows that if \(F\) satisfies the Bianchi identity for curvature then it is indeed in the image of the curvature map restricted to connections in the Fock-Shwinger gauge. This completes the proof that the space of connections in YM satisfying the Fock-Shwinger gauge condition is bijective with the space of field strengths satisfying the Bianchi identity for curvature. ## 4 The Muller-Schubert-van de Ven result in gravity In this section we review the results of [5], crucial for the latter developments. ### Vielbein formalism See [25] for the basics of the vielbein formalism. Consider the vielbein 1-forms \(e^{a}\), and their vector duals \(e_{a}\), satisfying \[g(e_{a},e_{b}) =\delta_{ab} \tag{10}\] \[e^{a}(e_{b}) =\delta^{a}_{b} \tag{11}\] (Note that we have taken the metric signature to be Euclidean because, ultimately, what we have in mind is to perform the change of variables on the spatial metric.) In the coordinate basis, \(e^{a}_{\mu}\) and \(e^{\mu}_{a}\) are matrix inverses of each other so they satisfy \[e^{a}_{\mu}e^{\nu}_{a}=\delta^{\nu}_{\mu} \tag{12}\] We use Roman indices for the vielbein basis and Greek indices for the coordinate basis. The vielbeins obviously possess an \(SO(d)\) gauge freedom (in \(d\) dimensions). Assuming the torsion is zero, the spin-connection and curvature 2-form are defined as \[\omega^{a}_{b} =\frac{1}{2}(-e^{c}\wedge i_{b}i_{a}de^{c}+i_{b}de^{a}-i_{a}de^{b }), \tag{13}\] \[R^{a}_{b} =d\omega^{a}_{b}+\omega^{a}_{c}\wedge\omega^{c}_{b} \tag{14}\] respectively, where \(i_{a}\) is the interior product w.r.t. \(e_{a}\). We should point out that (13) is equivalent to the vanishing of the torsion: \[de^{a}+\omega^{a}_{b}\wedge e^{b}=0. 
\tag{15}\] To prove this use the fact that \(e^{b}\wedge i_{b}(\eta)=p\eta\) for any \(p\)-form \(\eta\). To see that (15) implies (13) use \(de^{a}=-\omega^{a}_{b}\wedge e^{b}\) to simplify the r.h.s. of (13). ### Recovering the spin-connection from the curvature Impose the Fock-Shwinger gauge condition on the connection: \[i_{\mathfrak{r}}\omega^{a}_{b}=0. \tag{16}\] This fixes the \(SO(d)\) gauge freedom completely. Then, in just the same way as in SS3, we can prove the following: the space of spin-connections satisfying the Fock-Shwinger gauge condition is in bijection with the space of curvatures satisfying the _2nd Bianchi identity for curvature_: \[\begin{split}& dR^{a}_{b}+\omega^{a}_{c}\wedge R^{c}_{b}-R^{a}_{c} \wedge\omega^{c}_{b}=0,\\ \text{where}&\omega^{a}_{b\mu}(x)=\int_{0}^{1}tx^{ \nu}R^{a}_{b\nu\mu}(tx)dt.\end{split} \tag{4.8}\] The forward map for this bijection is given by (4.5); and the reverse map is given by \[\omega^{a}_{b\mu}(x)=\int_{0}^{1}tx^{\nu}R^{a}_{b\nu\mu}(tx)dt. \tag{4.9}\] ### Recovering the vielbein from the spin-connection Applying \(i_{\mathfrak{r}}\) to (4.6) gives \[-di_{\mathfrak{r}}e^{a}+(\mathfrak{r}+1)e^{a}-\omega^{a}_{b}\wedge i_{ \mathfrak{r}}e^{b}=0. \tag{4.10}\] Assume we are working in _'normal-coordinates'_, i.e. a coordinate system in which \[x^{\mu}g_{\mu\nu}(x)=x^{\nu} \tag{4.11}\] (note that this implies \(g_{\mu\nu}(0)=\delta_{\mu\nu}\)). As a result, \[x^{\mu}e^{a}_{\mu}(x)=x^{a} \tag{4.12}\] \[x^{a}e^{a}_{\mu}(x)=x^{\mu} \tag{4.13}\] (note that this implies \(e^{a}_{\mu}(0)=\delta^{a}_{\mu}\)). So (4.10) becomes \[(\mathfrak{r}+1)e^{a}_{\mu}=\delta^{a}_{\mu}+\omega^{a}_{b\mu}x^{b} \tag{4.14}\] which is easily integrated to give \[e^{a}_{\mu}(x)=\delta^{a}_{\mu}+\int_{0}^{1}\omega^{a}_{b\mu}(tx)tx^{b}dt. \tag{4.15}\] Using (4.9) then gives \[e^{a}_{\mu}(x)=\delta^{a}_{\mu}+\int_{0}^{1}\int_{0}^{1}t_{1}t_{2}^{2}x^{b}x^{ \nu}R^{a}_{b\nu\mu}(t_{1}t_{2}x)dt_{1}dt_{2} \tag{4.16}\] Fixing \(t_{1}t_{2}\) and doing the \(t_{2}\) integral first gives, finally, \[e^{a}_{\mu}(x)=\delta^{a}_{\mu}+\int_{0}^{1}t(1-t)x^{b}x^{\nu}R^{a}_{b\nu\mu}( tx)dt. \tag{4.17}\] Note that \(x^{\mu}e^{a}_{\mu}(x)=x^{a}\) and \(x^{a}e^{a}_{\mu}(x)=x^{\mu}\) follow automatically from this. Bijection between the curvature and spin-connection **Theorem 1**.: _Curvatures satisfying the 1st and 2nd Bianchi identities for curvature are bijective with spin-connections satisfying the Fock-Shwinger gauge and torsionless conditions._ Proof.: We pointed out in SS4.2 that the space of curvatures satisfying the 2nd Bianchi identity for curvature is in bijection with the space of connections satisfying \(i_{\mathfrak{r}}\omega_{b}^{a}=0\). We now place the _1st Bianchi identity for curvature_ \[\begin{split}& R_{b}^{a}\wedge e^{b}=0,\\ \text{where}& e_{\mu}^{a}(x)=\delta_{\mu}^{a}+\int_{0}^ {1}t(1-t)x^{b}x^{\nu}R_{b\nu\mu}^{a}(tx)dt\end{split} \tag{5.1}\] as an additional restriction on the space of curvatures. 
Via the bijection, this descends into an extra restriction on the space of connections: \[\begin{split}& R_{b}^{a}\wedge e^{b}=0,\\ \text{where}& R_{b}^{a}=d\omega_{b}^{a}+\omega_{c}^{a} \wedge\omega_{b}^{c}\\ \text{and}& e_{\mu}^{a}(x)=\delta_{\mu}^{a}+\int_{0 }^{1}\omega_{b\mu}^{a}(tx)tx^{b}dt.\end{split} \tag{5.2}\] Applying \(i_{\mathfrak{r}}\) to this and manipulating gives \[\begin{split}&\mathcal{L}_{\mathfrak{r}}(de^{a}+\omega_{b}^{a} \wedge e^{b})=0,\\ \text{where}& e_{\mu}^{a}(x)=\delta_{\mu}^{a}+\int_{0 }^{1}\omega_{b\mu}^{a}(tx)tx^{b}dt\end{split} \tag{5.3}\] which implies (using (2.10)) that \[\begin{split}& de^{a}+\omega_{b}^{a}\wedge e^{b}=0,\\ \text{where}& e_{\mu}^{a}(x)=\delta_{\mu}^{a}+\int_{0 }^{1}\omega_{b\mu}^{a}(tx)tx^{b}dt.\end{split} \tag{5.4}\] We will refer to (5.4) as the _torsionless conditon_ on connections (c.f. (4.6)). ## 6 Bijection between the spin-connection and vielbein **Theorem 2**.: _Spin-connections satisfying the Fock-Shwinger gauge and torsionless conditions are bijective with vielbeins satisfying (6.1)-(6.3)._ Proof.: Consider the space of vielbeins satisfying \[x^{\mu}e_{\mu}^{a}(x) =x^{a} \tag{6.1}\] \[x^{a}e_{\mu}^{a}(x) =x^{\mu}\] (6.2) \[i_{a}\mathcal{L}_{\mathfrak{r}}e^{b} =i_{b}\mathcal{L}_{\mathfrak{r}}e^{a} \tag{6.3}\] We leave it as an exercise to the reader to show that connections arising from such vielbeins satisfy the Fock-Shwinger gauge and torsionless conditions automatically. Conversely, consider the image of connections satisfying the Fock-Shwinger gauge and torsionless conditions under the map (4.15). It is immediately clear that such vielbeins satisfy (6.1)-(6.2). But, in addition, such vielbeins satisfy (6.3), which can be proved as follows: we pointed out, in SS4.1, that (4.6) is equivalent to (4.4). This means that the torsionless condition is equivalent to \[\begin{split}&\omega^{a}_{b}=\frac{1}{2}(-e^{c}\wedge i_{b}i_{a} de^{c}+i_{b}de^{a}-i_{a}de^{b})\\ \text{where}& e^{a}_{\mu}(x)=\delta^{a}_{\mu}+\int_{0 }^{1}\omega^{a}_{b\mu}(tx)tx^{b}dt.\end{split} \tag{6.4}\] Applying \(i_{\mathbf{t}}\) to this and using \(x^{a}\wedge e^{a}=x^{\mu}dx^{\mu}\) shows that (6.3) is indeed satisfied. From (6.4), it is clear that mapping a connection - satisfying the torsionless condition - to a vielbein (via (4.15)) and back is equivalent to the identity operation. ## 7 Bijection between the vielbein and metric **Theorem 3**.: _Vielbeins satisfying (6.1)-(6.3) are bijective with metrics in normal coordinates._ Proof.: The metric is obtained from the vielbein using \[g_{\mu\nu}=e^{a}_{\mu}e^{a}_{\nu} \tag{7.1}\] Conversely, to obtain the vielbein from the metric, we differentiate (7.1) and use (6.3) (which, using (2.9), can be shown to be equivalent to \(e^{a}_{[\mu}(tx)\frac{d}{dt}e^{a}_{\nu]}(tx)=0\), and which completely fixes the \(SO(d)\) gauge freedom the vielbeins possess): \[\frac{d}{dt}e^{a}_{\mu}(tx)=\frac{1}{2}e^{\nu}_{a}(tx)\frac{d}{dt}g_{\nu\mu}( tx). \tag{7.2}\] Given the metric, this can be integrated uniquely for the vielbeins (a fact essentially guaranteed by the Picard-Lindelof theorem [26] (SS2.2)) using the initial condition \(e^{a}_{\mu}(0)=\delta^{a}_{\mu}\) (note that this initial condition is consistent with (6.1)-(6.2)). This is the forward map, taking metrics to vielbeins. Metrics in normal coordinates satisfy (see (4.11)) \[x^{\mu}g_{\mu\nu}(x)=x^{\nu} \tag{7.3}\] From (7.1) it is clear that vielbeins satisfying (6.1)-(6.2) map to metrics in normal coordinates. 
The question is whether metrics in normal coordinates map to vielbeins satisfying (6.1)-(6.3). (6.3) is easily shown by multiplying (7.2) by \(e^{\mu}_{b}(tx)\) and using the symmetry of the metric. From (7.2) and (7.3) we have \[\frac{d}{dt}(tx^{\mu}e^{a}_{\mu}(tx))=x^{\mu}e^{a}_{\mu}(tx) \tag{7.4}\] which implies \[x^{\mu}e^{a}_{\mu}(tx)=\text{const.}=x^{a} \tag{7.5}\] proving (6.1). (7.5) implies \(x^{a}e^{\mu}_{a}(x)=x^{\mu}\), which together with (7.2) gives \[\frac{d}{dt}(tx^{a}e^{a}_{\mu}(tx))=x^{a}e^{a}_{\mu}(tx) \tag{7.6}\] which implies \[x^{a}e^{a}_{\mu}(tx)=\text{const.}=x^{\mu} \tag{7.7}\] proving (6.2). We leave it as a simple exercise to the reader to convince themselves that (7.1) and (7.2) are indeed inverses of each other when restricted to the subspaces in question. ## 8 Conclusions Combining Theorems 1, 2 and 3 finally proves that **Theorem 4**.: _The space of curvatures satisfying the 1st and 2nd Bianchi identities for curvature is bijective with the space of metrics in normal coordinates._ The proof is schematically represented in Fig. 2. ## Acknowledgments The author thanks the Rudolf Peierls Centre for their hospitality. For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.
2310.20660
Pseudo-Kähler and hypersymplectic structures on semidirect products
We study left-invariant pseudo-K\"ahler and hypersymplectic structures on semidirect products $G\rtimes H$; we work at the level of the Lie algebra $\mathfrak{g}\rtimes\mathfrak{h}$. In particular we consider the structures induced on $\mathfrak{g}\rtimes\mathfrak{h}$ by existing pseudo-K\"ahler structures on $\mathfrak{g}$ and $\mathfrak{h}$; we classify all semidirect products of this type with $\mathfrak{g}$ of dimension $4$ and $\mathfrak{h}=\mathbb{R}^2$. In the hypersymplectic setting, we consider a more general construction on semidirect products. We construct new $2$-step nilpotent hypersymplectic Lie algebras; to our knowledge, these are the first such examples whose underlying complex structure is not abelian
Diego Conti, Alejandro Gil-García
2023-10-31T17:27:12Z
http://arxiv.org/abs/2310.20660v1
# Pseudo-Kahler and hypersymplectic structures on semidirect products ###### Abstract We study left-invariant pseudo-Kahler and hypersymplectic structures on semidirect products \(G\rtimes H\); we work at the level of the Lie algebra \(\mathfrak{g}\rtimes\mathfrak{h}\). In particular we consider the structures induced on \(\mathfrak{g}\rtimes\mathfrak{h}\) by existing pseudo-Kahler structures on \(\mathfrak{g}\) and \(\mathfrak{h}\); we classify all semidirect products of this type with \(\mathfrak{g}\) of dimension \(4\) and \(\mathfrak{h}=\mathbb{R}^{2}\). In the hypersymplectic setting, we consider a more general construction on semidirect products. We construct new \(2\)-step nilpotent hypersymplectic Lie algebras; to our knowledge, these are the first such examples whose underlying complex structure is not abelian. _Keywords: Pseudo-Kahler, hypersymplectic, semidirect product, Ricci-flat MSC classification: Primary 53C26; Secondary 53C50, 22E25, 53C15_ ###### Contents * 1 Construction of pseudo-Kahler structures * 2 Examples of dimension 6 and 8 * 2.1 Case of abelian \(\mathfrak{h}\) * 2.2 Case of non-abelian \(\mathfrak{h}\) * 3 Classification on 6-dimensional semidirect products \(\mathfrak{g}\rtimes\mathbb{R}^{2}\) * 4 Hypersymplectic structures ## Introduction Left-invariant metrics on a Lie group provide a setting for the study of geometric structures which is particularly suited to the construction of explicit examples: since all computations can be performed at the Lie algebra level, the PDE's characterizing the integrability conditions reduce to linear equations. Among left-invariant metrics, a widely studied class consists of semidirect products. Indeed, the study of Riemannian Einstein homogeneous spaces of negative scalar curvature, thanks to [8, 19, 21], reduces to the study of standard solvmanifolds of Iwasawa type, namely semidirect products \(\mathfrak{g}\rtimes_{D}\mathbb{R}\), where \(\mathfrak{g}\) is nilpotent and \(D\) symmetric. This does not extend to indefinite signature, but even in this context semidirect products still provide a large class of Einstein metrics [11]. Semidirect products have also been studied in the context of non-negative Ricci curvature (see [14]) and ad-invariant metrics, where one exploits the fact that any Lie algebra \(\mathfrak{g}\) yields a canonical semidirect product \(\mathfrak{g}^{*}\rtimes\mathfrak{g}\), with \(\mathfrak{g}\) acting on \(\mathfrak{g}^{*}\) via the coadjoint representation, admitting an ad-invariant metric of neutral signature (see [25]). Finally, it should be noted that every Lie algebra which is neither solvable nor semisimple has a nontrivial Levi decomposition, i.e. it is the semidirect product of its radical and a semisimple subalgebra; this has been exploited for instance in [17] to construct closed \(\mathrm{G}_{2}\)-structures on non-solvable Lie groups. The structures studied in this paper belong to the class of pseudo-Kahler metrics. Whilst positive-definite Kahler left-invariant metrics, or more generally homogeneous, are fairly well understood (see [13, 22]), pseudo-Kahler invariant metrics show much greater flexibility. This is already evident in real dimension 4, where Ovando's classification lists eleven distinct Lie algebras carrying a pseudo-Kahler metric, but only six admitting a definite Kahler metric (see [25]). Moreover, Kahler nilpotent Lie algebras are necessarily abelian by [18], but this does not hold in the pseudo-Kahler case, as one can see from Ovando's classification. 
Pseudo-Kahler Lie groups find application in the construction of homogeneous quaternion-Kahler manifolds via the c-map (see [23]). They can also be used in the construction of Sasaki-Einstein solvmanifolds of indefinite signature, yielding some of the few known examples of left-invariant metrics admitting a Killing spinor (see [12]). Complex and pseudo-Kahler structures on semidirect products have already been considered in [9], where in particular the authors characterize the situation in which the product almost complex structure on a semidirect product of two complex Lie algebras is integrable. In addition, [9] contains several examples of semidirect products \(\mathfrak{g}\rtimes\mathfrak{h}\) in dimension 6 endowed with a complex structure for which \(\mathfrak{h}\) is totally real. A particular class of pseudo-Kahler metrics is formed by hypersymplectic structures, introduced by Hitchin in [20]. A hypersymplectic structure on a \(4n\)-dimensional manifold is determined by a complex structure and a product structure that anticommute, together with a compatible metric such that the associated 2-forms are closed. The holonomy of the metric is contained in the non-compact Lie group \(\mathrm{Sp}(2n,\mathbb{R})\), which is the split-real form of \(\mathrm{Sp}(2n,\mathbb{C})\). Hence, hypersymplectic manifolds are neutral-signature analogues of hyperkahler manifolds, whose holonomy is contained in the compact real form \(\mathrm{Sp}(n)\). Due to the common complexification of the holonomy groups, many facts from hyperkahler geometry carry over to hypersymplectic manifolds. In particular, hypersymplectic manifolds are complex symplectic and Ricci-flat. Moreover, since \(\mathrm{Sp}(2n,\mathbb{R})\subset\mathrm{U}(n,n)\), hypersymplectic are particular examples of neutral Calabi-Yau manifolds. These have been studied in [16]. Left-invariant hypersymplectic structures have been widely studied. In [1], the author characterizes hypersymplectic Lie algebras in terms of two Lie algebras equipped with flat torsion-free connections and parallel symplectic forms. This allows him to classify \(4\)-dimensional hypersymplectic Lie algebras. A procedure to construct hypersymplectic structures on \(\mathbb{R}^{4n}\) beginning with affine-symplectic data on \(\mathbb{R}^{2n}\) was given in [3]. These hypersymplectic structures are shown to be invariant by a \(3\)-step nilpotent double Lie group and the resulting metrics are complete and not necessarily flat. The first \(4\)-step nilpotent examples of hypersymplectic Lie algebras were obtained in [6], where the authors provide a method to construct hypersymplectic structures from the data of a pseudo-Kahler and a complex symplectic structure. Two outstanding questions at the time of writing are whether it is possible to construct \(2\)-step nilpotent hypersymplectic Lie algebras with a non-abelian complex structure, or such that the metric is non-flat. We will answer the first question in this paper, and leave the second open. The key tool of this paper is a method to construct pseudo-Kahler structures on some semidirect products of Lie algebras (see Theorem 1.3). We start with two pseudo-Kahler Lie algebras \(\mathfrak{g}\) and \(\mathfrak{h}\) and define a natural almost pseudo-Hermitian structure of the semidirect product \(\mathfrak{g}\rtimes\mathfrak{h}\). Then we determine the conditions on the representation defining the semidirect product to ensure that the almost complex structure on \(\mathfrak{g}\rtimes\mathfrak{h}\) is parallel. 
This allows us to construct several examples of pseudo-Kahler Lie algebras in dimension \(6\) and \(8\), starting both with an abelian \(\mathfrak{h}\) and a non-abelian \(\mathfrak{h}\) (see Section 2). We then consider a special case of \(6\)-dimensional extensions of the form \(\mathfrak{g}\rtimes\mathbb{R}^{2}\), and provide a classification up to isometry (in the restricted sense of Definition 1.13). It turns out that of the eleven \(4\)-dimensional Lie algebras which admit a pseudo-Kahler structure classified in [25], only three admit an extension. Taking into account the metrics, this gives rise to four families of pseudo-Kahler Lie algebras of dimension \(6\) (in one of which the parameter can be eliminated by rescaling). We also study hypersymplectic structures on semidirect products of pseudo-Kahler Lie algebras as considered above. Examples of this type have been constructed in [10]; they all satisfy a special condition which we call _Kodaira type_, namely they are \(2\)-step nilpotent with \(J\)-invariant center of dimension equal to half the dimension of the Lie algebra. We obtain \(2\)-step nilpotent hypersymplectic Lie algebras which are neither of Kodaira type nor equipped with an abelian complex structure. However, all the \(2\)-step nilpotent examples that we obtain are flat. Together with the existing examples in the literature, this leads us to conjecture that every \(2\)-step nilpotent hypersymplectic Lie algebra is flat. We note that the metrics constructed in this paper are of potential interest in the study of Einstein metrics. Indeed, all the hypersymplectic examples are Ricci-flat (indeed, neutral Calabi-Yau), and so are most of the pseudo-Kahler metrics, though not all. In the same spirit as [12], one can then apply a construction of [7] and obtain a pseudo-Kahler-Einstein bundle in two dimensions higher, on which one can consider the Sasaki-Einstein cone. ### Acknowledgements D. C. would like to acknowledge the PRIN2022 project "Interactions between Geometric Structures and Function Theories". A. G. is supported by the German Science Foundation (DFG) under Germany's Excellence Strategy - EXC 2121 "Quantum Universe" - 390833306. ## 1 Construction of pseudo-Kahler structures In this section we introduce some fundamental objects that will appear throughout the paper; in particular, pseudo-Kahler structures, semidirect products, and \(\mathrm{U}(p,q)\)-structures on a Lie algebra. We then introduce conditions for a semidirect product of two pseudo-Kahler Lie algebras to admit an induced pseudo-Kahler structure. Given a manifold \(M\) of dimension \(n\) and a group \(K\subset\mathrm{GL}(n,\mathbb{R})\), a \(K\)-structure is a reduction to \(K\) of the bundle of frames; notice that we have replaced with \(K\) the more conventional symbol \(G\), in order to reserve the latter for the ambient manifold. We will be interested in the particular case where \(K=\mathrm{U}(p,q)\) is the group that preserves a complex structure on \(\mathbb{R}^{n}=\mathbb{R}^{2(p+q)}\) and a scalar product \(g\) of signature \((2p,2q)\) such that \(g(J\cdot,J\cdot)=g\). A \(\mathrm{U}(p,q)\)-structure may be identified with a pair \((g,J)\), where \(g\) is a pseudo-Riemannian metric on \(M\) of signature \((2p,2q)\), and \(J\) an almost complex structure such that \(g(J\cdot,J\cdot)=g\). We point out that the terminology "almost Hermitian" is also used in the literature, but mostly reserved to the case \(q=0\).
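For concreteness, a flat model of such a pair in signature \((2,2)\) — the neutral structure that reappears as (8) in Section 3 — can be written down and the two defining conditions checked mechanically; the sketch below is only an illustration of the definition, with matrix conventions (columns as images of basis vectors) chosen by us.

```python
import numpy as np

# A compatible pair (g, J) on R^4 with g of signature (2,2), cf. (8):
# g = e1*e1 - e2*e2 + e3*e3 - e4*e4,  J e1 = e3, J e2 = e4.
g = np.diag([1., -1., 1., -1.])
J = np.zeros((4, 4))
J[2, 0] = J[3, 1] = 1.    # J e1 = e3, J e2 = e4
J[0, 2] = J[1, 3] = -1.   # hence J e3 = -e1, J e4 = -e2

print(np.allclose(J @ J, -np.eye(4)))  # J^2 = -1: an almost complex structure
print(np.allclose(J.T @ g @ J, g))     # g(J., J.) = g: compatibility with g
```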
A \(\mathrm{U}(p,q)\)-structure is pseudo-Kahler if \(J\) is a complex structure and \(\omega=g(J\cdot,\cdot)\) is closed, or equivalently if \(J\) is parallel relative to the Levi-Civita connection. Analogous definitions can be given on a Lie algebra \(\mathfrak{g}\). Thus, a pseudo-Kahler Lie algebra is a triple \((\mathfrak{g},g,J)\), where \(\mathfrak{g}\) is a real Lie algebra, \(g\) a non-degenerate scalar product on \(\mathfrak{g}\), \(J\colon\mathfrak{g}\to\mathfrak{g}\) a complex structure such that \(g(J\cdot,J\cdot)=g\), and \(J\) is parallel with respect to the Levi-Civita connection; equivalently, one may impose that \(J\) is integrable, i.e. \(N_{J}=0\), and \(\omega=g(J\cdot,\cdot)\) is closed. It is clear that a pseudo-Kahler structure on a Lie algebra defines a left-invariant pseudo-Kahler structure on a Lie group with that Lie algebra. However, we shall perform all computations at the Lie algebra level, with no need to consider the group. We aim at constructing pseudo-Kahler structures on a semidirect product. This means that we have pseudo-Kahler Lie algebras \((\mathfrak{g},g,J_{g})\) and \((\mathfrak{h},h,J_{h})\), and in addition a homomorphism \(\varphi\colon\mathfrak{h}\to\mathrm{Der}(\mathfrak{g})\). We can then define the semidirect product \[\tilde{\mathfrak{g}}=\mathfrak{g}\rtimes_{\varphi}\mathfrak{h},\quad[X+A,Y+B] _{\tilde{\mathfrak{g}}}=[X,Y]_{\mathfrak{g}}+[A,B]_{\mathfrak{h}}+\varphi(A) Y-\varphi(B)X.\] Here and in the sequel, we adopt the convention that \(X,Y,Z\) denote elements of \(\mathfrak{g}\), and \(A,B,C\) denote elements of \(\mathfrak{h}\). As a vector space, \(\tilde{\mathfrak{g}}\) is isomorphic to \(\mathfrak{g}\oplus\mathfrak{h}\), so it has an induced \(\mathrm{U}(p,q)\)-structure \[\tilde{g}(X+A,Y+B)=g(X,Y)+h(A,B),\quad\tilde{J}(X+A)=J_{g}(X)+J_{h}(A).\] Notice that the subscripts in \(J_{g}\) and \(J_{h}\) have been chosen to suggest the fact that they are complex structures on the Lie algebra denoted by the corresponding (gothic) letter. **Remark 1.1**.: If \(\mathfrak{g}\) and \(\mathfrak{h}\) are abelian, then \(\widetilde{\mathfrak{g}}\) is 2-step solvable. If in addition \(\varphi(A)\varphi(B)=0\) for all \(A,B\in\mathfrak{h}\), then \(\widetilde{\mathfrak{g}}\) is 2-step nilpotent. Indeed, the derived Lie algebra \([\widetilde{\mathfrak{g}},\widetilde{\mathfrak{g}}]\) is spanned by the vectors \([X+A,Y+B]=\varphi(A)Y-\varphi(B)X\in\mathfrak{g}\). Since \(\mathfrak{g}\) is abelian, then \([\widetilde{\mathfrak{g}},\widetilde{\mathfrak{g}}]\) is also abelian, which means that \(\widetilde{\mathfrak{g}}\) is 2-step solvable. Using that \(\varphi(A)\varphi(B)=0\) for all \(A,B\in\mathfrak{h}\), we get \[[X+A,[Y+B,Z+C]] =[X+A,\varphi(B)Z-\varphi(C)Y]\] \[=\varphi(A)(\varphi(B)Z-\varphi(C)Y)=0,\] which means that \(\widetilde{\mathfrak{g}}\) is 2-step nilpotent. We aim at constructing pseudo-Kahler structures on semidirect products. To that end, we need to introduce more notation and a lemma. For any \(f\colon\mathfrak{g}\to\mathfrak{g}\), we will write \(f=f^{s}+f^{a}\), where \(f^{s}\) and \(f^{a}\) denote the symmetric and antisymmetric parts of \(f\) relative to the scalar product \(g\). **Lemma 1.2**.: _Let \(X,Y,Z\in\mathfrak{g}\) and \(A,B,C\in\mathfrak{h}\). 
Then_ \[\tilde{g}(\tilde{\nabla}_{X}Y,Z+C)=g(\nabla^{g}_{X}Y,Z)+g(\varphi(C )^{s}X,Y);\] \[\tilde{\nabla}_{X}B=-\varphi(B)^{s}X;\quad\tilde{\nabla}_{A}B= \nabla^{h}_{A}B;\quad\tilde{\nabla}_{A}Y=\varphi(A)^{a}Y.\] Proof.: Koszul's formula for the Levi-Civita connection gives \[2\tilde{g}(\tilde{\nabla}_{X}Y,Z+C) =\tilde{g}([X,Y]_{\mathfrak{g}},Z+C)-\tilde{g}(Y,[X,Z+C]_{ \mathfrak{g}})-\tilde{g}(X,[Y,Z+C]_{\mathfrak{g}})\] \[=g([X,Y]_{\mathfrak{g}},Z)-g(Y,[X,Z]_{\mathfrak{g}})-g(X,[Y,Z]_{ \mathfrak{g}})\] \[\quad+\tilde{g}([X,Y]_{\mathfrak{g}},C)-\tilde{g}(Y,[X,C]_{ \mathfrak{g}})-\tilde{g}(X,[Y,C]_{\mathfrak{g}})\] \[=2g(\nabla^{g}_{X}Y,Z)+g(\varphi(C)X,Y)+g(\varphi(C)Y,X),\] giving the first equation. The second follows similarly from \[2\tilde{g}(\tilde{\nabla}_{X}B,Z+C) =g([X,B]_{\mathfrak{g}},Z)-\tilde{g}(B,[X,C]_{\mathfrak{g}})- \tilde{g}(X,[B,Z]_{\mathfrak{g}})\] \[=-g(\varphi(B)X,Z)-g(X,\varphi(B)Z).\] For the last two, we compute \[2\tilde{g}(\tilde{\nabla}_{A}B,Z+C)=h([A,B]_{\mathfrak{h}},C)-h( B,[A,C]_{\mathfrak{h}})-h(A,[B,C]_{\mathfrak{h}})=2h(\nabla_{A}B,C),\] \[2\tilde{g}(\tilde{\nabla}_{A}Y,Z+C)=g([A,Y]_{\widetilde{ \mathfrak{g}}},Z)-g(Y,[A,Z]_{\widetilde{\mathfrak{g}}})-\tilde{g}(A,[Y,Z+C]_{ \widetilde{\mathfrak{g}}})=2g(\varphi(A)^{a},Z).\qed\] We can now prove: **Theorem 1.3**.: _Let \((\mathfrak{g},g,J_{g})\) and \((\mathfrak{h},h,J_{h})\) be pseudo-Kahler Lie algebras and let \(\varphi:\mathfrak{h}\to\operatorname{Der}(\mathfrak{g})\) be a representation. Then \((\widetilde{\mathfrak{g}},\widetilde{g},\widetilde{J})\) is a pseudo-Kahler Lie algebra if and only if_ * \(J_{g}\circ\varphi(A)^{s}=\varphi(J_{h}A)^{s}\)_,_ * \(J_{g}\circ\varphi(A)^{a}=\varphi(A)^{a}\circ J_{g}\)_,_ _for all \(A\in\mathfrak{h}\)._ Proof.: The pseudo-Kahler condition is equivalent to \(\widetilde{\nabla}\widetilde{J}=0\). 
Using Lemma 1.2 and \(\widetilde{g}(\widetilde{J},\cdot)=-\widetilde{g}(\cdot,\widetilde{J}\cdot)\), we find \[\widetilde{g}((\widetilde{\nabla}_{X}\widetilde{J})Y,Z+C) =\widetilde{g}(\widetilde{\nabla}_{X}\widetilde{J}Y,Z+C)- \widetilde{g}(\widetilde{J}\widetilde{\nabla}_{X}Y,Z+C)\] \[=\widetilde{g}(\widetilde{\nabla}_{X}J_{g}Y,Z+C)+\widetilde{g}( \widetilde{\nabla}_{X}Y,J_{g}Z+J_{h}C)\] \[=g(\nabla^{g}_{X}J_{g}Y,Z)+g(\varphi(C)^{s}X,J_{g}Y)\] \[\quad+g(\nabla^{g}_{X}Y,J_{g}Z)+g(\varphi(J_{h}C)^{s}X,Y).\] Notice that since \(\nabla^{g}J_{g}=0\), \[g(\nabla^{g}_{X}J_{g}Y,Z)+g(\nabla^{g}_{X}Y,J_{g}Z)=g(\nabla^{g}_{X}J_{g}Y,Z) -g(J_{g}\nabla^{g}_{X}Y,Z)=0.\] Hence, we obtain \[\widetilde{g}((\widetilde{\nabla}_{X}\widetilde{J})Y,Z+C)=g((\varphi(J_{h}C)^ {s}-J_{g}\varphi(C)^{s})X,Y).\] The other component of \(\widetilde{\nabla}_{X}\widetilde{J}\) is determined by \[(\widetilde{\nabla}_{X}\widetilde{J})B=\widetilde{\nabla}_{X}\widetilde{J}B- \widetilde{J}(\widetilde{\nabla}_{X}B)=\widetilde{\nabla}_{X}J_{h}B- \widetilde{J}(-\varphi(B)^{s}X)=-\varphi(J_{h}B)^{s}X+J_{g}\varphi(B)^{s}X.\] On the other hand, \(\widetilde{\nabla}_{A}\widetilde{J}\) is determined by \[(\widetilde{\nabla}_{A}\widetilde{J})Y =\widetilde{\nabla}_{A}\widetilde{J}Y-\widetilde{J}(\widetilde {\nabla}_{A}Y)=\widetilde{\nabla}_{A}J_{g}Y-\widetilde{J}(\varphi(A)^{a}Y)\] \[=\varphi(A)^{a}J_{g}Y-J_{g}\varphi(A)^{a}Y,\] \[(\widetilde{\nabla}_{A}\widetilde{J})B =\widetilde{\nabla}_{A}\widetilde{J}B-\widetilde{J}\widetilde{ \nabla}_{A}B=\nabla^{h}_{A}J_{h}B-J_{h}\nabla^{h}_{A}B=0,\] where we have used \(\nabla^{h}J_{h}=0\) We will say that a \((\widetilde{\mathfrak{g}},\widetilde{g},\widetilde{J})\) constructed as in Theorem 1.3 is an _extension_ of \((\mathfrak{g},g,J_{g})\) by \((\mathfrak{h},h,J_{h})\). **Example 1.4**.: Let us consider an example in real dimension \(4\). Take two copies of the abelian \(2\)-dimensional Lie algebra, with a positive-definite and a negative-definite pseudo-Kaler structure, i.e. \[\mathfrak{g} =\operatorname{Span}\{e_{1},e_{2}\}, g =e^{1}\otimes e^{1}+e^{2}\otimes e^{2}, J_{g}e_{1} =e_{2},\] \[\mathfrak{h} =\operatorname{Span}\{a_{1},a_{2}\}, g =-a^{1}\otimes a^{1}-a^{2}\otimes a^{2}, J_{g}a_{1} =a_{2}.\] The notation \(\{e^{1},e^{2}\}\) represents the basis of \(\mathfrak{g}^{*}\) dual to \(\{e_{1},e_{2}\}\); similarly for \(a^{1},a^{2}\). This convention will be used again in Section 3. Any representation \(\varphi:\mathfrak{h}\to\operatorname{Der}(\mathfrak{g})\) satisfying the conditions of Theorem 1.3 will have \(\varphi(a_{1})^{s}=\varphi(a_{2})^{s}=0\). The space of endomorphisms commuting with \(J_{g}\) and with symmetric part equal to zero is spanned by \(J_{g}\) itself, so up to a change of basis we can assume \(\operatorname{ad}(a_{1})=\lambda J_{g}\) and \(\operatorname{ad}(a_{2})=0\). The resulting \(4\)-dimensional Lie algebra takes the form \[\tilde{\mathfrak{g}}=\operatorname{Span}\{e_{1},e_{2},e_{3},e_{4}\},\quad e_{ 3}=a_{1},e_{4}=a_{2},\] with \[\operatorname{d}e^{1}=-\lambda e^{2}\wedge e^{3},\,\operatorname{d}e^{2}= \lambda e^{1}\wedge e^{3},\,\operatorname{d}e^{3}=0=\operatorname{d}e^{4}.\] Throughout the paper, we will write more succinctly \[\tilde{\mathfrak{g}}=(-\lambda e^{23},\lambda e^{13},0,0),\] with notation adapted from [27]. 
The pseudo-Kahler structure is given by \[\tilde{J}e_{1}=e_{2},\tilde{J}e_{3}=e_{4},\quad\tilde{g}=e^{1}\otimes e^{1}+ e^{2}\otimes e^{2}-e^{3}\otimes e^{3}-e^{4}\otimes e^{4}.\] The Lie algebra \(\tilde{\mathfrak{g}}\) is abelian if \(\lambda=0\), and otherwise isomorphic to the Lie algebra denoted by \(\mathfrak{rr}^{\prime}_{3,0}\) in [25] with a flat pseudo-Kahler structure. **Remark 1.5**.: The pseudo-Kahler Lie algebras constructed in Theorem 1.3 have a nontrivial \(\widetilde{J}\)-invariant ideal of the same real dimension as \(\mathfrak{g}\). This shows in particular that not every pseudo-Kahler Lie algebra can be obtained from this construction. **Remark 1.6**.: The above construction also works in the case of _para-Kahler_ Lie algebras. These are triples \((\mathfrak{g},g,E)\) where \(E\) is an integrable para-complex structure, \(g(E\cdot,E\cdot)=-g\) and \(\omega:=g(E\cdot,\cdot)\) is closed. Recall that a complex structure \(J\) is called _abelian_ if \([JX,JY]=[X,Y]\) for all \(X,Y\in\mathfrak{g}\). Then we have the following result. **Lemma 1.7**.: _In the situation of Theorem 1.3, the complex structure \(\widetilde{J}\) is abelian if and only if both \(J_{g}\) and \(J_{h}\) are abelian and \(\varphi(J_{h}A)^{a}J_{g}=\varphi(A)^{a}\) for all \(A\in\mathfrak{h}\)._ Proof.: Let \(X,Y\in\mathfrak{g}\) and \(A,B\in\mathfrak{h}\). Then * \([\widetilde{J}X,\widetilde{J}Y]=[X,Y]\) if and only if \(J_{g}\) is abelian, * \([\widetilde{J}A,\widetilde{J}B]=[A,B]\) if and only if \(J_{h}\) is abelian, * \([\widetilde{J}A,\widetilde{J}Y]=[A,Y]\) if and only if \(\varphi(J_{h}A)J_{g}=\varphi(A)\) for all \(A\in\mathfrak{h}\). Decomposing the last equation into its symmetric and anti-symmetric part, and using that \(\varphi(A)^{s}=-J_{g}\varphi(J_{h}A)^{s}=\varphi(J_{h}A)^{s}J_{g}\), we get \(\varphi(J_{h}A)^{a}J_{g}=\varphi(A)^{a}\). In general, the representation \(\varphi\) may have a kernel. It turns out that if the kernel is \(J\)-invariant, it can be factored out before performing the semidirect product construction, giving rise to a semidirect product \(\mathfrak{g}\rtimes_{\varphi^{\prime}}\mathfrak{h}/\ker\varphi\) where the induced map \(\varphi^{\prime}\) is injective. More generally, we have the following: **Proposition 1.8**.: _In the hypotheses of Theorem 1.3, let \(\mathfrak{k}\) be a non-degenerate \(J_{h}\)-invariant ideal in \(\ker\varphi\). Then both \(\mathfrak{k}\) and \(\mathfrak{h}/\mathfrak{k}\) are pseudo-Kahler, the induced map \(\varphi^{\prime}\colon\mathfrak{h}/\mathfrak{k}\to\operatorname{Der}( \mathfrak{g})\) still satisfies the hypotheses of Theorem 1.3, and we have an exact sequence of pseudo-Kahler Lie algebras_ \[0\to\mathfrak{k}\to\mathfrak{g}\rtimes_{\varphi}\mathfrak{h}\to\mathfrak{g} \rtimes_{\varphi^{\prime}}\frac{\mathfrak{h}}{\mathfrak{k}}\to 0.\] Proof.: Denote by \(\nabla^{k}\) the Levi-Civita connection on \(\mathfrak{k}\). Then for \(A,B,C\in\mathfrak{k}\) we have \[h(\nabla^{k}_{A}B,C)=h(\nabla^{h}_{A}B,C).\] therefore, \(\nabla^{k}J_{h}=0\). Similarly, the projection map \(\pi\colon\mathfrak{h}/\mathfrak{k}\to\mathfrak{k}^{\perp}\) defines an isomorphism of vector spaces \(\mathfrak{k}^{\perp}\cong\mathfrak{h}/\mathfrak{k}\) and we have \[h(\nabla^{\pi}_{\pi(A)}\pi(B),\pi(C))=h(\nabla^{h}_{A}B,C),\quad A,B,C\in \mathfrak{k},\] where \(\nabla^{\pi}\) indicates the covariant derivative on \(\mathfrak{h}/\mathfrak{k}\). Therefore, \(\nabla^{\pi}J_{\mathfrak{k}/\mathfrak{h}}=0\). 
Now the fact that \(\varphi^{\prime}\) satisfies the hypotheses of Theorem 1.3 is straightforward. **Remark 1.9**.: At the Lie group level, we may interpret the long exact sequence as a principal bundle A special class of semidirect products appears in the study of Einstein Riemannian solvmanifolds under the name of _standard_ solvmanifolds (see [19]). This condition was generalized to the pseudo-Riemannian case in [11]: a standard decomposition of a Lie algebra \(\tilde{\mathfrak{g}}\) endowed with a metric \(\tilde{g}\) is an orthogonal decomposition \(\tilde{\mathfrak{g}}=\mathfrak{g}\rtimes\mathfrak{h}\), where \(\mathfrak{g}\) is a nilpotent ideal and \(\mathfrak{h}\) an abelian subalgebra. We can use Theorem 1.3 to construct standard pseudo-Kahler extensions of a fixed nilpotent Lie algebra (though nilpotency is not essential in what follows); motivated by Proposition 1.8, we illustrate this in the case where \(\varphi\) is injective. **Corollary 1.10**.: _If \((\mathfrak{g},g,J)\) is a pseudo-Kahler Lie algebra and \(\mathfrak{h}\subset\operatorname{Der}(\mathfrak{g})\) an abelian subalgebra closed under \(f\mapsto f^{*}\), write_ \[\mathfrak{h}=\mathfrak{h}_{0}\oplus\mathfrak{h}_{1},\] _where \(\mathfrak{h}_{0}\) consists of skew-symmetric derivations and \(\mathfrak{h}_{1}\) consists of symmetric derivations; assume furthermore that \(\mathfrak{h}_{0}\) has even dimension and for every \(A\in\mathfrak{h}_{1}\), \(J\circ A\in\mathfrak{h}_{1}\), and \(J\circ A=A\circ J\) for all \(A\) in \(\mathfrak{h}_{0}\). Then \(\mathfrak{g}\rtimes\mathfrak{h}\) has a pseudo-Kahler structure._ Proof.: Define an almost complex structure \(J_{h}\) on \(\mathfrak{h}_{1}\) by \(J_{h}(A)=J\circ A\), and extend it to \(\mathfrak{h}\) by choosing an arbitrary complex structure on \(\mathfrak{h}_{0}\). Since \(\mathfrak{h}\) is abelian, any compatible metric defines a pseudo-Kahler structure. Now denote the inclusion \(\mathfrak{h}\to\operatorname{Der}(\mathfrak{g})\) by \(\varphi\); then the conditions of Theorem 1.3 hold. **Remark 1.11**.: In the situation of Corollary 1.10, if we further assume that \(\operatorname{tr}X\) and \(\operatorname{tr}XY\) vanish for all \(X,Y\in\mathfrak{h}\), we see that the metric on \(\mathfrak{g}\rtimes\mathfrak{h}\) is Ricci-flat by [11, Proposition 4.1]. A natural question is when two extensions of a Lie algebra \(\mathfrak{g}\) obtained by the method of Theorem 1.3 should be regarded as different. We will make use of the following criterion: **Proposition 1.12** ([4, 12]).: _Let \(K\) be a subgroup of \(\mathrm{SO}(r,s)\) with Lie algebra \(\mathfrak{k}\) and \(\widetilde{\mathfrak{g}}\) a Lie algebra of the form \(\widetilde{\mathfrak{g}}=\mathfrak{g}\rtimes\mathfrak{h}\) endowed with a \(K\)-structure. Let \(\chi\colon\mathfrak{h}\to\mathrm{Der}(\mathfrak{g})\) be a Lie algebra homomorphism such that, extending \(\chi(A)\) to \(\widetilde{\mathfrak{g}}\) by declaring it to be zero on \(\mathfrak{h}\),_ \[\chi(A)-\mathrm{ad}(A)\in\mathfrak{k},\quad[\chi(A),\mathrm{ad}(B)]=0,\ A,B \in\mathfrak{h}. \tag{1}\] _Let \(\widetilde{\mathfrak{g}}^{*}\) be the Lie algebra \(\mathfrak{g}\rtimes_{\chi}\mathfrak{h}\). 
If \(\widetilde{G}\) and \(\widetilde{G}^{*}\) denote the connected, simply connected Lie groups with Lie algebras \(\widetilde{\mathfrak{g}}\) and \(\widetilde{\mathfrak{g}}^{*}\), with the corresponding left-invariant \(K\)-structures, there is an isometry from \(\widetilde{G}\) to \(\widetilde{G}^{*}\), whose differential at \(e\) is the identity of \(\mathfrak{g}\oplus\mathfrak{h}\) as a vector space, mapping the \(K\)-structure on \(\widetilde{G}\) into the \(K\)-structure on \(\widetilde{G}^{*}\)._ Beside identifying extensions related by symmetries as in Proposition 1.12, we also identify extensions related by isomorphisms of \(\mathfrak{g}\) and \(\mathfrak{h}\). More precisely: **Definition 1.13**.: Given pseudo-Kahler Lie algebras \((\mathfrak{g},g,J_{g})\), \((\mathfrak{g}^{\prime},g^{\prime},J^{\prime}_{g})\), \((\mathfrak{h},h,J_{h})\) and \((\mathfrak{h}^{\prime},h^{\prime},J^{\prime}_{h})\), and given \(\varphi\colon\mathfrak{h}\to\mathrm{Der}(\mathfrak{g})\), \(\varphi^{\prime}\colon\mathfrak{h}^{\prime}\to\mathrm{Der}(\mathfrak{g}^{ \prime})\) satisfying the conditions of Theorem 1.3, we will say that the extensions \(\mathfrak{g}\rtimes_{\varphi}\mathfrak{h}\) and \(\mathfrak{g}^{\prime}\rtimes_{\varphi^{\prime}}\mathfrak{h}^{\prime}\) are _isometric_ if there are Lie algebra isomorphisms \[f_{g}\colon\mathfrak{g}\to\mathfrak{g}^{\prime},\quad f_{h}\colon\mathfrak{h} \to\mathfrak{h}^{\prime},\] respecting the pseudo-Kahler structures and such that for all \(A\in\mathfrak{h}\), \(A^{\prime}\in\mathfrak{h}^{\prime}\) \[(f_{g}\varphi(A)f_{g}^{-1}-\varphi^{\prime}(f_{h}(A)))^{s}=0,\quad[f_{g} \varphi(A)f_{g}^{-1},\varphi^{\prime}(A^{\prime})]=0.\] We will say an extension is _trivial_ if it is isometric to one with \(\varphi=0\), i.e. a direct product. The definition of isometric extensions implies that \(\varphi(A)-\varphi^{\prime}(f(A))\) is skew-symmetric for all \(A\), and therefore, by the hypotheses on \(\varphi,\varphi^{\prime}\), an element of the Lie algebra \(\mathfrak{k}\cong\mathfrak{u}(p,q)\) of skew endomorphisms of \(\mathfrak{g}\) that commute with \(J_{g}\). It follows then from Proposition 1.12 that, at the Lie group level, two isometric extensions \((\hat{G},\tilde{g},\tilde{J})\), \((\tilde{G}^{\prime},\tilde{g}^{\prime},\tilde{J}^{\prime})\) are related by an isometry of pseudo-Riemannian manifolds that respects the complex structures. **Remark 1.14**.: An extension is trivial if and only if \(\varphi(A)\) is skew-symmetric for all \(A\). **Remark 1.15**.: We do not expect isometry of extensions as defined in Definition 1.13 to define an equivalence relation. For instance, consider an extension with \(\varphi=0\), i.e. a direct product \(\widetilde{\mathfrak{g}}=\mathfrak{g}\times\mathfrak{h}\). Then any two extensions \(\mathfrak{g}\rtimes_{\varphi_{1}}\mathfrak{h}\), \(\mathfrak{g}\rtimes_{\varphi_{2}}\mathfrak{h}\) with the \(\varphi_{i}\) skew-symmetric are isometric to \(\widetilde{\mathfrak{g}}=\mathfrak{g}\times\mathfrak{h}\), but there is no reason to expect that \(\varphi_{1}(A)\) commutes with all \(\varphi_{2}(B)\) for all \(A,B\in\mathfrak{h}\). This fact alone does not rule out that the extensions may be equivalent if one takes into account isomorphisms. Indeed, the corresponding Lie groups \(\widetilde{G}_{i}\) are isometric pseudo-Riemannian manifolds, both embedded in \(\mathrm{Aut}(\widetilde{G})\rtimes\widetilde{G}\) (see the proof of Proposition 1.12). 
However, in order to realize one from the other from the same construction, one would have to show that isometries of \(\widetilde{G}_{1}\) that fix the origin in \(\widetilde{G}_{2}\) are automorphisms of \(\widetilde{G}_{2}\). This is not necessarily true in our general context (though it does hold for Riemannian metrics on compact or nilpotent Lie groups, see [24, 29]). ## 2 Examples of dimension 6 and 8 In this section we describe several explicit examples of pseudo-Kahler Lie algebras constructed using Theorem 1.3. We are interested in particular in dimensions 6 and 8, although some of these examples can be also generalized to arbitrary dimensions. We will distinguish the cases in which the Lie algebra \(\mathfrak{h}\) is abelian and those where it is not; we will see that with this method one can produce non-flat examples starting with flat Lie algebras. ### Case of abelian \(\mathfrak{h}\) In the following examples, we take \(\mathfrak{h}\) to be abelian; in the first three, \(\mathfrak{g}\) is also abelian. **Example 2.1**.: Let \(\mathfrak{h}=\mathbb{R}^{2}=\langle a_{1},a_{2}\rangle\) be the abelian Kahler Lie algebra with the Euclidean metric and complex structure \(J_{h}a_{1}=a_{2}\), and let \(\mathfrak{g}=\mathbb{R}^{4}=\langle e_{1},e_{2},e_{3},e_{4}\rangle\) be a pseudo-Kahler abelian Lie algebra with complex structure and metric given by \[J_{g}e_{1}=e_{2},J_{g}e_{3}=e_{4}\quad\text{and}\quad g=e^{1}\otimes e^{1}+e^{2 }\otimes e^{2}-e^{3}\otimes e^{3}-e^{4}\otimes e^{4}.\] We define the representation \(\varphi:\mathfrak{h}\to\operatorname{Der}(\mathfrak{g})=\operatorname{Mat}_{4 }(\mathbb{R})\) by \[\varphi(a_{1})=\left(\begin{array}{rrrr}1&1&1&1\\ 1&-1&1&-1\\ -1&-1&-1&-1\\ -1&1&-1&1\end{array}\right)\quad\text{and}\quad\varphi(a_{2})=\left(\begin{array} []{rrrr}-1&1&-1&1\\ 1&1&1&1\\ 1&-1&1&-1\\ -1&-1&-1&-1\end{array}\right).\] The map \(\varphi\) defined in this way satisfies the conditions in Theorem 1.3 and \(\varphi(a_{1})\) and \(\varphi(a_{2})\) commute, hence \((\widetilde{\mathfrak{g}},\widetilde{g},\widetilde{J})\) is a \(6\)-dimensional pseudo-Kahler Lie algebra. Furthermore, the matrices \(\varphi(a_{1})\) and \(\varphi(a_{2})\) satisfy \(\varphi(a_{1})^{2}=\varphi(a_{2})^{2}=\varphi(a_{1})\varphi(a_{2})=0\). Then, by Remark 1.1, \(\widetilde{\mathfrak{g}}\) is \(2\)-step nilpotent. Moreover, a computation shows that the metric \(\widetilde{g}\) is non-flat. **Remark 2.2**.: Notice that every pseudo-Kahler metric on a nilpotent Lie algebra is Ricci-flat by [15, Lemma 6.4]. **Example 2.3**.: Let \(\mathfrak{h}=\mathfrak{g}=\mathbb{R}^{4}\) both with complex structure and metric given by \[J_{g}e_{1}=e_{3},J_{g}e_{2}=e_{4}\quad\text{and}\quad g=e^{1}\otimes e^{1}-e^ {2}\otimes e^{2}+e^{3}\otimes e^{3}-e^{4}\otimes e^{4}.\] We define the representation \(\varphi:\mathfrak{h}\to\operatorname{Der}(\mathfrak{g})\) by \[\varphi(a_{1})=\varphi(a_{2})=\left(\begin{array}{rrrr}1&2&-1&0\\ 0&1&0&1\\ 1&0&-1&0\\ 0&-1&2&-1\end{array}\right),\quad\varphi(a_{3})=\varphi(a_{4})=\left( \begin{array}{rrrr}0&1&2&1\\ 1&0&-1&0\\ 0&1&0&1\\ -1&2&1&0\end{array}\right).\] The map \(\varphi\) defined like this satisfies the conditions of Theorem 1.3, so we have a pseudo-Kahler structure on \(\widetilde{\mathfrak{g}}\). Moreover, since the derived algebra of \(\widetilde{\mathfrak{g}}\) is given by \([\widetilde{\mathfrak{g}},\widetilde{\mathfrak{g}}]=\langle e_{1},e_{2},e_{3},e_{4}\rangle\), \(\widetilde{\mathfrak{g}}\) is \(2\)-step solvable. 
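The hypotheses of Theorem 1.3 for the matrices of the example above are easy to verify mechanically; the short sketch below (ours, added only as a sanity check) computes the \(g\)-symmetric and \(g\)-antisymmetric parts and confirms conditions (a) and (b), together with the fact that the two matrices commute.

```python
import numpy as np

# Data of the example above: g = diag(1,-1,1,-1), J e1 = e3, J e2 = e4 on R^4,
# and J_h a1 = a3, J_h a2 = a4 on h = R^4.
eta = np.diag([1., -1., 1., -1.])
J = np.zeros((4, 4))
J[2, 0] = J[3, 1] = 1.
J[0, 2] = J[1, 3] = -1.

def adj(A):    # g-adjoint: A* = eta A^T eta
    return eta @ A.T @ eta

def sym(A):    # g-symmetric part
    return 0.5 * (A + adj(A))

def skew(A):   # g-antisymmetric part
    return 0.5 * (A - adj(A))

M = np.array([[1., 2., -1., 0.],
              [0., 1., 0., 1.],
              [1., 0., -1., 0.],
              [0., -1., 2., -1.]])   # phi(a1) = phi(a2)
N = np.array([[0., 1., 2., 1.],
              [1., 0., -1., 0.],
              [0., 1., 0., 1.],
              [-1., 2., 1., 0.]])    # phi(a3) = phi(a4)

# Condition (a): J phi(A)^s = phi(J_h A)^s, here J M^s = N^s and J N^s = -M^s.
print(np.allclose(J @ sym(M), sym(N)), np.allclose(J @ sym(N), -sym(M)))
# Condition (b): the antisymmetric parts commute with J.
print(np.allclose(J @ skew(M), skew(M) @ J), np.allclose(J @ skew(N), skew(N) @ J))
# phi is a homomorphism of the abelian h (g is abelian, so every matrix is
# automatically a derivation): the two images commute.
print(np.allclose(M @ N, N @ M))
```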
The metric \(\widetilde{g}\) in this example is also non-flat. **Example 2.4**.: Let \(\mathfrak{h}=\mathbb{R}^{2}\) be the abelian Kahler Lie algebra with metric \(h=-\mathbb{1}_{2}\) and complex structure \(J_{h}a_{1}=a_{2}\). Let \(\mathfrak{g}=\mathbb{R}^{6}\) be the abelian pseudo-Kahler Lie algebra with metric \[g=e^{1}\otimes e^{1}+e^{2}\otimes e^{2}+e^{3}\otimes e^{3}+e^{4}\otimes e^{4}- e^{5}\otimes e^{5}-e^{6}\otimes e^{6}\] and complex structure \(J_{g}e_{2j-1}=e_{2j}\) for \(j=1,2,3\). Consider the representation \(\varphi:\mathfrak{h}\to\operatorname{Der}(\mathfrak{g})\) given by \[\varphi(a_{1})=\left(\begin{array}{rrrrrr}1&0&0&0&0&1\\ 0&-1&0&0&1&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&-1&0&0&1&0\\ -1&0&0&0&0&-1\end{array}\right),\quad\varphi(a_{2})=\left(\begin{array}{rrrrrr} 0&1&0&0&-1&0\\ 1&0&0&0&0&1\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 1&0&0&0&0&1\\ 0&-1&0&0&1&0\end{array}\right).\] The map \(\varphi\) satisfies the conditions of Theorem 1.3. Moreover, \(\varphi(a_{1})^{s}=\varphi(a_{2})^{s}=\varphi(a_{1})\varphi(a_{2})=0\). Then, by Remark 1.1, \(\widetilde{\mathfrak{g}}\) is \(2\)-step nilpotent. This metric is also non-flat. Next we consider some cases where \(\mathfrak{g}\) is non-abelian, and hence the space of derivations is more restricted. **Example 2.5**.: Let \(\mathfrak{h}=\mathbb{R}^{2}=\langle a_{1},a_{2}\rangle\) be the abelian Kahler Lie algebra with the Euclidean metric and complex structure \(J_{h}a_{1}=a_{2}\). Let \((\mathfrak{g},J_{g},g)\) be the \(6\)-dimensional pseudo-Kahler Lie algebra taken from [28, Section 3.1], where \(\mathfrak{g}\) is a \(3\)-step nilpotent Lie algebra with non-zero brackets \[[e_{1},e_{2}]=e_{4},\quad[e_{2},e_{3}]=e_{6},\quad[e_{2},e_{4}]=e_{5}.\] The complex structure and the metric are given by \[J_{g}e_{1}=e_{2},J_{g}e_{3}=e_{4},J_{g}e_{5}=e_{6}\quad\text{and}\quad g=-e^{1 }\odot e^{5}-e^{2}\odot e^{6}-e^{3}\otimes e^{3}-e^{4}\otimes e^{4},\] where \(e^{i}\odot e^{j}:=e^{i}\otimes e^{j}+e^{j}\otimes e^{i}\). We define the representation \(\varphi:\mathfrak{h}\to\operatorname{Der}(\mathfrak{g})\) by \[\varphi(a_{1})e_{1}=\varphi(a_{2})e_{2}=xe_{5}+ye_{6},\quad\varphi(a_{1})e_{2 }=-\varphi(a_{2})e_{1}=ye_{5}-xe_{6},\] and \(\varphi(a_{1})e_{j}=\varphi(a_{2})e_{j}=0\) for \(j=3,4,5,6\). The map \(\varphi\) defined in this way satisfies the conditions in Theorem 1.3 and \(\varphi(a_{1})\) and \(\varphi(a_{2})\) commute, hence \((\widetilde{\mathfrak{g}},\widetilde{g},\widetilde{J})\) is a \(8\)-dimensional pseudo-Kahler Lie algebra. Furthermore, the matrices \(\varphi(a_{1})\) and \(\varphi(a_{2})\) satisfy \(\varphi(a_{1})^{2}=\varphi(a_{2})^{2}=\varphi(a_{1})\varphi(a_{2})=0\). Since \([\widetilde{\mathfrak{g}},\widetilde{\mathfrak{g}}]=\langle e_{4},e_{5},e_{6 }\rangle=[\mathfrak{g},\mathfrak{g}]\), \(\widetilde{\mathfrak{g}}\) is \(3\)-step nilpotent. This metric is non-flat if \(x\neq 0\) or \(y\neq 0\). **Example 2.6**.: Let \(\mathfrak{h}=\mathbb{R}^{2}=\langle a_{1},a_{2}\rangle\) be the abelian Kahler Lie algebra with the Euclidean metric and complex structure \(J_{h}a_{1}=a_{2}\). 
Let \((\mathfrak{g},J_{g},g)\) be the \(6\)-dimensional pseudo-Kahler Lie algebra taken from [28, Section 6.2], where \(\mathfrak{g}\) is a \(2\)-step nilpotent Lie algebra with non-zero brackets \[[e_{1},e_{3}]=-e_{5},\quad[e_{2},e_{3}]=e_{6}.\] The complex structure and the metric are given by \[J_{g}e_{1}=-e_{2},J_{g}e_{3}=e_{4},J_{g}e_{5}=e_{6}\quad\text{and}\quad g=e^{1 }\odot e^{5}-e^{2}\odot e^{6}+e^{3}\otimes e^{3}+e^{4}\otimes e^{4}.\] We define the representation \(\varphi:\mathfrak{h}\to\operatorname{Der}(\mathfrak{g})\) by \[\varphi(a_{1})e_{1}=-\varphi(a_{2})e_{2}=xe_{5}-ye_{6},\quad\varphi(a_{1})e_{ 2}=\varphi(a_{2})e_{1}=ye_{5}+xe_{6},\] and \(\varphi(a_{1})e_{j}=\varphi(a_{2})e_{j}=0\) for \(j=3,4,5,6\). The map \(\varphi\) defined in this way satisfies the conditions in Theorem 1.3 and \(\varphi(a_{1})\) and \(\varphi(a_{2})\) commute, hence \((\widetilde{\mathfrak{g}},\widetilde{g},\widetilde{J})\) is a \(8\)-dimensional pseudo-Kahler Lie algebra. Furthermore, the matrices \(\varphi(a_{1})\) and \(\varphi(a_{2})\) satisfy \(\varphi(a_{1})^{2}=\varphi(a_{2})^{2}=\varphi(a_{1})\varphi(a_{2})=0\). Since \([\widetilde{\mathfrak{g}},\widetilde{\mathfrak{g}}]=\langle e_{5},e_{6} \rangle=[\mathfrak{g},\mathfrak{g}]\), \(\widetilde{\mathfrak{g}}\) is \(2\)-step nilpotent. The metric \(\widetilde{g}\) is non-flat if and only if \(x^{2}+y^{2}\neq\frac{1}{2}\). The \(6\)-dimensional pseudo-Kahler metric \(g\) on \(\mathfrak{g}\) defined above is non-flat, so it is interesting to notice that we can choose some \(x\) and \(y\) on the representation \(\varphi\) such that, starting with a non-flat metric, the metric \(\widetilde{g}\) on \(\widetilde{\mathfrak{g}}\) is flat. Note that Example 2.5 and Example 2.6 satisfy \(\varphi(\mathfrak{h})\mathfrak{g}\subseteq[\mathfrak{g},\mathfrak{g}]\) and \([\widetilde{\mathfrak{g}},\widetilde{\mathfrak{g}}]=[\mathfrak{g},\mathfrak{g}]\). In the following example this is not the case. **Example 2.7**.: We consider the same setting as in Example 2.6, but we define the representation \(\varphi\) as follows \[\varphi(a_{1})=\left(\begin{array}{cccccc}0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ x_{1}&x_{2}&0&0&0&0\\ x_{3}&x_{4}&x_{2}&0&0&0\\ x_{4}&x_{3}&x_{1}&0&0&0\end{array}\right),\quad\varphi(a_{2})=\left( \begin{array}{cccccc}0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ x_{2}&-x_{1}&0&0&0\\ 0&-2\,x_{3}&-x_{1}&0&0&0\\ 0&0&x_{2}&0&0&0\end{array}\right).\] Note that in this case \([\mathfrak{g},\mathfrak{g}]\subseteq[\widetilde{\mathfrak{g}},\widetilde{ \mathfrak{g}}]=\langle e_{4},e_{5},e_{6}\rangle\). Nevertheless, \(\widetilde{\mathfrak{g}}\) is still \(2\)-step nilpotent. ### Case of non-abelian \(\mathfrak{h}\) First of all, note that \(\widetilde{\nabla}_{A}B=\nabla_{A}^{h}B\) for all \(A,B\in\mathfrak{h}\) by Lemma 1.2. This implies that \(\widetilde{R}(A,B)C=R^{h}(A,B)C\), where \(R^{h}\) is the curvature of the metric \(h\). Hence, if the metric \(h\) is non-flat, so is \(\widetilde{g}\). **Example 2.8**.: Consider \(\mathfrak{h}=\mathfrak{r}_{2}^{\prime}\) the 4-dimensional 2-step solvable Lie algebra with non-zero Lie brackets \[[a_{1},a_{3}]=a_{3},\quad[a_{1},a_{4}]=a_{4},\quad[a_{2},a_{3}]=a_{4},\quad[a_ {2},a_{4}]=-a_{3}.\] This is the real Lie algebra underlying on the complex Lie algebra \(\mathfrak{aff}(\mathbb{C})\). It admits a non-flat pseudo-Kahler structure with complex structure \(J_{h}a_{1}=-a_{2},J_{h}a_{3}=a_{4}\) (see [25, Theorem 4.6]). 
Consider \(\mathfrak{g}=\mathfrak{r}\mathfrak{h}_{3}\), with the pseudo-Kahler structure \[J_{g}e_{1}=e_{2},J_{g}e_{3}=e_{4}\quad\text{and}\quad g=e^{1}\odot e^{3}+e^{2 }\odot e^{4}-e^{1}\otimes e^{1}-e^{2}\otimes e^{2},\] also from [25]. Define a representation \(\varphi:\mathfrak{h}\to\operatorname{Der}(\mathfrak{g})\) by \[\varphi(a_{1}) =\left(\begin{array}{rrrr}0&0&0&0\\ 0&0&0&0\\ -\frac{1}{2}\,y_{1}-\frac{1}{2}\,y_{2}&x_{1}&0&0\\ x_{2}&\frac{1}{2}\,y_{1}+\frac{1}{2}\,y_{2}&0&0\end{array}\right),\] \[\varphi(a_{2}) =\left(\begin{array}{rrrr}0&0&0&0\\ 0&0&0&0\\ \frac{1}{2}\,x_{1}+\frac{1}{2}\,x_{2}&y_{1}&0&0\\ y_{2}&-\frac{1}{2}\,x_{1}-\frac{1}{2}\,x_{2}&0&0\end{array}\right)\] and \(\varphi(a_{3})=\varphi(a_{4})=0\). This map satisfies the conditions of Theorem 1.3. The derived algebra is given by \([\widetilde{\mathfrak{g}},\widetilde{\mathfrak{g}}]=\langle e_{3},e_{4},a_{3 },a_{4}\rangle\), so \(\widetilde{\mathfrak{g}}\) is 2-step solvable. Moreover, since \(h\) is non-flat, so is \(\widetilde{g}\). **Example 2.9**.: Let \(\mathfrak{h}=\mathfrak{r}\mathfrak{r}_{3,0}\) be the 4-dimensional Lie algebra with non-zero Lie bracket \([a_{1},a_{2}]=a_{2}\) and complex structure \(J_{h}a_{1}=a_{2}\), \(J_{h}a_{3}=a_{4}\) (see [25]). Let \(\mathfrak{g}=\mathbb{R}^{2}\) be the abelian Kahler Lie algebra with the Euclidean metric and complex structure \(J_{g}e_{1}=e_{2}\). We define the representation \(\varphi:\mathfrak{h}\to\operatorname{Der}(\mathfrak{g})=\operatorname{Mat}_{2}( \mathbb{R})\) by \[\varphi(a_{1})=\left(\begin{array}{rr}0&\frac{1}{2}\\ \frac{1}{2}&0\end{array}\right),\quad\varphi(a_{2})=\left(\begin{array}{rr}- \frac{1}{2}&\frac{1}{2}\\ -\frac{1}{2}&\frac{1}{2}\end{array}\right),\quad\varphi(a_{3})=\varphi(a_{4})=0.\] The Lie algebra \(\mathfrak{h}\) is 2-step solvable and non-unimodular. The derived series of the Lie algebra \(\widetilde{\mathfrak{g}}\) is given by \[\widetilde{\mathfrak{g}}^{(1)}=[\widetilde{\mathfrak{g}},\widetilde{\mathfrak{ g}}]=\langle e_{1},e_{2},a_{2}\rangle,\quad\widetilde{\mathfrak{g}}^{(2)}= \langle e_{1}+e_{2}\rangle,\quad\widetilde{\mathfrak{g}}^{(3)}=0.\] Hence the Lie algebra \(\widetilde{\mathfrak{g}}\) is 3-step solvable and non-unimodular. Since \(h\) is non-flat, so is \(\widetilde{g}\). Furthermore, one can check that \(\widetilde{g}\) is not Ricci-flat. **Example 2.10**.: Let \(\mathfrak{h}=\mathfrak{r}\mathfrak{r}_{3,0}\) be as above and let \(\mathfrak{g}=\mathbb{R}^{4}\) be the abelian Kahler Lie algebra with the Euclidean metric and complex structure \(J_{g}e_{1}=e_{2},J_{g}e_{3}=e_{4}\). We define the representation \(\varphi:\mathfrak{h}\to\operatorname{Der}(\mathfrak{g})=\operatorname{Mat}_{4 }(\mathbb{R})\) by \[\varphi(a_{1})=\left(\begin{array}{rrrr}0&\frac{1}{2}&0&0\\ \frac{1}{2}&0&0&0\\ 0&0&0&\frac{1}{2}\\ 0&0&\frac{1}{2}&0\end{array}\right),\quad\varphi(a_{2})=\left(\begin{array}[] {rrrr}-\frac{1}{2}&\frac{1}{2}&0&0\\ -\frac{1}{2}&\frac{1}{2}&0&0\\ 0&0&-\frac{1}{2}&\frac{1}{2}\\ 0&0&-\frac{1}{2}&\frac{1}{2}\end{array}\right),\] \[\varphi(a_{3})=\left(\begin{array}{rrrr}0&0&x&0\\ 0&0&0&x\\ -x&0&0&0\\ 0&-x&0&0\end{array}\right),\quad\varphi(a_{4})=\left(\begin{array}{rrrr}0&0&y &0\\ 0&0&0&y\\ -y&0&0&0\\ 0&-y&0&0\end{array}\right).\] The Lie algebra \(\mathfrak{h}\) is 2-step solvable and non-unimodular. Hence we obtain that \(\widetilde{\mathfrak{g}}\) is 3-step solvable and non-unimodular. 
Moreover, it has 1-dimensional center given by \(\mathfrak{z}(\widetilde{\mathfrak{g}})=\langle ya_{3}-xa_{4}\rangle\), thus the center is not \(\widetilde{J}\)-invariant. As in the above example, the metric \(\widetilde{g}\) is non-flat. ## 3 Classification on 6-dimensional semidirect products \(\mathfrak{g}\rtimes\mathbb{R}^{2}\) In this section we consider the special case where \(\mathfrak{g}\) has dimension \(4\) and \(\mathfrak{h}\) is \(2\)-dimensional and abelian. We exploit the classification of pseudo-Kahler Lie algebras of dimension \(4\) of [25] and classify all the cases where the hypotheses of Theorem 1.3 are satisfied up to isometry, as defined in Definition 1.13. Beside isometry, we have an obvious symmetry, which is an overall change of sign of the metric on the extension. In light of this symmetry, it makes sense to fix the signature on the \(2\)-dimensional factor to be positive definite. Thus, throughout the section, we will fix \(\mathfrak{h}=\mathbb{R}^{2}\), with a basis \(\{a_{1},a_{2}\}\) such that \[J_{h}a_{1}=a_{2},\quad h=a^{1}\otimes a^{1}+a^{2}\otimes a^{2}.\] Since \(J_{h}\) is fixed, we will drop the subscript \(g\) in \(J_{g}\) and denote the complex structure on \(\mathfrak{g}\) by \(J\). **Lemma 3.1**.: _Let \((\mathfrak{g},g,J)\) be a pseudo-Kahler Lie algebra. Then its extensions by an abelian \(2\)-dimensional \((\mathfrak{h},h,J_{h})\) up to isometry are in one-to-one-correspondence with triples \((A,B_{1},B_{2})\) of linear maps \(\mathfrak{g}\to\mathfrak{g}\) such that_ \[A=A^{*},AJ=-JA,B_{i}=-B_{i}^{*},B_{i}J=JB_{i},\] \[A+B_{1},JA+B_{2}\in\operatorname{Der}(\mathfrak{g})\] _and_ \[[A,B_{2}]=J[A,B_{1}], \tag{2}\] \[[B_{1},B_{2}]=2JA^{2}. \tag{3}\] Proof.: Suppose \(\varphi\) defines an extension; by construction, we can write \[\varphi(a_{1})=A+B_{1},\quad\varphi(a_{2})=JA+B_{2},\] where \(A\) is symmetric relative to \(g\) and \(B_{1}\), \(B_{2}\) are skew-symmetric and commute with \(J_{g}\), and the notation \(J_{g}A\) represents composition, i.e. \((J_{g}A)(X)=J_{g}(A(X)\). In addition, we have that \(JA\) is symmetric, and therefore \(JA=A^{*}J^{*}=-AJ\). Since \(\mathfrak{h}\) is abelian, imposing that \(\varphi\) is a homomorphism implies that \(\varphi(a_{1})\) and \(\varphi(a_{2})\) commute, i.e. \[0=[A+B_{1},JA+B_{2}]=[A,JA]+[A,B_{2}]-[JA,B_{1}]+[B_{1},B_{2}];\] taking the skew-symmetric part, we find \[0=[A,B_{2}]-[JA,B_{1}]=[A,B_{2}]-JAB_{1}+B_{1}JA=[A,B_{2}]-JAB_{1}+JB_{1}A=[A, B_{2}]-J[A,B_{1}].\] Taking the symmetric part and recalling that \(JA=-AJ\), we find \[0=[A,JA]+[B_{1},B_{2}]=AJA-JAA-[B_{1},B_{2}]=-2JA^{2}+[B_{1},B_{2}].\qed\] We will study the abelian case first. We will distinguish two cases, according to whether \(A\) is semisimple; we will also assume that \(A\) is nonzero, since otherwise the extension is trivial. **Lemma 3.2**.: _Let \((g,J,\omega)\) be a pseudo-Kahler structure on \(\mathbb{R}^{4}\). Let \(A\), \(B_{1}\), \(B_{2}\) be as in Lemma 3.1. Assume that \(A\) is nonzero and semisimple. 
Then there is a basis \(e_{1},\dots,e_{4}\) such that_ \[g=e^{1}\otimes e^{1}-e^{2}\otimes e^{2}+e^{3}\otimes e^{3}-e^{4}\otimes e^{4 },\quad Je_{1}=e_{3},Je_{2}=e_{4},\] \[A=\begin{pmatrix}a&a&0&0\\ -a&a&0&0\\ 0&0&-a&-a\\ 0&0&a&-a\end{pmatrix},\quad B_{1}=\begin{pmatrix}0&k_{1}&-k_{2}&0\\ k_{1}&0&0&k_{2}\\ k_{2}&0&0&k_{1}\\ 0&-k_{2}&k_{1}&0\end{pmatrix},\quad B_{2}=\begin{pmatrix}0&k_{2}&k_{1}&0\\ k_{2}&0&0&-k_{1}\\ -k_{1}&0&0&k_{2}\\ 0&k_{1}&k_{2}&0\end{pmatrix},\] _where \(k_{1}^{2}+k_{2}^{2}=2a^{2}\)._ Proof.: We first show that there is an \(A\)-invariant, orthogonal decomposition \[\mathbb{R}^{4}=V_{+}\oplus V_{-},\quad J(V_{+})=V_{-},\quad\dim V_{\pm}=2. \tag{4}\] Indeed, denote by \(V_{\lambda}\) the eigenspaces of \(A\) over \(\mathbb{R}\), and by \(W_{\mu}\) its eigenspaces over \(\mathbb{C}\). Since \(A\) is symmetric, we have an orthogonal decomposition \[\mathbb{R}^{4}=\bigoplus_{\lambda}V_{\lambda}\oplus\bigoplus_{\mu}[W_{\mu}],\] where \(\lambda\) ranges among real eigenvalues and \(\mu\) among nonreal eigenvalues, the notation \([\![W_{\mu}]\!]\) representing the space of real vectors in \(W_{\mu}\oplus W_{\overline{\mu}}\). Since \(J\) anticommutes with \(A\), it maps each \(V_{\lambda}\) to \(V_{-\lambda}\) and \([\![W_{\mu}]\!]\) to \([\![W_{-\mu}]\!]\). If all eigenvalues of \(A\) are real, we can obtain (4) by fixing an eigenvector \(v\) and an eigenvector \(w\) not contained in \(\mathrm{Span}\{v,Jv\}\), then setting \(V_{+}=\mathrm{Span}\{v,w\}\). If \(A\) has a purely imaginary eigenvalue \(\mu\neq 0\), then \(J\) maps \([\![W_{\mu}]\!]\) to itself; since the spectral theorem forces the metric restricted to \([\![W_{\mu}]\!]\) to be indefinite, this shows that the purely imaginary eigenvalue \(\mu\) has multiplicity two. Similarly, the scalar product \[(v,w)\mapsto\langle v,JAw\rangle\] cannot be definite, for otherwise the symmetric operator \(J\) would have real eigenvalues. Thus, there exists a nonzero \(v\) such that \(v\) is orthogonal to \(JAv\). Then we obtain the splitting by setting \[V_{+}=\mathrm{Span}\{v,Av\},\quad V_{-}=\mathrm{Span}\{Jv,JAv\}.\] On the other hand, if an eigenvalue \(\mu\) is neither real nor imaginary, then by a dimension count \(\mu\) has multiplicity one and \(\mathbb{R}^{4}=[\![W_{-\mu}]\!]\oplus[\![W_{\mu}]\!]\). We can therefore assume that \(A\) takes the block form \[A=\begin{pmatrix}D&0\\ 0&-D\end{pmatrix}\] relative to a basis such that \[Je_{1}=e_{3},Je_{2}=e_{4},\quad g=e^{1}\otimes e^{1}+e^{3}\otimes e^{3}\pm(e^ {2}\otimes e^{2}+e^{4}\otimes e^{4}). \tag{5}\] In the basis satisfying (5), write \[B_{i}=\begin{pmatrix}H_{i}&-U_{i}\\ U_{i}&H_{i}\end{pmatrix},\quad H_{i}^{*}=-H_{i},U_{i}^{*}=U_{i},\] where the \(*\) is the metric transpose taken relative to the metric \[\begin{pmatrix}1&0\\ 0&\pm 1\end{pmatrix}.\] Then \[[A,B_{i}]=\begin{pmatrix}[D,H_{i}]&-DU_{i}-U_{i}D\\ -DU_{i}-U_{i}D&-[D,H_{i}]\end{pmatrix}.\] Now (2) implies \[0=[A,B_{2}]-J[A,B_{1}]=\begin{pmatrix}[D,H_{2}]&-DU_{2}-U_{2}D\\ -DU_{2}-U_{2}D&-[D,H_{2}]\end{pmatrix}-\begin{pmatrix}DU_{1}+U_{1}D&[D,H_{1}] \\ [D,H_{1}]&-DU_{1}-U_{1}D\end{pmatrix}.\] Therefore \[[D,H_{2}]=DU_{1}+U_{1}D,\quad[D,H_{1}]=-DU_{2}-U_{2}D. \tag{6}\] If we set \[K_{1}=H_{1}-U_{2},K_{2}=H_{2}+U_{1},\] we see that \[K_{2}D=-DK_{2}^{*},K_{1}D=-DK_{1}^{*},\] i.e. the \(K_{i}D\) are skew-symmetric. 
Now (3) implies that \[0=-2JA^{2}+[B_{1},B_{2}]=\begin{pmatrix}0&2D^{2}\\ -2D^{2}&0\end{pmatrix}+\begin{pmatrix}[H_{1},H_{2}]-[U_{1},U_{2}]&-[H_{1},U_{2 }]+[H_{2},U_{1}]\\ -[H_{2},U_{1}]+[H_{1},U_{2}]&-[U_{1},U_{2}]+[H_{1},H_{2}]\end{pmatrix}.\] Thus \[[H_{1},H_{2}]=[U_{1},U_{2}],\quad[H_{1},U_{2}]-[H_{2},U_{1}]=2D^{2},\] which implies \[[K_{1},K_{1}^{*}]+[K_{2},K_{2}^{*}]=-4D^{2}. \tag{7}\] Thus, \(D^{2}\) has trace zero; as \(D\) is semisimple, this implies that its eigenvalues are purely imaginary, and \[D=\begin{pmatrix}a&a\\ -a&a\end{pmatrix},\quad a\in\mathbb{R}.\] Since \(D\) is symmetric, the spectral theorem implies that the metric has neutral signature. Then we find that \(K_{i}D\) being skew-symmetric implies \[K_{i}=\begin{pmatrix}k_{i}&k_{i}\\ k_{i}&-k_{i}\end{pmatrix}.\] i.e. \[U_{1}=\begin{pmatrix}k_{2}&0\\ 0&-k_{2}\end{pmatrix},\quad U_{2}=\begin{pmatrix}-k_{1}&0\\ 0&k_{1}\end{pmatrix},\quad H_{1}=\begin{pmatrix}0&k_{1}\\ k_{1}&0\end{pmatrix},\quad H_{2}=\begin{pmatrix}0&k_{2}\\ k_{2}&0\end{pmatrix}.\] By (7), we see that \(k_{1}^{2}+k_{2}^{2}=2a^{2}\). Observe that \(H_{1},H_{2}\) are linearly dependent, as are \(U_{1},U_{2}\), so all conditions are satisfied. **Proposition 3.3**.: _Any extension of a \(4\)-dimensional definite Kahler Lie algebra is trivial._ Proof.: It suffices to show that any extension has \(A=\varphi(a_{1})^{s}\) equal to zero. Suppose for a contradiction that \(A\) is nonzero; by the spectral theorem, it is diagonalizable. The same choices of \(A,B_{1},B_{2}\) determine an extension of the abelian Lie algebra \(\mathfrak{g}=\mathbb{R}^{4}\); the hypotheses of Lemma 3.2 hold, so we obtain a contradiction because the metric is definite. **Lemma 3.4**.: _Let \((g,J,\omega)\) be a pseudo-Kahler structure on \(\mathbb{R}^{4}\). Let \(A\), \(B_{1}\), \(B_{2}\) be as in Lemma 3.1. Assume that \(A\) is not semisimple. Then there is a basis \(e_{1},\dots,e_{4}\) such that_ \[g=e^{1}\otimes e^{1}+e^{2}\otimes e^{2}-e^{3}\otimes e^{3}-e^{4} \otimes e^{4},\quad Je_{1}=e_{2},\quad Je_{3}=e_{4},\] \[A=\begin{pmatrix}a&0&-a&0\\ 0&-a&0&a\\ a&0&-a&0\\ 0&-a&0&a\end{pmatrix},\quad B_{1}=\begin{pmatrix}0&\nu_{2}-\mu_{1}&\mu_{2}& \mu_{1}\\ -\nu_{2}+\mu_{1}&0&-\mu_{1}&\mu_{2}\\ \mu_{2}&-\mu_{1}&0&\nu_{2}+\mu_{1}\\ \mu_{1}&\mu_{2}&-\nu_{2}-\mu_{1}&0\end{pmatrix},\] \[B_{2}=\begin{pmatrix}0&-\mu_{2}-\nu_{1}&\nu_{2}&\nu_{1}\\ \mu_{2}+\nu_{1}&0&-\nu_{1}&\nu_{2}\\ \nu_{2}&-\nu_{1}&0&-\mu_{2}+\nu_{1}\\ \nu_{1}&\nu_{2}&\mu_{2}-\nu_{1}&0\end{pmatrix},\] _where \(a\neq 0\) and \((\nu_{1},\nu_{2})\) and \((\mu_{1},\mu_{2})\) are linearly dependent._ Proof.: Since \(A\) is not semisimple, there is an eigenvalue \(\lambda\) whose generalized eigenspace is not spanned by eigenvectors; thus, it has dimension at least two. Because \(A\) anticommutes with \(J\), \(-\lambda\) and \(\overline{\lambda}\) also have generalized eigenspaces not spanned by eigenvectors, and for dimensional reasons \(\lambda\) is either real or purely imaginary. We claim that \(\lambda\) is necessarily zero. Indeed, suppose that \(\lambda=i\); then \(A\) leaves invariant a \(2\)-dimensional space \(\llbracket W_{i}\rrbracket\), on which it acts as a complex structure. Since \(J\) anticommutes with \(A\), it maps \(\llbracket W_{i}\rrbracket\) to another \(A\)-invariant space, which is necessarily \(\llbracket W_{i}\rrbracket\) itself. This would give two anticommuting complex structures on the \(2\)-dimensional space \(\llbracket W_{i}\rrbracket\), which is impossible. 
This argument also shows that \(\lambda\) cannot be a nonzero multiple of \(i\). Suppose now that \(\lambda\) is a nonzero real number. Then \(-\lambda\) is also an eigenvalue, and we obtain a decomposition into generalized eigenspaces \[\mathbb{R}^{4}=V_{\lambda}\oplus V_{-\lambda}.\] If we consider the symmetric nilpotent endomorphism \(A-\lambda I\) restricted to \(V_{\lambda}\), we see that its kernel must be orthogonal to the image, which is again the kernel; this implies that the kernel is isotropic. We can therefore assume that the restrictions of \(A\) and \(g\) take the form \[A|_{V_{\lambda}}=\begin{pmatrix}\lambda&1\\ 0&\lambda\end{pmatrix},\quad g|_{V_{\lambda}}=\begin{pmatrix}0&1\\ 1&0\end{pmatrix}.\] Using the fact that \(J\) anticommutes with \(A\), we obtain \[A=\begin{pmatrix}\lambda&1&0&0\\ 0&\lambda&0&0\\ 0&0&-\lambda&-1\\ 0&0&0&-\lambda\end{pmatrix},\quad g=e^{1}\odot e^{2}+e^{3}\odot e^{4},\quad Je _{1}=e_{3},Je_{2}=e_{4}.\] Using the fact that the \(B_{i}\) are skew-symmetric and commute with \(J\), one can write \[B_{i}=\begin{pmatrix}a_{i}&0&b_{i}&c_{i}\\ 0&-a_{i}&d_{i}&b_{i}\\ -b_{i}&-c_{i}&a_{i}&0\\ -d_{i}&-b_{i}&0&-a_{i}\end{pmatrix}.\] One then sees that (3) has no solution. Therefore, the only possibility is that \(A\) only has zero as an eigenvalue. We are assuming \(A\neq 0\); since \(A\) anticommutes with \(J\), the kernel is \(2\)-dimensional and coincides with the image. Thus, \(\ker A\) is isotropic and \(J\)-invariant. If we assume \(\{e_{i}\}\) is an orthonormal basis, with \[Je_{1}=e_{2},\quad Je_{3}=e_{4},\quad g=e^{1}\otimes e^{1}+e^{2}\otimes e^{2} -e^{3}\otimes e^{3}-e^{4}\otimes e^{4},\] we can replace \(A\) with any matrix in the same orbit for the action of the group \(\mathrm{U}(1,1)\) that preserved \(g\) and \(J\). Any lightlike vector takes the form \[v+w,\quad v\in\mathrm{Span}\{e_{1},e_{2}\},w\in\mathrm{Span}\{e_{3},e_{4}\},\] where \(v,w\) have the same Euclidean norm. Up to \(\mathrm{U}(1)\times\mathrm{U}(1)\subset\mathrm{U}(1,1)\), we can assume \(v+w\) is a multiple of \(e_{1}+e_{3}\). This shows that we can assume that \(\ker A\) is spanned by \(e_{1}+e_{3},e_{2}+e_{4}\). Now consider the map \[\mathrm{Span}\{e_{1},e_{2}\}\xrightarrow{A}\mathrm{Span}\{e_{1}+e_{3},e_{2}+ e_{4}\}.\] The matrix of this map relative to the natural bases is symmetric, so it can be diagonalized. This means that we can use the action of the diagonal \(\mathrm{U}(1)\) in \(\mathrm{U}(1)\times\mathrm{U}(1)\) to assume that \(A(e_{1})\) is a multiple of \(e_{1}+e_{3}\); up to scaling, this fully determines \(A\) by \(J\)-antiinvariance. To simplify the proof, we will rescale \(A\) to obtain \[A=\begin{pmatrix}1&0&-1&0\\ 0&-1&0&1\\ 1&0&-1&0\\ 0&-1&0&1\end{pmatrix}.\] The generic element of \(\mathfrak{u}(1,1)\) takes the form \[\begin{pmatrix}0&-h-\mu_{1}&\mu_{2}&\mu_{3}\\ h+\mu_{1}&0&-\mu_{3}&\mu_{2}\\ \mu_{2}&-\mu_{3}&0&-h+\mu_{1}\\ \mu_{3}&\mu_{2}&h-\mu_{1}&0\end{pmatrix}.\] The center is spanned by the matrix corresponding to the two-form \(e^{12}+e^{34}\), and the complement is \(\mathfrak{su}(1,1)\cong\mathfrak{sl}(2,\mathbb{R})\), which has rank one. Since \(A^{2}=0\), (3) requires that \(B_{1}\) and \(B_{2}\) commute. Thus, their components in \(\mathfrak{su}(1,1)\) are linearly dependent. We obtain \[B_{1}=h(e^{12}+e^{34})+x\beta,\quad B_{2}=k(e^{12}+e^{34})+y\beta,\] where \(\beta\) is some nonzero element in \(\mathfrak{su}(1,1)\). 
Then (2) gives \[[A,k(e^{12}+e^{34})+y\beta]=J[A,h(e^{12}+e^{34})+x\beta].\] Solving this equation in \(h,k,x,y,\beta\) shows that \(B_{1}\) and \(B_{2}\) take the form in the statement. We are now in a position to classify extensions \(\mathfrak{g}\rtimes_{\varphi}\mathfrak{h}\) with \(\mathfrak{g}\) and \(\mathfrak{h}\) abelian of dimensions respectively four and two. Having established that definite metrics on \(\mathfrak{g}\) only yield trivial extensions, we may assume that \(\mathfrak{g}=\mathbb{R}^{4}\), with the neutral pseudo-Kahler structure given by \[g=e^{1}\otimes e^{1}-e^{2}\otimes e^{2}+e^{3}\otimes e^{3}-e^{4}\otimes e^{4 },\quad Je_{1}=e_{3},Je_{2}=e_{4}. \tag{8}\] **Proposition 3.5**.: _Up to isometry, the only nontrivial extensions of the form \(\mathbb{R}^{4}\rtimes\mathbb{R}^{2}\) are given by taking the pseudo-Kahler structure (8) and either_ \[\varphi(a_{1})=a\begin{pmatrix}1&2&-1&0\\ 0&1&0&1\\ 1&0&-1&0\\ 0&-1&2&-1\end{pmatrix},\quad\varphi(a_{2})=a\begin{pmatrix}0&1&2&1\\ 1&0&-1&0\\ 0&1&0&1\\ -1&2&1&0\end{pmatrix},\] _where \(a>0\), or_ \[\varphi(a_{1})=\begin{pmatrix}a&-a+c&-b&b\\ a+c&-a&-b&b\\ b&-b&-a&a+c\\ b&-b&-a+c&a\end{pmatrix},\quad\varphi(a_{2})=\begin{pmatrix}0&0&a-c&-a\\ 0&0&a&-a-c\\ a+c&-a&0&0\\ a&-a+c&0&0\end{pmatrix},\] _where \(a>0\) and either \(b\neq 0\) or \(b=0=c\)._ Proof.: Write \(\varphi(a_{1})=A+B_{1}\), \(\varphi(a_{2})=JA+B_{2}\) as in Lemma 3.1. Consider the one-parameter group of automorphisms \(\{\exp\theta J\}\), which preserves the pseudo-Kahler structure of \(\mathbb{R}^{4}\). We see that \[\operatorname{Ad}(\exp\theta J)A=\cos 2\theta A+\sin 2\theta JA. \tag{9}\] In particular this shows that every unit element in \(\mathbb{R}^{2}\) is mapped to a matrix conjugated to \(A\). Thus, \(A\) is semisimple if and only if \(\varphi(a)^{s}\) is semisimple for all \(a\). This condition is invariant under isometry, which amounts to acting by automorphisms and modifying the skew-symmetric part of \(\varphi\) Suppose first that \(A\) is semisimple, so that \(A,B_{1},B_{2}\) take the form of Lemma 3.2. Since \(B_{1}\) and \(B_{2}\) are invariant under \(\{\exp\theta J\}\) and in light of (9), the automorphisms \(f_{h}=\exp(-\theta/2J)\) and \(f_{g}=\exp\theta J_{h}\) yield an isometric extension with \[\varphi(a_{1})=A+(\cos\theta B_{1}+\sin\theta B_{2}),\quad\varphi(a_{2})=JA+( -\sin\theta B_{1}+\cos\theta B_{2}).\] In terms of the parameters \((a,k_{1},k_{2})\), isometries give a circle symmetry in the \((k_{1},k_{2})\)-plane. Since \(2a^{2}=k_{1}^{2}+k_{2}^{2}\), we can assume \(k_{1}=k_{2}=a\). In addition, we may reverse the sign of \(a\) by reflecting \(\mathbb{R}^{4}\) around the plane \(\operatorname{Span}\{e_{2},e_{4}\}\). Different choices of \(a>0\) yield nonisometric extensions, because \(\det A=(2a^{2})^{2}\) is an invariant. Now consider the case in which \(A\) is not semisimple, and let \(A,B_{1},B_{2}\) be as in Lemma 3.4. Arguing as above, we see that we can rotate in the \(B_{1},B_{2}\)-plane, which means that we can fix any angle \(\theta\) and obtain new parameters \[\begin{pmatrix}\mu_{1}^{\prime}&\nu_{1}^{\prime}\\ \mu_{2}^{\prime}&\nu_{2}^{\prime}\end{pmatrix}=\begin{pmatrix}\mu_{1}&\nu_{1} \\ \mu_{2}&\nu_{2}\end{pmatrix}\begin{pmatrix}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{pmatrix}.\] Since \(\begin{pmatrix}\mu_{1}&\nu_{1}\\ \mu_{2}&\nu_{2}\end{pmatrix}\) is not invertible, we may assume that \(\nu_{1}=0=\nu_{2}\). 
Changing the basis so that the metric takes the form (8), we obtain \[A=\begin{pmatrix}a&-a&0&0\\ a&-a&0&0\\ 0&0&-a&a\\ 0&0&-a&a\end{pmatrix},\quad B_{1}=\begin{pmatrix}0&c&-b&b\\ c&0&-b&b\\ b&-b&0&c\\ b&-b&c&0\end{pmatrix},\quad B_{2}=\begin{pmatrix}0&0&-c&0\\ 0&0&0&-c\\ c&0&0&0\\ 0&c&0&0\end{pmatrix}.\] Now suppose that there are two isometric extensions of this form, \(A,B_{1},B_{2}\), \(A^{\prime},B_{1}^{\prime},B_{2}^{\prime}\). We can assume \(A=A^{\prime}\). By Definition 1.13, \(B_{1}^{\prime}-B_{1}\) and \(B_{2}^{\prime}-B_{2}\) commute with both \(A+B_{1}\) and \(JA+B_{2}\). Separating the symmetric and skew-symmetric part, this boils down to \(B_{1}^{\prime}-B_{1}\), \(B_{2}^{\prime}-B_{2}\) commuting with each of \(A\), \(B_{1}\) and \(B_{2}\). This happens if and only if \(c=0=c^{\prime}\), with \(b,b^{\prime}\) arbitrary; this shows that \(b\) can be set to zero if \(c=0\), proving that \(\varphi\) has the form in the statement up to isometry. Outside of the abelian case, it turns out that there are only two pseudo-Kahler Lie algebras admitting a nontrivial extension. They are the Lie algebra \(\mathfrak{g}=\mathfrak{r}\mathfrak{h}_{3}\), with nonzero Lie brackets \[[e_{1},e_{2}]=e_{3},\] and the pseudo-Kahler structure given by \[Je_{1}=e_{2},Je_{3}=e_{4},\quad\omega=e^{14}-e^{23},\quad g=e^{1}\odot e^{3}+ e^{2}\odot e^{4}, \tag{10}\] and \(\mathfrak{g}=\mathfrak{r}_{2}^{\prime}\), whose nonzero Lie brackets are \[[e_{1},e_{3}]=e_{3},\quad[e_{1},e_{4}]=e_{4},\quad[e_{2},e_{3}]=e_{4},\quad[e_ {2},e_{4}]=-e_{3},\] and the pseudo-Kahler structure given by \[J=\left(\begin{array}{rrrr}0&1&0&0\\ -1&0&0&0\\ 0&0&0&-1\\ 0&0&1&0\end{array}\right)\quad\text{and}\quad g=\left(\begin{array}{rrrr}a_{ 12}&0&-a_{14}&a_{13}\\ 0&a_{12}&a_{13}&a_{14}\\ -a_{14}&a_{13}&0&0\\ a_{13}&a_{14}&0&0\end{array}\right),\quad a_{13}^{2}+a_{14}^{2}\neq 0. \tag{11}\] **Remark 3.6**.: In [25], \(\mathfrak{r}\mathfrak{h}_{3}\) appears with a two-parameter family of pseudo-Kahler structures. We are indebted with Federico A. Rossi for pointing out to us that they are related to each other by isometric isomorphisms. On the other hand, \(\mathfrak{r}_{2}^{\prime}\) admits a distinct pseudo-Kahler structure, which does not admit nontrivial extensions. **Theorem 3.7**.: _Up to isometry, the nontrivial extensions of a \(4\)-dimensional pseudo-Kahler Lie algebra by a \(2\)-dimensional abelian Lie algebra are:_ * _the extensions of_ \(\mathbb{R}^{4}\) _given in Proposition_ 3.5_;_ * _the extension of_ \(\mathfrak{v}\mathfrak{h}_{3}\) _with the pseudo-Kahler structure_ (10) _and_ \[\varphi(a_{1})=\begin{pmatrix}0&0&0&0\\ 0&0&0&0\\ a&0&0&0\\ 0&-a&0&0\end{pmatrix},\quad\varphi(a_{2})=\begin{pmatrix}0&0&0&0\\ 0&0&0&0\\ 0&a&0&0\\ a&0&0&0\end{pmatrix},\quad a>0;\] * _the extension of_ \(\mathfrak{r}_{2}^{\prime}\) _with the pseudo-Kahler structure_ (11) _and_ \[\varphi(a_{1})=\begin{pmatrix}0&0&0&0\\ 0&0&0&0\\ a&0&0&0\\ 0&a&0&0\end{pmatrix},\quad\varphi(a_{2})=\begin{pmatrix}0&0&0&0\\ 0&0&0&0\\ 0&-a&0&0\\ a&0&0&0\end{pmatrix},\quad a>0.\] Proof.: We illustrate the computation for \(\mathfrak{v}\mathfrak{h}_{3}\). In this case derivations take the block form \[\begin{pmatrix}D_{1}&0\\ D_{2}&D_{3}\end{pmatrix}, \tag{12}\] where the \(D_{i}\) are two by two matrices. 
If \(A,B_{1},B_{2}\) are as in Lemma 3.1, then \(JA^{2}\) is symmetric in the block corresponding to \(D_{1}\) in (12), whereas the corresponding blocks for \(B_{1}\) and \(B_{2}\) are in the centralizer of the complex structure \(e^{1}\otimes e_{2}-e^{2}\otimes e_{1}\), so that \([B_{1},B_{2}]\) is skew-symmetric. So (2) implies that \(B_{1}\) and \(B_{2}\) commute and \(A^{2}=0\). Keeping in mind that \(A\) is symmetric and anticommuting with \(J\), whereas the \(B_{i}\) are skew-symmetric and commute with \(J\), we find \[A=\begin{pmatrix}0&0&0&0\\ 0&0&0&0\\ a&b&0&0\\ b&-a&0&0\end{pmatrix},\quad B_{i}=\begin{pmatrix}0&0&0&0\\ 0&0&0&0\\ 0&h_{i}&0&0\\ -h_{i}&0&0&0\end{pmatrix}.\] Since the \(B_{i}\) are derivations that commute with \(A+B_{1}\), \(JA+B_{2}\), we can suppose \(h_{i}=0\) up to isometry. In addition, we can apply an isomorphism that rotates \(a_{1}\) and \(a_{2}\) (and consequently \(A\) and \(JA\)), and obtain \(a>0\), \(b=0\). For \(\mathfrak{r}_{2}^{\prime}\), the space of derivations is \[\begin{pmatrix}0&0&0&0\\ 0&0&0&0\\ a&-b&c&d\\ b&a&-d&c\end{pmatrix}. \tag{13}\] To determine extensions relative to the pseudo-Kahler structure (11), observe that irrespective of the metric imposing that \(A\) anticommute with \(J\) and \(B_{i}\) commute with \(J\) forces \[A=\begin{pmatrix}0&0&0&0\\ 0&0&0&0\\ a&-b&0&0\\ b&a&0&0\end{pmatrix},\quad B_{i}=\begin{pmatrix}0&0&0&0\\ 0&0&0&0\\ 0&0&c_{i}&d_{i}\\ 0&0&-d_{i}&c_{i}\end{pmatrix}.\] Again, we can assume \(a\geq 0\), \(b=0\) up to isometry, and the extension is only nontrivial if \(a\neq 0\). Imposing that \(B_{i}\) is skew-symmetric gives \(B_{i}=0\). On the other hand, \(\mathfrak{r}_{2}^{\prime}\) also has a pseudo-Kahler structure \[J^{\prime}=\left(\begin{array}{cccc}0&0&-1&0\\ 0&0&0&-1\\ 1&0&0&0\\ 0&1&0&0\end{array}\right)\quad\text{and}\quad g^{\prime}=\left(\begin{array} []{cccc}-a_{13}&-a_{14}&0&0\\ -a_{14}&a_{13}&0&0\\ 0&0&-a_{13}&-a_{14}\\ 0&0&-a_{14}&a_{13}\end{array}\right),\quad a_{13}^{2}+a_{14}^{2}\neq 0.\] Suppose \(A,B_{1},B_{2}\) satisfy Lemma 3.1. The component of the derivation (13) that commutes with \(J\) is skew-symmetric if and only if \(c=d=0\), so we must have \[A=\begin{pmatrix}0&0&a&-b\\ 0&0&b&a\\ a&-b&0&0\\ b&a&0&0\end{pmatrix},\quad B_{1}=\begin{pmatrix}0&0&-a&b\\ 0&0&-b&-a\\ a&-b&0&0\\ b&a&0&0\end{pmatrix},\] and \(B_{2}\) has the same form as \(B_{1}\); however, \(JA+B_{2}\) cannot be a derivation unless \(a\) and \(b\) are zero. This shows that every extension of this pseudo-Kahler structure is trivial. The other cases are similar. Explicitly, the resulting 6-dimensional Lie algebras are: * extension of \(\mathfrak{g}\) abelian, \(\varphi(a_{i})\) semisimple: normalizing to \(a=1\) (which has the effect of rescaling the metric), with notation as in Example 1.4, we obtain the Lie algebra \[\big{(}e^{15}+2e^{25}+e^{26}-e^{35}+2e^{36}+e^{46},e^{16}+e^{25}-e ^{36}+e^{45},\\ e^{15}+e^{26}-e^{35}+e^{46},-e^{16}-e^{25}+2e^{26}+2e^{35}+e^{36}-e^{45},0,0\big{)};\] the metric and Kahler form are given by \[\widetilde{g}=e^{1}\otimes e^{1}-e^{2}\otimes e^{2}+e^{3}\otimes e^{3}-e^{4} \otimes e^{4}+e^{5}\otimes e^{5}+e^{6},\quad\widetilde{\omega}=e^{12}-e^{34}+ e^{56}.\] One can check that the metric is Ricci-flat but not flat. 
* extension of \(\mathfrak{g}\) abelian, \(\varphi(a_{i})\) not semisimple: rescaling the metric so that \(a=1\), we obtain \[\big{(}e^{15}-e^{25}+ce^{25}-be^{35}+e^{36}-ce^{36}+be^{45}-e^{46},-(1+c)(-e^{ 15}+e^{46})+(-e^{25}+e^{36})+b(-e^{35}+e^{45}),\\ (1+c)(e^{16}+e^{45})-(e^{26}+e^{35})-b(-e^{15}+e^{25}),-(1-c)(e^{26}+e^{35}) +(e^{16}+e^{45})-b(-e^{15}+e^{25}),0,0\big{)}\] with metric and Kahler form \[\widetilde{g}=e^{1}\otimes e^{1}+e^{2}\otimes e^{2}-e^{3}\otimes e^{3}-e^{4} \otimes e^{4}+e^{5}\otimes e^{5}+e^{6},\quad\widetilde{\omega}=e^{13}-e^{24}+ e^{56},\] also Ricci-flat but not flat. * extension of \(\mathfrak{rh}_{3}\): \[(0,0,a(e^{15}+e^{26})-e^{12},-a(-e^{16}+e^{25}),0,0)\] with metric and Kahler form \[\widetilde{g}=e^{1}\odot e^{3}+e^{2}\odot e^{4}+e^{5}\otimes e^{5}+e^{6} \otimes e^{6},\quad\widetilde{\omega}=e^{14}-e^{23}+e^{56},\] also Ricci-flat but not flat, unless \(a=0\). * extension of \(\mathfrak{r}_{2}^{\prime}\): \[\big{(}0,0,-a(-e^{15}+e^{26})-e^{13}+e^{24},a(e^{16}+e^{25})-e^{14}-e^{23},0,0 \big{)}\] with metric and Kahler form \[\widetilde{g}=x(e^{1}\otimes e^{1}+e^{2}\otimes e^{2})+y(e^{2}\odot e^{4}-e ^{1}\odot e^{3})+z(e^{1}\odot e^{4}+e^{2}\odot e^{3})+e^{5}\otimes e^{5}+e^{6} \otimes e^{6},\] \[\widetilde{\omega}=-xe^{12}-ze^{13}-ye^{14}-ye^{23}+ze^{24}+e^{56},\] where \((y,z)\neq(0,0)\). The metric is Ricci-flat for all \(x,y,z\), and flat precisely when \(x=a^{2}(y^{2}+z^{2})\). ## 4 Hypersymplectic structures In this section we construct hypersymplectic structures on the semidirect product \(\widetilde{\mathfrak{g}}=\mathfrak{g}\rtimes_{\varphi}\mathfrak{h}\) equipped with the pseudo-Kahler structure given by Theorem 1.3. First of all let us set the following definitions. **Definition 4.1**.: We say that \((\mathfrak{g},g,J,E)\) is an _almost hypersymplectic Lie algebra_ if \((g,J)\) is a pseudo-Kahler structure and \(E\) is an almost para-complex structure such that \(JE=-EJ\) and \(g(E\cdot,E\cdot)=-g\). If in addition \(E\) is parallel with respect to the Levi-Civita connection of \(g\), then \((\mathfrak{g},g,J,E)\) is called a _hypersymplectic Lie algebra_. Recall that the existence of an almost hypersymplectic structure on a Lie algebra implies that its dimension is a multiple of \(4\) and the metric has neutral signature (see [1, p. 2043]). **Example 4.2**.: The basic example of a hypersymplectic Lie algebra is \(\mathbb{R}^{4}\) equipped with the flat metric \[g=e^{1}\odot e^{4}-e^{2}\odot e^{3}\] and complex and para-complex structures given by \[J =e^{1}\otimes e_{3}+e^{2}\otimes e_{4}-e^{3}\otimes e_{1}-e^{4} \otimes e_{2},\] \[E =e^{1}\otimes e_{1}+e^{2}\otimes e_{2}-e^{3}\otimes e_{3}-e^{4} \otimes e_{4}.\] The space of endomorphisms that commute with both \(J\) and \(E\) as above is \[\left\{\begin{pmatrix}a&b&0&0\\ c&d&0&0\\ 0&0&a&b\\ 0&0&c&d\end{pmatrix}\mid a,b,c,d\in\mathbb{R}\right\}.\] If we require that these endomorphisms are moreover skew-symmetric (with respect to the metric \(g\)), then we need the condition \(a+d=0\). So we find the space \[\left\{\begin{pmatrix}A&0\\ 0&A\end{pmatrix}\mid A\in\mathfrak{sl}(2,\mathbb{R})\right\}.\] This gives us the inclusion \(\mathfrak{sl}(2,\mathbb{R})\cong\mathfrak{sp}(2,\mathbb{R})\hookrightarrow \mathfrak{gl}(4,\mathbb{R})\). More generally, by considering the flat hypersymplectic structure on \(\mathbb{R}^{4n}\), we get \(\mathfrak{sp}(2n,\mathbb{R})\hookrightarrow\mathfrak{gl}(4n,\mathbb{R})\). Inspired by the notion of Kodaira manifold introduced in [16, p. 
255], we consider the following class of hypersymplectic Lie algebras. **Definition 4.3**.: Let \((\mathfrak{g},g,J,E)\) be a hypersymplectic Lie algebra. We say that it is of _Kodaira type_ if \(\mathfrak{g}\) is \(2\)-step nilpotent and its center is half-dimensional and \(J\)-invariant. The only \(4\)-dimensional nilpotent hypersymplectic Lie algebra is of Kodaira type (see [1, Theorem 23]). In [16, Section 3.2] some examples of hypersymplectic Lie algebras of Kodaira type are constructed in dimension \(8\). In [3, Section 5.1] examples of hypersymplectic Lie algebras of Kodaira type are constructed in dimension \(8n\) for \(n\geq 1\). It is pointed out in [16, p. 261] that the metric of a hypersymplectic Lie algebra of Kodaira type is flat. Moreover, all these examples are equipped with a particular kind of hypersymplectic structure. First let us recall that a complex structure \(J\) is called _abelian_ if \[[JX,JY]=[X,Y].\] Similarly, a para-complex structure \(E\) will be called _abelian_ if \[[EX,EY]=-[X,Y].\] It was shown in [3, Proposition 6.1] that if \((\mathfrak{g},g,J,E)\) is hypersymplectic, then \(J\) is abelian if and only if \(E\) is abelian. Therefore, we give the following definition. **Definition 4.4**.: Let \((\mathfrak{g},g,J,E)\) be a hypersymplectic Lie algebra. We say that the hypersymplectic structure is _abelian_ if \(J\) or \(E\) is abelian. We will then refer to \((\mathfrak{g},g,J,E)\) as a hypersymplectic Lie algebra of _abelian type_. Although this nomenclature is not standard, we use it to differentiate a hypersymplectic Lie algebra of abelian type, which does not necessarily have an underlying abelian Lie algebra, from the abelian hypersymplectic Lie algebra, which is \(\mathbb{R}^{4n}\) equipped with the canonical hypersymplectic structure as in Example 4.2. **Remark 4.5**.: If a real Lie algebra admits an abelian complex structure, then it must be \(2\)-step solvable (see [2, p. 235], [26]). In [5, Theorem 2] the authors classify the \(8\)-dimensional hypersymplectic Lie algebras of abelian type and in [3, p. 15] the authors point out that all the possible hypersymplectic Lie algebras of abelian type are constructed as in [3, Theorem 2.1]. This observation, together with [3, Theorem 4.2], gives us the following. **Proposition 4.6**.: _Let \((\mathfrak{g},g,J,E)\) be a hypersymplectic Lie algebra of abelian type. If \(\mathfrak{g}\) is \(2\)-step nilpotent, then the metric \(g\) is flat._ **Remark 4.7**.: This is not longer true if \(\mathfrak{g}\) is \(3\)-step nilpotent (see [3, 5]). Now we can proceed with the construction of hypersymplectic structures. Given the semidirect product \(\widetilde{\mathfrak{g}}=\mathfrak{g}\rtimes_{\varphi}\mathfrak{h}\) with the pseudo-Kahler structure \((\widetilde{g},\widetilde{J})\) constructed in Theorem 1.3, we are interested in finding an almost para-complex structure \(\widetilde{E}\) on \(\widetilde{\mathfrak{g}}\) which is parallel with respect to the Levi-Civita connection \(\widetilde{\nabla}\) of \(\widetilde{g}\) so that \((\widetilde{\mathfrak{g}},\widetilde{g},\widetilde{J},\widetilde{E})\) is a hypersymplectic Lie algebra. 
For this, let us consider an almost para-complex structure \[\widetilde{E}=\begin{pmatrix}E_{1}&E_{2}\\ E_{3}&E_{4}\end{pmatrix}\in\operatorname{End}(\widetilde{\mathfrak{g}}),\] where \(E_{1}\in\operatorname{End}(\mathfrak{g})\), \(E_{2}\in\operatorname{Hom}(\mathfrak{h},\mathfrak{g})\), \(E_{3}\in\operatorname{Hom}(\mathfrak{g},\mathfrak{h})\) and \(E_{4}\in\operatorname{End}(\mathfrak{h})\). The next theorem gives us the conditions for such \(\widetilde{E}\) to be parallel. **Theorem 4.8**.: _Let \((\mathfrak{g},g,J_{g})\) and \((\mathfrak{h},h,J_{h})\) be pseudo-Kahler Lie algebras and let \(\varphi:\mathfrak{h}\to\operatorname{Der}(\mathfrak{g})\) be a representation satisfying the conditions of Theorem 1.3. Then \((\widetilde{\mathfrak{g}},\widetilde{g},\widetilde{J},\widetilde{E})\) is a hypersymplectic Lie algebra if and only if_ * \(\nabla^{h}E_{4}=0\)_,_ * \(\varphi(A)^{a}\circ E_{2}=E_{2}\circ\nabla^{h}_{A}\)_,_ * \(\varphi(A)^{a}\circ E_{1}=E_{1}\circ\varphi(A)^{a}\)_,_ * \(\varphi(B)^{s}E_{2}C=\varphi(C)^{s}E_{2}B\)_,_ * \(g(\nabla^{g}_{X}Y,E_{2}C)=g((E_{1}\circ\varphi(C)^{s}-\varphi(E_{4}C)^{s})X,Y)\)_,_ * \(g((\nabla^{g}_{X}E_{1})Y,Z)=g(\varphi(E_{3}Y)^{s}Z-\varphi(E_{3}Z)^{s}Y,X)\)_._ Proof.: We assume that the triple \((\widetilde{g},\widetilde{J},\widetilde{E})\) satisfies the algebraic conditions for being an almost hypersymplectic structure. Then we need to determine the conditions for \(\widetilde{E}\) being \(\widetilde{\nabla}\)-parallel. We have to consider four cases depending on where the vectors \(u,v\in\widetilde{\mathfrak{g}}\) on \((\widetilde{\nabla}_{u}\widetilde{E})v\) lie. \(\bullet\)**Case \(u,v\in\mathfrak{h}\):** Set \(u=A,v=B\). We compute \[(\widetilde{\nabla}_{A}\widetilde{E})B =\widetilde{\nabla}_{A}(\widetilde{E}B)-\widetilde{E}( \widetilde{\nabla}_{A}B)\] \[=\widetilde{\nabla}_{A}(E_{2}B)+\widetilde{\nabla}_{A}(E_{4}B)- E_{2}(\widetilde{\nabla}_{A}B)-E_{4}(\widetilde{\nabla}_{A}B)\] \[=\varphi(A)^{a}E_{2}B+\nabla^{h}_{A}E_{4}B-E_{2}(\nabla^{h}_{A}B) -E_{4}(\nabla^{h}_{A}B).\] This term is zero if and only if \(\varphi(A)^{a}\circ E_{2}=E_{2}\circ\nabla^{h}_{A}\) for all \(A\in\mathfrak{h}\) and \(\nabla^{h}E_{4}=0\). \(\bullet\)**Case \(u\in\mathfrak{h}\) and \(v\in\mathfrak{g}\):** Set \(u=A,v=Y\). We compute \[(\widetilde{\nabla}_{A}\widetilde{E})Y =\widetilde{\nabla}_{A}(\widetilde{E}Y)-\widetilde{E}(\widetilde{ \nabla}_{A}Y)\] \[=\widetilde{\nabla}_{A}(E_{1}Y)+\widetilde{\nabla}_{A}(E_{3}Y)-E_ {1}(\widetilde{\nabla}_{A}Y)-E_{3}(\widetilde{\nabla}_{A}Y)\] \[=\varphi(A)^{a}E_{1}Y+\nabla_{A}^{h}E_{3}Y-E_{1}(\varphi(A)^{a}Y )-E_{3}(\varphi(A)^{a}Y).\] This term is zero if and only if \(\varphi(A)^{a}\circ E_{1}=E_{1}\circ\varphi(A)^{a}\) and \(\nabla_{A}^{h}\circ E_{3}=E_{3}\circ\varphi(A)^{a}\) for all \(A\in\mathfrak{h}\). The condition \(\nabla_{A}^{h}\circ E_{3}=E_{3}\circ\varphi(A)^{a}\) is equivalent to \(\varphi(A)^{a}\circ E_{2}=E_{2}\circ\nabla_{A}^{h}\). Indeed, \(\widetilde{E}^{*}=-\widetilde{E}\) implies that \(E_{3}=-h^{-1}E_{2}^{\top}g\). The claimed equivalence follows from this and the fact that \(\varphi(A)^{a}\) is anti-symmetric with respect to \(g\) and \(\nabla_{A}^{h}\) is anti-symmetric with respect to \(h\). \(\bullet\)**Case \(u\in\mathfrak{g}\) and \(v\in\mathfrak{h}\):** Set \(u=X,v=B\). 
We compute \[\widetilde{g}((\widetilde{\nabla}_{X}\widetilde{E})B,Z+C)=\widetilde{g}( \widetilde{\nabla}_{X}(\widetilde{E}B),Z+C)-\widetilde{g}(\widetilde{E}( \widetilde{\nabla}_{X}B),Z+C),\] where \[\widetilde{g}(\widetilde{\nabla}_{X}(\widetilde{E}B),Z+C) =\widetilde{g}(\widetilde{\nabla}_{X}E_{2}B,Z+C)+\widetilde{g}( \widetilde{\nabla}_{X}E_{4}B,Z+C)\] \[=g(\nabla_{X}^{g}E_{2}B,Z)+g(\varphi(C)^{s}X,E_{2}B)-g(\varphi(E_ {4}B)^{s}X,Z),\] \[\widetilde{g}(\widetilde{E}(\widetilde{\nabla}_{X}B),Z+C) =-\widetilde{g}(\widetilde{\nabla}_{X}B,\widetilde{E}Z+\widetilde{ E}C)\] \[=\widetilde{g}(\varphi(B)^{s}X,E_{1}Z+E_{3}Z+E_{2}C+E_{4}C)\] \[=g(\varphi(B)^{s}X,E_{1}Z)+g(\varphi(B)^{s}X,E_{2}C).\] Hence we have \[\widetilde{g}((\widetilde{\nabla}_{X}\widetilde{E})B,Z+C) =g(\nabla_{X}^{g}E_{2}B,Z)+g(\varphi(C)^{s}X,E_{2}B)-g(\varphi(E_ {4}B)^{s}X,Z)\] \[\quad-g(\varphi(B)^{s}X,E_{1}Z)-g(\varphi(B)^{s}X,E_{2}C)\] \[=g(\nabla_{X}^{g}E_{2}B,Z)+g((E_{1}\circ\varphi(B)^{s}-\varphi(E_ {4}B)^{s})X,Z)\] \[\quad+g(\varphi(C)^{s}X,E_{2}B)-g(\varphi(B)^{s}X,E_{2}C).\] This term is zero if and only if \[g(\nabla_{X}^{g}E_{2}B,Z) =g((\varphi(E_{4}B)^{s}-E_{1}\circ\varphi(B)^{s})X,Z),\] \[g(\varphi(C)^{s}X,E_{2}B) =g(\varphi(B)^{s}X,E_{2}C).\] The second equation is equivalent to \(\varphi(B)^{s}E_{2}C=\varphi(C)^{s}E_{2}B\) for all \(B,C\in\mathfrak{h}\). \(\bullet\)**Case \(u,v\in\mathfrak{g}\):** Set \(u=X,v=Y\). We compute \[\widetilde{g}((\widetilde{\nabla}_{X}\widetilde{E})Y,Z+C)=\widetilde{g}( \widetilde{\nabla}_{X}(\widetilde{E}Y),Z+C)-\widetilde{g}(\widetilde{E}( \widetilde{\nabla}_{X}Y),Z+C),\] where \[\widetilde{g}(\widetilde{\nabla}_{X}(\widetilde{E}Y),Z+C) =\widetilde{g}(\widetilde{\nabla}_{X}E_{1}Y,Z+C)+\widetilde{g}( \widetilde{\nabla}_{X}E_{3}Y,Z+C)\] \[=g(\nabla_{X}^{g}E_{1}Y,Z)+g(\varphi(C)^{s}X,E_{1}Y)-g(\varphi( E_{3}Y)^{s}X,Z),\] \[\widetilde{g}(\widetilde{E}(\widetilde{\nabla}_{X}Y),Z+C) =-\widetilde{g}(\widetilde{\nabla}_{X}Y,\widetilde{E}Z+ \widetilde{E}C)\] \[=-\widetilde{g}(\widetilde{\nabla}_{X}Y,E_{1}Z+E_{3}Z+E_{2}C+E_{4 }C)\] \[=-g(\nabla_{X}^{g}Y,E_{1}Z)-g(\nabla_{X}^{g}Y,E_{2}C)\] \[\quad-g(\varphi(E_{3}Z)^{s}X,Y)-g(\varphi(E_{4}C)^{s}X,Y).\] Hence we have \[\widetilde{g}((\widetilde{\nabla}_{X}\widetilde{E})Y,Z+C)=g(\nabla_{X}^{g}E_{1 }Y,Z)+g(\varphi(C)^{s}X,E_{1}Y)-g(\varphi(E_{3}Y)^{s}X,Z)\] \[+g(\nabla^{g}_{X}Y,E_{1}Z)+g(\nabla^{g}_{X}Y,E_{2}C)\] \[+g(\varphi(E_{3}Z)^{s}X,Y)+g(\varphi(E_{4}C)^{s}X,Y)\] \[=g(\nabla^{g}_{X}Y,E_{2}C)+g((\varphi(E_{4}C)^{s}-E_{1}\circ \varphi(C)^{s})X,Y)\] \[+g((\nabla^{g}_{X}E_{1})Y,Z)-g(\varphi(E_{3}Y)^{s}X,Z)+g(\varphi( E_{3}Z)^{s}X,Y).\] This term is zero if and only if \[g(\nabla^{g}_{X}Y,E_{2}C) =g((E_{1}\circ\varphi(C)^{s}-\varphi(E_{4}C)^{s})X,Y),\] \[g((\nabla^{g}_{X}E_{1})Y,Z) =g(\varphi(E_{3}Y)^{s}X,Z)-g(\varphi(E_{3}Z)^{s}X,Y)\] \[=g(\varphi(E_{3}Y)^{s}Z-\varphi(E_{3}Z)^{s}Y,X).\] The first equation above is equivalent to the condition \[g(\nabla^{g}_{X}E_{2}B,Z)=g((\varphi(E_{4}B)^{s}-E_{1}\circ\varphi(B)^{s})X,Z),\] since \(\nabla^{g}_{X}\) is anti-symmetric with respect to \(g\). We consider a couple of particular cases in the following corollary. **Corollary 4.9**.: _In the situation of Theorem 4.8:_ 1. _If_ \(E_{2}=E_{3}=0\)_, then_ \((\mathfrak{g},g,J_{g},E_{1})\) _and_ \((\mathfrak{h},h,J_{h},E_{4})\) _are hypersymplectic Lie algebras and_ \(\varphi(A)^{s}=0\) _for all_ \(A\in\mathfrak{h}\)_._ 2. 
_If_ \(E_{1}\neq 0\)_,_ \(E_{4}\neq 0\) _and_ \(\mathfrak{g}\) _is abelian, then_ \(\varphi(A)^{s}=0\) _for all_ \(A\in\mathfrak{h}\)_._ Proof.: (1) In this situation, \((g,J_{g},E_{1})\) and \((h,J_{h},E_{4})\) are almost hypersymplectic structures, and since \(\nabla^{g}E_{1}=0\) and \(\nabla^{h}E_{4}=0\), then they define hypersymplectic structures on \(\mathfrak{g}\) and \(\mathfrak{h}\), respectively. Moreover, the condition \(E_{1}\circ\varphi(A)^{s}=\varphi(E_{4}A)^{s}\), together with \(E_{1}^{*}=-E_{1}\) and \(J_{g}E_{1}=-E_{1}J_{g}\), implies that \(\varphi(A)^{s}=0\) for all \(A\in\mathfrak{h}\). Indeed, since \(J_{g}\circ\varphi(A)^{s}=\varphi(J_{h}A)^{s}\) is symmetric, we have \[J_{g}\circ\varphi(A)^{s}=(J_{g}\circ\varphi(A)^{s})^{*}=\varphi(A)^{s}\circ J _{g}^{*}=-\varphi(A)^{s}\circ J_{g}. \tag{14}\] Similarly, \(E_{1}\circ\varphi(A)^{s}=-\varphi(A)^{s}\circ E_{1}.\) Hence, if \(B=J_{h}A\), then \[E_{1}\circ\varphi(B)^{s}=E_{1}J_{g}\circ\varphi(A)^{s}=-J_{g}E_{1}\circ \varphi(A)^{s}=J_{g}\circ\varphi(A)^{s}\circ E_{1}=\varphi(B)^{s}\circ E_{1}.\] On the other hand, (14) applied to \(B\) gives \(E_{1}\circ\varphi(B)^{s}=-\varphi(B)^{s}\circ E_{1}\). Therefore \(\varphi(B)^{s}=0\), and since \(A\) is arbitrary, this means that \(\varphi(B)^{s}=0\) for all \(B\in\mathfrak{h}\). (2) Since \(\mathfrak{g}\) is abelian, we have that \(\nabla^{g}_{X}Y=0\) for all \(X,Y\in\mathfrak{g}\), and this implies that \(E_{1}\circ\varphi(B)^{s}=\varphi(E_{4}B)^{s}\). Note that \(E_{1}^{*}=-E_{1}\) and \(J_{g}E_{1}=-E_{1}J_{g}\). Hence we conclude that \(\varphi(A)^{s}=0\) for all \(A\in\mathfrak{h}\) in the same way as in part (1). If we let the symmetric part of the representation \(\varphi\) being zero, that is \(\varphi(A)^{s}=0\) for all \(A\in\mathfrak{h}\), then we can find several examples of hypersymplectic Lie algebras. **Corollary 4.10**.: _If \(\mathfrak{g}\) and \(\mathfrak{h}\) are abelian hypersymplectic Lie algebras and \(\varphi:\mathfrak{g}\to\operatorname{End}(\mathfrak{g})\) takes values in the Lie algebra \(\mathfrak{sp}(2n,\mathbb{R})\) of skew-symmetric endomorphisms that commute with \(J_{g}\) and \(E_{g}\), then the semidirect product \(\mathfrak{g}\rtimes_{\varphi}\mathfrak{h}\) is hypersymplectic and flat._ Proof.: We set \(E_{1}=E_{g}\), \(E_{4}=E_{h}\) and apply Theorem 4.8. The Azencott-Wilson theorem shows that the resulting structure is isometric to a direct product (see Proposition 1.12), so it is flat. **Example 4.11**.: A 2-step nilpotent example is given by \(\mathbb{R}^{4}\rtimes_{\varphi}\mathbb{R}^{4}\), where \[\varphi(a_{1})=\begin{pmatrix}0&1&0&0\\ 0&0&0&0\\ 0&0&0&1\\ 0&0&0&0\end{pmatrix},\quad\varphi(a_{2})=\varphi(a_{3})=\varphi(a_{4})=0.\] In \(\mathbb{R}^{4}\) we have put the flat hypersymplectic structure of Example 4.2. Its center is given by \(\mathfrak{z}(\widetilde{\mathfrak{g}})=\langle e_{1},e_{4},a_{2},a_{3},a_{4}\rangle\), so \(\widetilde{\mathfrak{g}}\) is not of Kodaira type. Moreover, one can check that the complex structure \(\widetilde{J}\) is not abelian. **Remark 4.12**.: As we have explained above, it seems that all \(2\)-step nilpotent hypersymplectic Lie algebras in the literature are of Kodaira type (or products of Kodaira type with abelian hypersymplectic Lie algebras). Example 4.11 shows that this is not always the case. Furthermore, to our knowledge, all examples of \(2\)-step nilpotent hypersymplectic Lie algebras have an abelian complex structure. In our construction we obtain examples where this is not the case. 
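The algebraic conditions invoked in Example 4.11 and Corollary 4.10 are simple enough to verify numerically. The following sketch (plain NumPy, our own choice of tool rather than anything used in the text) encodes the matrices of Example 4.2 and the map \(\varphi(a_{1})\) of Example 4.11 in the basis \(e_{1},\dots,e_{4}\), with the columns of each matrix giving the images of the basis vectors, and checks the identities of an almost hypersymplectic structure together with the hypotheses of Corollary 4.10.

```python
import numpy as np

# Flat hypersymplectic structure on R^4 from Example 4.2
# (columns are the images of the basis vectors e_1,...,e_4).
g = np.array([[0, 0, 0, 1],
              [0, 0, -1, 0],
              [0, -1, 0, 0],
              [1, 0, 0, 0]], dtype=float)   # e^1 . e^4 - e^2 . e^3
J = np.array([[0, 0, -1, 0],
              [0, 0, 0, -1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # e_1 -> e_3, e_2 -> e_4, e_3 -> -e_1, e_4 -> -e_2
E = np.diag([1.0, 1.0, -1.0, -1.0])

# phi(a_1) from Example 4.11 (phi(a_2) = phi(a_3) = phi(a_4) = 0).
N = np.array([[0, 1, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)

I = np.eye(4)
# almost hypersymplectic identities of Definition 4.1
assert np.allclose(J @ J, -I) and np.allclose(E @ E, I)
assert np.allclose(J @ E, -E @ J)
assert np.allclose(J.T @ g @ J, g) and np.allclose(E.T @ g @ E, -g)

# hypotheses of Corollary 4.10: phi(a_1) is g-skew and commutes with J and E ...
assert np.allclose(N.T @ g + g @ N, 0)
assert np.allclose(N @ J, J @ N) and np.allclose(N @ E, E @ N)
# ... and phi(a_1)^2 = 0, so the semidirect product is 2-step nilpotent
assert np.allclose(N @ N, 0)
print("Example 4.11 satisfies the hypotheses of Corollary 4.10")
```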
**Example 4.13**.: More generally, if \(\varphi:\mathbb{R}^{4n}\to\mathfrak{sp}(2m,\mathbb{R})\subset\mathfrak{gl}(4m,\mathbb{R})\) is any linear map taking values in an abelian subalgebra of nilpotent matrices, we obtain a hypersymplectic structure on the nilpotent Lie algebra \(\mathbb{R}^{4m}\rtimes_{\varphi}\mathbb{R}^{4n}\). **Example 4.14**.: Some \(2\)-step solvable examples can be obtained as \(\mathbb{R}^{4}\rtimes_{\varphi}\mathbb{R}^{4}\), with \[\varphi(a_{1})=\begin{pmatrix}1&0&0&0\\ 0&-1&0&0\\ 0&0&1&0\\ 0&0&0&-1\end{pmatrix},\quad\varphi(a_{2})=\varphi(a_{3})=\varphi(a_{4})=0,\] where on \(\mathbb{R}^{4}\) we have put the hypersymplectic structure of Example 4.2, or \(\mathbb{R}^{4}\rtimes_{\varphi}\mathbb{R}^{4}\), with \[\varphi(a_{1})=\begin{pmatrix}0&1&0&0\\ -1&0&0&0\\ 0&0&0&1\\ 0&0&-1&0\end{pmatrix},\quad\varphi(a_{2})=\varphi(a_{3})=\varphi(a_{4})=0,\] which is not completely solvable. Another variation is imposing \(\varphi(a_{4})=\varphi(a_{1})\), whilst keeping the others, so that \(\ker\varphi\) is non-degenerate. We can also consider the case where one or two of the starting hypersymplectic Lie algebras are not abelian. The following can be seen as a converse of the first item of Corollary 4.9. **Corollary 4.15**.: _Let \((\mathfrak{g},g,J_{g},E_{g})\) and \((\mathfrak{h},h,J_{h},E_{h})\) be hypersymplectic Lie algebras. Let \(\varphi:\mathfrak{h}\to\operatorname{Der}(\mathfrak{g})\) be a representation such that \(\varphi(A)^{s}=0\) and \(\varphi(A)^{a}\) commutes with \(J_{g}\) and \(E_{g}\) for all \(A\in\mathfrak{h}\). Then \((\widetilde{\mathfrak{g}},\widetilde{g},\widetilde{J},\widetilde{E})\) is a hypersymplectic Lie algebra._ Proof.: This follows from Theorem 4.8 by choosing \(\widetilde{E}\) with \(E_{2}=E_{3}=0\), \(E_{1}=E_{g}\) and \(E_{4}=E_{h}\). **Example 4.16**.: Let \(\mathfrak{h}=\mathfrak{rh}_{3}\) with the hypersymplectic structure given in [1, Theorem 23] and \(\mathfrak{g}=\mathbb{R}^{4}\) with the hypersymplectic structure as in Example 4.2. Consider the representation \(\varphi:\mathfrak{h}\to\operatorname{Der}(\mathfrak{g})\) given by \[\varphi(a_{1})=\varphi(a_{2})=\varphi(a_{3})=0,\quad\varphi(a_{4})=\begin{pmatrix} 0&1&0&0\\ 0&0&0&0\\ 0&0&0&1\\ 0&0&0&0\end{pmatrix}.\] Then \(\widetilde{\mathfrak{g}}\) is hypersymplectic \(2\)-step nilpotent and flat. Its center is given by \(\mathfrak{z}(\widetilde{\mathfrak{g}})=\langle a_{3},e_{1},e_{3}\rangle=[ \widetilde{\mathfrak{g}},\widetilde{\mathfrak{g}}]\), so it is again not of Kodaira type. We also check that \(\widetilde{J}\) is not abelian. One of the upshots of this construction, and a remarkable feature, is established in the following proposition. **Proposition 4.17**.: _There exist (irreducible) \(2\)-step nilpotent hypersymplectic Lie algebras that are neither of Kodaira type neither of abelian type in any dimension \(4n\) for \(n\geq 1\)._ To get some non-flat examples, we can take in Corollary 4.15 the Lie algebra \(\mathfrak{h}\) to be a non-flat hypersymplectic Lie algebra and \(\mathfrak{g}=\mathbb{R}^{4n}\) with the flat hypersymplectic structure. For instance, we can take as \(\mathfrak{h}\) a non-flat solvable \(4\)-dimensional hypersymplectic Lie algebra of [1, Theorem 23], the non-flat \(3\)-step nilpotent hypersymplectic Lie algebra of [3, 5] or the non-flat \(4\)-step nilpotent hypersymplectic Lie algebra from [6, Section 5]. **Corollary 4.18**.: _Let \(\mathfrak{h}\) be a hypersymplectic Lie algebra. 
Let \(\varphi:\mathfrak{h}\to\mathfrak{sp}(2m,\mathbb{R})\) be a Lie algebra homomorphism. Then \(\mathbb{R}^{4m}\rtimes_{\iota\circ\varphi}\mathfrak{h}\) has a hypersymplectic structure, where \(\iota:\mathfrak{sp}(2m,\mathbb{R})\hookrightarrow\mathfrak{gl}(4m,\mathbb{R})\) is the inclusion._ **Example 4.19**.: If \(\mathfrak{h}\) is non-flat hypersymplectic Lie algebra, then \(\operatorname{ad}\colon\mathfrak{h}\to\mathfrak{gl}(\mathfrak{h})\) induces a representation of \(\mathfrak{h}\) on \(V=\mathfrak{h}\oplus\mathfrak{h}^{*}\) that preserves the symplectic structure corresponding to the pairing; this gives a homomorphism \(\mathfrak{h}\to\mathfrak{sp}(2n,\mathbb{R})\) where \(n=\dim\mathfrak{h}\). This determines a non-flat hypersymplectic structure on \((V\oplus V)\rtimes\mathfrak{h}\). We can also look for some examples where the representation \(\varphi\) has non-zero symmetric part. Let us consider the following. **Lemma 4.20**.: _Let \((\mathfrak{g},g,J_{g})\) be a pseudo-Kahler Lie algebra. Then_ \[\widetilde{g}=\begin{pmatrix}g&0\\ 0&-g\end{pmatrix},\quad\widetilde{J}=\begin{pmatrix}J_{g}&0\\ 0&-J_{g}\end{pmatrix},\quad\widetilde{E}=\begin{pmatrix}0&1\\ 1&0\end{pmatrix} \tag{15}\] _define an almost hypersymplectic structure on \(\widetilde{\mathfrak{g}}=\mathfrak{g}\rtimes_{\varphi}\mathfrak{g}\)._ Note that if \(E_{1}=E_{4}=0\) in Theorem 4.8, then the Lie algebra \(\mathfrak{g}\) is abelian. Hence in the above situation, if we want \((\widetilde{g},\widetilde{J},\widetilde{E})\) being hypersymplectic, then the only case we can consider is \(\mathfrak{g}\) abelian. Let us look at the following particular example. **Example 4.21**.: Let \(\mathfrak{g}=\mathfrak{h}=\mathbb{R}^{4}\) with metric and complex structure given by \[g =e^{1}\otimes e^{1}+e^{2}\otimes e^{2}-e^{3}\otimes e^{3}-e^{4} \otimes e^{4},\] \[J_{g} =e^{1}\otimes e_{2}-e^{2}\otimes e_{1}+e^{3}\otimes e_{4}-e^{4} \otimes e_{3},\] and \(h=-g,J_{h}=-J_{g}\). We define the representation \(\varphi:\mathfrak{h}\to\operatorname{Der}(\mathfrak{g})\) by \[\varphi(a_{1}) =\varphi(a_{3})=\left(\begin{array}{rrrr}1&1&1&1\\ 1&-1&1&-1\\ -1&-1&-1&-1\\ -1&1&-1&1\end{array}\right),\] \[\varphi(a_{2}) =\varphi(a_{4})=\left(\begin{array}{rrrr}1&-1&1&-1\\ -1&-1&-1&-1\\ -1&1&-1&1\\ 1&1&1&1\end{array}\right).\] The map \(\varphi\) satisfies the conditions of Theorem 1.3, so we have a pseudo-Kahler structure on \(\widetilde{\mathfrak{g}}\). Moreover, since \(\varphi(A)\varphi(B)=0\) for all \(A,B\in\mathfrak{h}\), then \(\widetilde{\mathfrak{g}}\) is \(2\)-step nilpotent by Remark 1.1. One can check that \(\widetilde{E}\) as defined in (15) and this \(\varphi\) satisfy all the conditions in Theorem 4.8. Then \((\widetilde{\mathfrak{g}},\widetilde{g},\widetilde{J},\widetilde{E})\) is a hypersymplectic Lie algebra. Moreover, its center is \(\mathfrak{z}(\widetilde{\mathfrak{g}})=\langle a_{1}-a_{3},a_{2}-a_{4},e_{1}- e_{3},e_{2}-e_{4}\rangle\), which is \(\widetilde{J}\)-invariant, thus it is of Kodaira type and hence flat. **Remark 4.22**.: In [10, Section 7.3] a similar approach to our construction is considered. Here the authors also obtain examples of hypersymplectic Lie algebras of Kodaira type. So far, all the examples of \(2\)-step nilpotent hypersymplectic Lie algebras in the literature are flat. We have seen this for the ones of Kodaira type, the ones of abelian type (see Proposition 4.6) and the new examples obtained in this paper. This leads the authors to make the following conjecture. 
**Conjecture 4.23**.: _If \((\mathfrak{g},g,J,E)\) is a \(2\)-step nilpotent hypersymplectic Lie algebra, then the metric \(g\) is flat._ Notice that in [16], examples of non-flat hypersymplectic structures are given on some \(2\)-step nilpotent Lie groups. However these hypersymplectic structures are not left-invariant, so they are not defined in the corresponding Lie algebra.
2309.15960
Ultimate Colliders
Our understanding of the Universe critically depends on the fundamental knowledge of particles and fields, which represents a central endeavor of modern high-energy physics. Energy frontier particle colliders - arguably, among the largest, most complex and advanced scientific instruments of modern times - for many decades have been at the forefront of scientific discoveries in high-energy physics. Due to technology advances and beam physics breakthroughs, the colliding beam facilities have progressed immensely and now operate at energies and luminosities many orders of magnitude greater than the pioneering instruments of the early 1960s. While the Large Hadron Collider and the Super-KEKB factory represent the frontier hadron and lepton colliders of today, respectively, future colliders are an essential component of a strategic vision for particle physics. Conceptual studies and technical developments for several exciting near- and medium-term future collider options are underway internationally. Analysis of numerous proposals and studies for far-future colliders indicate the limits of the collider beam technology due to machine size, cost, and power consumption, and call for a paradigm shift of the particle physics research at ultra-high energy but low luminosity colliders approaching or exceeding 1 PeV center-of-mass energy scale.
Vladimir Shiltsev
2023-09-27T19:17:27Z
http://arxiv.org/abs/2309.15960v1
# Ultimate Colliders 1 ###### Abstract Understanding the Universe critically depends on the fundamental knowledge of particles and fields, which represents a central endeavor of modern high-energy physics. Energy frontier particle colliders - arguably, among the largest, most complex and advanced scientific instruments of modern times - for many decades have been at the forefront of scientific discoveries in high-energy physics. Due to technology advances and beam physics breakthroughs, the colliding beam facilities have progressed immensely and now operate at energies and luminosities many orders of magnitude greater than the pioneering instruments of the early 1960s. While the Large Hadron Collider and the Super-KEKB factory represent the frontier hadron and lepton colliders of today, respectively, future colliders are an essential component of a strategic vision for particle physics. Conceptual studies and technical developments for several exciting near- and medium-term future collider options are underway internationally. Analysis of numerous proposals and studies for far-future colliders indicate the limits of the collider beam technology due to machine size, cost, and power consumption, and call for a paradigm shift of particle physics research at ultra-high energy but low luminosity colliders approaching or exceeding 1 PeV center-of-mass energy scale. Particle physics, accelerators, colliders, protons, ions, electrons, muons, positrons + Footnote †: This work has been supported by the Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics. ###### Contents * 1 Introduction * 2 Colliders: Energy, Luminosity, History * 3 Next Few Decades * 4 Limits of Colliders * 4.1 General Limitations * 4.2 Circular \(e^{+}e^{-}\) colliders * 4.3 Circular \(pp\) colliders * 4.4 Circular \(\mu\mu\) colliders * 4.5 Traditional, advanced and exotic linear \(ee\) or \(\mu\mu\) colliders * 5 Conclusion * 6 Acknowledgements and Further Reading Introduction Particle accelerators are unique scientific instruments which offer access to unprecedented energy per constituent, using well-focused, high-density beams of electrons (\(e^{-}\)), positrons (\(e^{+}\)), protons (\(p\)), antiprotons (\(\bar{p}\)), ions, muons (\(\mu^{+}\), \(\mu^{-}\)), mesons, photons, and gamma quanta (\(\gamma\)), among others [Shiltsev, 2020]. Three Nobel prizes were awarded for seminal advancements in accelerator science and technology: to Ernest O. Lawrence in 1939 for invention of the first modern accelerator, the cyclotron [Lawrence and Livingston, 1932], to John Cockcroft and Ernest Walton in 1951 for their invention of the eponymous linear accelerator [Cockcroft and Walton, 1932], and to Simon van der Meer in 1984 for conceiving and developing the novel method of stochastic cooling [Van Der Meer, 1985]. Of course, highly notable are applications of accelerators - for example, they were of critical importance for about a quarter of the most acclaimed physics discoveries since 1939, resulting on average in a Nobel Prize for Physics every three years [Haussecker and Chao, 2011]. Electron microscopes, accelerator-based synchrotron radiation and spallation neutron sources were instrumental for numerous Nobel Prize-winning research achievements in chemistry, physiology and medicine, such as those recognized in 1997, 2003, 2006, 2009, 2012, 2017, 2019, and 2021. 
At present, about 140 accelerators of all types worldwide are devoted to fundamental research [Faus-Golfe and Edgecock, 2017]. Among them, the most complex and technologically advanced are higher-energy accelerators and, especially, colliders for nuclear and particle physics. While they are of different sizes and shapes, based on different technologies and employing different types of particles, they have common functional elements and basic stages - charged particles are produced in dedicated sources, often go through a preparatory stage to arrange the particles in suitable beams of bunches, and then get accelerated to very high kinetic energies. (Hereafter it is assumed that all the particles are ultra-relativistic and their kinetic energy and full energy are the same \(E=\gamma mc^{2}\), where \(m\) is the particle's mass, \(c\) is the speed of light, and relativistic Lorentz factor \(\gamma\gg 1\).) In order to be most effective in getting insights into the interesting physics of nuclei and/or elementary particles, the beams usually get compressed in a sequence of dedicated elements, like focusing magnets, before being sent to strike other particles, causing reactions that transform the particles into new particles. Sophisticated detectors are needed to identify and analyse products of the reactions of interest.

Figure 1: Schematics of some particle collider types: a) circular, b) linear, c) ring-ERL (energy recovery linac). Beam collision points are marked by crosses.

What makes colliders distinct is the use of two similar but counter-propagating beams directed onto each other in one or several interaction points (IPs) - see Fig.1. While such an arrangement makes the machines significantly more complex [Shiltsev and Zimmermann, 2021], it is fully justified by the enormous kinematic advantage in so-called center-of-mass energy, resulting in much larger available energy and, therefore, opportunity to generate new particles of much higher masses. Indeed, for the head-on collision of two ultra-relativistic particles with equal energy \(E\), the center of mass energy (c.m.e.) is: \[E_{cm}\approx 2E. \tag{1}\] (The equation for unequal particle energies \(E_{1}\neq E_{2}\) is \(E_{cm}\approx 2\sqrt{E_{1}E_{2}}\)). High-energy particles can also be sent onto a stationary target, resulting in \(E_{cm}\approx\sqrt{2Emc^{2}}\), where \(m\) is the mass of the target-material particles. Take, for example, the highest energy cosmic rays observed on Earth, reaching \(E\sim 10^{21}\) eV, or a million PeV (1 PeV=1000 TeV=1,000,000 GeV=\(10^{15}\) eV). Their collisions with stationary protons (\(mc^{2}\approx 1\) GeV) result in the c.m.e. of 1.4 PeV. In comparison, the same c.m.e. would be possible in a particle collider with only \(E\)=0.7 PeV=700 TeV energy per beam, i.e., with a million(!) times smaller particle energies. The highest beam and center-of-mass energies achieved to date are, of course, much lower - \(E\)=0.007 PeV and \(E_{cm}\)=0.014 PeV in the Large Hadron Collider (LHC), see Table 1. In what follows, the ultimate limits of particle colliders are discussed.

## 2 Colliders: Energy, Luminosity, History

As noted above, colliders essentially shaped modern particle physics, and 31 of them have so far reached the operational stage (some in several successive configurations), with seven operational now (2023) - see Table 1. Two colliders are under construction and almost three dozen proposals for future colliders are under discussion, some of which are also listed in Table 1.
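As a quick numerical cross-check of the kinematic advantage expressed by Eq. (1) and by the fixed-target formula of the Introduction, the comparison quoted there is easy to reproduce (an illustrative sketch only; the numbers are those given in the text):

```python
import math

def ecm_collider(e1_ev, e2_ev=None):
    """Center-of-mass energy for two colliding ultra-relativistic beams, Eq. (1)."""
    e2_ev = e1_ev if e2_ev is None else e2_ev
    return 2.0 * math.sqrt(e1_ev * e2_ev)

def ecm_fixed_target(e_ev, m_target_ev):
    """Center-of-mass energy for a beam hitting a stationary target particle."""
    return math.sqrt(2.0 * e_ev * m_target_ev)

PeV = 1e15  # eV

# Highest-energy cosmic ray (~1e21 eV) on a stationary proton (m*c^2 ~ 1 GeV):
print(ecm_fixed_target(1e21, 0.938e9) / PeV)   # ~1.4 PeV
# The same c.m.e. is reached by a collider with only ~0.7 PeV per beam:
print(ecm_collider(0.7 * PeV) / PeV)           # 1.4 PeV
```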
The idea of using colliding beams to gain the above mentioned kinematic advantage was first given serious consideration by the Norwegian engineer and inventor Rolf Wideroe, who in 1943 had filed a patent for the collider concept (and received the patent in 1953) [Wideroe, 1953, Waloschek, 2013], and then further developed by Donald Kerst [Kerst et al., 1956] and Gerry O'Neill [O'Neill, 1956]. In the early 1960s, almost concurrently, three early colliders went into operation in the Soviet Union (\(e^{-}e^{-}\) collider VEP-1), France (to where the \(e^{+}e^{-}\) AdA had been moved from Italy), and the USA (\(e^{-}e^{-}\) CBX). The first colliders, as well as all but one follow up machine, were built in a storage ring (circular) configuration - see Fig. 1a - where particles of each beam circulate in the same or two different rings and repeatedly collide. In linear colliders, first proposed in Ref. [Tigner, 1965] and realized in the 1990s in the SLAC Linear Collider (SLC), the two colliding beams are accelerated in linear accelerators (linacs) and transported to a collision point, either in a simple two-linac configuration as depicted in Fig. 1c or with use of the same linac and two arcs, as in the SLC. Other configurations are possible and were considered: e.g., collision of beams circulating in a ring and a few-pass energy recovery linac (ERL) (Fig. 1b) or linac-ring schemes. \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|} \hline Colliders & Species & \(E_{cm}\), GeV & \(C\), m & \({\cal L}\), \(10^{32}\) & Years & Host lab, country \\ \hline AdA & \(e^{+}e^{-}\) & 0.5 & 4.1 & \(10^{-7}\) & 1964 & Frascati/Orsay \\ VEP-1 & \(e^{-}e^{-}\) & 0.32 & 2.7 & \(5\times 10^{-5}\) & 1964-68 & Novosibirsk, USSR \\ CBX & \(e^{-}e^{-}\) & 1.0 & 11.8 & \(2\times 10^{-4}\) & 1965-68 & Stanford, USA \\ VEPP-2 & \(e^{+}e^{-}\) & 1.34 & 11.5 & \(4\times 10^{-4}\) & 1966-70 & Novosibirsk, USSR \\ ACO & \(e^{+}e^{-}\) & 1.08 & 22 & 0.001 & 1967-72 & Orsay, France \\ ADONE & \(e^{+}e^{-}\) & 3.0 & 105 & 0.006 & 1969-93 & Frascati, Italy \\ CEA & \(e^{+}e^{-}\) & 6.0 & 226 & \(0.8\times 10^{-4}\) & 1971-73 & Cambridge, USA \\ ISR & \(pp\) & 62.8 & 943 & 1.4 & 1971-80 & CERN \\ SPEAR & \(e^{+}e^{-}\) & 8.4 & 234 & 0.12 & 1972-90 & SLAC, USA \\ DORIS & \(e^{+}e^{-}\) & 11.2 & 289 & 0.33 & 1973-93 & DESY, Germany \\ VEPP-2M & \(e^{+}e^{-}\) & 1.4 & 18 & 0.05 & 1974-2000 & Novosibirsk, USSR \\ VEPP-3 & \(e^{+}e^{-}\) & 3.1 & 74 & \(2\times 10^{-5}\) & 1974-75 & Novosibirsk, USSR \\ DCI & \(e^{+}e^{-}\) & 3.6 & 94.6 & 0.02 & 1977-84 & Orsay, France \\ PETRA & \(e^{+}e^{-}\) & 46.8 & 2304 & 0.24 & 1978-86 & DESY, Germany \\ CESR & \(e^{+}e^{-}\) & 12 & 768 & 13 & 1979-2008 & Cornell, USA \\ PEP & \(e^{+}e^{-}\) & 30 & 2200 & 0.6 & 1980-90 & SLAC, USA \\ S\(p\bar{p}\)S & \(p\bar{p}\) & 910 & 6911 & 0.06 & 1981-90 & CERN \\ TRISTAN & \(e^{+}e^{-}\) & 64 & 3018 & 0.4 & 1987-95 & KEK, Japan \\ Tevatron & \(p\bar{p}\) & 1960 & 6283 & 4.3 & 1987-2011 & Fermilab, USA \\ SLC & \(e^{+}e^{-}\) & 100 & 2920 & 0.025 & 1989-98 & SLAC, USA \\ LEP & \(e^{+}e^{-}\) & 209.2 & 26659 & 1 & 1989-2000 & CERN \\ HERA & \(ep\) & 30+920 & 6336 & 0.75 & 1992-2007 & DESY, Germany \\ PEP-II & \(e^{+}e^{-}\) & 3.1+9 & 2200 & 120 & 1999-2008 & SLAC, USA \\ KEKB & \(e^{+}e^{-}\) & 3.5+8.0 & 3016 & 210 & 1999-2010 & KEK, Japan \\ \hline VEPP-4M & \(e^{+}e^{-}\) & 12 & 366 & 0.22 & 1979- & Novosibirsk, Russia \\ BEPC-I/II & \(e^{+}e^{-}\) & 4.6 & 238 & 10 & 1989- & IHEP, China \\ DA\(\Phi\)NE & \(e^{+}e^{-}\) & 1.02 & 98 & 4.5 & 1997- & Frascati, 
Italy \\ RHIC & \(p,i\) & 510 & 3834 & 2.5 & 2000- & BNL, USA \\ LHC & \(p,i\) & 13600 & 26659 & 210 & 2009- & CERN \\ VEPP2000 & \(e^{+}e^{-}\) & 2.0 & 24 & 0.4 & 2010- & Novosibirsk, Russia \\ S-KEKB & \(e^{+}e^{-}\) & 7+4 & 3016 & 6000\({}^{*}\) & 2018- & KEK, Japan \\ \hline NICA & \(p,i\) & 13 & 503 & \(1^{*}\) & 2024(tbd) & JINR, Russia \\ EIC & \(ep\) & 10+275 & 3834 & \(105^{*}\) & 2032(tbd) & BNL, USA \\ \hline \hline Proposals & Species & \(E_{cm}\), TeV & \(C\), km & \({\cal L}^{*}\), \(10^{35}\) & Years & Host lab, country \\ \hline FCCee & \(e^{+}e^{-}\) & 0.24 & 91 & 0.5 & n/a & CERN \\ CEPC & \(e^{+}e^{-}\) & 0.24 & 100 & 0.5 & n/a & China \\ ILC-0.25 & \(e^{+}e^{-}\) & 0.25 & 20.5 & 0.14 & n/a & Japan \\ CLIC-0.38 & \(e^{+}e^{-}\) & 0.38 & 11 & 0.15 & n/a & CERN \\ ILC-1 & \(e^{+}e^{-}\) & 1 & 38 & 0.5 & n/a & Japan \\ LHeC & \(ep\) & 0.06+7 & 9+26.7 & 0.08 & n/a & CERN \\ CLIC-3 & \(e^{+}e^{-}\) & 3 & 50 & 0.6 & n/a & CERN \\ MC-3 & \(\mu^{+}\mu^{-}\) & 3 & 4.5 & 0.18 & n/a & n/a \\ MC-14 & \(\mu^{+}\mu^{-}\) & 14 & 14 & 4 & n/a & n/a \\ WFA-15 & \(e^{+}e^{-}\) & 15 & 12 & 5 & n/a & n/a \\ WFA-30 & \(e^{+}e^{-}\) & 30 & 20 & 32 & n/a & n/a \\ FCChh & \(pp\) & 100 & 91 & 3 & n/a & CERN \\ SPPC & \(pp\) & 125 & 100 & 1.3 & n/a & IHEP, China \\ \hline \end{tabular} \end{table} Table 1: Past, present and several proposed future particle colliders: their particle species, center of mass energy \(E_{cm}\), circumference or length \(C\), maximum peak luminosity \({\cal L}\) per interaction point, years of luminosity operation, and host labs. (\(i\) is for ions; luminosity is in units of cm\({}^{-2}\)s\({}^{-1}\), \({}^{*}\) design; see also text.) Figure 2: Center of mass energy reach of particle colliders vs their actual or proposed start of operation. Solid and dashed lines indicate a ten-fold increase per decade for hadron (circles), lepton (triangles) and lepton-hadron (half filled circles) colliders (adapted from [Shiltsev and Zimmermann, 2021]). The ever-growing demands of particle physics research drove an increase in the beam energy and c.m.e. of colliders by five orders of magnitude, as is demonstrated in Fig. 2. Charged particles gain energy from an electric field. The accelerating-field gradients in fast time-varying structures, such as radio-frequency (RF) cavities, are usually orders of magnitude higher than in direct-current (DC) systems, and, therefore, commonly used in modern colliders (with the RF frequencies ranging from 10s of MHz to 10s of GHz). At present, the highest beam accelerating gradients ever achieved in operational machines or beam-test facilities are about 31.5 MV/m in 1.3 GHz superconducting RF (SRF) cavities and some \(G\approx 100\) MV/m in 12 GHz normal-conducting (NC) ones. The much higher gradients \(O(10GV/m)\) are reached in plasma wake-field acceleration (WFA) experiments (see below). In a linear-collider arrangement, illustrated in Fig. 1c, the beam energy \(E\) is the product of the average accelerating gradient \(G\) and the length of the linac \(L\): \[E=eG\cdot L\;, \tag{2}\] where \(e\) denotes the elementary (electron) charge, assuming the acceleration of singly charged particles like electrons or protons. For example, reaching just 0.001 PeV=1 TeV energy requires either \(\sim\)30 km of SRF linac or 10 km of NC RF accelerator, if the RF cavities occupied all available space - which they usually do not. 
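For a rough feel of Eq. (2), the linac lengths quoted above follow from a one-line estimate (a sketch only; real machines have RF fill factors well below 100%, as noted):

```python
def linac_length_km(energy_tev, gradient_mv_per_m, fill_factor=1.0):
    """Active linac length needed to reach `energy_tev`, Eq. (2): E = e*G*L."""
    length_m = energy_tev * 1e6 / (gradient_mv_per_m * fill_factor)  # 1 TeV = 1e6 MV
    return length_m / 1e3

print(linac_length_km(1.0, 31.5))    # ~31.7 km of SRF linac at 31.5 MV/m
print(linac_length_km(1.0, 100.0))   # 10 km of NC linac at ~100 MV/m
```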
Cost considerations (see below) imply that RF acceleration hardware, such as normally metallic resonant cavities, RF power sources and distribution systems, should be minimized, e.g., through repeated use of the same RF system, which would boost the energy incrementally, \(\Delta E=eV_{RF}\) per turn every time a particle passes through the total cavity voltage \(V_{RF}\). Such an arrangement can be realized both in the form of circular colliders (Fig. 1a), which have proven extremely successful, and also through schemes based on ERLs (Fig. 1b). Circular colliders are most common; here, dipole magnets with an average magnetic field \(B\) and bending radius \(\rho\) are used to confine charged particles inside the accelerator beam pipe passing through the apertures of the dipoles such that: \[E=ecB\cdot\rho\quad\mbox{or}\quad E\;[\mbox{TeV}]=0.3(B\rho)\;[\mbox{T}\cdot \mbox{km}]\;. \tag{3}\] As the particles are accelerated in a _synchrotron_, the strength of the magnetic field is increased to keep the radius of the orbit approximately constant. The maximum field of NC magnets is about 2 Tesla (T) due to the saturation of ferromagnetic materials, and while this is sufficient for lower-energy colliders, such as most \(e^{+}e^{-}\) storage rings, it is not adequate for very high-energy hadron or muon beams because it would require excessively long accelerator tunnels and prohibitively high magnet power consumption. The development of superconducting (SC) magnets carrying high electric current in Nb-Ti wires cooled by liquid helium below 5 K opened the way towards higher fields and to hadron colliders at record energies [Tollestrup and Todesco, 2008]. For example, the 14 TeV c.m.e. LHC at CERN uses double-bore magnets with a maximum field of 8.3 T at a temperature of 1.9 K in a tunnel of \(C=26.7\) km circumference (dipole-magnet bending radius \(\rho=2,800\) m). The exploration of rare particle-physics phenomena at the energy frontier requires not only an appropriately high energy, but also a sufficiently large number of detectable reactions. This number, \(N_{reaction}\), is given by the product of the cross section of the reaction under study, \(\sigma_{\mbox{reaction}}\), and the time integral over the instantaneous _collider luminosity_, \({\cal L}\): \[N_{\mbox{reaction}}=\sigma_{\mbox{reaction}}\cdot\int{\cal L}(t)dt. \tag{4}\] The luminosity dimension is \([\mbox{length}]^{-2}[\mbox{time}]^{-1}\). The integral on the right is referred to as _integrated luminosity_ \({\cal L}_{\mbox{int}}\), and, reflecting the smallness of typical particle-interaction cross-sections, is often reported in units of inverse pico-, femto- or attobarns. By definition, 1 barn is equal to \(10^{-24}\) cm\({}^{2}\), and, correspondingly, 1 ab\({}^{-1}\)=\(10^{42}\) cm\({}^{-2}\). Figure 3 presents impressive progress in the luminosity of colliders - by more than six orders of magnitude, up to today's record of about \(0.5\cdot 10^{35}\) cm\({}^{-2}\)s\({}^{-1}\). Note that the luminosity progress goes hand in hand with increase of the energy because the cross-sections of many reactions of interest get smaller with energy and often drop as \(\sigma_{reaction}\propto 1/E_{cm}^{2}\). To get reasonably high numbers of events, one needs to raise the luminosity correspondingly - as can be seen from Eq.(4).
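Equations (3) and (4) admit the same kind of back-of-the-envelope evaluation; the sketch below reproduces the LHC magnetic-rigidity numbers given above and the event-count arithmetic (the 6 fb cross section is the \(WZ\)-production example discussed in the text):

```python
def ring_beam_energy_tev(b_field_tesla, bend_radius_km):
    """Beam energy stored in a ring, Eq. (3): E [TeV] = 0.3 * B [T] * rho [km]."""
    return 0.3 * b_field_tesla * bend_radius_km

def n_events(cross_section_fb, int_lumi_ab):
    """Expected event count, Eq. (4): N = sigma * integrated luminosity."""
    # 1 fb = 1e-39 cm^2, 1 ab^-1 = 1e42 cm^-2
    return cross_section_fb * 1e-39 * int_lumi_ab * 1e42

print(ring_beam_energy_tev(8.3, 2.8))   # ~7 TeV per LHC beam (8.3 T dipoles, rho = 2.8 km)
print(n_events(6.0, 0.2))               # 6 fb process with 0.2 ab^-1 -> ~1200 events
```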
For example, for the \(WZ\) production in the LHC, with the reaction cross-section of about 6 femtobarn or \(6\cdot 10^{-39}\) cm\({}^{2}\), one can expect to see some 1200 of such events over one year of operation (effectively, about \(10^{7}\) s) with peak luminosities \(\sim 0.2\cdot 10^{35}\) cm\({}^{-2}\)s\({}^{-1}\). Luminosity of colliders is critically dependent on beam intensities and sizes at the IPs. Colliders usually employ bunched beams of particles with approximately Gaussian distributions, and for \(n_{b}\) bunches containing equal numbers of particles \(N_{1}=N_{2}=N\) colliding head-on with repetition frequency \(f_{\rm rep}\), a basic expression for the luminosity is \[{\cal L}=f_{\rm rep}n_{b}\frac{N^{2}}{4\pi\sigma_{x}^{*}\sigma_{y}^{*}}\;, \tag{5}\] where \(\sigma_{x}^{*}\) and \(\sigma_{y}^{*}\) characterize the rms transverse beam sizes in the horizontal and vertical directions at the IPs, respectively. To achieve high luminosity, the population and number of bunches should be maximized, and either produced as narrow as possible or focused tightly at dedicated colliding locations. Sophisticated detectors usually surround the interaction points in order to collect as much information as possible about the reactions that originate from collisions of particles. In the attempt to understand the ultimate limits of colliders, it should be noted that the great progress of the colliders shown in Figs. 2 and 3 was accompanied by a simultaneous increase of their size, power consumption, complexity and cost. Modern colliders employ a number of diverse technologies for power converters and power supplies, ultra-high vacuum systems, particle sources, injection and extraction systems, tunneling, geodesy and alignment, cooling water and cryogenic cooling, beam diagnostics, accelerator control, personnel safety and machine protection, among other subsystems and equipment. Still, when it comes to the facility size, cost and power consumption, the most important factors are the "core technologies" required for accelerating particles to high energies - normal- and/or superconducting radio-frequency (RF) acceleration systems, and normal- and/or superconducting accelerator magnets - and "beam physics techniques" used to attain the necessary beam qualities such as intensity, beam sizes, and sometimes polarization, including beam cooling, manipulation and collimation, the production of exotic particles like antiprotons or muons, mitigation of beam instabilities, and countermeasures against beam-size blow up caused by space-charge and beam-beam effects or intra-beam scattering, among other effects. The energy reach of a collider is mostly defined by its core accelerator technologies, while its luminosity is very much dependent on the sophistication of beam physics techniques [Shiltsev and Zimmermann, 2021]. The energy frontier colliders were and remain costly, often at the brink of financial and political affordability. That poses serious risks, and in the past several projects have been terminated, even after the start of construction. For example, the construction of the 400 GeV c.m.e. ISABELLE \(pp\) collider (briefly renamed CBA) at the Brookhaven National Laboratory in the USA was stopped in 1983 [Month, 2003, Crease, 2005a, Crease, 2005b]; in the early 1990s two other flagship projects were terminated: the 6 TeV c.m.e. proton-proton complex UNK [Yarba, 1990, Kuiper, 1994] in Protvino, Russia, and the 40 TeV c.m.e. 
proton-proton Superconducting Super Collider (SSC) in Texas, USA, in 1993 [Wojcicki, 2009, Riordan et al., 2015]. Notwithstanding the above, advances in core accelerator technologies - including the developments of superconducting magnets for ISABELLE/CBA, UNK and SSC - have led to substantial reductions in collider cost per TeV [Shiltsev, 2014]. This progress, together with the growing strength of the high-energy particle physics community, enabled development of frontier machines, such as the currently operational multi-billion dollar LHC. Because no other instrument can replace high-energy colliders in the search for the fundamental laws governing the universe, even larger $10B-scale future collider projects need to be motivated and proposed.

Figure 3: Luminosities of particle colliders: triangles are lepton colliders, full circles are hadron colliders, and half-filled circles for electron-hadron colliders. Values are per collision point (adapted from [Shiltsev and Zimmermann, 2021]).

## 3 Next Few Decades

The prevailing view of the global HEP community is that the next large particle physics facility should be an \(e^{+}e^{-}\) collider that functions as a Higgs/ElectroWeak factory. The physics case for such a collider with c.m.e. range (0.25-0.5) TeV and very high luminosity (0.1-1) ab\({}^{-1}\)/yr (hence the name "factory") is quite compelling because it would enable detailed exploration of subtle reactions involving the Higgs/ElectroWeak fields (\(H,W,Z\) particles and photons) and shed light on possible deviations from the predictions of the _Standard Model_ theory of particle physics, see, e.g. [Hoddeson et al., 1997, Boonekamp and Schott, 2020]. Several options for each of these types of colliders are under consideration globally, with variable technical readiness. The leading candidates for a Higgs/EW factory are (1) the \(e^{+}e^{-}\) Future Circular Collider (FCC-ee) at CERN and the quite similar Circular Electron-Positron Collider (CEPC) in China, (2) the International Linear Collider (ILC) in Japan, and (3) the Compact LInear Collider (CLIC) at CERN - see Table 1. Additional novel options for compact \(e^{+}e^{-}\) colliders, such as the Cool Copper Collider (C\({}^{3}\)), high gradient (\(\sim 70\) MV/m) superconducting RF linear collider HELEN (High Energy LEptoN collider), ERL-based circular and linear collider schemes, and a Fermilab Site Filler circular \(e^{+}e^{-}\) collider, have emerged and are under investigation. For the purpose of this analysis, all Higgs factories can be considered as low-energy machines that can be built based on generally existing technologies and within a reasonable timescale \(O\)(10-20 years) from the decision to proceed [Roser et al., 2023]. Many beam physics methods and accelerator technologies developed for Higgs factories can be employed in much higher energy machines. At the "energy frontier," the international particle physics community aspires towards a collider with an energy reach of \(\sim 10\) TeV scale to enable New Physics discoveries (i.e., particles and reactions beyond those described by the Standard Model). The energy of such a collider should significantly exceed the 14 TeV c.m.e. of the LHC, which can be provided either by a \(\sim 100\) TeV hadron (\(pp\)) collider or a \(\geq 10\) TeV lepton (\(e^{+}e^{-}\) or muon) collider. Here it should be noted that in very high energy collisions, hadrons manifest themselves as composites of quarks and gluons, whose total energy is distributed among these constituents.
Therefore, the highest accessible c.m.e. \(E^{*}_{cm}\) of individual parton-to-parton collisions is significantly lower than the nominal (proton-proton) \(E_{cm}=2E\), and, e.g., for many reactions it can be assumed that [Al Ali et al., 2022]: \[E^{*}_{cm}\simeq(1/7-1/10)\times E_{cm}=(1/7-1/10)\times 2E. \tag{6}\] Several \(\sim 10\) TeV c.m.e. scale collider options are under active discussion at present - see Table 2 - including two \(pp\) colliders, the FCC-hh at CERN and SPPC in China; 3 TeV to 14 TeV muon colliders; as well as novel \(e^{+}e^{-}\) collider schemes based on plasma wakefield acceleration. In the course of the recent _Snowmass'21_ US community strategic planning exercise, the Implementation Task Force (ITF) [Roser et al., 2023] of a dozen internationally renowned accelerator experts was convened and charged with developing metrics and processes to facilitate comparisons between projects. Essentially all (\(>30\)) collider concepts presently considered viable have been evaluated by the ITF using parametric estimators to compare physics reach (impact), beam parameters, size, complexity, power, environmental concerns, technical risk, technical readiness, validation and R&D required, cost and schedule - see Table 2. The significant uncertainty in these values was addressed by giving a range where appropriate. Notably, the ITF choose to use the proponent-provided luminosity and power consumption values. Relevant measure of the maturity of a proposal is the estimate of how much R&D time is required before a proposal could be considered for a project start (so called "Critical Decision 0" in the US scientific infrastructure project approval system). The time to first physics in a technically limited schedule includes the pre-project R&D, design, construction and commissioning of the facility, and is most useful to compare the scientific relevance of the proposals over the timeline of interest. The total project cost follows the US project accounting methods, but without taking into account the inflation escalation and (usually required) contingency. The ITF used various parametric cost models, also taking into account the estimates provided by the proponents, and - for reference - known costs of existing installations, as well as reasonably expected costs for novel equipment. For future technologies, the pre-project cost reduction R&D may further lower the ITF cost estimate ranges. As for any large scientific research facility, it is not only the cost that is of importance, but also the number of experts needed for the design, construction and commissioning of the future colliders and the environmental impact, e.g., the electrical power consumption. Therefore, it is of very practical interest for the particle physics community to assess the limits of the ultimate colliders in a quantitative manner. \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline \hline Proposal & & Type & \(E_{cm}\) & \(\mathcal{L}_{int}\)/IP & Yrs. of & Yrs. to 1st & Constr. cost & El. 
power \\ & & & [TeV] & [ab\({}^{-1}\)/yr] & R\&D & physics & [2021 B\$] & [GW] \\ \hline ILC-3 & \(e^{+}e^{-}\) & L & 3 & 0.61 & 5-10 & 19-24 & 18-30 & \(\sim\)0.4 \\ \hline CLIC-3 & \(e^{+}e^{-}\) & L & 3 & 0.59 & 3-5 & 19-24 & 18-30 & \(\sim\)0.55 \\ \hline CCC-3 & \(e^{+}e^{-}\) & L & 3 & 0.6 & 3-5 & 19-24 & 12-18 & \(\sim\)0.7 \\ \hline ReLiC-3 & \(e^{+}e^{-}\) & ERL & 3 & 4.7(9.4) & 5-10 & \(>\)25 & 30-50 & \(\sim\)0.78 \\ \hline \(\mu\mu\)Collider’-3 & \(\mu^{+}\mu^{-}\) & C & 3 & 0.23(0.46) & \(>\)10 & 19-24 & 7-12 & \(\sim\)0.23 \\ \hline LWFA-LC-3 & \(e^{+}e^{-}\) & L & 3 & 1 & \(>\)10 & \(>\)25 & 12-80 & \(\sim\)0.34 \\ \hline PWFA-LC-3 & \(e^{+}e^{-}\) & L & 3 & 1 & \(>\)10 & 19-24 & 12-30 & \(\sim\)0.23 \\ \hline SWFA-LC-3 & \(e^{+}e^{-}\) & L & 3 & 1 & 5-10 & \(>\)25 & 12-30 & \(\sim\)0.17 \\ \hline \hline Muon Collider\({}^{1}\) & \(\mu^{+}\mu^{-}\) & C & 10 & 2(4) & \(>\)10 & \(>\)25 & 12-18 & \(\sim\)0.3 \\ \hline LWFA-LC-15 & \(e^{+}e^{-}\) & L & 15 & 5 & \(>\)10 & \(>\)25 & 18-80 & \(\sim\)1 \\ \hline PWFA-LC-15 & \(e^{+}e^{-}\) & L & 15 & 5 & \(>\)10 & \(>\)25 & 18-50 & \(\sim\)0.62 \\ \hline SWFA-LC-15 & \(e^{+}e^{-}\) & L & 15 & 5 & \(>\)10 & \(>\)25 & 18-50 & \(\sim\)0.45 \\ \hline FNAL _pp_ circ. & _pp_ & C & 24 & 0.35(0.7) & \(>\)10 & \(>\)25 & 18-30 & \(\sim\)0.4 \\ \hline FCC-hh & _pp_ & C & 100 & 3(6) & \(>\)10 & \(>\)25 & 30-50 & \(\sim\)0.56 \\ \hline SPPS & _pp_ & C & 125 & 1.3(2.6) & \(>\)10 & \(>\)25 & 30-50 & \(\sim\)0.4 \\ \hline Collider in Sea & _pp_ & C & 500 & 5 & \(>\)10 & \(>\)25 & \(>\)80 & \(>\)1 \\ \hline \hline \end{tabular} \end{table} Table 2: Main parameters of the multi-TeV lepton collider proposals (3 TeV c.m.e. options) and colliders with 10 TeV or higher parton c.m.e: colliding particles; type of the collider (L for linear, C for circular, ERL for energy recovery linacs); center-of-mass energy (the relevant energies for the hadron colliders are the parton c.m. energy, which is \(\sim\) 7 times less than hadron c.m. energy \(E_{cm}\) quoted here - see Eq.6); annual integrated luminosity per interaction point (assuming \(10^{7}\)s per year effective operating time; for colliders with multiple IPs, the total peak luminosity is given in parenthesis); years of the pre-project R&D indicate an estimate of the required effort to get to sufficient technical readiness; estimated years to first physics are for technically limited timeline starting at the time of the decision to proceed; total construction cost range in 2021$ (based on a parametric estimator, including explicit labor, but without escalation and contingency); facility electric power consumption (adapted from the Implementation Task Force report [Roser et al., 2023]). ## 4 Limits of Colliders A discussion of the limits of future colliders starts with an introduction to the issue: definitions of the units and general considerations regarding energy, luminosity, and social cost of the ultimate machines. It is followed by a more detailed look into specific limitations of circular \(pp,ee\) and \(\mu\mu\) colliders; linear and plasma-based \(ee,\gamma\gamma,\mu\mu\) colliders; and some exotic schemes, such as the crystal muon colliders. The social-cost considerations (power consumption, financial costs, carbon footprint, availability of experts and time to construct) are most defined for the machines based on extensions of the existing core accelerator technologies (RF and magnets) and less so for the emerging or exotic technologies (ERLs, plasma WFA, crystals, etc). 
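Throughout what follows, hadron-collider energies have to be read through the constituent-energy relation of Eq. (6) that also underlies the comparison in Table 2. The short sketch below is a minimal, purely illustrative application of that rule of thumb to a few of the machines listed in the table, using the 1/7-1/10 range quoted above and treating lepton (muon) collisions as delivering their full c.m.e. to the elementary reaction; it adds no information beyond Eq. (6) and Table 2.

```python
# Rough "physics reach" comparison using Eq. (6): for hadron colliders the
# constituent (parton) c.m. energy is ~(1/7-1/10) of the nominal E_cm, while
# lepton colliders deliver essentially the full E_cm to the elementary reaction.
# Names and nominal energies are taken from Table 2.

proposals = {
    "Muon Collider (10 TeV)":      ("lepton", 10.0),   # TeV, c.m.e.
    "FNAL pp circ. (24 TeV)":      ("hadron", 24.0),
    "FCC-hh (100 TeV)":            ("hadron", 100.0),
    "SPPS (125 TeV)":              ("hadron", 125.0),
    "Collider in Sea (500 TeV)":   ("hadron", 500.0),
}

def parton_reach_tev(kind: str, e_cm: float) -> tuple[float, float]:
    """Return the (pessimistic, optimistic) constituent-level c.m.e. in TeV."""
    if kind == "hadron":
        return (e_cm / 10.0, e_cm / 7.0)   # Eq. (6)
    return (e_cm, e_cm)                    # leptons are point-like

for name, (kind, e_cm) in proposals.items():
    lo, hi = parton_reach_tev(kind, e_cm)
    print(f"{name:28s} -> constituent reach ~ {lo:5.1f}-{hi:5.1f} TeV")
```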
Three of the most important aspects of the evaluation are feasibility of the c.m. energy \(E_{cm}\), feasibility of the collider luminosity \(\mathcal{L}\), and feasibility of the facility cost \(C\). For each machine type (technology), current state-of-the-art machines are examined - see, e.g., Ref. [Shiltsev and Zimmermann, 2021], for more details - and several (1,2,...) orders of magnitude steps in energy made to see how that would affect the luminosity and cost. The unit of the c.m. energy \(E_{cm}\) used is 1 PeV = 1000 TeV. The units of \(\mathcal{L}\) are ab\({}^{-1}\)/yr, i.e., equal to, e.g., \(10^{35}\) cm\({}^{-2}\)s\({}^{-1}\) over \(10^{7}\) sec/yr. For reference, the LHC will deliver 0.3 ab\({}^{-1}\)/yr after its high luminosity upgrade. Due to the spread of expectations for the machine availability and annual operation time, there might be a factor of \(\sim\)2 uncertainty in peak luminosity for any ab\({}^{-1}\)/yr value. The units of electric power consumption are TWh/yr. For reference, the CERN power consumption averages about \(P_{s}\)=200 MW and 1.1-1.3 TWh/yr while operating the LHC. The total facility electric power includes not only the collider and its injectors, but also detectors, infrastructure, lighting, etc. In addition, accelerator systems needed to maintain and accelerate beams (RF, magnets, etc) have their own inefficiencies, and as a result, for all collider types the facility electric power is significantly larger than the power that goes into the beams. The cost is estimated in "LHC-Units" (LHCU) - the cost of the LHC construction (at present day prices, LHCU \(\simeq\) \$10B). An analysis similar to that of the ITF is the most reliable (see above Sec.3). With certain reservations and caveats, an approximate phenomenological \(\alpha\beta\gamma\) collider cost model [Shiltsev, 2014] is appropriate: \[C_{c}\approx\alpha\cdot L_{t}^{p_{1}}+\beta\cdot E_{cm}^{p_{2}}+\gamma\cdot P_{s}^{p_{3}} \tag{7}\] where the cost is understood as a total project cost (of an all-new facility without previous investments, taking into account labour cost, escalation due to inflation, contingency, R&D, management, etc.) that scales with just three facility-specific parameters -- the length of the tunnels \(L_{t}\), the center-of-mass or beam energy \(E_{cm}\), and the total required site power \(P_{s}\). The second term reflects the cost of accelerator components (magnets, RF, etc. and associated auxiliary subsystems); it depends very much on technology and often dominates the total cost. Comparison with the cost of recently built large accelerators and the ITF cost estimates indicates that the model estimates are good to within a factor of 2 if the exponents are rounded up to \(p_{1}=p_{2}=p_{3}=1/2\), and the coefficients are \(\alpha\approx 0.1\) LHCU/\(\sqrt{10\text{km}}\), \(\gamma\approx 0.3\) LHCU/\(\sqrt{\text{TWh/yr}}\) and the accelerator technology dependent coefficient \(\beta_{\text{MAG}}\approx\)6 LHCU/\(\sqrt{\text{PeV}}\) for high-field magnets and \(\beta_{\text{RF}}\approx\)30 LHCU/\(\sqrt{\text{PeV}}\) for RF accelerating structures [Shiltsev, 2014, Roser et al., 2023]. The \(\alpha\beta\gamma\)-model should be used with caution as it still needs to be properly extended to advanced technologies (plasma WFA, lasers, crystals, etc).
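To show how the \(\alpha\beta\gamma\) estimator of Eq. (7) behaves in practice, the minimal sketch below evaluates it with the exponents and coefficients quoted above. It is only a back-of-the-envelope tool, good to a factor of \(\sim\)2 at best, and the sample input numbers in the usage lines are illustrative assumptions rather than design values of any particular project.

```python
import math

def collider_cost_lhcu(tunnel_km: float, e_cm_pev: float, power_twh_yr: float,
                       technology: str = "magnets") -> float:
    """Phenomenological 'alpha-beta-gamma' cost model of Eq. (7).

    All exponents are taken as 1/2; coefficients follow the values quoted in
    the text:
      alpha ~ 0.1 LHCU per sqrt(10 km) of tunnel,
      beta  ~ 6 LHCU/sqrt(PeV) for magnet-based rings,
              30 LHCU/sqrt(PeV) for RF-based accelerating structures,
      gamma ~ 0.3 LHCU per sqrt(TWh/yr) of site power.
    The result is in 'LHC units' (1 LHCU ~ the LHC construction cost).
    """
    alpha, gamma = 0.1, 0.3
    beta = 6.0 if technology == "magnets" else 30.0
    return (alpha * math.sqrt(tunnel_km / 10.0)
            + beta * math.sqrt(e_cm_pev)
            + gamma * math.sqrt(power_twh_yr))

# Illustrative inputs only (not official design parameters):
print(collider_cost_lhcu(100.0, 0.1, 2.0, "magnets"))   # ~100 km, 0.1 PeV pp ring
print(collider_cost_lhcu(30.0, 0.003, 1.0, "rf"))       # ~30 km, 3 TeV RF linac
```

With these assumed inputs the model returns roughly 2.6 and 2.1 LHCU, respectively, i.e., numbers of the same order as the ITF cost ranges in Table 2, which is all the model is intended to provide.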
### General Limitations

The most obvious limit to consider is the size of the collider. Indeed, as Eqs.2 and 3 indicate, the larger the length of a linac or circumference of a ring, the higher beam energies \(E\) can be envisaged. For example, if the available site length is limited to \(L_{t}\simeq 100\) km, then two linacs of 50 km each could allow the energy to reach up to \(E_{cm}\simeq\)0.01 PeV with the current state of the art normal-conducting RF cavities with \(G=0.1\) GeV/m and up to \(E_{cm}\simeq\)0.2-0.5 PeV with the potentially achievable average accelerating gradient of \(G=2-5\) GeV/m in plasma-wakefield structures. In comparison, a 100 km-long circular tunnel (\(\rho\)=16 km radius) allows a \(\sim\)0.1 PeV collider based on the 16T Nb\({}_{3}\)Sn SC bending magnets or a 0.25 PeV collider with \(\sim\)40T high-temperature superconducting (HTS) magnets. Of course, larger circumference tunnels could fit proportionally higher c.m. energy machines. Note that not all kinds of particles can be accelerated in high-energy circular colliders due to prompt synchrotron radiation (SR) that results in the energy loss per turn of [Sands, 1970]: \[\Delta E_{SR}=\frac{1}{3\epsilon_{0}}\frac{e^{2}\beta^{3}\gamma^{4}}{\rho}\,, \tag{8}\] which increases with the fourth power of energy \(E=\gamma mc^{2}\) and scales with the inverse of the bending radius (here, \(\epsilon_{0}\) is the permittivity of the vacuum and \(\beta=\sqrt{1-1/\gamma^{2}}\)). At the limit of practicality, the SR loss per turn should be at least less than the total beam energy \(\Delta E_{SR}\leq E\), which defines the c.m. energy limit for circular colliders as: \[E_{cm}[\mbox{PeV}]\leq 0.001\cdot(m/m_{e})^{4/3}(\rho/10[\mbox{km}])^{1/3}\,\,\,, \tag{9}\] assuming \(\rho\sim 10\) km, that is \(\sim\)1 TeV for electrons, \(\sim\)1.2 PeV for muons (\(m_{\mu}\approx\)210\(m_{e}\)) and \(\sim\)25 PeV for protons (\(m_{p}\approx\)2000\(m_{e}\)). Beyond these energies, sheer energy economy will demand that colliders be linear (thus, needing no bending magnets). Survival of the particles in very long accelerators may set another energy limit. Indeed, for example, a 0.5 PeV linear collider based on individual 5 GeV plasma-wakefield accelerating stages requires \(M=10^{5}\) of them. For a beam of particles to propagate through such a chain without losing too much intensity (and power), the stage-to-stage transfer efficiency must be much better than \(\eta_{stage}\geq 1-1/M=0.99999\) - an extremely difficult challenge. Also, if the particles are unstable, they may decay before the end of the acceleration process. To guarantee delivery to the collision point, the minimum accelerator gradient must significantly exceed \(G\gg mc/\tau_{0}\) - where \(\tau_{0}\) is the proper decay time - that is, e.g., 0.3 MeV/m for muons (relatively easy to achieve even with present day technologies) and 0.3 GeV/m for tau-leptons (quite a challenge even for the most optimistic currently envisioned advanced acceleration schemes) [Shiltsev, 2012]. Performance (luminosity) reach of the ultimate colliders can be limited by many factors and effects - particle production, beamstrahlung, synchrotron-radiation power per meter, IR radiation damage, neutrino-radiation dose, beam instabilities, jitter/emittance growth, etc - which are machine specific and will be considered below. However, the most fundamental is the limit on the total beam power \(P_{b}=2\times f_{0}n_{b}N\gamma mc^{2}\) (the factor of 2 accounts for two colliding beams).
Indeed, the luminosity equation (5) can be re-written as: \[\mathcal{L}=\frac{1}{16\pi f_{rep}n_{b}\varepsilon\beta^{*}mc^{2}}\cdot\frac{ P_{b}^{2}}{E}\propto\frac{P_{b}^{2}}{E}\,, \tag{10}\] where \(\sigma_{x}^{*}\sigma_{y}^{*}=\varepsilon_{n}\beta^{*}/\gamma\) has been substituted with so-called "normalized beam emittance" \(\varepsilon_{n}\) and the so-called "beta-function at IP" \(\beta^{*}\), which is generally not explicitly dependent on energy - see [Shiltsev and Zimmermann, 2021]. Particle accelerators in their essence are transformers of wall-plug site power \(P_{s}\) into high-energy beam power \(P_{b}=\eta P_{s}\) with much less than 100% efficiency (in the best-case scenario \(\eta\sim 0.1-0.3\)). It is hard to know precisely where the ever-changing societal limits on the power consumption of large accelerators will be in the future, but they will surely include "carbon footprint" considerations and the environmental impact of future accelerators' construction and operation. For reference, with the present world-average power consumption rates, 1,000,000 people require \(\sim\)3TWh/yr, which is three time larger than the CERN annual site energy usage. Wherever the limit is, Eq.(10) points out that the luminosity will decrease with energy at least as \(L\propto 1/E\). Such dependence on energy is markedly different from the traditional HEP demand for the luminosity to follow the point-like annihilation cross-section scaling, \(L\propto E^{2}\); from current knowledge, other factors \(f_{rep}\), \(n_{b}\), \(\varepsilon_{n}\), \(\beta^{*}\), \(\eta\) could be of only limited help in avoiding performance degradation in the quest for two to three orders of magnitude higher energies. Of course, there will also be societal limits on the collider's total cost \(C_{c}\). While this depends on the technology (core accelerator technology, civil construction technology, electric-power production, delivery and distribution technology, etc.), the probability of approval and realization for a technically feasible future collider facility typically decreases with cost increase beyond what is "reasonably acceptable", perhaps as \(\propto 1/C_{c}^{\kappa}\). As a guide, such a decrease, with the exponent \(\kappa\approx 2-3\), is characteristic for the price distributions of real-estate sales. Also note: i) the costs of civil construction and power systems are mostly driven by the larger economy and are not that dependent on the collider type and accelerator R&D advances; ii) if an injector complex is already available, up to 1/3 of the total cost could be saved, resulting in potential increase of a factor of 2 in the energy reach - see Eq.(7); iii) the collider cost is usually a relatively weak function of luminosity, provided new technologies are not required (the latest example is the HL-LHC $1B project that will increase luminosity of the \(O(\$10\)B) LHC by a factor of 5); iv) future machines are best designed with high \(E\) and relatively low initial \(\mathcal{L}\) in anticipation of eventual performance upgrades (for example, in the past, CESR and the Tevatron witnessed \(\mathcal{L}\) increases \(O(100)\), LHC by a factor \(\geq\)10, etc.); v) the total cost \(C_{c}\) is moderately weakly dependent on the tunnel length/circumference \(L_{t}\), but it is critically dependent on \(E_{cm}\) and the choice of the acceleration technology. The construction time of large accelerator projects to date is usually between 5 and 11 years and approximately scales as \(T\propto\sqrt{C_{c}}\). 
It is often limited by the peak annual spending rate, typically in the range $0.2 to $0.5 B/yr (compare to the world's global HEP budget \(\sim\)$4B), which in turn depends on the number of available technical experts. So far, the period of technical commissioning of colliders, often defined as "one particle reaches design energy", was \(O(1)\) yr - and is shorter for known technologies and longer for new ones and for larger numbers of accelerator elements. Progress towards the design (or ultimate) luminosity is dependent on the machine's "complexity" [Shiltsev, 2011], and can take as long as \(\sim\)9 yrs [Roser et al., 2023]. Taking all the above into account, various types of future colliders are analysed below and their potential energy and luminosity reach assessed - maximum \(E_{cm}\) and peak \(\mathcal{L}\) - under the assumption of the societal limits on the site power consumption and cost: \[P_{s}\,\leq 3\,\mathrm{TWh/yr}\,\,\,,\mathrm{and}\,\,\,C_{c}\,\leq 3\,\mathrm{LHCU}. \tag{11}\]

### Circular \(e^{+}e^{-}\) colliders

As mentioned above, the synchrotron radiation of light leptons \(e^{+},e^{-}\) limits the energy reach of such colliders to \(E_{cm}\leq 1\) TeV, which is far below even the energy reach of the LHC, to say nothing of the aspiration to reach PeV energies. High luminosity could be a potential rationale for an interest in these types of colliders, but it is limited by synchrotron-radiation power losses \(P_{SR}=2f_{\mathrm{revolution}}en_{b}N\cdot\Delta E_{SR}\) and very quickly drops with energy as: \[\mathcal{L}_{ee\,\mathrm{cir}}=\mathcal{F}_{ee}\frac{P_{\mathrm{SR}}\rho}{\gamma^{3}}\,, \tag{12}\] The factor \(\mathcal{F}_{ee}\) above accounts for the IP vertical focusing parameters and a dimensionless _beam-beam parameter_ that reflects the severity of the electromagnetic disruption of one beam after collision with another - the exact expression is given elsewhere [Shiltsev and Zimmermann, 2021]. Of importance for this discussion is that \(\mathcal{F}_{ee}\) is weakly dependent on the beam energy, and the maximum practical luminosity of \(e^{+}e^{-}\) circular colliders scales as \(1/E^{3-3.5}\). These facilities naturally call for larger radius \(\rho\) and circumference \(O(100\) km) - see Eq.(12) - and are considered quite promising tools at low energies, e.g., as high-luminosity Higgs/ElectroWeak factories with typical \(E_{cm}\simeq 0.25\) TeV, but even these have an energy demand of some (1.5-2) TWh/yr and cost \(\sim\)(1.5-2) LHCU. Significant energy savings are possible by using RF energy-recovery (ERLs), but that expands the c.m.e. reach of circular \(e^{+}e^{-}\) colliders to only \(E_{cm}\sim\) 0.5 TeV.

Figure 4: Very large hadron collider proposals (not to scale): a) FCC-hh (91 km circumference, 100 TeV), b) VLHC (233 km, 175 TeV); c) Eloisatron (300 km, 200 TeV); d) "Collider in the Sea" (1,900 km, 500 TeV), e) collider on the Moon (11,000 km, 14 PeV), f) Enrico Fermi's accelerator encircling our Earth ("Global-tron", 40,000 km, 2.9 PeV).

### Circular \(pp\) colliders

Being significantly less limited by the synchrotron radiation losses - see Eq.(9) - protons can be accelerated in circular machines to multi-PeV energies, and, according to Eq.(3), the limit is fully determined by the maximum field \(B\) of the bending magnets and the tunnel circumference \(L_{t}\simeq 2\pi\rho\). Fig.4 presents several \(pp\) collider proposals aimed at higher and higher energies which are based on increasing either \(B\) or \(L_{t}\) or both.
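Before turning to specific magnet technologies, the relation between bending field, ring size and energy invoked above (Eq. 3) can be illustrated with a few lines of code. This is only a back-of-the-envelope sketch: the \(p\,[\mathrm{TeV}/c]\approx 0.3\,B[\mathrm{T}]\,\rho[\mathrm{km}]\) rigidity relation is standard beam physics, but the dipole filling factor used here is an assumption introduced for illustration (real rings devote only roughly 70-80% of their circumference to bending magnets).

```python
import math

def cm_energy_tev(b_field_t: float, circumference_km: float,
                  dipole_fill: float = 0.75) -> float:
    """Rough c.m. energy of a circular pp collider from magnetic rigidity.

    Uses p[TeV/c] ~ 0.3 * B[T] * rho[km], where rho is the effective bending
    radius; `dipole_fill` (fraction of the ring occupied by dipoles) is an
    assumed, machine-dependent number of order 0.7-0.8.
    """
    rho_km = dipole_fill * circumference_km / (2.0 * math.pi)
    beam_energy_tev = 0.3 * b_field_t * rho_km
    return 2.0 * beam_energy_tev          # two counter-rotating beams

# Field (T) and tunnel circumference (km) as quoted in the text:
for name, b, c in [("LHC", 8.0, 27.0),
                   ("FCC-hh", 16.0, 91.0),
                   ("Collider-in-the-Sea", 3.2, 1900.0)]:
    print(f"{name:20s}: E_cm ~ {cm_energy_tev(b, c):7.0f} TeV")
```

With the assumed 0.75 filling factor this reproduces, to within 10-20%, the 0.014, 0.1 and 0.5 PeV figures quoted for the LHC, FCC-hh and Collider-in-the-Sea, respectively.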
Most appropriate magnet technologies currently assume limits on the maximum bending field: about 2 T for normal-conducting magnets (usually, room temperature copper conductor and steel yoke), some 8 T for NbTi SC technology, up to 16 T for Nb\({}_{3}\)Sn SC technology [Schoerling and Zlobin, 2019], and 20 T to (max) \(\sim\)40 T for high-temperature superconductor (HTS) technologies (e.g., based on rare earth oxides like ReBCO, or iron-based superconductors). There is significant knowledge in the physics community on how to design, build and operate circular \(pp\) colliders - e.g., experience with the Tevatron \(p\bar{p}\) collider (\(E_{cm}\)=0.002 PeV, \(B=\)4.5T, 6.3 km circumference) [Lebedev and Shiltsev, 2014] and the 0.014 PeV LHC (8T, 27km) [Evans and Bryant, 2008]. Also, there are designs and/or parameter sets available for the Superconducting Super Collider (SSC, 0.04 PeV, 6.6T, 87km), Future Circular \(pp\) Collider (FCC-hh, 0.1 PeV, 16T, 91km) [Benedikt et al., 2019], Super proton-proton Collider (SppC, 0.075-0.125 PeV, 12-24T, 100km) [CEPC Study Group, 2018], Very Large Hadron Collider (VLHC, 0.175 PeV, 12T, 233km) [Ambrosio et al., 2001], the Eloisatron (0.3 PeV, 10T, 300km) [Barletta, 1996], and "Collider-in-the-Sea" (in the Gulf of Mexico, 0.5 PeV, 3.2T, 1900km) [McIntyre et al., 2016]. Going to the extreme, Enrico Fermi had thought of an accelerator encircling the Earth which could reach about 3 PeV c.m.e. with inexpensive normal-conducting magnets [Cronin, 2004], and, more recently, a circular collider on the Moon was discussed (CCM, 14 PeV, 20T, 11,000 km) [Beacham and Zimmermann, 2022]. The most stringent limitations come from large size (related to the magnetic field \(B\) technological limit) and very high power-consumption requirements, resulting in high cost. Already the 100-km machines like the FCC-hh and SPPC could be approaching an energy need of 3 TWh/yr and be over the 3 LHCU cost limit given by Eq.(11). Of course, even that is small in comparison with the lunar CCM cost (about 20-40 LHCU just for the SC magnets) and the energy needs, which are (2-5)\(\cdot 10^{4}\) TWh/yr (\(O\)(30%) of the world's current production).

Figure 5: Estimates of the annual integrated luminosity for very high energy circular hadron colliders vs \(E_{cm}\). The second horizontal axis is for the approximate equivalent parton center-of-mass energy \(E_{cm}^{*}\approx E_{cm}/7\).

Even more serious are limitations on the maximum attainable luminosity \(\mathcal{L}_{pp}\). With the increase of beam energy, limiting detrimental effects include beam disruption due to opposite bunch EM forces experienced at each IP (beam-beam effects) and coherent beam instabilities induced by the beams' own EM interaction with induced image charges, currents and wakefields (especially dangerous in large-circumference high-intensity machines). Unavoidable will be fast beam burn-off - destruction due to inelastic interactions of high-energy protons as the result of repetitive collisions - leading to shorter and shorter beam lifetime: \[\tau_{pp}=\frac{n_{b}N}{\mathcal{L}_{pp}\sigma_{tot}}\,. \tag{13}\] The total \(pp\) cross-section grows slowly from \(\sim\)100 mbarn to \(\sim\)300 mbarn with an increase of \(E_{cm}\) from 0.001 PeV to 1 PeV. The burn-off at very high energies results in several undesired effects: first, a short beam lifetime \(\tau_{pp}\) of about an hour or even minutes, which requires the frequent injection and acceleration of new bunches of particles.
Injection and acceleration in a chain of SC magnet-based boosters is a lengthy process and, therefore, a smaller fraction of the operation time is left for collisions and the entire accelerator complex efficiency drops. Secondly, the particle detectors get flooded with products of the inelastic interactions - the so-called _pile-up_ effect makes it extremely difficult to disentangle the huge number of tracks originating from approximately 1000 or more \(pp\) reactions per bunch collision with luminosity \(O(10^{35}\) cm\({}^{-2}\)s\({}^{-1})\). Thirdly, growing problems are also anticipated with radiation protection of the detectors and collider elements and collimation of beams with higher energy density. For \(pp\) colliders with \(E_{cm}\) above (0.1-0.2)PeV, synchrotron radiation will essentially limit the maximum attainable luminosity in very much the same fashion as for \(e^{+}e^{-}\) colliders - see Eq.(12) - because of either the limited RF power available to replenish the SR losses \(P_{SR}\) or due to challenges related to the cooling of the SC magnets, where the SR photons must be intercepted internally and significant heat load due to these photons needs to be extracted at cryogenic temperatures. Individual machine designs may vary in optimization approaches toward the highest luminosity; Fig.5 presents estimates of performance of circular \(pp\) colliders vs c.m. energy up to \(E_{cm}=\)14 PeV (equivalent to parton c.m. energy \(E_{cm}^{*}\simeq 2\) PeV, according to Eq.(6)). Even with the logarithmically large uncertainties (indicated by the error bars) in scenarios limited by electric power, very high energy colliders will by necessity have low luminosity. ### Circular \(\mu\mu\) colliders Colliding muons would have two key advantages: i) compared to protons, the same size machine would allow effectively a factor of 7-10 higher energy reach due to the point-like nature of the muons - see Eq.(6); and ii) according to Eq.(8), the synchrotron radiation of muons is \(\sim(m_{\mu}/m_{e})^{4}=2\) billion times weaker than that of electrons and positrons, and power- and cost-effective acceleration in rings is possible to about a fraction of a PeV - see Eq.(9). Therefore, the highest energy circular muon colliders are predicted to be more compact, more power-efficient and significantly less expensive than the equivalent energy-frontier hadron or \(e^{+}e^{-}\) machines [Long et al., 2021]. These advantages come along with difficulties due to the short lifetime of the muon, \(\gamma\tau_{0}\) where \(\tau_{0}\)=2.2\(\mu\)s. For example, a 0.1 PeV \(\mu^{-}\)-meson has on average a lifetime of one second, decaying into an electron (or positron in the case of \(\mu^{+}\) decay) and two neutrinos, each carrying a significant fraction of the initial muon momentum. It is widely believed that the time before the decay is more than sufficient to allow fast acceleration of muons to high energy, followed by a storage for some \(300B\) turns in a ring with an average bending magnet field \(B\) (in units of Tesla) where \(\mu^{-}\) and \(\mu^{+}\) particles will collide with each other [Sessler, 1998]. As schematically shown in Fig.6, a \(\mu^{+}\mu^{-}\) collider will not look much different from the \(pp\) collider rings - it will consist of accelerating RF cavities and high-field (SC) magnets, the latter determining the size of the facility for a given \(E_{cm}\). 
What will be different is a somewhat more complicated system of production of the muons in the reactions resulting from multi-GeV protons hitting stationary targets, collection of these muons, muon beam _cooling_ (significant reduction of the muon beam sizes and internal velocity spreads), and rapid acceleration to the energy of the collider [Palmer, 2014].

Figure 6: Schematics of high-energy circular muon colliders: a) on the FNAL site, b) a general scheme (adapted from [Sessler, 1998]).

There are parameter sets available for 1.5, 3, 6, 10, and 14 TeV circular \(\mu\mu\) colliders which indicate their superior (w.r.t. other collider types) power efficiency in terms of ab\({}^{-1}\)/TWh [Shiltsev and Zimmermann, 2001]. Projecting their site power requirements and costs, "the feasibility limits" of 3 TWh/yr and 3 LHCU - see Eq.(11) - will take place at \(E_{cm}\)=(0.03-0.05) PeV. The average luminosity of a muon collider is equal to: \[{\cal L}_{\mu\mu}=f_{rep}\gamma\frac{c\tau_{0}}{4\pi\rho}\frac{n_{\rm b}N^{2}}{4\pi\sigma_{x}^{*}\sigma_{y}^{*}}={\cal F}_{\mu\mu}BP_{b}\gamma\,, \tag{14}\] where \(f_{rep}\) is the rate of the facility acceleration cycles. The luminosity can be seen to scale with \(B\) and with the total beam power \(P_{b}=f_{rep}en_{b}NE_{cm}\). The exact expression for the factor \({\cal F}_{\mu\mu}\) can be found in, e.g., [Palmer, 2014]. The above Eq.(14) indicates an obvious incentive to have the highest bending magnetic field \(B\), and the luminosity increases with energy as \({\cal L}_{\mu\mu}\propto\gamma\) if the other limiting parameters are kept fixed. Unfortunately, above about 0.01 PeV, the intense neutrino flux originating from the muons decaying in the collider poses the challenge of minimizing the environmental impact. The collider complex is usually located underground, and when the produced neutrinos emerge at the surface, a small fraction interacts with the rock (and other material) and produces an ionizing radiation dose that quickly grows with energy \(D_{\nu}\propto f_{rep}n_{b}NE^{3}\). The impact of this neutrino-induced radiation can be mitigated, for example, by continually adjusting the orbits of the beams to spread them out over a wider area, by deeper collider tunnels, or by a further reduction of the emittance of the muon beam so that the required luminosity could be obtained using a substantially smaller number of muons. It is believed that the neutrino flux dilution factor \(\Phi\) could be as high as 10-100 and the ultimate luminosity will depend on it as: \[\mathcal{L}_{\mu\mu}\propto\frac{D_{\nu}\Phi}{E_{cm}^{2}}\,. \tag{15}\] An additional uncertainty at high energies will be the limited capability to operate SC magnets with significant deposition of beam power inside them - muons decay into high-energy electrons, which will be quickly bent by the strong magnetic field into the vacuum chamber/absorber walls, radiating SR on their way. Therefore, the resulting luminosity projections for muon colliders indicate a promising increase up to \(E_{cm}\sim\)0.02 PeV followed by fast decline, approximately as shown in Fig.7.

Figure 7: Estimates of the annual integrated luminosity for very high-energy circular muon colliders.
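Before moving on to linear machines, the muon-survival figures quoted above (a lab-frame lifetime of order one second at \(E_{cm}\approx 0.1\) PeV and roughly 300\(\cdot B\) turns in the collider ring) can be checked numerically. The sketch below is a minimal illustration with standard constants; it assumes the quoted \(B\) is the bending field averaged over the whole circumference, in which case the Lorentz factor cancels and the number of turns is energy-independent.

```python
import math

# Standard constants
E_CHARGE = 1.602176634e-19   # elementary charge, C
M_MU_KG = 1.883531627e-28    # muon mass, kg
M_MU_GEV = 0.1056583755      # muon mass, GeV/c^2
TAU0 = 2.1969811e-6          # muon proper lifetime, s

def lab_lifetime_s(e_cm_gev: float) -> float:
    """Lab-frame muon lifetime gamma*tau0 when each beam carries E_cm/2."""
    gamma = (e_cm_gev / 2.0) / M_MU_GEV
    return gamma * TAU0

def turns_before_decay(b_avg_t: float) -> float:
    """Revolutions completed within one lab-frame lifetime, with b_avg_t the
    bending field averaged over the circumference; the Lorentz factor cancels,
    leaving the ~300*B[T] figure quoted in the text."""
    return TAU0 * E_CHARGE * b_avg_t / (2.0 * math.pi * M_MU_KG)

print(f"lifetime at E_cm = 0.1 PeV : {lab_lifetime_s(1.0e5):.2f} s")     # ~1 s
print(f"turns before decay, B = 10 T: {turns_before_decay(10.0):.0f}")  # ~3000
```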
### Traditional, advanced and exotic linear \(ee\) or \(\mu\mu\) colliders

Acceleration in linear systems (without bending magnets) allows, in principle, the avoidance of the energy limits of Eq.(9) due to the absence of synchrotron radiation (the power of a particle's radiation in a longitudinal field is \(\gamma^{2}\) times smaller than in an equivalent transverse field). A huge disadvantage of linear colliders (LCs) is that beams are used (collide) only once and then are spent in beam dumps - that leads to intrinsic power inefficiency and, as shown below, low luminosities. The energy limit will be set by the available length of the tunnel and the average accelerating gradient. In traditional RF accelerating structures, the latter is limited to \(G_{RF}\sim\)0.2 GV/m by the structure damage due to discharges. As a result, even the most optimized traditional LC designs, like the ILC [Michizono, 2019] and CLIC [Stapnes, 2019], become quite long (30-50 km, see Figs.8a and b) and reach the cost and power limits (3 LHCU and 3 TWh/yr) already at 0.001-0.003 PeV.

Figure 8: Very high energy linear lepton collider proposals (not to scale): a) 1 TeV c.m.e. \(e^{+}e^{-}\) ILC (31 km long), b) 3 TeV c.m.e. \(e^{+}e^{-}\) CLIC (50 km); c) plasma wakefield linear \(e^{+}e^{-}\) collider (length depends on energy, e.g., \(\sim\)20 km for 30 TeV c.m.e.); d) linear crystal wakefield \(\mu^{+}\mu^{-}\) collider.

Ionized plasmas can sustain electron plasma density waves with accelerating electric field gradients up to: \[G_{p}=m_{e}\omega_{p}c/e\approx 0.1\,[\mathrm{TV/m}]\,\sqrt{n_{0}[10^{18}\,\mathrm{cm^{-3}}]}, \tag{16}\] where \(n_{0}\) denotes the ambient electron number density and \(\omega_{\mathrm{p}}=\sqrt{e^{2}n_{0}/(m_{e}\varepsilon_{0})}\) is the electron plasma frequency [Tajima and Dawson, 1979]. Such gradients can be effectively excited by either powerful external pulses of laser light or electron bunches if they are shorter than the plasma wavelength \(\lambda_{\mathrm{p}}=2\pi c/\omega_{\mathrm{p}}\approx 1\ \mathrm{mm}\times\sqrt{10^{15}\,\mathrm{cm^{-3}}/n_{0}}\), or by longer beams of protons if their charge density is modulated with the period of \(\lambda_{p}\) [Gonsalves et al., 2019, Litos et al., 2016, Adli et al., 2018]. Whether plasma acceleration will be suitable for a very high energy collider application is yet to be seen, given the necessity of very high efficiency staging and phase-locking acceleration in multiple plasma chambers [Leemans and Esarey, 2009, Schroeder et al., 2010]. Also, at the present early stage of development of this advanced plasma-wakefield technology, the cost of such a collider would be extremely high, and the potential for the several orders of magnitude improvement in cost efficiency still needs to be demonstrated. It is clear, though, that any type of linear collider will be power-hungry. Indeed, its luminosity scales as: \[\mathcal{L}_{\mathrm{lin}}=\mathcal{F}_{\mathrm{lin}}\frac{N_{\gamma}}{\sigma_{y}^{*}}\frac{P_{b}}{E_{cm}}\,, \tag{17}\] which decreases at higher energies if the total beam power is limited. Other factors in the equation above are limited too, such as the beam sizes at the IP \(\sigma_{x,y}^{*}\) (strongly dependent on the jitter of the collider elements and the sophistication of the final-focus system) and \(N_{\gamma}\approx 2\alpha r_{0}N/\sigma_{x}^{*}\) - the number of beamstrahlung photons emitted per \(e^{\pm}\) (\(\alpha\approx 1/137\) denotes the fine-structure constant).
The latter characterizes the energy radiated due to the electromagnetic field of one bunch acting on the particles of the other (_beamstrahlung_) and the corresponding c.m. energy spread that should be controlled to be \(\ll E_{cm}\) - further details and the exact expression for \({\cal F}_{\rm lin}\) can be found in, e.g., [Shiltsev and Zimmermann, 2021]. Most technologically feasible are LCs colliding electrons with electrons, but the particle physics reach of high-energy \(e^{-}e^{-}\) collisions (variety of possible reactions and their cross-sections) is significantly less inspiring than that of the \(e^{+}e^{-}\) colliders - see, e.g., [Barger, 2018]. In order to avoid the c.m.e. spread induced by the beamstrahlung, which at high energies \(E_{cm}\geq 3\) TeV and luminosities approaches 100%, conversion of electrons into photons - via inverse Compton scattering on the high-brightness laser beam right before the IP - was proposed [Ginzburg et al., 1983]. The resulting \(\gamma\gamma\) collisions would have kinematic advantages for some HEP reactions, though still with significant c.m.e. spread. Proton linear colliders have never been seriously considered because of the factor of 7-10 disadvantage in the effective c.m. energy reach w.r.t. leptons - see Eq.(6). Until recently, linear muon colliders were not discussed either, due to obvious difficulties with muon production and collection. An interesting opportunity of wakefield acceleration of muons in structured solid media, e.g., carbon nanotubes (CNT) or crystals with the charge carrier density \(n_{0}\sim\)10\({}^{20-22}\) cm\({}^{-3}\), was proposed in [Tajima and Dawson, 1979]. It promises extreme accelerating gradients of 1-10 TV/m, continuous focusing and simultaneous acceleration (no cells, one long channel, particles get strongly cooled via the betatron radiation while channeling between the crystal planes or inside individual CNT channels). A corresponding linear crystal muon collider [Chen and Noble, 1997, Shiltsev, 2012] would be compact in size (\(\sim\)10 km for 1 PeV) - see Fig.8 d) - and, therefore, have the promise of low(er) cost. The luminosity of such an exotic LC would still be very low - \(O(0.1\) ab\({}^{-1}\)/yr) at best - for the same reasons as for any linear collider. Fig.9 presents estimated luminosities of very high-energy linear lepton colliders, starting with the 1 TeV ILC and 3 TeV CLIC, and followed by wakefield acceleration (WFA) 0.01-0.03 PeV LCs based on gaseous plasma, and up to 1 PeV crystal muon LC options.

## 5 Conclusion

The future of particle physics is critically dependent on the feasibility of future energy-frontier colliders. The concept of feasibility is complex and includes at least three factors: feasibility of energy, luminosity, cost and construction time. This article has presented major beam-physics limits of ultimate accelerators and taken a look into the ultimate energy reach of possible future colliders. A paradigm change for high-energy particle physics research is looming, as the thrust for higher energies by necessity will mean lower luminosity. The above considerations of ultimate high-energy colliders for particle physics indicate that their major thrust is attainment of the highest possible energy \(E_{cm}\), while the accelerator design challenge is high luminosity \({\cal L}\) and the major limit is the cost \(C_{c}\). The cost is critically dependent on acceleration technology used to reach the required \(E_{cm}\).
The limits on \(E_{cm}\) were assumed to be the total facility construction cost being less than three times the cost of the world's most powerful collider to date, the LHC, i.e., \(C_{c}\leq 3\) LHCU. The cost limitations are not well defined, being dependent on such societal factors as the priority and availability of resources to support fundamental research. Consequently, if the affordable collider cost limit can be increased, say, 3-fold to \(C_{c}\sim\)10 LHCU, that would also push the maximum collider energy \(E_{cm}\) by a factor of 3-10, according to Eq.(7). Notably, employment of already existing injectors and infrastructure can greatly help to reduce \(C_{c}\). For most collider types, the pursuit of high energy typically results in low(er) luminosity. So, e.g., more than \(O\)(1 ab\({}^{-1}\)/yr) cannot be expected at \(E_{cm}\geq\) 30 TeV to 1 PeV. The luminosity calculations might be assumed to be limited by the total facility (and, therefore, the beam) annual power consumption to \(\sim\)3 TWh/yr, again depending on the societal priorities and considerations of ecological footprint and energy efficiency. For the collider types considered, the following conclusions could be drawn: i) for circular \(pp\) colliders the overall feasibility limit is close to or below 100 TeV (\(\sim\)14 TeV c.m.e. for constituents); ii) for circular \(ee\) colliders the limit is at \(\sim\)0.5 TeV; iii) for circular \(\mu\mu\) colliders the limit is about 30 TeV; iv) for linear RF-based lepton colliders and plasma \(ee/\gamma\gamma\) colliders, the limit is between 3 and 10 TeV; v) there are exotic schemes, such as crystal channeling muon colliders, which potentially offer 100 TeV-1 PeV c.m.e., though at very low luminosity. All in all, muons seem to be the particles of choice for future ultimate HEP colliders.

Figure 9: Estimated annual integrated luminosity for very high energy linear lepton colliders: RF-based ILC and CLIC, plasma wakefield-based \(e^{+}e^{-}\), and linear crystal wakefield \(\mu^{+}\mu^{-}\).

## Acknowledgements and Further Reading

This paper is mostly based on the author's presentation at the workshop on the "Physics Limits of Ultimate Beams" [Snowmass'21 Workshop on "Physics Limits of Ultimate Beams"] (January 22, 2021; on-line) and recent review [Shiltsev and Zimmermann, 2021]. The author greatly appreciates input from and very helpful discussion on the subject of this paper with Mei Bai, William Barletta, Steve Gourlay, Vladimir Kashikhin, Valery Lebedev, Mark Palmer, Tor Raubenheimer, Thomas Roser, Daniel Schulte, John Seeman, Toshiki Tajima and Alexander Zlobin. Special thanks go to my long-term collaborator and co-author Frank Zimmermann, who always inspired me with his contributions to and visionary analysis of future colliders and suggested writing this article. For those who want to read more deeply on the topics touched upon in this article, one can recommend the following sources:

* Ref. [Shiltsev and Zimmermann, 2021, Myers and Schopper, 2013, Myers and Bruning, 2016],
* Refs. [Perkins, 2000, Barger, 2018],
* Refs. [Livingston, 1954, Hoddeson et al., 1997, Sessler and Wilson, 2014],
* Refs. [Seryi, 2016, Schoerling and Zlobin, 2019, Shiltsev, 2014, Florio and Pancotti, 2020, Koizumi, 2020, Roser et al., 2023],
* Refs. [Shiltsev, 2012, Zimmermann, 2018, Shiltsev, 2019].
**Glossary** **beam-beam effects:** a variety of usually detrimental effects arising during collision of dense charged-particle bunches, such as blow-ups of beam sizes, increase of the energy spread, growth of the beam halo and particle losses; most prominent in collisions of high-intensity, high-brightness bunches. **beam cooling:** reduction of the beam emittance (phase-space area) without loss of intensity; there are several methods for such improvement of the particle-beam quality (each with its own limits of applicability) - radiation damping, electron cooling, stochastic cooling, laser cooling, ionization cooling, etc. **beamstrahlung:** particle's energy loss due to radiation of photons or gamma quanta, or \(e^{+}e^{-}\) pair production in the strong electromagnetic fields of the opposite bunch, one of the _beam-beam effects_, usually most pronounced in high-energy, high-intensity electron-positron colliders of all types. **ERL (energy-recovery linac):** power-efficient type of accelerator combining _linac_ and storage ring; it is based on the recirculation of a charged particle beam which is first accelerated in the linac (it borrows energy from the electric fields of RF cavities), then travels through the recirculating arc before being decelerated in the same linac structure (returning the energy). **gamma(s) \(\gamma\):** unfortunately, the particle physics nomenclature has the same Greek letter for gamma particles (photons), and for the relativistic Lorentz factor \(\gamma=E/mC^{2}\); in this paper, the context is meant to make it clear which of the two meanings is being discussed. **intrabeam scattering (IBS):** is a single-beam effect caused by collisions between particles in circular accelerators; it leads to an increase in the beam emittance (size), typically occurring slowly in one or all three dimensions. This effect is most prominent in high-intensity, high-brightness bunches. **linac:** linear accelerator. **"\(m\)"-poles (dipoles, quadrupoles, sextupoles, etc):** types of most commonly used accelerator magnets which have 2, 4, 6 etc poles and generate corresponding types of the magnetic fields configurations that are needed for charged-particle guidance (bending and focusing). **RF (radio-frequency):** This is a general term for accelerator components such as cavities and structures, as well as systems like generators and controls, that provide alternating electric or electromagnetic fields at radio frequencies ranging from dozens of kHz (rarely) and MHz, to GHz and (rarely) THz **space-charge effects:** single-beam phenomena caused by the electromagnetic interaction of particles which usually repel each other; that leads to blow-ups of beam sizes and particle losses; most prominent in low-energy (non-relativistic), high-intensity, high-brightness beams. **Standard Model of particle physics:** self-consistent theory describing three of the four known fundamental forces in the universe and classifying all known elementary particles; correspondingly, the eagerly sought new physics phenomena in HEP are often called _Beyond the Standard Model_ (BSM). **synchrotron radiation:** is the electromagnetic radiation emitted when charged particles travel in curved paths; results in particle energy loss and is most pronounced in high-energy electron accelerators. 
**Wake-field acceleration:** relatively novel methods of excitation of high-gradient electric fields (needed for acceleration of charged particles) by very short intense driving pulses of either lasers or electrons; such fields follow the driver pulses propagating either in plasma or in metallic/dielectric open-aperture structures - therefore, the _wakes_.
2309.17187
TBD Pedestrian Data Collection: Towards Rich, Portable, and Large-Scale Natural Pedestrian Data
Social navigation and pedestrian behavior research has shifted towards machine learning-based methods and converged on the topic of modeling inter-pedestrian interactions and pedestrian-robot interactions. For this, large-scale datasets that contain rich information are needed. We describe a portable data collection system, coupled with a semi-autonomous labeling pipeline. As part of the pipeline, we designed a label correction web app that facilitates human verification of automated pedestrian tracking outcomes. Our system enables large-scale data collection in diverse environments and fast trajectory label production. Compared with existing pedestrian data collection methods, our system contains three components: a combination of top-down and ego-centric views, natural human behavior in the presence of a socially appropriate "robot", and human-verified labels grounded in the metric space. To the best of our knowledge, no prior data collection system has a combination of all three components. We further introduce our ever-expanding dataset from the ongoing data collection effort -- the TBD Pedestrian Dataset and show that our collected data is larger in scale, contains richer information when compared to prior datasets with human-verified labels, and supports new research opportunities.
Allan Wang, Daisuke Sato, Yasser Corzo, Sonya Simkin, Abhijat Biswas, Aaron Steinfeld
2023-09-29T12:34:10Z
http://arxiv.org/abs/2309.17187v2
# TBD Pedestrian Data Collection: Towards Rich, Portable, and Large-Scale Natural Pedestrian Data ###### Abstract Social navigation and pedestrian behavior research has shifted towards machine learning-based methods and converged on the topic of modeling inter-pedestrian interactions and pedestrian-robot interactions. For this, large-scale datasets that contain rich information are needed. We describe a portable data collection system, coupled with a semi-autonomous labeling pipeline. As part of the pipeline, we designed a label correction web app that facilitates human verification of automated pedestrian tracking outcomes. Our system enables large-scale data collection in diverse environments and fast trajectory label production. Compared with existing pedestrian data collection methods, our system contains three components: a combination of top-down and ego-centric views, natural human behavior in the presence of a socially appropriate "robot", and human-verified labels grounded in the metric space. To the best of our knowledge, no prior data collection system has a combination of all three components. We further introduce our ever-expanding dataset from the ongoing data collection effort - the _TBD Pedestrian Dataset_ and show that our collected data is larger in scale, contains richer information when compared to prior datasets with human-verified labels, and supports new research opportunities. ## I Introduction Pedestrian datasets are essential tools for modeling socially appropriate robot behaviors, recognizing and predicting human actions, and studying pedestrian behavior. Researchers may use these data to predict future pedestrian motions, including forecasting their trajectories [1, 17, 18], and/or navigation goals [20, 24]. In social navigation, datasets can also be used to model [33, 21] or evaluate robot navigation behavior [4]. For this, an in-the-wild pedestrian dataset that is large-scale and supports ground-truth metric labels is desired. However, existing public pedestrian datasets are either unlabelled [19, 34], only rely on labels produced by an automated pipeline [25, 5], only contain pixel level information [36, 32] or are small in scale [23, 27, 35]. We propose a system that can collect large quantities of quality data efficiently. The data collected using our system feature a novel full combination of three critical elements: a combination of top-down and ego-centric views, natural human motion, and human-verified labels grounded in the metric space. This allows the data collected using our system to contain rich information. Large datasets with quality labels and rich information can assist in addressing human behavioral research questions that require the modeling of interaction. For example, a key problem researchers have tried to address is the _freezing robot problem_[43]. Researchers have attributed this problem to the robot's inability to model interactions [41]. Some works [31] have used datasets to show that modeling the anticipation of human reactions to the robot's actions enables the robot to deliver a better performance. However, interactions are diverse and uncommon in human crowds, they contain many types [28] and can further be diversified by the environment (e.g. an open plaza or a narrow corridor), so pedestrian datasets need to be large-scale in order to capture enough interaction data. Autonomous vehicle datasets [7, 12] have inspired a plethora of research. 
However, a dataset of similar caliber and label quality in pedestrian-dominant environments has yet to arrive. As a step toward this goal, we have constructed a data collection system that can achieve these two requirements: large data quantity and diversity, and human-verified positional labels. First, we ensure that our equipment is portable and easy to set up. This allows collecting data in a variety of locations with limited lead time. Second, we address the challenge of labeling large quantities of data using a semi-autonomous labeling pipeline. We employ a state-of-the-art deep learning-based [51] tracking module combined with a human inspection and tracking error-fixing web app to semi-automatically produce high-quality ground truth pedestrian trajectories in metric space. We make the web app open-source1 so that other researchers can use this tool or contribute to this effort. Footnote 1: [https://github.com/CMU-TBD/tbd_label_correction_UI](https://github.com/CMU-TBD/tbd_label_correction_UI)

Fig. 1: This set of images represents the same moment recorded from multiple sensors: a) Top-down view image taken by a static camera with grounded pedestrian trajectory labels shown. b) Ego-centric point cloud from a 3D LiDAR with the projected trajectories from (a). c) Ego-centric RGB and depth images from the mounted stereo camera. Green vertical bars represent the projected labels. Note that two pedestrians at the back are partially and completely occluded from the stereo camera.

While we hope our contributions support robot system improvements in the community and we aim to accommodate a wide variety of pedestrian behavior research, our dataset primarily supports human environment navigation research that requires ground truth pedestrian positional information, such as social navigation, pedestrian trajectory prediction, and ego-centric perception. Specifically, we include three important characteristics. (1) Top-down view and ego-centric views: This ensures that the robot has access to ground-truth data even with occlusions. (2) Natural human motion: The manual pushing of the inconspicuous suitcase robot mitigates the curiosity effects of nearby pedestrians. And (3) Ground truth labeling in metric space: This allows our dataset to be useful for research where positional pedestrian data are needed. To the best of our knowledge, publicly available datasets only have at most two of these characteristics. We demonstrate our system through a dataset collected in a large indoor space: the TBD Pedestrian Dataset2. Our dataset contains scenes with a variety of crowd densities and pedestrian interactions. We show through our analysis that our dataset (Batch 1: 133 minutes - 1416 trajectories; Batch 2: 626 minutes - 10300 trajectories) is larger in scale and contains unique characteristics compared to prior similar datasets. This is an ongoing effort and we plan to collect additional data in more diverse locations. Footnote 2: [https://tbd.ri.cmu.edu/tbd-social-navigation-datasets](https://tbd.ri.cmu.edu/tbd-social-navigation-datasets)

## II Related Work

With the explosion of data-hungry machine learning methods in robotics, demand for pedestrian datasets has been on the rise in recent years. One popular category of research in this domain is human trajectory prediction (e.g., [17, 39, 1, 20, 18, 24, 46]).
Much of this research utilizes selected mechanisms to model pedestrian interactions in hopes of better prediction performance (e.g., pooling layers in the deep learning frameworks [17, 1] or graph-based representations [30]). Rudenko et al. [38] provides a good summary of this topic. While the state-of-the-art performance keeps improving with the constant appearance of newer models, it is often unclear how well these models can generalize in diverse environments. As shown in [38], many of these models only conduct their evaluation on the relatively small-scale ETH [35] and UCY [23] datasets. Another popular demand for pedestrian datasets comes from social navigation research. Compared to human motion prediction research, social navigation research focuses more on planning. For example, social navigation research uses learning-based methods to identify socially appropriate motion for better robot behavior, such as deep reinforcement learning [14, 10, 11] or inverse reinforcement learning [33, 42]. Due to the lack of sufficiently large datasets, these models often train in simulators that lack realistic pedestrian behavior. Apart from training, datasets are also increasing in popularity in social navigation evaluation due to their realistic pedestrian behavior [15]. Social navigation methods are often evaluated in environments using pedestrian data trajectory playback (e.g., [44, 8, 41, 47]). However, similar to human motion prediction research, these evaluations are typically only conducted on the ETH [35] and UCY [23] datasets, as shown by [15]. These two datasets only use overhead views, and therefore lack the ego-centric view used by most robots. Large-scale and high-quality datasets exist for other navigation-related applications and research. Autonomous vehicle datasets such as nuScenes [7], Cityscapes [12] and ArgoVerse [48] also contain pedestrian-related data. However, pedestrians often have limited appearances on sidewalks or at crosswalks. There is also no data on how pedestrians navigate indoors. Another group of similar datasets mainly supports computer vision-related research, such as MOT [13] for pedestrian tracking, and Stanford Drone Dataset (SDD) [36] and VIRAT [32] for pedestrian motion/goal prediction on the image level. Detailed comparisons of the characteristics between the TBD Pedestrian Dataset and similar existing datasets can be found in section IV-A. Simulators can fill in the role of datasets for both training and evaluation. Simulators such as PedSIM [16], CrowdNav [10], SocNavBench [4] and SEAN [45] are in use by the research community. However, sim-to-real transfer is an unsolved problem in robotics. Apart from lack of fidelity in visuals and physics, pedestrian simulators in particular entail the additional paradox of pedestrian behavior realism [29]: If pedestrian models are realistic enough for use in simulators, why don't we apply the same model to social navigation?

## III System Description

### _Hardware Setup_

Our system supports multiple static FLIR Blackfly RGB cameras for labeling and metric space calculations (Figure 2).

Fig. 2: Sensor setup used to collect the TBD Pedestrian Dataset. (left) One of the nodes used to capture top-down RGB views. (middle) The cart used to capture ego-centric sensor views during data collection for Set 1. (right) The suitcase robot used to capture ego-centric sensor views during data collection for Set 2.

The TBD Pedestrian Dataset contains two different configuration sets for the same physical space.
Both sets include three cameras surrounding the scene on the upper floors overlooking the ground level at roughly 90 degrees apart from each other (Figure 3). The RGB cameras are connected to portable computers powered by lead-acid batteries. The multiple cameras complement each other's coverage, because even from overhead views, partial occlusions occur. We used the three overhead view cameras to label trajectories. We also positioned three more units on the ground floor, but did not use them for pedestrian labeling. In addition to static cameras, we pushed a cart (Set 1) or robotic suitcase [22] (Set 2) through the scene. The cart (Figure 2) was equipped with a ZED stereo camera to collect ego-centric RGB-D views of the scene. A GoPro Fusion 360 camera for capturing high definition 360 videos of nearby pedestrians was mounted above the ZED. Data from the 360 camera is useful for capturing pedestrian pose data and facial expressions for future work. The ZED camera was powered by a laptop with a power bank. Our entire data collection hardware system is portable and does not require power outlets, thereby allowing data collection outdoors or in areas where wall power is inaccessible. The robotic suitcase (Set 2) is a converted carry-on rolling suitcase. It is equipped with an IMU and a 3D LiDAR sensor. In addition, the same ZED camera and GoPro Fusion 360 camera are mounted on the suitcase handle. The robot's computer, batteries, and all its internal components are hidden inside the suitcase, so pushing the robot resembles pushing a suitcase. We selected this robot because of its inconspicuous design to reduce curious, unnatural reactions from nearby pedestrians, as curious pedestrians may intentionally block robots or display other unnatural movements [6]. While it is true that real-world pedestrians will react to mobile robots curiously in the short term and some may argue in favor of a more robotic appearance, we envision that such curiosity effects will die down in the long term. During certain data collection sessions, we pushed the cart or the suitcase robot from one end of the scene to another end, while avoiding pedestrians and obstacles along the way in a natural motion similar to a human pushing a delivery cart or walking with a suitcase. This collects egocentric views from a mobile robot traversing through the human environment. However, unlike other datasets such as [49, 27, 19] and [34] that use a tele-operated robot, or [37] that uses a scripted policy to act autonomously, we chose to have all motion performed by the human walking with the system. This provides better trajectory control and increased safety, and further reduces the novelty effect from surrounding pedestrians. Both sets of data collection occurred on the ground level in a large indoor atrium area (Figure 3). Half of the atrium area has fixed entry/exit points that lead to corridors, elevators, stairs, and doors to the outside. The other half of the atrium is adjacent to another large open area and is unstructured with no fixed entry/exit points. We collected data around lunch and dinner times to ensure higher crowd densities. Additional data collection is planned at more diverse locations.

### _Post-processing Pipelines_

To ensure time synchronization across the captured videos, we employed Precision Time Protocol over a wireless network to synchronize each of the computers powering the cameras.
In addition, we used an LED light and checked for the light signal during the post-processing stage to synchronize the frames of all captured data for each recording session. The next step is to localize the cart/suitcase in the scene. For Set 1, this was achieved by identifying the cart/suitcase on the overhead camera videos and then applying the camera matrices to obtain the metric coordinates. For Set 2, we first made a map inside the building and then computed the suitcase's location in the post-processing phase by utilizing the robotic suitcase software 3 powered by Cartographer 4. Footnote 3: [https://github.com/carnotgarber-project/cartographer](https://github.com/carnotgarber-project/cartographer) Footnote 4: [https://github.com/cartographer-project/cartographer](https://github.com/cartographer-project/cartographer) For pedestrian tracking, we first tracked the pedestrians on the overhead camera videos. We found ByteTrack [51] to be very successful in tracking pedestrians in the image space. Upon human verification of our entire data, ByteTrack successfully tracks \(95.1\%\) of the pedestrians automatically. We then needed to convert the automatically tracked labels from pixel space into metric space. Each camera video contained a set of tracked trajectories in the image space. We estimated the 3D trajectory coordinates for each pair of 2D trajectories from different cameras, and the set of estimated coordinates that resulted in the lowest reprojection error was selected as the trajectory coordinates in the metric space.
Fig. 3: Hardware setup for the TBD Pedestrian Dataset. Blue circles indicate positions of RGB cameras. Green box shows our suitcase robot pushed through the scene. The white area is where trajectory labels are collected.
### _Human Label Verification_ To ensure the label quality of the data, human verification of the tracked trajectories from ByteTrack is desired. Semi-autonomous labeling procedures are common in autonomous driving datasets and pedestrian datasets. However, surveying the existing pedestrian dataset literature, we noticed that datasets that do contain human-verified metric space labels are often relatively small [35, 23, 9, 27], and large-scale datasets often only use automated tracking pipelines [25, 32, 5] or do not label surrounding pedestrians [19, 34]. We attribute this to a lack of tools to streamline the human verification process. To this end, we designed an open-source web app (Figure 4) using Matlab App Designer. The tool was designed to minimize complete human relabeling of erroneously tracked trajectories. The app contains a media player. When using the app, human labelers only need to watch videos with the automatically tracked trajectories. When an error is noticed, the labeler only needs to indicate to the system the type and location of the error. The system then fixes the errors in the background, and updates the trajectory visualization accordingly. Currently, the app contains the following set of error-fixing options (a rough sketch of how such operations can act on trajectory data is given after the list):
* **Break**: Used when ByteTrack incorrectly assigns the same trajectory to two different pedestrians.
* **Join**: Used when two different trajectories actually belong to the same pedestrian.
* **Delete**: Used when a ghost trajectory appears, such as incorrectly tracking an unworn jacket as a pedestrian.
* **Disentangle**: Used when the trajectories of two pedestrians are swapped in the middle, which can happen when one partially occludes the other.
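The following minimal Python sketch illustrates how the four operations above could act on tracked trajectories. It is an illustration only (the actual tool is a Matlab app), and the trajectory representation, function names, and split/swap indices are hypothetical.

```python
# Hypothetical illustration of the error-fixing operations on tracked trajectories.
# A trajectory is assumed to be a list of (frame, x, y) tuples keyed by a track id.
tracks = {1: [(0, 0.0, 0.0), (1, 0.1, 0.0)], 2: [(2, 5.0, 1.0), (3, 5.1, 1.0)]}

def break_track(tracks, tid, frame, new_tid):
    """Split one track into two at `frame` (two pedestrians shared one id)."""
    head = [p for p in tracks[tid] if p[0] < frame]
    tail = [p for p in tracks[tid] if p[0] >= frame]
    tracks[tid], tracks[new_tid] = head, tail

def join_tracks(tracks, tid_a, tid_b):
    """Merge two tracks that belong to the same pedestrian."""
    tracks[tid_a] = sorted(tracks[tid_a] + tracks.pop(tid_b))

def delete_track(tracks, tid):
    """Remove a ghost track (e.g., an unworn jacket tracked as a pedestrian)."""
    tracks.pop(tid)

def disentangle(tracks, tid_a, tid_b, frame):
    """Swap the tails of two tracks whose identities switched at `frame`."""
    tail_a = [p for p in tracks[tid_a] if p[0] >= frame]
    tail_b = [p for p in tracks[tid_b] if p[0] >= frame]
    tracks[tid_a] = [p for p in tracks[tid_a] if p[0] < frame] + tail_b
    tracks[tid_b] = [p for p in tracks[tid_b] if p[0] < frame] + tail_a
```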
The web app also supports undoing previous actions, partial or complete relabeling of trajectories, and labeling missing trajectories. For future work, we are looking at possible platforms to launch the app so that the human verification process can be a crowdsourced effort. Using this tool together with ByteTrack, an expert labeler took about 30 hours to produce human-verified labels for 375K frames of data, or 10300 trajectories. ByteTrack successfully tracks \(95.1\%\) of trajectories. Among the trajectory errors that require human rectification, \(0.35\%\) are fixed by "Break", \(1.25\%\) are fixed by "Join", \(0.29\%\) are fixed by "Delete", \(0.89\%\) are fixed by "Disentangle", \(1.21\%\) are fixed by "Relabel", and \(0.48\%\) are fixed by "Missing". ## IV Dataset Characteristics and Analysis ### _Comparison with Existing Datasets_ Compared to existing datasets collected in natural pedestrian-dominant environments, our TBD pedestrian dataset contains three components that greatly enhance the dataset's utility. These components are: **Human verified labels grounded in metric space.** ETH [35] and UCY [23] datasets are the most popular datasets among human behavior analysis papers [38]. We believe this is partly because the trajectory labels in these datasets are human verified and are grounded in metric space rather than pixel space (e.g. [36] and [3] only contain labels as bounding boxes). Having labels grounded in metric space eliminates the possibility that camera poses might have an effect on the scale of the labels. It also makes the dataset useful for robot navigation-related research because robots plan in the metric space rather than pixel space. **Combination of top-down views and ego-centric views.** Similar to datasets with top-down views, we used top-down views to obtain ground truth trajectory labels for every pedestrian present in the scene. Similar to datasets with ego-centric views, we gathered ego-centric views from a "robot" to imitate robot perception of human crowds. A dataset that contains both top-down views and ego-centric views will be useful for research projects that rely on ego-centric views. Such projects can feed ego-centric inputs to their models, while still having access to ground-truth knowledge of the entire scene. **Naturalistic human behavior with the presence of a "robot".** Unlike datasets such as [49, 27, 19], and [34], the "robot" that provides ego-centric view data collection is a cart or a suitcase robot being pushed by a human. As mentioned in section III-A, doing so reduces the novelty effects from the surrounding pedestrians. Having the "robot" pushed by humans also ensures safety for the pedestrians, and its own motion has less jerk and appears more humanlike. Fig. 4: App interface for the human verification process. It contains a media player and various options to fix tracking errors automatically and manually. \begin{table} \begin{tabular}{c||c c c} \hline \hline Datasets & Comp. 1 & Comp. 2 & Comp. 
3 \\ & (metric labels) & (views) & (“robot”) \\ \hline TBD (Ours) & Yes & TD + E & Human + Robot \\ ETH [35] & Yes & TD & N/A \\ UCY [23] & Yes & TD & N/A \\ Edinburgh Forum [25] & No & TD & N/A \\ VIRAT [32] & No & TD & N/A \\ Town Centre [3] & No & TD & N/A \\ Grand Central [52] & No & TD & N/A \\ CFF [21] & No & TD & N/A \\ Stanford Drone [36] & No & TD & N/A \\ L-CAS [49] & No* & E & Robot \\ WildTrack [9] & Yes & TD & N/A \\ JackRabbot [27] & Yes & E & Robot \\ ATC [5] & No & TD & N/A \\ THOR [37] & Yes & TD + E & Robot \\ SCAND [19] & No & E & Robot \\ Crowd-Bot [34] & No & E & Human + Robot \\ \hline \hline \end{tabular} \end{table} TABLE I: A survey of existing pedestrian datasets on how they incorporate the three components in section IV-A. For component 1, a “No” means either not human verified or not grounded in metric space. For component 2, TD stands for “top-down view” and “E” stands for “ego-centric view”. As shown in Table I, current datasets only contain at most two of the three components5. A close comparison is the THOR dataset [37], but its ego-centric view data are collected by a robot running on predefined trajectories. Additionally, unlike all other datasets in Table I, the THOR dataset is collected in a controlled lab setting rather than in the wild. This injects artificial factors into human behavior. Footnote 5: L-CAS dataset does provide human verified labels grounded in the metric space. However, its pedestrian labels do not contain trajectory data. ### _Dataset Size_ Table II demonstrates the ability of a semi-automatic labeling pipeline to produce large amounts of data. With the aid of an autonomous tracker, humans only need to verify and make occasional corrections on the tracking outcomes instead of locating the pedestrians on every single frame. The data we have collected so far surpassed all other datasets that provide human-verified labels in the metric space in terms of total time and number of pedestrians. We will continue this effort and collect more data for future work. ### _Dataset Statistics_ Extending the evaluations performed in THOR[37], we applied the same suite of analyses to Set 2 of our TBD dataset. The evaluation metrics were the following. (1) _Tracking Duration_ (\(s\)): Average time duration of tracked trajectories. (2) _Perception Noise_ (\(ms^{-2}\)): The average absolute acceleration of the trajectories. (3) _Motion Speed_ (\(ms^{-1}\)): Velocities of the trajectories measured in 1 second intervals. (4) _Minimum Distance Between People_ (\(m\)): Minimum Euclidean distance between two closest observed people. As shown in Table III, our dataset has a considerable average trajectory duration (\(25.6\,s\)) and large variation (\(\pm 57.1\)), second only to ATC, which has a \(900m^{2}\) coverage. While our dataset has a much smaller coverage, we attribute this to the presence of pedestrians changing navigation goals and static pedestrians in our dataset. Static pedestrians include standing pedestrians having conversations or pedestrians sitting on chairs. Their presence in our dataset often has a long duration, which also causes a large variation in this metric. The tracking noise of our system was sub-optimal when compared to other datasets, which is likely due to noisy tracking of the sitting pedestrians. We observed that sitting pedestrians change their body poses frequently, which causes the tracked bounding boxes to change size frequently. We will investigate how to improve this for future work. 
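For reference, the four statistics defined above can be computed from raw trajectories along the following lines. This Python sketch makes simplifying assumptions (a uniform frame rate, frame-aligned tracks, finite-difference derivatives) and is not the exact evaluation code used for Table III.

```python
import numpy as np
from itertools import combinations

def dataset_statistics(trajs, dt):
    """trajs: dict track_id -> (T_i, 2) array of positions sampled every dt seconds."""
    durations = [len(p) * dt for p in trajs.values()]        # tracking duration [s]
    noise, speeds = [], []
    for p in trajs.values():
        v = np.diff(p, axis=0) / dt                          # finite-difference velocity
        a = np.diff(v, axis=0) / dt                          # finite-difference acceleration
        if len(a):
            noise.append(np.linalg.norm(a, axis=1).mean())   # perception noise [m/s^2]
        if len(v):
            speeds.extend(np.linalg.norm(v, axis=1))         # motion speed [m/s]
    min_dists = []                                           # min distance between people [m]
    for (ta, pa), (tb, pb) in combinations(trajs.items(), 2):
        n = min(len(pa), len(pb))                            # assumes frame-aligned tracks
        if n:
            min_dists.append(np.linalg.norm(pa[:n] - pb[:n], axis=1).min())
    return np.mean(durations), np.mean(noise), np.mean(speeds), np.mean(min_dists)
```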
The motion speeds of our dataset trajectories are lower (\(0.88\)), which suggests the presence of more static pedestrians. We also have the second-highest variation in motion speed (\(\pm 0.52\)), suggesting that our dataset captures a wide range of pedestrian behavior. From the minimum distance between people, it can be inferred that our dataset captures both dense and sparse population scenarios, as indicated by the middle mean value (\(1.25\)) among the others and the high variance (\(\pm 1.44\)). Note that [37] also measures trajectory curvatures, but we noticed that this measurement is heavily affected by how static pedestrians are processed. [37] does not provide any details on this, so we decided not to evaluate this metric. ### _Behavior Distribution Analysis_ We additionally leveraged trajectory prediction models to evaluate our dataset. Trajectory prediction models' performance has advanced significantly. We believe these well-trained models can be utilized in other ways, such as characterizing the variety of pedestrian behavior in datasets. Almost all trajectory prediction models have been tuned and trained on the ETH/UCY datasets. Some have additionally made predictions on SDD [36] or autonomous driving datasets. Because we were primarily concerned with metric labels and pedestrian environments, we did not evaluate models trained on SDD or autonomous vehicle datasets. We also chose models that largely leverage pedestrian positional data and can work independently without image patch inputs [39] or semantic segmentation [26]. Our dataset contains simple sessions and challenging sessions. To test if our dataset contains pedestrian behavior outside of the ETH/UCY dataset domains, we analyzed the sessions from our dataset with great variety in pedestrian behavior for this evaluation.
\begin{table} \begin{tabular}{c|c c c} \hline Datasets & Time length & \# Trajectories & Label Freq (Hz) \\ \hline TBD Set 1 & 133 (51) mins & 1416 & 60 \\ TBD Set 2 & 626 (213) mins & 10300 & 10 \\ ETH[35] & 25 mins & 650 & 15 \\ UCY[23] & 16.5 mins & 786 & 2.5 \\ WildTrack[9] & 200 sec & 313 & 2 \\ & 62 mins & 260 & 7.5 \\ THOR[37] & 60+ mins & 600+ & 100 \\ \hline \end{tabular} \end{table} TABLE II: Dataset comparison statistics, for those with human verified labels grounded in metric space. Numbers in parenthesis are for data that includes the ego-centric view.
\begin{table} \begin{tabular}{c|c c c c} \hline \hline \multirow{2}{*}{Datasets} & Tracking & Percep. & Motion & Min Dist. \\ & Duration & Noise & Speed & To Ped. \\ & [s] & [\(ms^{-2}\)] & [\(ms^{-1}\)] & [\(m\)] \\ \hline TBD Set 2 & \(25.6\pm 57.1\) & \(1\) & \(0.88\pm 0.52\) & \(1.25\pm 1.44\) \\ THOR[37] & \(16.7\pm 14.9\) & \(0.12\) & \(0.81\pm 0.49\) & \(1.54\pm 1.60\) \\ ETH[35] & \(9.4\pm 5.4\) & \(0.19\) & \(1.38\pm 0.46\) & \(1.33\pm 1.39\) \\ ATC[5] & \(39.7\pm 64.7\) & \(0.48\) & \(1.04\pm 0.46\) & \(0.61\pm 0.16\) \\ Edinburgh & \(10.1\pm 12.7\) & \(0.81\) & \(1.0\pm 0.64\) & \(3.97\pm 3.5\) \\ \hline \hline \end{tabular} \end{table} TABLE III: Comparison of statistics between our dataset and other datasets according to the methods in [37].
\begin{table} \begin{tabular}{c|c c c c} \hline \hline & \multicolumn{4}{c}{ETH/UCY Dataset} \\ \hline \multirow{2}{*}{Models} & \multicolumn{2}{c}{Static + Dynamic} & \multicolumn{2}{c}{Dynamic} \\ & ADE(\(m\)) & FDE(\(m\)) & ADE(\(m\)) & FDE(\(m\)) \\ \hline Social-GAN[17] & 0.48 & 0.96 & 0.59 & 1.13 \\ Trajectron++[40] & 0.27 & 0.49 & 0.35 & 0.65 \\ AgentFormer[50] & 0.23 & 0.39 & 0.25 & 0.44 \\ \hline \multicolumn{5}{c}{TBD Set 2} \\ \hline Social-GAN & 0.36 & 0.72 & 0.64 & 1.30 \\ Trajectron++ & 0.16 & 0.28 & 0.43 & 0.83 \\ AgentFormer & 0.15 & 0.23 & 0.30 & 0.52 \\ \hline \hline \end{tabular} \end{table} TABLE IV: Trajectory prediction displacement error on ETH/UCY datasets and TBD dataset Set 2.
We selected Social-GAN [17] as the baseline model and Trajectron++ [40] and AgentFormer [50] as relatively state-of-the-art models. Because the models trained on each of the other four sub-datasets did not perform significantly differently on our dataset, we only report the average _Average Displacement Error_ (ADE) and the average _Final Displacement Error_ (FDE) across the five models. We observed that when all pedestrians are considered, the prediction models all perform better on our dataset (Table IV). We believe this can be attributed to the larger presence of static pedestrians in our dataset compared to ETH/UCY, because predicting the future trajectories of static pedestrians is unlikely to yield large errors. We additionally defined dynamic pedestrians as pedestrians who move at least \(1m\) during the prediction window. We included static pedestrians during model inference, but only dynamic pedestrians were considered for evaluation. With this, we discovered that all the prediction models' performances degrade. This indicates that the models have encountered more unseen scenarios in our dataset and that the moving pedestrians in our dataset exhibit more diverse navigation behavior and a wider behavior distribution compared to the ones in ETH/UCY.
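The displacement metrics and the dynamic-pedestrian filter described above can be sketched as follows. This is an illustrative Python snippet under assumed array shapes and an assumed displacement-based definition of "moving at least 1 m"; it is not the evaluation code behind Table IV.

```python
import numpy as np

def ade_fde(pred, gt):
    """pred, gt: (T, 2) arrays of predicted and ground-truth future positions."""
    errors = np.linalg.norm(pred - gt, axis=1)
    return errors.mean(), errors[-1]      # ADE over all steps, FDE at the final step

def is_dynamic(gt_future, threshold=1.0):
    """Assume 'dynamic' means end-to-end displacement of at least `threshold` meters."""
    return np.linalg.norm(gt_future[-1] - gt_future[0]) >= threshold

# Toy example: feed all pedestrians to the model, evaluate only dynamic ones.
preds = {7: np.zeros((12, 2)), 8: np.cumsum(np.full((12, 2), 0.10), axis=0)}
gts   = {7: np.zeros((12, 2)), 8: np.cumsum(np.full((12, 2), 0.12), axis=0)}
scores = [ade_fde(preds[i], gts[i]) for i in gts if is_dynamic(gts[i])]
ade = np.mean([s[0] for s in scores])
fde = np.mean([s[1] for s in scores])
```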
### _Qualitative Pedestrian Behavior_ Due to the nature of the environment where we collect the data, we observe a mixture of corridor and open-space pedestrian behaviors, many of which are rarely seen in other datasets. As shown in Figure 5, we observe both static conversation groups and dynamic walking groups. We also observe that some pedestrians change goals mid-navigation. During certain sessions, some pedestrians set up activity areas with tables and chairs to engage with passing pedestrians. This creates more interesting interaction scenarios. ## V Conclusion This paper presents a data collection system that is portable and enables large-scale data collection. This paper also presents a human label verification tool that streamlines the labeling process. Our semi-autonomous pipeline easily produces human-verified labels in order to meet the demands of the large-scale data collected by our hardware. Our system offers better utility for pedestrian behavior research because it provides human-verified labels grounded in the metric space, a combination of both top-down views and ego-centric views, and a human-pushed cart or robot whose naturalistic motion approximates that of a socially aware robot. Lastly, we present the TBD Pedestrian Dataset we have collected using our system, which not only surpasses similar datasets in quantity, but also offers unique pedestrian interaction behavior that adds to the qualitative diversity of pedestrian interaction data. As mentioned earlier, our approach enables additional data collection across a wide range of locations and constraints. Additional data collection and public updates to this initial dataset are planned. We have also discovered additional challenges with our labeling pipeline for static pedestrians: because static pedestrians have long trajectory durations and constantly adjust their body poses, the resulting trajectories can be noisy and escape the labeler's attention when using our tool. For future work, we will also explore expanding the diversity of the labels. Some examples include: adding activity labels indicating whether the pedestrian is walking, talking or sitting; adding static obstacle labels for human-object interaction studies; adding group labels for pedestrian groups; and adding gaze direction and head orientation labels for the onboard high-definition 360 camera. ## Acknowledgments This work was supported by the National Science Foundation (IIS-1734361), National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR 90DPGE0003), Office of Naval Research (ONR N00014-18-1-2503), and Shimizu Corporation. We would like to thank Abhijat Biswas and Henny Admoni for their initiation of this project, and colleagues in the Tepper School of Business for their assistance on data collection logistics.
Fig. 5: Examples from the TBD Set 1. a) a dynamic group. b) a static conversational group. c) a tour group with 14 pedestrians. d) a pedestrian affecting other pedestrians by asking them to come to the table. e) pedestrians stop and look at their phones. f) two pedestrians change their navigation goals and turn toward the table. g) a group of pedestrians change their navigation goals multiple times. h) a crowded scene where pedestrians are heading in different directions.
2309.03694
Short-Term Load Forecasting Using A Particle-Swarm Optimized Multi-Head Attention-Augmented CNN-LSTM Network
Short-term load forecasting is of paramount importance in the efficient operation and planning of power systems, given its inherent non-linear and dynamic nature. Recent strides in deep learning have shown promise in addressing this challenge. However, these methods often grapple with hyperparameter sensitivity, opaqueness in interpretability, and high computational overhead for real-time deployment. In this paper, we propose a novel solution that surmounts these obstacles. Our approach harnesses the power of the Particle-Swarm Optimization algorithm to autonomously explore and optimize hyperparameters, a Multi-Head Attention mechanism to discern the salient features crucial for accurate forecasting, and a streamlined framework for computational efficiency. Our method undergoes rigorous evaluation using a genuine electricity demand dataset. The results underscore its superiority in terms of accuracy, robustness, and computational efficiency. Notably, our Mean Absolute Percentage Error of 1.9376 marks a significant advancement over existing state-of-the-art approaches, heralding a new era in short-term load forecasting.
Paapa Kwesi Quansah, Edwin Kwesi Ansah Tenkorang
2023-09-07T13:06:52Z
http://arxiv.org/abs/2309.03694v2
Short-Term Load Forecasting Using A Particle Swarm Optimized Multi-Head Attention-Augmented CNN-LSTM Model ###### Abstract Short-term load forecasting is of utmost importance in the efficient operation and planning of power systems, given their inherent non-linear and dynamic nature. Recent strides in deep learning have shown promise in addressing this challenge. However, these methods often grapple with hyperparameter sensitivity, opaqueness in interpretability, and high computational overhead for real-time deployment. This paper proposes an innovative approach that effectively overcomes the aforementioned problems. The approach utilizes the Particle Swarm Optimization algorithm to autonomously tune hyperparameters, a Multi-Head Attention mechanism to discern the salient features crucial for accurate forecasting, and a streamlined framework for computational efficiency. The method was subjected to rigorous evaluation using a genuine electricity demand dataset. The results underscore its superiority in terms of accuracy, robustness, and computational efficiency. Notably, its Mean Absolute Percentage Error of 1.9376 marks a significant improvement over existing state-of-the-art approaches, heralding a new era in short-term load forecasting. Short-Term Load Forecasting; Deep Learning; Particle-Swarm Optimization; Multi-Head Attention; CNN-LSTM Network; Electricity Demand; ## I Introduction In contemporary society, electrical energy has emerged as a pivotal resource propelling the economic and societal progress of nations worldwide. It is extensively utilized in industries, including manufacturing, mining, construction, and healthcare, among others. The provision of consistent and high-quality electrical power is not merely a convenience; rather, it is imperative to sustain investor confidence in economies and foster further development [1]. With the advent of new technological advancements, electricity demand has surged, creating an urgent need for more cost-effective and reliable power supply solutions [2]. The current energy infrastructure lacks substantial energy storage capabilities in the generation, transmission, and distribution systems [3]. This deficiency necessitates a precise balance between electricity generation and consumption. The maintenance of balance is contingent upon the utilization of an accurate load forecasting approach. Adapting electricity generation to dynamically meet shifting demand patterns is paramount; since failure to do so puts the stability of the entire power system at risk [4]. Moreover, as the world pivots towards the increased adoption of renewable energy sources [5], power grids have witnessed a substantial transformation in their composition and structure. This integration of renewable energy sources, such as wind and solar power, introduces a degree of unpredictability into energy generation due to the stochastic nature of these sources [6]. Consequently, ensuring a stable and secure power system operation becomes an even more complex endeavor, demanding meticulous power planning and precise load forecasting. Electric load forecasting is the practice of predicting electricity demand within a specific region. This process can be categorized into three distinct groups: short-term, medium-term, and long-term forecasting, depending on the forecasting horizon. 
Short-term load forecasting (STLF), which focuses on predicting electricity demand for upcoming hours, a day, or a few days, serves as the foundation for effective power system operation and analysis. It facilitates the optimization of the operating schedules of generating units, including their start and stop times, and their expected output. The accuracy of STLF is of critical importance, as it directly influences the efficient utilization of generating units [7]. The absence of accurate short-term load forecasting can lead to many operational challenges, including load shedding, partial or complete outages, and voltage instability. These issues can have detrimental effects on equipment functionality and pose potential risks to human safety. Short-term load forecasting methods are pivotal in achieving this precision. These methods can be broadly classified into two main categories: statistical methods and machine learning methods [8, 9]. Machine learning-based load forecasting methods, such as the autoregressive integrated moving average model (ARIMA) [10], long short-term memory (LSTM) [11], generative adversarial network (GAN) [11], and convolutional neural network (CNN) [12], have gained prominence. These machine learning methods excel at capturing complex nonlinear data features within load patterns [13]. They leverage the ability to discern similarities in electricity consumption across diverse power supply areas and customer types, allowing for more accurate and feasible load forecasting through the consideration of spatial-temporal coupling correlations. ### _Motivation_ Based on the existing research, the following three shortcomings need to be addressed to improve the forecasting effect of the spatial-temporal distribution of the system load: (i) the lack of flexibility and scalability of traditional statistical methods, (ii) the high computational complexity of deep learning methods, and (iii) the inability of existing methods to capture the spatial-temporal correlations in load patterns. Considering these challenges, this paper proposes a novel short-term load forecasting model that uses a particle swarm-optimized multi-head attention-augmented CNN-LSTM network. The proposed model employs a particle swarm optimization (PSO) algorithm to identify the optimal hyper-parameters of the CNN-LSTM network. This enhances the model's resilience to overfitting and its accuracy. Additionally, the multi-head attention mechanism is used to learn the importance of different features for the forecasting task. Finally, a hybrid CNN-LSTM Model is used to help the system capture the spatial-temporal correlations in load patterns, hence enhancing its forecasting accuracy. ### _Contributions_ The following are the main contributions of the paper: 1. **Feature Extraction:** To improve efficiency during feature extraction for STLF, PSO is employed to optimize model hyperparameters, leading to enhanced efficiency in extracting significant features with lower computational resources. 2. **Attention-Augmented Hybrid Model:** Given that power demand is impacted by short-term fluctuations and long-term trends in data, a hybrid model is used to detect both temporal and extended dependencies, improving accuracy. 3. **Performance evaluation:** The effectiveness of PSO-A2C-LNet has been validated using three real-world electricity demand data sets (from Panama, France, and the US). Testing results demonstrate that the PSO-A2C-LNet outperforms benchmarks in terms of forecasting performance. 
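To make the PSO-based hyperparameter search described in the motivation and contributions more concrete, the following self-contained Python sketch shows a minimal particle swarm loop. The objective function, bounds, and coefficient values are illustrative placeholders, not the configuration used in this paper.

```python
import numpy as np

def pso(objective, bounds, n_particles=10, iters=30, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize `objective` over the box `bounds` (list of (low, high)) with basic PSO."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, size=(n_particles, len(bounds)))   # particle positions
    v = np.zeros_like(x)                                       # particle velocities
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Illustrative use: tune two hyperparameters (e.g., LSTM units, dropout rate) against a
# hypothetical validation-loss surface `val_loss`.
val_loss = lambda p: (p[0] - 64) ** 2 / 1e3 + (p[1] - 0.2) ** 2
best, best_f = pso(val_loss, bounds=[(16, 128), (0.0, 0.5)])
```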
### _Structure of this paper_ The rest of the paper is structured as follows. Section II provides comprehensive explanations and definitions of key terminology. Section III provides an in-depth exposition of the proposed framework and a comprehensive explanation of its operation. The findings of our tests on the framework are presented in Section IV. Section V concludes the paper. ## II Framework Components ### _Definitions of Key Terms_ #### Ii-A1 Convolutional Neural Network A Convolutional Neural Network (CNN) is a deep learning model designed primarily for image-related tasks, but it can also be applied to other grid-like data, such as audio or time series data. CNNs are especially effective at capturing spatial dependencies within inputs. [14]. The CNN achieves the localization of spatial dependencies by using the following layers: 1. **Convolutional Layer**: The core operation in a CNN is the convolution operation. Convolutional layers use learnable filters or kernels to scan the input data in a localized and overlapping manner. Each filter detects specific features, like edges, textures, or more complex patterns. Mathematically, the 2D convolution operation is defined as follows: \[(Y*X)(i,j)=\sum_{m=1}^{M}\sum_{n=1}^{N}X(i+m-1,j+n-1)\cdot Y(m,n) \tag{1}\] Here, - \(Y\) is the filter (kernel) of size \(M\times N\). - \(X\) is the input data of size \((W,H)\). - \((i,j)\) represents the coordinates of the output feature map. - \((m,n)\) iterates over the filter dimensions. By sliding the filter across the input, the convolution operation computes feature maps that highlight different aspects of the input. This process effectively captures spatial dependencies. 2. **Pooling Layer**: Pooling layers are predominantly used to downsample the feature maps, reducing their spatial dimensions. Common pooling operations include max-pooling and average-pooling. Pooling aids in the invariance of network translation and minimizes the computational overhead. For max-pooling, the operation is defined as: \[\text{Max-Pooling}(x,p,q)=\max_{i,j}x(p+i,q+j) \tag{2}\] where \(x\) is the input feature map, and \((p,q)\) represents the pooling window position. Max-pooling retains the most significant information within the window. 3. **Fully Connected Layer**: After multiple convolutional and pooling layers, the spatial dimensions are reduced, and the network connects to one or more fully connected layers, also known as dense layers. These layers perform classification or regression tasks by learning high-level representations. Recognizing and exploiting spatial dependencies in CNNs is facilitated through several key mechanisms [15]. CNNs utilize local receptive fields, whereby each neuron is connected to a small region of the input data. This enables neurons to specialize in detecting specific features within their receptive fields, hence facilitating the network's ability to record spatial relationships across multiple scales. Additionally, weight sharing is a fundamental aspect of CNNs, where the same set of filters is applied consistently across the entire input. This weight sharing allows the network to learn translation invariant patterns, boosting its capacity to grasp spatial dependencies. Moreover, CNNs employ a hierarchical representation approach, where deeper layers in the network combine higher-level features derived from lower-level features. This hierarchical representation aids the network in comprehending complex spatial dependencies by gradually constructing abstractions. 
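A minimal NumPy illustration of the convolution in Eq. (1) and the max-pooling in Eq. (2) is given below; the input size, filter values, and pooling window are assumptions made for the sake of the example.

```python
import numpy as np

def conv2d(X, Y):
    """Valid 2D convolution of input X (W, H) with filter Y (M, N), as written in Eq. (1)."""
    M, N = Y.shape
    W, H = X.shape
    out = np.zeros((W - M + 1, H - N + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(X[i:i + M, j:j + N] * Y)
    return out

def max_pool(x, size=2):
    """Non-overlapping max-pooling of feature map x with a size-by-size window, Eq. (2)."""
    W, H = x.shape
    x = x[: W - W % size, : H - H % size]
    return x.reshape(W // size, size, H // size, size).max(axis=(1, 3))

X = np.random.rand(6, 6)                 # toy input
Y = np.array([[1.0, 0.0], [0.0, -1.0]])  # toy edge-like filter
features = max_pool(conv2d(X, Y))        # convolution followed by pooling
```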
These mechanisms collectively empower CNNs to effectively model and exploit spatial dependencies in input data. #### Iii-A2 Long Short-Term Memory Network The LSTM network is a type of recurrent neural network (RNN) architecture that is designed to capture and model sequential data while addressing the vanishing gradient problem that plagues traditional RNNs. LSTMs are particularly effective at locating and modeling long-term dependencies in sequential data. LSTMs consist of multiple interconnected cells, each with its own set of gates and memory cells [16]. The primary components of an LSTM cell are: **Forget Gate** (\(f_{t}\)): Controls what information from the previous cell state (\(C_{t-1}\)) should be discarded or kept. It takes the previous cell state and the current input (\(x_{t}\)) as input and produces a forget gate output. \[f_{t}=\sigma(W_{f}\cdot[h_{t-1},x_{t}]+b_{f}) \tag{3}\] **Input Gate** (\(i_{t}\)): Determines what new information should be added to the cell state. It takes the previous cell state and the current input and produces an input gate output. \[i_{t}=\sigma(W_{i}\cdot[h_{t-1},x_{t}]+b_{i}) \tag{4}\] **Candidate Cell State** (\(\tilde{C}_{t}\)): This is a candidate new cell state, computed using the current input and a tanh activation function. \[\tilde{C}_{t}=\tanh(W_{c}\cdot[h_{t-1},x_{t}]+b_{c}) \tag{5}\] **Cell State Update** (\(C_{t}\)): The cell state is updated by combining the information retained from the previous cell state (\(f_{t}\cdot C_{t-1}\)) and the new candidate cell state (\(i_{t}\cdot\tilde{C}_{t}\)). \[C_{t}=f_{t}\cdot C_{t-1}+i_{t}\cdot\tilde{C}_{t} \tag{6}\] **Output Gate** (\(o_{t}\)): Determines what part of the cell state should be output as the final prediction. It takes the current input and the updated cell state and produces an output gate output. \[o_{t}=\sigma(W_{o}\cdot[h_{t-1},x_{t}]+b_{o}) \tag{7}\] **Hidden State** (\(h_{t}\)): The hidden state is the output of the LSTM cell, which is used as the prediction and is also passed to the next time step. It is calculated by applying the output gate to the cell state. \[h_{t}=o_{t}\cdot\tanh(C_{t}) \tag{8}\] LSTMs address the vanishing gradient issue of traditional RNNs by introducing key components: the cell state (\(C_{t}\)) and the forget gate (\(f_{t}\)) [17]. The forget gate dynamically adjusts (\(f_{t}\)) to enable LSTMs to remember or discard information from distant time steps, facilitating the capture of long-term dependencies. Meanwhile, the cell state (\(C_{t}\)) acts as a memory buffer, accumulating and passing relevant information across time steps, thus enabling the model to recognize and exploit long-term patterns within input sequences. #### Iii-A3 Multi-Head Attentional Mechanism The Multi-Head Attention mechanism [18] is a key component of Transformer-based models, such as BERT and GPT, used for various natural language processing tasks. It excels at capturing extremely long-term dependencies in sequences of data. Multi-Head Attention extends the idea of the self-attention mechanism [19] by employing multiple attention heads in parallel. Each attention head focuses on different parts of the input sequence, enabling the model to capture various types of information and dependencies simultaneously. The primary components of Multi-Head Attention are as follows: **Query** (\(Q\))**, Key (\(K\)), and Value (\(V\)) **Projections**: For each attention head, we project the input sequence into three different spaces: query, key, and value. 
These projections are learned parameters. **Scaled Dot-Product Attention**: Each attention head computes attention scores between the query (\(Q\)) and the keys (\(K\)) of the input sequence and then uses these scores to weight the values (\(V\)). The attention scores are computed as a scaled dot product: \[\text{Attention}(Q,K,V)=\text{softmax}\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right)\cdot V \tag{9}\] Here, \(d_{k}\) is the dimension of the key vectors. **Concatenation and Linear Transformation**: After computing the attention outputs for each head, we concatenate them and apply a linear transformation to obtain the final multi-head attention output: \[\text{MultiHead}(Q,K,V)=\text{Concat}(\text{head}_{1},\text{head}_{2},\dots, \text{head}_{h})W^{O} \tag{10}\] where Concat concatenates the outputs from all attention heads, and \(W^{O}\) is a learned linear transformation. ## III PSO-A2C-LNet Architecture PSO-A2C-LNet utilizes the various aforementioned components to extract more relevant features of the power load data to provide better predictions. This algorithm effectively improves the prediction accuracy of STLF. The model architecture is described below. The model starts with an input layer designed to accept sequential data. Following the input layer, a Convolutional layer is used to capture spatial-temporal patterns in the data. Subsequently, a bidirectional LSTM layer is employed to model long-term dependencies both forward and backward, enabling the capture of historical data through time. The crucial Multi-Head Attention module operates on the output of the first bidirectional LSTM layer, enabling the model to focus on the most relevant features and learn their importance. To capture intricate long-term patterns, a second bidirectional LSTM layer is employed. The final LSTM layer generates a probabilistic value. The anticipated short-term demand in kilowatts per hour is predicted by the output layer, which consists of a dense layer with one neuron and a linear activation function. A few Dropout layers were interspersed among the model's other layers to combat overfitting. Layer Normalization is implemented subsequent to the first bidirectional layer in order to provide consistent and steady training across different inputs. The hyperbolic tangent (tanh) activation function was used for all LSTM layers. To optimize model performance and convergence during training, PSO was employed to fine-tune critical hyperparameters. Table I shows the optimized hyperparameters and their corresponding optimization ranges. The specific implementation process of the proposed algorithm is provided in Algorithm 1.
```
Data: Input data \(D\)
Result: Output data \(O\)
Initialize variables;
Extract features \(\mathbf{X}\) and target values \(\mathbf{y}\) from \(\mathcal{D}\);
Do Pre-processing;
Define architecture of model;
Start PSO;
while stopping criterion not met do
    Find optimal parameters using PSO;
    Check the fitness with defined model;
    Update variables and data structures;
    Update variables globally;
Train the model on the training set;
Evaluate the model with the three error metrics;
Post-processing steps;
return \(O\);
```
**Algorithm 1** PSO-A2C-LNet ## IV Results and Discussion This section comprehensively analyzes the STLF results by implementing the above model and testing it extensively on three datasets: ERCOT, RTE, and the Panama Energy Dataset.
Figure 1: PSO-A2C-LNet Structure Diagram
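Since Figure 1 itself is not reproduced here, the layer stack described in Section III can be approximated by the following Keras sketch. The sequence length, filter counts, unit sizes, head count, and dropout rates below are assumptions for illustration, not the hyperparameters selected by PSO.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Rough sketch of the described stack: Conv -> BiLSTM -> LayerNorm -> Multi-Head
# Attention -> BiLSTM -> Dense(1, linear), with Dropout interspersed.
inp = layers.Input(shape=(24, 1))                                   # 24 past load samples
x = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(inp)
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True, activation="tanh"))(x)
x = layers.LayerNormalization()(x)
x = layers.Dropout(0.2)(x)
x = layers.MultiHeadAttention(num_heads=4, key_dim=32)(x, x)        # self-attention
x = layers.Bidirectional(layers.LSTM(32, activation="tanh"))(x)
x = layers.Dropout(0.2)(x)
out = layers.Dense(1, activation="linear")(x)                       # next-step demand
model = models.Model(inp, out)
model.compile(optimizer="adam", loss="mae")
```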
Three regression evaluation metrics are introduced for quantitative analysis of the prediction results. The performance and fitting degree of the different models are measured by the following indicators: \[R^{2}=1-\frac{\sum_{i=1}^{n}(Y_{i}-\hat{Y}_{i})^{2}}{\sum_{i=1}^{n}(Y_{i}-\bar{Y})^{2}}\] \[\text{MAPE}=\frac{1}{n}\sum_{i=1}^{n}\left|\frac{Y_{i}-\hat{Y}_{i}}{Y_{i}}\right|\times 100\%\] \[\text{MAE}=\frac{1}{n}\sum_{i=1}^{n}|Y_{i}-\hat{Y}_{i}|\] where \(n\): Number of Observations, \(Y_{i}\): Actual values at data point \(i\), \(\hat{Y}_{i}\): Predicted values at data point \(i\) and \(\bar{Y}\): Mean of the observed values. From the results in Table II, the PSO-A2C-LNet model consistently stands out. On the Panama Energy Dataset, it achieves the highest coefficient of determination (\(R^{2}\)) at 0.88, indicating strong predictive accuracy, along with the lowest mean absolute percentage error (MAPE) of 1.9% and the smallest mean absolute error (MAE) of 7.3, making it the top-performing model. In the ERCOT Dataset, PSO-A2C-LNet also delivers competitive results with an \(R^{2}\) of 0.87, a MAPE of 2.1%, and an MAE of 7.5. Similarly, on the RTE Dataset, it outperforms other models with an \(R^{2}\) score of 0.86, a lower MAPE of 2.0%, and an MAE of 7.5. These consistent results suggest that PSO-A2C-LNet exhibits robust predictive capabilities across diverse datasets. While PSO-A2C-LNet excels on all datasets, the other models exhibit varying levels of performance. These comparative results emphasize the importance of model selection based on the specific dataset and application, with PSO-A2C-LNet emerging as a robust choice for diverse predictive tasks. #### IV-B1 Comparison of results with results in literature Table III shows the results of the A2C-LNet and the PSO-A2C-LNet on the testing dataset compared to other models in scientific literature. ## V Conclusion In conclusion, this research paper has introduced a novel neural network architecture for short-term load forecasting, amalgamating Convolutional Neural Network and Long Short-Term Memory models, reinforced by a Multi-Head Attention Mechanism. Empirical assessments confirm its superiority over traditional methods and standalone neural network models, with demonstrated applicability to real-world datasets. Future work will focus on optimizing the proposed architecture, exploring further hyperparameter tuning, and investigating additional data preprocessing techniques for enhanced forecasting. Additionally, integrating robust data privacy measures, such as federated learning or secure enclaves, into the architecture is essential to address emerging privacy concerns in load forecasting, ensuring secure and privacy-preserving predictions while advancing the scalability and adaptability of the framework to diverse forecasting challenges and datasets. ## Declaration of Competing Interest The authors declare that there is no conflict of interest regarding the publication of this paper. ## Acknowledgment The authors would like to thank Professor Philip Yaw Okyere for guiding the research.
2309.11290
On the Rapoport-Zink space for $\mathrm{GU}(2, 4)$ over a ramified prime
In this work, we study the supersingular locus of the Shimura variety associated to the unitary group $\mathrm{GU}(2,4)$ over a ramified prime. We show that the associated Rapoport-Zink space is flat, and we give an explicit description of the irreducible components of the reduction modulo $p$ of the basic locus. In particular, we show that these are universally homeomorphic to either a generalized Deligne-Lusztig variety for a symplectic group or to the closure of a vector bundle over a classical Deligne-Lusztig variety for an orthogonal group. Our results are confirmed in the group-theoretical setting by the reduction method \`a la Deligne and Lusztig and the study of the admissible set.
Stefania Trentin
2023-09-20T13:17:40Z
http://arxiv.org/abs/2309.11290v1
# On the Rapoport-Zink space for \(\mathrm{GU}(2,4)\) over a ramified prime ###### Abstract. In this work, we study the supersingular locus of the Shimura variety associated to the unitary group \(\mathrm{GU}(2,4)\) over a ramified prime. We show that the associated Rapoport-Zink space is flat, and we give an explicit description of the irreducible components of the reduction modulo \(p\) of the basic locus. In particular, we show that these are universally homeomorphic to either a generalized Deligne-Lusztig variety for a symplectic group or to the closure of a vector bundle over a classical Deligne-Lusztig variety for an orthogonal group. Our results are confirmed in the group-theoretical setting by the reduction method a la Deligne and Lusztig and the study of the admissible set. ## 1. Introduction ### Motivation Understanding arithmetic properties of Shimura varieties has been a fundamental question in recent developments in number theory and algebraic geometry. Shimura varieties of PEL type can be described as moduli spaces of Abelian varieties with additional structure, namely polarization, endomorphism and level structure, see [14, Sec. 5]. The special fiber of a Shimura variety at a prime \(p\) can be decomposed into finitely many _Newton strata_ according to the isogeny class of the \(p\)-divisible groups corresponding to each Abelian variety. Studying the Newton stratification of the special fiber of a suitable integral model has been a fundamental tool to understand the arithmetic of Shimura varieties. There is a unique closed Newton stratum, called the basic locus, which in the Siegel case coincides with the supersingular locus of the Shimura variety. A good understanding of the basic Newton stratum is expected to be essential to prove results about general Newton strata and the whole special fiber using an induction process, as stated in the _Harris-Viehmann conjecture_[13, Sec. 5.1]. Moreover, a concrete description of basic loci has been of great importance, among others, in the work of Rapoport, Terstiege and Zhang on the arithmetic fundamental lemma, see [11]. For an overview of other applications in arithmetic geometry of the study of basic loci we refer to [13] and [14]. The aim of the present work is to study the supersingular locus of the reduction modulo \(p\) of the Shimura variety for the unitary group \(\mathrm{GU}(2,4)\) over a ramified prime \(p\). In [13] Rapoport and Zink prove the Uniformization Theorem, which enables us to formulate this problem in terms of a closed subscheme of a moduli space of \(p\)-divisible groups with additional structure, called Rapoport-Zink space. Over a field of equal characteristic, for example over \(\mathbb{F}_{p}(\!(t)\!)\), this corresponds to the study of some affine Deligne-Lusztig varieties associated to the group-theoretical datum underlying the Shimura variety. In this paper, we give a concrete description of the irreducible components of the reduced scheme underlying the reduction modulo \(p\) of the basic locus of the Rapoport-Zink space corresponding to ramified \(\mathrm{GU}(2,4)\). In addition, we prove that the Rapoport-Zink space is _flat_ over the ring of integers of the quadratic ramified extension of \(\mathbb{Q}_{p}\) associated to the Shimura variety. 
Previous works on the supersingular locus of Shimura varieties for unitary groups include [14] and [14] for the group \(\mathrm{GU}(1,n-1)\) over an inert prime, [13] for \(\mathrm{GU}(1,n-1)\). We consider the moduli problem \(\mathcal{M}(K,s,n-s)\), which associates to a scheme \(S\) the groupoid of triples of the form \((A,\iota,\lambda)\). Here \(A\) is an Abelian scheme over \(\mathcal{O}_{S}\), equipped with an action \(\iota\) of \(\mathcal{O}_{K}\) and a principal polarization \(\lambda\), whose Rosati involution induces via \(\iota\) the automorphism \(\sigma\) on \(\mathcal{O}_{K}\). The action \(\iota\) is also required to satisfy Kottwitz' determinant condition and Pappas' wedge condition on the Lie algebra of \(A\), see [11, 2.1, 2.2]. It is proved in [11, Prop. 2.1] that \(\mathcal{M}(K,s,n-s)\) is a Deligne-Mumford stack over \(\mathcal{O}_{K}\). Moreover, the wedge condition ensures flatness for \(s=1\), as shown in [20, Thm. 4.5]. It is conjectured in [20, 4.16] that this holds for any signature, which is supported by computational evidence. We also recall there are some variants of the moduli problem, which satisfy flatness in higher signature and dimension and have been introduced for example in [1] and [17]. Our first main result is shown in Section 2 and concerns flatness of \(\mathcal{M}(K,2,4)\). **Proposition 1.1**.: _Assume that \(2\nmid\varDelta\). Then \(\mathcal{M}(K,2,4)\) is flat over \(\mathcal{O}_{K}\)._ The proof of this first result builds on the reduction of the problem to a question in algebraic geometry and commutative algebra presented in [20, 4.16]. In particular, in _loc.cit._ the author relates the flatness conjecture to an open question in invariant theory raised by [10]. Our proof combines techniques from different mathematical fields, from computational algebra to model theory, and can be adapted to prove flatness for \(n=8\) or higher. We are optimistic that our results could serve as the basis for an induction process on the dimension \(n\) to prove flatness of \(\mathcal{M}(K,2,n-2)\). Once we have established flatness, we can move to the description of the irreducible components of the basic locus of \(\mathcal{M}(K,2,4)\). To do so we introduce the associated Rapoport-Zink space \(\mathcal{N}\). It parametrizes \(p\)-divisible groups with some additional structure and equipped with a quasi-isogeny to a fixed \(p\)-divisible group \(\mathbb{X}\); we refer to Section 2 for a precise definition. In particular, we focus on \(\bar{\mathcal{N}}^{0}_{\mathrm{red}}\), the reduced scheme underlying the reduction modulo \(p\) of the closed subscheme \(\mathcal{N}^{0}\) of \(\mathcal{N}\) where the quasi-isogeny to \(\mathbb{X}\) has height zero. The Uniformization Theorem [12, Thm. 6.30] gives an isomorphism of formal stacks between the completion of \(\mathcal{M}(K,s,n-s)\) along its supersingular locus and a double quotient of \(\bar{\mathcal{N}}^{0}_{\mathrm{red}}\). Via Dieudonne theory we associate to the fixed \(p\)-divisible group \(\mathbb{X}\) a Hermitian \(E\)-vector space \(C\) of dimension \(n=6\). Here \(E\) is the quadratic extension of \(\mathbb{Q}_{p}\) given by the completion of \(K\). In \(C\) we consider two families of \(\mathcal{O}_{E}\)-lattices, whose properties we study in Section 3. As in [11], we say that \(\varLambda\) is a vertex lattice of type \(t\) if \(p\varLambda\subset\varLambda^{\sharp}\subset\varLambda\), and the quotient \(\varLambda/\varLambda^{\sharp}\) is an \(\mathbb{F}_{p}\)-vector space of dimension \(t\).
Here \(\varLambda^{\sharp}\) is the dual of \(\varLambda\) with respect to the Hermitian form and it contains \(p\varLambda\). As in [17], we say a lattice \(\varLambda\) is \(2\)-modular if its dual \(\varLambda^{\sharp}\) is equal to \(p\varLambda\). This second type of lattices does not play any role in [11], and is a specific feature of signature \((2,n-2)\). As we are going to see, the behavior of the irreducible components of \(\bar{\mathcal{N}}^{0}_{\mathrm{red}}\) is quite different depending on the sign of the discriminant of \(C\). Before giving a description of the irreducible components of \(\bar{\mathcal{N}}^{0}_{\mathrm{red}}\), we recall in Section 4 some properties of classical Deligne-Lusztig varieties. In particular, we study three families of varieties, one for the symplectic group, which is the generalization to signature \((2,n-2)\) of the varieties introduced in [11, Sec. 5], and two for the orthogonal group. These varieties become relevant in the subsequent sections. As a preparation for the main result, we study in Section 5 the \(k\)-valued points of \(\bar{\mathcal{N}}^{0}_{\mathrm{red}}\) for any algebraically closed field \(k\). Section 6 is dedicated to the proof of the following theorem. **Theorem 1.2**.: _i) Assume \(C\) is split, that is with discriminant equal to \(1\). Then \(\bar{\mathcal{N}}^{0}_{\mathrm{red}}\) has irreducible components of two types._ 1. \(\mathcal{N}_{\mathcal{L}}\)_, for every vertex lattice_ \(\mathcal{L}\subset C\) _of type_ \(6\)_. These components are universally homeomorphic to generalized Deligne-Lusztig varieties for the symplectic group_ \(\mathrm{Sp}_{6}\) _and have dimension_ \(5\) 2. \(\mathcal{N}_{\varLambda}\)_, for every_ \(2\)_-modular lattice_ \(\varLambda\subset C\)_. These components are universally homeomorphic to the closure of a line bundle over a generalized Deligne-Lusztig variety for the orthogonal group_ \(\mathrm{SO}_{6}\) _and have dimension_ \(4\)_._ 3. _Assume_ \(C\) _is non-split, that is with discriminant equal to_ \(-1\)_. Then_ \(\bar{\mathcal{N}}^{0}_{\mathrm{red}}\) _is pure of dimension_ \(4\) _and has irreducible components of two types._ 1. _One irreducible component_ \(\mathcal{N}^{1}_{\varLambda}\) _for every_ \(2\)_-modular lattice_ \(\varLambda\subset C\)_. These components are universally homeomorphic to the closure of a line bundle over a generalized Deligne-Lusztig variety for the non-split orthogonal group of rank_ \(6\)_._ 2. _Two irreducible components_ \(\mathcal{N}^{2}_{\varLambda}\) _for every_ \(2\)_-modular lattice_ \(\varLambda\subset C\)_. These components are universally homeomorphic to the closure of a rank-two vector bundle over a classical Deligne-Lusztig variety of Coxeter type for the non-split orthogonal group of rank_ \(6\)_._ As expected, there is a natural way to relate the irreducible components of \(\bar{\mathcal{N}}^{0}_{\mathrm{red}}\) to classical Deligne-Lusztig varieties, which is however not an isomorphism. This is coherent with the fact that the Shimura variety for \(\mathrm{GU}(2,4)\) is not fully Hodge-Newton decomposable in the sense of [1]. It is interesting to notice that in the split case the first type of irreducible components closely resembles those of the Rapoport-Zink space for signature \((1,n-1)\). One may ask whether it is possible to prove a stronger result, for example that the homeomorphisms are isomorphisms as in [21] and [16]. This is discussed in detail in Remark 6.25. 
In the non-split case, the fact that we have pairs of components of type \(\mathcal{N}^{2}_{\varLambda}\), corresponds to the fact that in this case the orbit of a Coxeter element under the action of the Frobenius consists of two elements. Finally, in Section 7 we study the group-theoretical datum associated to our problem. We recall some relevant definitions and results, and we study in detail the admissible set and the associated family of affine Deligne-Lusztig varieties for ramified \(\mathrm{GU}(2,4)\). Using the reduction method a la Deligne and Lusztig, we show that the description of the irreducible components of \(\bar{\mathcal{N}}^{0}_{\mathrm{red}}\) given in Theorem 1.2 is mirrored by the behavior of the corresponding affine Deligne-Lusztig varieties. ### Acknowledgements First and foremost I would like to thank my supervisor Eva Viehmann for her support during my PhD. I am sincerely thankful for her constant help and feedback, which guided me through my studies. I wish to express my gratitude to Michael Rapoport and Torsten Wedhorn for very helpful discussions and for answering my questions on their papers [16] and [21]. I am thankful to Felix Schremmer for sharing his knowledge on Coxeter groups and affine Deligne-Lusztig varieties, pointing me to the relevant literature for Section 4. I would like to thank Simone Ramello for introducing me to model theory and working out together the details of Remark 2.17. I am also grateful to Urs Hartl and Damien Junger for helpful conversations. I was supported by the ERC Consolidator Grant 770936: _NewtonStrat_, by the Ada Lovelace Fellowship of the Cluster of Mathematics Munster funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC 2044 - 390685587, Mathematics Munster: Dynamics-Geometry-Structure, and by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the Collaborative Research Center TRR326 "Geometry and Arithmetic of uniformized Structures", project number 444845124. ## 2. The moduli space In this section we introduce the Rapoport-Zink space associated with the Shimura variety for \(\mathrm{GU}(2,n-2)\) over a ramified prime, and we prove its flatness in the case \(n=6\). We fix the notation, which we will use in the rest of this paper. Let \(n\) be an integer greater or equal than \(3\) and \(p\) be an odd prime, we denote * \(E\) a ramified quadratic extension of \(\mathbb{Q}_{p}\) with ring of integers \(\mathcal{O}_{E}\), * \(\pi\) a uniformizer of \(E\) such that \(\pi^{2}=\pi_{0}\) is a uniformizer of \(\mathbb{Q}_{p}\), this is possible as \(p\) is odd, * \(\mathbb{F}\) an algebraic closure of \(\mathbb{F}_{p}\), its ring of Witt vectors is denoted by \(W\) and its fraction field by \(W_{\mathbb{Q}}=\operatorname{Quot}(W)\), * \(\tilde{E}=E\otimes_{\mathbb{Q}_{p}}W_{\mathbb{Q}}\) and its ring of integers \(\mathcal{O}_{\tilde{E}}=\mathcal{O}_{E}\otimes_{\mathbb{Z}_{p}}W\), * \(\sigma\) the Frobenius on \(\mathbb{F},W,W_{\mathbb{Q}}\) and also the map \(1\otimes\sigma\) on \(\tilde{E}\), * \(\psi_{0}:E\to\tilde{E}\) the natural embedding and \(\psi_{1}\) its conjugate, that is \(\psi_{1}=\psi_{0}\circ\tilde{\ }\). Rapoport-Zink spaces were first introduced in [10]. They are moduli spaces parametrizing quasi-isogenies of \(p\)-divisible groups with additional structure. By the Uniformization Theorem, see [10, Thm. 6.30], they play a crucial role in the study of the basic locus of the corresponding Shimura variety of PEL type. 
In this section we recall the definition of the Rapoport-Zink space \(\mathcal{N}_{s,n}\) associated to the Shimura variety for \(\operatorname{GU}(s,n-s)\). We follow the notation of [11, Sec. 2] and of [12, Sec. 4]. Fix a supersingular \(p\)-divisible group \(\mathbb{X}\) of dimension \(n\) and height \(2n\) over \(\mathbb{F}\) equipped with an action \(\iota_{\mathbb{X}}:\mathcal{O}_{E}\to\operatorname{End}(\mathbb{X})\). Let \(\lambda_{\mathbb{X}}\) be a principal quasi-polarization of \(\mathbb{X}\) whose Rosati involution induces on \(\mathcal{O}_{E}\) the non-trivial automorphism over \(\mathbb{Q}_{p}\). Let \(\operatorname{Nilp}\) be the category of \(\mathcal{O}_{\tilde{E}}\)-schemes \(S\) such that \(\pi\cdot\mathcal{O}_{S}\) is a locally nilpotent ideal sheaf. Fix \(n\geq 3\) and \(s\leq n\). We study the moduli functor \(\mathcal{N}_{s,n}\) associating to a scheme \(S\) in \(\operatorname{Nilp}\) the set of isomorphism classes of quadruples \((X,\iota,\lambda,\rho)\), where \(X\) is a \(p\)-divisible group over \(S\) and \(\iota:\mathcal{O}_{E}\to\operatorname{End}(X)\) is a homomorphism satisfying the following two conditions, introduced respectively by Kottwitz and Pappas, \[\operatorname{char}(\iota(a)\mid\operatorname{Lie}(X)) =(T-\psi_{0}(a))^{s}(T-\psi_{1}(a))^{n-s} \tag{2.2}\] \[\bigwedge^{n-s+1}(\iota(\pi)-\pi\mid\operatorname{Lie}(X)) =0 \bigwedge^{s+1}(\iota(\pi)+\pi\mid\operatorname{Lie}(X))=0. \tag{2.1}\] Furthermore, \(\lambda:X\to X^{\vee}\) is a principal quasi-polarization and \(\rho:X\times_{S}(S\times_{\mathcal{O}_{E}}\mathbb{F})\to\mathbb{X}\times_{ \mathbb{F}}(S\times_{\mathcal{O}_{E}}\mathbb{F})\) is an \(\mathcal{O}_{E}\)-linear quasi-isogeny such that \(\lambda\) and \(\rho^{*}\lambda_{\mathbb{X}}\) differ locally on \((S\times_{\mathcal{O}_{E}}\mathbb{F})\) by a factor in \(\mathbb{Q}_{p}^{\times}\). We also require that the Rosati involution associated to \(\lambda\) induces on \(\mathcal{O}_{E}\) the non-trivial automorphism over \(\mathbb{Q}_{p}\). Last, two quadruples \((X,\iota,\lambda,\rho)\) and \((X^{\prime},\iota^{\prime},\lambda^{\prime},\rho^{\prime})\) are isomorphic if there is an \(\mathcal{O}_{E}\)-linear isomorphism \(\alpha:X\to X^{\prime}\) such that \(\rho^{\prime}\circ(\alpha\times_{S}(S\times_{\mathcal{O}_{E}}\mathbb{F}))=\rho\) and \(\alpha^{*}\lambda^{\prime}\) is a \(\mathbb{Z}_{p}^{\times}\)-multiple of \(\lambda\). **Proposition 2.3**.: _[_10_, Sec. 6.9]_ _The moduli functor \(\mathcal{N}_{s,n}\) is representable by a separated formal scheme \(\mathcal{N}_{s,n}\) locally formally of finite type over \(\operatorname{Spf}\mathcal{O}_{\tilde{E}}\)._ ### Flatness The conditions (2.2) on the exterior powers of the action of \(\pi\) on the Lie algebra of \(X\) were introduced by Pappas in [12, Sec. 4] and ensure flatness of the moduli space \(\mathcal{N}_{1,n}\) over \(\mathcal{O}_{\tilde{E}}\), as proved in [12, Thm. 4.5]. It is conjectured that this holds for any signature \(s\). In [12, 4.16] the author presents his computations in dimension \(n\leq 6\) and for primes \(p\leq 31991\) which confirm flatness in these cases. We prove in this section that for signature \(2\) and dimension \(6\) the moduli space \(\mathcal{N}_{2,6}\) is flat for any odd prime \(p\). The first step of the proof is already in [12, Sec. 4.16], where the author relates flatness of the Rapoport-Zink space to a conjecture by de Concini and Procesi [1, Sec. 1] on ideals generated by matrix entries. 
In particular, it is sufficient to show that a certain polynomial ideal is radical. We prove that for signature \((2,n-2)\) some generators of this ideal are redundant. We consider then the case \(n=6\) and give a method to prove radicality almost independently of the characteristic \(p\). **Proposition 2.4**.: _[_10_, Sec. 4.16]_ _Let \(X\) denote the generic matrix over \(\mathbb{F}_{p}[x_{ij},1\leq i,j\leq n]\)_ \[X=\left(\begin{array}{ccc}x_{11}&\cdots&x_{1n}\\ \vdots&&\vdots\\ x_{n1}&\cdots&x_{nn}\end{array}\right).\] _Consider the ideal \(J(s,n)\subset\mathbb{F}_{p}[x_{ij},1\leq i,j\leq n]\) generated by the polynomials given by the entries of \(X^{2}\), the \((s+1)\)-rank minors of \(X\), the entries of \(X-X^{t}\) and by the (non-leading) coefficients of the characteristic polynomial of \(X\). Then if \(J(s,n)\) is radical, the Rapoport-Zink space \(\mathcal{N}_{s,n}\) is flat over \(\mathcal{O}_{E}\)._ We are then interested in showing that the ideal \(J(2,6)\) is radical. First, we show that some generators of \(J(2,n)\), for any \(n\), are actually redundant. **Lemma 2.5**.: _Let \(X=[x_{ij}=x_{ji}]\) denote the \(n\)-dimensional generic symmetric matrix over \(\mathbb{F}_{p}[x_{ij},1\leq i\leq j\leq n]\). Then_ \[J(2,n)=\langle X^{2},\bigwedge^{3}X,\operatorname{Tr}(X)\rangle, \tag{2.6}\] _where the right-hand side denotes the ideal of \(\mathbb{F}_{p}[x_{ij},1\leq i\leq j\leq n]\) generated by the polynomials given by the entries of \(X^{2}\), the rank-\(3\) minors of \(X\) and by its trace._ Proof.: Since \(J(2,n)\) contains the polynomials \(x_{ij}-x_{ji}\) it is clear that we can reduce the number of variables and assume that \(X\) is symmetric. Recall that the coefficient of the term of degree \(n-k\) in the characteristic polynomial of \(X\) is given by the sum of the \(k\times k\) principal minors of \(X\). By definition of \(J(2,n)\), the polynomials corresponding to the minors of rank at least \(3\) are already contained in it. It follows that the equations given by the coefficients of degree \(n-k\) with \(k\geq 3\) are redundant as generators of \(J(2,n)\). Let \(\sigma_{2}\) denote the coefficient of degree \(n-2\) of the characteristic polynomial of \(X\). It is easy to check that for any matrix \(X\) the trace of \(X\) is related to that of \(X^{2}\) by the identity \(\operatorname{Tr}(X^{2})=\operatorname{Tr}(X)^{2}-2\sigma_{2}(X)\). Since \(p\neq 2\) this tells us that \(\sigma_{2}(X)\) is unnecessary as generator of \(J(2,n)\). Observe that if we change to \(2\) the exponent in the exterior power of (2.6) we obtain again the ideal studied in [10, 4.12]. ### Flatness in small dimension We fix for the rest of this section \(n=6\) and \(s=2\), and we let \(p\) denote an odd prime. From now on we simplify the notation and just write \(J\) for the ideal \(J(2,6)\). Our goal is to prove the following proposition. **Proposition 2.7**.: _The ideal \(J=J(2,6)\subset\mathbb{F}_{p}[x_{ij},1\leq i\leq j\leq 6]\) is radical for all primes \(p\neq 2\). By Proposition 2.4, it follows that the Rapoport-Zink space \(\mathcal{N}_{2,6}\) is flat over \(\mathcal{O}_{E}\)._ _Remark 2.8_.: Observe that for \(p=2\) the ideal \(J\) is not radical. Indeed, it contains for example the polynomial \((X^{2})_{11}=x_{11}^{2}+x_{12}^{2}+\cdots+x_{16}^{2}\), which is a square over \(\mathbb{F}_{2}\), while the only polynomial of degree \(1\) in \(J\) is the trace. Proving that an ideal is radical is known in general to be a quite hard problem. 
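Before discussing how we address this, it may help to fix concretely the object we are working with. The following is only a minimal Sagemath sketch of the ideal of Lemma 2.5, under our own naming conventions; it is not one of the scripts of Appendices A and B, which were actually used for the computations below.

```python
# Minimal Sagemath sketch (assumed setup): the ideal J(2,6) of Lemma 2.5 in QQ[x_ij],
# with the lex order x11 > x12 > ... > x16 > x22 > ... > x66 fixed later in the text.
n = 6
pairs = [(i, j) for i in range(1, n + 1) for j in range(i, n + 1)]
R = PolynomialRing(QQ, ['x%d%d' % p for p in pairs], order='lex')
v = dict(zip(pairs, R.gens()))
x = lambda i, j: v[(i, j)] if i <= j else v[(j, i)]    # symmetric entries x_ij = x_ji
X = matrix(R, n, n, lambda i, j: x(i + 1, j + 1))      # the generic symmetric matrix
gens = (X * X).list() + X.minors(3) + [X.trace()]      # entries of X^2, 3x3 minors, trace
J = R.ideal(gens)                                      # the ideal J(2,6) of (2.6)
```

Working over \(\mathbb{Q}\) here is deliberate: as explained below, a single characteristic-zero computation will be used to control almost all positive characteristics at once.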
There are several algorithms to compute the radical of a polynomial ideal, both in zero and positive characteristic, see for example [15]. However, they all require fixing the field of coefficients beforehand and therefore are not a feasible choice for us, as we want to prove that \(J\subset\mathbb{F}_{p}[x_{ij},1\leq i\leq j\leq 6]\) is radical for any \(p\). As far as we could research in the literature there is also no algorithm for directly proving that an ideal is radical without first computing its primary decomposition or its radical. Proof.: Our strategy for proving that \(J\) is radical is to reduce ourselves to solving the same problem for a sequence of polynomial ideals in one variable. This will turn out to be much easier as the resulting univariate ideals will be generated by polynomials of degree at most two. We will have to solve two other problems along the way. First, we have to explicitly describe this sequence of univariate ideals, that is we have to give a set of generators for each of them. Second, since our goal is to prove radicality independently of the characteristic, we will have to show that our arguments and computations hold over \(\mathbb{F}_{p}\) for almost all primes \(p\). The key idea for reducing to univariate polynomial ideals is in the following easy observation from commutative algebra. **Lemma 2.9**.: _Let \(I\) be an ideal in \(R[x]\), where \(R\) is any commutative ring with unit. Then \(I\) is radical if and only if the image of \(I\) in \((R/R\cap I)[x]\) is radical. Moreover, if \(R\) is a reduced algebra and \(I\) is radical, then so is the ideal \(R\cap I\) in \(R\)._ Proof.: Let \(\bar{I}\) be the image of \(I\) in \((R/R\cap I)[x]\). If \(I\) is radical and \(f\in R[x]\) is such that \(\overline{f^{n}}\in\bar{I}\), this means that \(f^{n}+i\in I\) for some \(i\in R\cap I\), that is \(f^{n}\in I\). Therefore, \(f\in I\), since \(I\) is radical, from which it follows that \(\overline{f}\in\bar{I}\). Conversely, if \(\bar{I}\) is radical, and we have \(f^{n}\in I\), then the image \(\overline{f^{n}}\in\bar{I}\), which is radical, hence \(\overline{f}\in\bar{I}\). This means that \(f+i\in I\) for some \(i\in R\cap I\), that is \(f\in I\). Last, observe that if \(R\) is a reduced algebra, for any polynomial in \(R[x]\), the degree of \(f^{n}\) is equal to \(n\deg(f)\). Therefore, if \(f^{n}\in R\cap I\) it means that \(f^{n}\) has degree zero and therefore \(f\in R\), as well. Since \(I\) is radical, \(f\in I\), from which the statement follows. In order to prove that \(J\subset\mathbb{F}_{p}[x_{ij},1\leq i\leq j\leq 6]\) is radical we can start for example by inspecting its intersection \(J_{12}=J\cap\mathbb{F}_{p}[x_{12},x_{13},\ldots,x_{66}]\). If \(J_{12}\) is not radical, then by the previous lemma \(J\) is not radical either, and we have to stop. Otherwise, to prove that \(J\) is radical is equivalent by Lemma 2.9 to prove that the image \(\overline{J}\) of \(J\) in \(R_{12}[x_{11}]\) is radical, where \(R_{12}=\mathbb{F}_{p}[x_{12},x_{13},\ldots,x_{66}]/J_{12}\). If \(J_{12}\) is radical, then the algebra \(R_{12}\) is reduced, hence we are confronted with the easier problem of proving radicality for an ideal in a univariate polynomial ring with reduced coefficient ring. 
We can apply this reasoning recursively to each variable \(x_{ij}\), so that we obtain a chain of ideals \[J_{66}=J\cap\mathbb{F}_{p}[x_{66}]\subset J_{56}=J\cap\mathbb{F}_{p}[x_{56},x _{66}]\subset\cdots\subset J_{12}=J\cap\mathbb{F}_{p}[x_{12},x_{13},\ldots,x_{66 }]\subset J. \tag{2.10}\] Our strategy will then consist of proving radicality twenty-one times, once for each variable \(x_{ij}\), as follows. * We start with proving that \(J_{66}\) is radical. * At step \(ij\) we know that the previous ideal \(J_{ij+1}\) (or \(J_{i+1i+1}\) if \(j\) is \(6\)) is radical, and we prove that the image \(\overline{J_{ij}}\) in \(R_{ij+1}[x_{ij}]\) is radical, which by Lemma 2.9 implies that \(J_{ij}\) is radical as well. Here again \(R_{ij+1}=\mathbb{F}_{p}[x_{ij+1},\ldots,x_{66}]/J_{ij+1}\). This technique is a standard method in computational algebra called _elimination_. It was suggested to us by the primality testing algorithm of [11, Sec. 4]. To apply our elimination strategy we are confronted with the problem of finding generators for each intersection ideal \(J_{ij}\) and for each image \(\overline{J_{ij}}\). To do so we have to first recall the notion of Grobner basis and present some relevant results. **Definition 2.11**.: Consider the polynomial ring \(R[x_{1},\ldots,x_{m}]\), where \(R\) is any commutative ring with unit. 1. The lexicographic order given by \(x_{1}>x_{2}>\cdots>x_{m}\) is the total order on the set of monomials in \(R[x_{1},\ldots,x_{m}]\) defined by \[x_{1}^{a_{1}}\cdots x_{m}^{a_{m}}\leq x_{1}^{b_{1}}\cdots x_{m}^{b_{m}} \Longleftrightarrow\exists i\text{ such that }a_{j}=b_{j}\text{ for all }j\leq i,\text{ and }a_{i+1}<b_{i+1}.\] Moreover, the lexicographic order is a _monomial order_, that is, if \(u,v\) are two monomials such that \(u\leq v\) and \(w\) is a third monomial, then \(uw\leq vw\). 2. For a polynomial \(f\in R[x_{1},\dots,x_{m}]\) the leading term \(\operatorname{lt}(f)\) is the highest monomial of \(f\) with respect to a given monomial order. For an ideal \(I\subset R[x_{1},\dots,x_{m}]\), the initial ideal \(\operatorname{in}(I)\) is the ideal generated by the leading terms of all elements of \(I\). 3. A finite subset \(G\subset I\) is a _Grobner basis_ for \(I\) if the leading terms of the polynomials in \(G\) generate the initial ideal of \(I\). Grobner bases were first introduced in [1], where it is proved that for any ideal \(I\) and any choice of monomial order, there exists a Grobner basis, and that it generates \(I\). We collect here some relevant results about Grobner bases that we will need in this section; proofs can be found for example in [1, Sec. 3]. **Lemma 2.12**.: _Let \(I\) be an ideal in \(R[x_{1},\dots,x_{m}]\) and \(G\) a Grobner basis for \(I\) with respect to the lexicographic order given by \(x_{1}>x_{2}>\dots>x_{m}\)._ 1. \(G\cap R[x_{i},\dots,x_{m}]\) _is a Grobner basis for the elimination ideal_ \(I\cap R[x_{i},\dots,x_{m}]\)_._ 2. _Consider the quotient map_ \(\pi:R[x_{1},\dots,x_{m}]\to(R/R\cap I)[x_{1},\dots,x_{m}]\)_. Then_ \(\pi(G\smallsetminus G\cap R)\) _is a Grobner basis for_ \(\pi(I)\)_._ 3. _Let_ \(S\) _be a multiplicatively closed subset of_ \(R[x_{1},\dots,x_{m}]\)_. Then_ \(G\) _is a Grobner basis for_ \(S^{-1}I\) _in the localization_ \(S^{-1}R[x_{1},\dots,x_{m}]\)_._ Consider our chain of elimination ideals (2.10). 
The theory of Grobner bases provides us with an effective way to compute a generating set of each ideal \(J_{ij}=J\cap\mathbb{F}_{p}[x_{ij},\dots,x_{66}]\) and of its image \(\overline{J_{ij}}\) in \(R_{ij+1}[x_{ij}]=\mathbb{F}_{p}[x_{ij+1},\dots,x_{66}]/(J_{ij+1})[x_{ij}]\). We fix the lexicographic order on \(\mathbb{F}_{p}[x_{11},\dots,x_{66}]\) given by \(x_{11}>x_{12}>\dots>x_{16}>x_{22}>x_{23}>\dots>x_{66}\). By [1] we know that we can compute a Grobner basis \(G\) for \(J\) with respect to this lexicographic order. By Lemma 2.12 we know then that a Grobner basis for \(\overline{J_{ij}}\) is given by the image of \(G_{ij}\) in \(R_{ij+1}[x_{ij}]\), where \[G_{ij}=(G\cap\mathbb{F}_{p}[x_{ij},x_{ij+1},\dots,x_{66}])\smallsetminus(G\cap \mathbb{F}_{p}[x_{ij+1},\dots,x_{66}]). \tag{2.13}\] Here by \(x_{ij+1}\) we mean again the variable directly after \(x_{ij}\) in the lexicographic order. Since Grobner bases are in particular generating sets, this proves that we can compute a set of generators of the ideal \(\overline{J_{ij}}\). We have then solved our first problem, as we have reduced the proof of radicality for \(J\) to showing radicality for the sequence (2.10) of univariate ideals \(\overline{J_{ij}}\), and we have given a concrete way to compute a generating set for each of them. Before showing that each ideal in the sequence is radical, we have to address the question of the characteristic of the coefficient ring. A priori, the computation of a Grobner basis is sensitive of the characteristic, see [11, Ex. 1] for some examples. In other words, the Grobner basis \(G\) computed for the ideal \(J\) over \(\mathbb{F}_{p}\) may differ from the basis \(G^{\prime}\) of \(J\) over another coefficient ring \(\mathbb{F}_{p^{\prime}}\). For example, it could have a different number of elements or different degrees. Nevertheless, it is proved by Winkler in [11] how to compute a Grobner basis for \(J\) that works for almost all primes. Roughly speaking, we can see \(J\) as an ideal with coefficients in \(\mathbb{Q}\) and compute a Grobner basis for \(J\) over \(\mathbb{Q}\). Its image over \(\mathbb{F}_{p}\) will be a Grobner basis for \(J\) for almost all primes \(p\). For example, we need to exclude those primes dividing the coefficients of \(G\). In the following, by a normalized reduced Grobner basis we mean a basis such that no proper subset is still a basis. Recall that the Syzygy matrix for a set of generators \(G\) has as rows the coefficients of the polynomial relations between the generators of \(G\). **Proposition 2.14**.: _[_11_, Thm. 1]_ _Let \(F=(f_{1},\dots,f_{m})^{t}\) be a finite sequence of polynomials in \(\mathbb{Q}[x_{1},\dots,x_{n}]\) and \(G=(g_{1},\dots,g_{r})^{t}\) the normalized reduced Grobner basis for \(F\) in \(\mathbb{Q}[x_{1},\dots,x_{n}]\) _Then, for almost all primes \(p\) the images \(\overline{F}=F\ {\rm mod}\ p\) and \(\overline{G}=G\ {\rm mod}\ p\) exist and \(\overline{G}\) is the normalized reduced Grobner basis for \(\overline{F}\) in \(\mathbb{F}_{p}[x_{1},\ldots,x_{n}]\)._ _Moreover, the primes for which \(\overline{G}\) is not a Grobner basis, called unlucky primes, are the divisors of the denominators of the coefficients of \(F\) and \(G\) and of the coefficients of the entries of the polynomial matrices \(Z,Y,R\) defined as_ \[G=Z.F,\qquad F=Y.G,\qquad R\text{ the Syzygy matrix of }G.\] It follows that our elimination strategy so far is almost independent of the characteristic. 
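Before turning to the actual computations, we sketch how this recipe looks in Sagemath, continuing the setup above. The functions defined below are our own names; they only illustrate the extraction of the subsets \(G_{ij}\) of (2.13), and the primes collected at the end account only for the coefficients of \(G\), not yet for the matrices \(Z,Y\) and the syzygies of Proposition 2.14 (see Appendix B for the full calculation).

```python
# continuing the sketch above (R and J as defined there)
G = list(J.groebner_basis())        # lex Groebner basis over QQ; a heavy computation
gs = list(R.gens())                 # x11 > x12 > ... > x66 in the chosen lex order

def G_sub(k):
    # elements of G lying in QQ[x_k, ..., x_66] (k is a 0-based index into gs);
    # by Lemma 2.12(1) this is a Groebner basis of the elimination ideal
    tail = set(gs[k:])
    return [g for g in G if set(g.variables()) <= tail]

def G_ij(k):
    # the subset G_{ij} of (2.13): elements of G_sub(k) involving the variable gs[k] itself
    return [g for g in G_sub(k) if gs[k] in g.variables()]

# primes dividing the denominators of the coefficients of G: only a first, partial
# contribution to the unlucky primes of Proposition 2.14
bad = set()
for g in G:
    for c in g.coefficients():
        bad.update(c.denominator().prime_divisors())
```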
Indeed, we can compute a Grobner basis \(G\) for \(J\), viewed as the ideal of \(\mathbb{Q}[x_{11},\ldots,x_{66}]\) generated by the same polynomials as in (2.6). Then we compute the matrices \(Z,Y\) and \(R\) as in Proposition 2.14; looking at the coefficients of their entries, together with the coefficients of \(G\), we obtain the set \(U\) of unlucky primes. Now we know that for \(p\not\in U\) the image of \(G\) modulo \(p\) is a Grobner basis for \(J\subset\mathbb{F}_{p}[x_{11},\ldots,x_{66}]\) and the image of the subset \(G_{ij}\) as in (2.13) is a basis for \(\overline{J_{ij}}\). To compute \(G\) and \(U\) we use the computer algebra software Sagemath [23]. The Grobner basis \(G\) is listed in the Appendix A and a script for the calculation of \(U\) is in Appendix B. The set of unlucky primes turns out to be \(U=\{2,3\}\). For \(p\not\in U=\{2,3\}\), by the previous discussion, we have to inspect the Grobner basis \(G\) of \(J\) and its subsets \(G_{ij}\). We observe that these satisfy one of the following. 1. \(G_{ij}\) is empty. This is the case for the eight variables \(\{x_{35},x_{45},x_{55},x_{26},x_{36},x_{46},x_{56},x_{66}\}\). 2. \(G_{ij}\) contains a linear polynomial in \(x_{ij}\). For \(j\leq 4\) one possible linear polynomial is the \(3\times 3\) minor of \(X\) corresponding to the rows \(i,5,6\) and the columns \(j,5,6\), which has leading coefficient \(x_{55}x_{66}-x_{56}^{2}\). The subset \(G_{15}\) contains a linear polynomial in \(x_{15}\) as well, with leading coefficient \(x_{16}\). This polynomial is given by the entry \((5,6)\) of \(X^{2}\). 3. The remaining subsets \(G_{16}\) and \(G_{25}\) consist of only one polynomial of degree \(2\). Consider the chain of ideals (2.10) and our elimination strategy described above. We start with proving that \(J_{66}=J\cap\mathbb{F}_{p}[x_{66}]\) is radical. Since \(G_{66}\) is empty, this means that \(J_{66}=0\), so there is nothing to prove. By induction, at step \(ij\), we have to prove that \(J_{ij}\) is radical, knowing that the previous ideal \(J_{ij+1}\) is radical. We discuss how to do this in each of the three cases above. Proof of the empty case.: If \(G_{ij}\) is empty this means that the image of \(J_{ij}\) in the quotient ring \(R_{ij+1}[x_{ij}]\) is zero, or in other words that \(J_{ij}=J_{ij+1}\). Since we know that the ideal \(J_{ij+1}\) preceding \(J_{ij}\) in the chain (2.10) is radical, there is nothing to prove. As a side remark, we note that this is the case for eight variables, which means that \(J\cap\mathbb{F}_{p}[x_{35},x_{45},x_{55},x_{26},\ldots,x_{66}]=0\). In other words these variables are an _independent set_ for \(J\), in the sense of [11, Sec. 1]. By [11, Lem. 1.3], this implies that \(J\) has dimension eight, which has already been proved by other methods by Pappas in [10, Sec. 4.16]. Proof of the linear case.: Consider \(G_{ij}\) for \(j\leq 4\), together with \(G_{15}\). As we have remarked above, \(G_{ij}\), and therefore \(\overline{J_{ij}}\), contains a linear polynomial in \(x_{ij}\). However, \(\overline{J_{ij}}\) is far from being principal and contains polynomials of degree two, as well. Our goal is to reduce to the case of a principal ideal generated by a monic linear polynomial, which is then clearly radical. To do so we can localize at the leading coefficient of the fixed linear polynomial of \(G_{ij}\). In general, one cannot deduce radicality of an ideal from radicality of its localization, but we can make the following observation. **Lemma 2.15**.: _Let \(I\) be an ideal in a reduced ring \(R\). 
If \(s\in R\) is not a zero divisor modulo \(I\) and the localization \(I_{s}\) is radical in \(R_{s}\), then \(I\) is radical, too._ Proof.: Indeed, if some element \(f\in R\) belongs to the radical of \(I\), then it belongs to the radical of \(I_{s}\), too, and by hypothesis then to \(I_{s}\). This means that for some high enough power of \(s\) we have \(s^{m}f\in I\), and since \(s\) is not a zero divisor modulo \(I\), we deduce that \(f\in I\). Suppose we know that the leading coefficient of the given linear polynomial in \(G_{ij}\) is not a zero divisor modulo \(J_{ij}\). By the previous lemma it suffices to prove that the localization of \(\overline{J_{ij}}\) is radical. By Lemma 2.12 we know that the localization of \(G_{ij}\) is again a Grobner basis for the localization of \(\overline{J_{ij}}\). This basis is however not reduced. Indeed, since we have localized at the leading coefficient of a linear polynomial, it contains a _monic_ linear polynomial. It follows that the initial ideal of the localization of \(\overline{J_{ij}}\) is generated by the leading term \(x_{ij}\) of this monic linear polynomial, which is then a Grobner basis, hence a set of generators. The localization of \(\overline{J_{ij}}\) is then principal and generated by a monic linear polynomial, hence clearly radical. It remains to prove that the leading coefficient of the chosen linear polynomial in \(G_{ij}\) is a non-zero divisor modulo \(J_{ij}\). As we have observed this coefficient is \((x_{55}x_{66}-x_{56}^{2})\) if \(j\leq 4\) or \(x_{16}\) for \(J_{15}\). In order to show that these polynomials are not zero-divisors modulo \(J\) (which suffices, since \(\mathbb{F}_{p}[x_{ij},\ldots,x_{66}]/J_{ij}\) embeds into \(\mathbb{F}_{p}[x_{11},\ldots,x_{66}]/J\)), we want to use again the theory of Grobner bases, so that with Proposition 2.14 we can argue almost independently of the characteristic. First, observe that an element \(s\in\mathbb{F}_{p}[x_{11},\ldots,x_{66}]\) is a non-zero divisor modulo \(J\) if and only if the division ideal \((J:s)=\{f\in\mathbb{F}_{p}[x_{11},\ldots,x_{66}]\mid fs\in J\}\) is equal to \(J\). The division ideal can be computed using exclusively Grobner bases by the following result, see for example [1, Cor. 3.2] for a proof. **Lemma 2.16**.: _Let \(I=\langle f_{1},\ldots,f_{r}\rangle\) be an ideal in a polynomial ring \(R[x_{1},\ldots,x_{m}]\), and \(s\in R[x_{1},\ldots,x_{m}]\). Then it is possible to compute the division ideal \((I:s)\) as follows. Compute a Grobner basis \(G\) for the ideal \(\langle tf_{1},\ldots,tf_{r},ts-s\rangle\subset R[t,x_{1},\ldots,x_{m}]\) with respect to a monomial order such that \(t>x_{1},\ldots,x_{m}\). Then \((G\cap R[x_{1},\ldots,x_{m}])/s\) is a Grobner basis for \((I:s)\)._ Using Sagemath and with the previous lemma one can compute a Grobner basis over \(\mathbb{Q}\) for \((J:s)\) for \(s\) equal to the leading coefficients \((x_{55}x_{66}-x_{56}^{2})\) and \(x_{16}\) of the chosen linear polynomials. We compare it to the basis \(G\) of \(J\), and we obtain that they coincide, hence these leading coefficients are non-zero divisors modulo \(J\). It remains to compute the set of unlucky primes for these bases according to Proposition 2.14, which is again \(\{2,3\}\). It follows that for \(p\neq 2,3\) the elimination ideals \(J_{ij}\) for \(j\leq 4\) as well as \(J_{15}\) are radical in \(\mathbb{F}_{p}[x_{ij},\ldots,x_{66}]\). A script (with outputs) for the calculations so far can be found in Appendix B. 
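For orientation, the non-zero divisor test just described can also be phrased directly in terms of the ideal quotient built into Sagemath, which computes it via Grobner bases, essentially as in Lemma 2.16. The following sketch over \(\mathbb{Q}\) continues the setup above (the names of the two elements are ours); it is only an illustration and not the script of Appendix B.

```python
# continuing the sketch above (R, J and x as defined there)
s_minor = x(5, 5) * x(6, 6) - x(5, 6)**2   # leading coefficient of the linear polynomials for j <= 4
s_16 = x(1, 6)                              # leading coefficient of the linear polynomial in G_15
for s in [s_minor, s_16]:
    # (J : s) == J  if and only if  s is a non-zero divisor modulo J
    assert J.quotient(R.ideal(s)) == J
```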
Proof of the quadratic case.: It remains to discuss the steps corresponding to the elimination ideals \(\overline{J_{25}}\) and \(\overline{J_{16}}\). As we have already seen these ideals are principal and generated by a polynomial of degree two. In order to prove that \(J_{ij}\) is radical, it suffices then to show that the leading coefficients and discriminants of these quadratic polynomials are non-zero divisors modulo \(J\). This implies computing four other Grobner bases as in Lemma 2.16 and the corresponding sets of unlucky primes, according to Proposition 2.14. We use again Sagemath and obtain that \(J_{25}\) and \(J_{16}\) are radical for \(p\not\in U\). The set \(U\) of unlucky primes is quite large in this case and is listed in Appendix B. We can conclude that \(J\subset\mathbb{F}_{p}[x_{11},\ldots,x_{66}]\) is radical for \(p\not\in U\). For \(p=2\) we have already seen that \(J\) is not radical. Observe that the set \(U\) consists of primes \(\leq 809\), see Appendix B, which have already been checked by Pappas in [2, Sec. 4.16]. Therefore, for any \(p\neq 2\), the ideal \(J\) is radical. _Remark 2.17_.: We note that the core of the proof of Proposition 2.7 is in the observation that all the arguments and computations we used to prove radicality hold in (almost) any odd characteristic. This is based on the result on Grobner bases of Proposition 2.14, which allows us to move to characteristic zero, prove radicality only with Grobner bases calculations, and deduce the same result over almost all positive characteristics \(p\). Roughly speaking, we have proved that the ideal \(J\) of (2.6) is radical over \(\mathbb{Q}\) if and only if it is radical over \(\mathbb{F}_{p}\) for almost all primes \(p\), and we have indicated how to find the finitely many primes for which this may not hold. This is actually not as surprising as it may seem in light of some recent results in model theory. We give here the fundamental idea, which we worked out with S. Ramello, who, together with F. Jahnke, pointed us to the relevant literature. In model theory, a _language_ consists of all sentences that can be formulated using a given set of symbols. For example, the language of rings consists of all the statements that can be expressed just using the symbols \(+,\cdot,0,1\), see [11, Sec. 1] for a detailed explanation. As a consequence of the _compactness theorem_, see [11, Cor. 2.2.10], any statement in the language of rings is true in an algebraically closed field of characteristic zero if and only if it is true in an algebraically closed field of characteristic \(p\) for every \(p\) large enough. At a first glance, the statement "the ideal \(J\subset R\) is radical", which is equivalent to the statement "for every \(f\in R\), if \(f^{n}\in J\) then \(f\in J\)", seems not to belong to the language of rings, as it requires using the quantifier \(\forall\), the set of natural numbers (for the exponent and the degree of \(f\)) and the quantifier \(\exists\) (\(f\in J\) means that there exists a linear combination of the generators of \(J\) that is equal to \(f\)). However, it is proved in [1, Sec. 5.1] that if \(R\) is a polynomial ring, the statement "\(J\) is radical" can actually be formulated in an equivalent way without quantifiers and without the full set of natural numbers. Therefore, it can be expressed in this case in the language of rings. It follows that an ideal \(J\subset\mathbb{Z}[x_{1},\ldots,x_{m}]\) is radical over \(\mathbb{Q}\) if and only if it is radical over \(\mathbb{F}_{p}\) for \(p\) large enough. 
We note that the compactness theorem of model theory is highly non-constructive, that is, it does not indicate how to find the prime \(p_{0}\) such that, if an ideal is radical in characteristic zero, then it is radical in characteristic \(p>p_{0}\). Since our goal was to prove flatness of the Rapoport-Zink space in any odd characteristic, a purely model-theoretical approach would not have been sufficient. _Remark 2.18_.: Another important idea in the proof of Proposition 2.7 is the reduction of the proof of radicality to the case of one variable. This approach can be applied to any ideal, as long as an algorithm or criterion for proving radicality of the resulting univariate polynomial ideals is known. In our case, we have linear or quadratic polynomials, and we have seen how to prove radicality in these cases by using only Grobner bases. Our strategy can be applied to the ideals \(J(2,n)\) of Proposition 2.4, as well, and we have carried out the computations for \(n\leq 8\), and obtained that these ideals are radical. ## 3. Vertex lattices and modular lattices Now that we have proved that the scheme \(\mathcal{N}_{2,6}\) is flat over \(\mathcal{O}_{E}\), we can turn to the description of its geometry. The results in this and the next section are actually true for \(\mathcal{N}_{2,n}\) in any dimension \(n\), from Section 5 on we will restrict again to the case \(n=6\). As we have mentioned in the introduction, the object of our studies is \(\bar{\mathcal{N}}^{0}_{2,n}\), the reduction modulo \(\pi\) of the open and closed formal subscheme of \(\mathcal{N}_{2,n}\) consisting of quadruples where the height of the quasi-isogeny \(\rho\) is zero. More precisely, the moduli functor \(\bar{\mathcal{N}}^{0}_{2,n}\) parametrizes quadruples \((X,\lambda,\iota,\rho)\), where \(X\) is a \(p\)-divisible group of height \(2n\) and dimension \(n\), and where \(\lambda\) is a principal quasi-polarization whose Rosati involution induces on \(\mathcal{O}_{E}\) the non-trivial automorphism over \(\mathbb{Q}_{p}\). Since the conjugate embeddings \(\psi_{0,1}:E\to\breve{E}\) coincide modulo \(\pi\), and since we have fixed \(s=2\), Pappas' and Kottwitz's conditions reduce to \[\bigwedge^{3}(\iota(\pi)\mid\operatorname{Lie}(X))=0. \tag{3.1}\] Moreover, \(\rho:X\to\mathbb{X}\times_{\mathbb{F}}S\) is now a quasi-isogeny of height \(0\) which is \(\mathcal{O}_{E}\)-linear and such that \(\rho^{*}(\lambda_{\mathbb{X}})\) and \(\lambda\) differ locally on \(S\) by a factor in \(\mathbb{Z}_{p}^{\times}\). We first study the \(\mathbb{F}\)-valued points of \(\bar{\mathcal{N}}^{0}_{2,n}\). By Dieudonne theory to the fixed \(p\)-divisible group \(\mathbb{X}\) corresponds a unique free \(W(\mathbb{F})\)-module of rank equal to the dimension \(n\) of \(\mathbb{X}\). We consider \(N\), the rational Dieudonne module of \(\mathbb{X}\), that is the vector space obtained by tensoring with the field of fractions \(W_{\mathbb{Q}}=\operatorname{Quot}(W(\mathbb{F}))\). The action \(\iota_{\mathbb{X}}\) of \(\mathcal{O}_{E}\) induces an action of the field \(E\) on \(N\). Since by definition of \(\iota_{\mathbb{X}}:\mathcal{O}_{E}\to\operatorname{End}(\mathbb{X})\) the action of any element in \(\mathcal{O}_{E}\) on \(\mathbb{X}\) is an endomorphism of \(\mathbb{X}\) as \(p\)-divisible group, the action of \(E\) on the rational Dieudonne module \(N\) commutes with the Frobenius and Verschiebung maps on \(N\). We denote by \(\varPi\) the action of \(\pi\) on \(N\). 
Last, the principal quasi-polarization \(\lambda_{\mathbb{X}}\) induces a skew-symmetric \(W_{\mathbb{Q}}\)-bilinear form \(\langle\cdot,\cdot\rangle\) on \(N\) satisfying \[\langle Fx,y\rangle=\langle x,Vy\rangle^{\sigma}\] \[\langle\iota_{\mathbb{X}}(a)x,y\rangle=\langle x,\iota_{\mathbb{ X}}(\bar{a})y\rangle,\] for any \(x,y\in N\) and any \(a\in E\). For a \(W(\mathbb{F})\)-lattice \(M\subset N\), that is a free \(W(\mathbb{F})\)-submodule of \(N\) of rank \(n\), we denote by \(M^{\vee}\) the lattice \(\{x\in N\mid\langle x,M\rangle\subset W(\mathbb{F})\}\), and call it the _dual_ of \(M\) with respect to the alternating form on \(N\). In the following, we write an exponent over an inclusion of lattices \(M_{1}\subset^{m}M_{2}\) to indicate the index, _i.e._ the length of the quotient module \(M_{2}/M_{1}\). The following lemma is the analogue of [10, Prop. 2.2], and it is proved in the same way. For completeness, we recall here their proof with the modifications due to the different signature. **Lemma 3.2**.: _Associating to a point in \(\bar{\mathcal{N}}^{0}_{2,n}(\mathbb{F})\) its Dieudonne module defines a bijection of \(\bar{\mathcal{N}}^{0}_{2,n}(\mathbb{F})\) with the set of \(W(\mathbb{F})\)-lattices_ \[\{M\subset N\mid M^{\vee}=M,\ \varPi M\subset M,\ pM\subset VM\subset^{n}M,\ VM \subset^{\leq 2}VM+\varPi M\},\] Proof.: Given a quadruple \((X,\lambda,\iota,\rho)\) in \(\bar{\mathcal{N}}^{0}_{2,n}(\mathbb{F})\), the quasi-isogeny \(\rho\) from \(X\) to the fixed \(p\)-divisible group \(\mathbb{X}\) translates into an inclusion of the Dieudonne module \(M\) of \(X\) into the rational module \(N\) of \(\mathbb{X}\). Since \(\lambda\) is a principal polarization, \(M\) is a self-dual lattice. The stability of \(X\) under the action \(\iota\) of \(\mathcal{O}_{E}\), together with the \(\mathcal{O}_{E}\)-linearity of \(\rho\), is equivalent to the stability of \(M\) under the action \(\varPi\) of \(\pi\) on \(N\). Condition (3.1) says that the action of \(\varPi\) on \(\operatorname{Lie}(X)=M/VM\) has rank at most \(2\), which is equivalent to the index condition in the last inclusion. Conversely, if a \(W(\mathbb{F})\)-lattice \(M\subset N\) satisfies all these properties, by the inclusions \(pM\subset VM\subset M\), we see that also \(FM\subset M\). Then \(M\) corresponds to a \(p\)-divisible group \(X\) with additional structure \((\iota,\lambda)\) as claimed and with a quasi-isogeny \(\rho\) to \(\mathbb{X}\) induced by the inclusion of \(M\) in \(N\). As in [10, Sec. 2] we also consider the Hermitian \(E\)-vector space \(C\) constructed as follows. Let \(\eta\in W^{\times}\) be such that \((\eta\pi)^{2}=p\) and consider the \(\sigma\)-linear map \(\tau:=\eta\varPi V^{-1}:N\to N\). Recall that the \(p\)-divisible group \(\mathbb{X}\) is supersingular, which means that all the slopes of its Newton polygon are \(\frac{1}{2}\). Therefore, \(\tau\) has all slopes zero. We define \(C\) as the \(n\)-dimensional \(\mathbb{Q}_{p}\)-vector space consisting of the points of \(N\) that are fixed by \(\tau\). Since the action of \(E\) on \(N\) commutes with the Frobenius and Verschiebung maps, the action of \(\varPi\) commutes with \(\tau\). The structure of \(E\)-vector space on \(C=N^{\tau}\) is then induced by the action of \(\varPi\) on \(N\). Last, we note that there is an isomorphism \(C\otimes_{\mathbb{Q}_{p}}W_{\mathbb{Q}}\xrightarrow{\sim}N\) such that \(\operatorname{id}_{C}\otimes\sigma\) corresponds to \(\tau\). 
As remarked in _loc.cit._, the restriction of the skew-symmetric form of \(N\) induces an alternating bilinear form on \(C\) with values in \(\mathbb{Q}_{p}\), which we denote again by \(\langle\cdot,\cdot\rangle\). In particular, it satisfies \[\langle\varPi x,y\rangle=-\langle x,\varPi y\rangle,\quad\text{for }x,y\in C.\] Therefore, we can define a symmetric \(E\)-bilinear form on \(C\) by setting \[(x,y):=\langle\varPi x,y\rangle.\] As remarked in [10, Sec. 2], we can also define a Hermitian form \(h\) on \(C\) via the formula \[h(x,y):=\langle\varPi x,y\rangle+\langle x,y\rangle\pi.\] This form in particular satisfies \[\langle x,y\rangle=\tfrac{1}{2}\operatorname{Tr}_{E/\mathbb{Q}_{p}}(\pi^{-1}h(x,y)) \tag{3.3}\] \[(x,y)=\tfrac{1}{2}\operatorname{Tr}_{E/\mathbb{Q}_{p}}(h(x,y)), \tag{3.4}\] for all \(x,y\in C\). We extend the Hermitian form of \(C\) (and consequently the symmetric and alternating forms, too) onto \(C\otimes_{E}\tilde{E}\) by setting \[h(v\otimes a,w\otimes b)=a\cdot\sigma(b)\cdot h(v,w).\] **Lemma 3.5**.: _We denote by \(M^{\vee},M^{\sharp},M^{\perp}\) the duals of an \(\mathcal{O}_{\tilde{E}}\)-lattice \(M\) in \(C\otimes_{E}\tilde{E}\) respectively for the alternating, Hermitian and symmetric from. Then we have_ \[M^{\vee}=M^{\sharp}=\varPi M^{\perp}.\] Proof.: If \(x\in M^{\vee}\), then for every \(m\in M\) the value of \(\langle x,m\rangle\) is an element of \(\mathcal{O}_{\tilde{E}}\). Since \(M\) is an \(\mathcal{O}_{\tilde{E}}\)-lattice we have \(\varPi M\subset M\), and therefore \(\langle\varPi x,m\rangle=-\langle x,\varPi m\rangle\) is an integer, too. From the definition of the Hermitian form \(h\), it follows then that \(x\in M^{\sharp}\). The other inclusion is clear from the relation (3.3) above between the alternating form and the trace of the Hermitian form. If \(x\in M^{\perp}\), then by definition of the symmetric form the value of \(\langle\varPi x,m\rangle\) is an integer for all \(m\in M\), and therefore \(\varPi x\in M^{\vee}\). Conversely, if \(x\in M^{\vee}\), then \((\varPi^{-1}x,m)=\langle\varPi(\varPi^{-1}x),m\rangle=\langle x,m\rangle\) is an integer for all \(m\in M\) and therefore \(\varPi^{-1}x\in M^{\perp}\). **Lemma 3.6**.: _Associating to a \(p\)-divisible group its Dieudonne module defines a bijection of \(\bar{\mathcal{N}}^{0}_{2,n}(\mathbb{F})\) with the set of \(\mathcal{O}_{\tilde{E}}\)-lattices_ \[\mathcal{V}(\mathbb{F})=\{M\subset C\otimes_{E}\tilde{E}\mid M^{\sharp}=M,\ \varPi\tau(M)\subset M\subset^{n}\varPi^{-1}\tau(M),\ M\subset^{\leq 2 }(M+\tau(M))\}.\] Proof.: This is simply a reformulation of Lemma 3.2 in terms of the map \(\tau\) and the isomorphism \(C\otimes_{E}\breve{E}\xrightarrow{\sim}N\). _Remark 3.7_.: In the following sections we will often have to distinguish between two, sometimes quite different, cases. Consider the discriminant of the Hermitian space \(C\). It is given by the image of \((-1)^{\frac{n(n-1)}{2}}\det V\) in the order-2 group \(\mathbb{Q}_{p}^{\times}/\mathrm{Norm}_{E/\mathbb{Q}_{p}}(E^{\times})\). We say that the form is _split_ if the discriminant is the trivial element in this group, respectively _non-split_ if it is non-trivial. As noted in [12, Rem. 4.2] for even dimension \(n\) both cases, \(C\) split and non-split, can appear. This only depends on the choice of \(\mathbb{X}\) used to define the moduli space \(\mathcal{N}_{2,n}\), and in _loc.cit._ it is shown how to construct examples for both cases. 
If the dimension is odd, since we can multiply \(\lambda_{\mathbb{X}}\) by a unit in \(\mathbb{Z}_{p}\), one can assume without loss of generality that the discriminant of \(C\) is \(1\), compare also [12, Rem. 4.2] and the references there. We show now how to associate to any lattice \(M\) in \(\mathcal{V}(\mathbb{F})\) a unique minimal \(\tau\)-stable \(\mathcal{O}_{E}\)-lattice \(\varLambda(M)\subset C\) such that \(M\subset\varLambda(M)\otimes_{\mathcal{O}_{E}}\mathcal{O}_{\tilde{E}}\). The construction is the same as that of [12, Sec. 4], however, due to the different index appearing in the last inclusion in Lemma 3.6, the resulting lattice \(\varLambda(M)\) will satisfy a weaker property. In the following we denote by \(\pi\) both the element of \(E\) and its action \(\varPi\) on \(N\) or \(C\). **Definition 3.8**.: Let \(\varLambda\) be an \(\mathcal{O}_{E}\)-lattice in \(C\). 1. We say that \(\varLambda\) is a _vertex lattice_ if \(\pi\varLambda\subset\varLambda^{\vee}\subset\varLambda\). The _type_ of a vertex lattice is the index of \(\varLambda^{\vee}\) in \(\varLambda\). 2. For an integer \(m\), we say that \(\varLambda\) is _\(m\)-modular_ if \(\varLambda^{\vee}=\pi^{m}\varLambda\). In particular, a \(0\)-modular lattice is self-dual. 3. In this paper we say that \(\varLambda\) is a _\(2\)-vertex lattice_ if \(\pi^{2}\varLambda\subset\varLambda^{\vee}\subset\varLambda\). Clearly vertex lattices and \(2\)-modular lattices are also \(2\)-vertex lattices. Given a lattice \(M\in\mathcal{V}(\mathbb{F})\), for each positive integer \(j\) we consider the lattice \[T_{j}:=M+\tau(M)+\cdots+\tau^{j}(M).\] We also denote by \(\tau_{j}\) the image of \(T_{j}\) under \(\tau\). It is clear from the definition that \(T_{j+1}=T_{j}+\tau_{j}\) and that \(\tau_{j-1}\subset T_{j}\cap\tau_{j}\). From the properties of \(M\) it follows that for every \(j\) the lattice \(T_{j}\) satisfies \[\pi T_{j}\subset T_{j},\quad\pi\tau(T_{j})\subset T_{j}\subset\pi^{-1}\tau(T_ {j}),\quad T_{j}\subset^{\leq 2}T_{j}+\tau(T_{j}), \tag{3.9}\] and similarly for \(\tau_{j}\). By [11, Prop. 2.17] there is an integer \(d\) such that \(T_{d}=T_{d+1}\) and the minimal such integer satisfies \(d\leq n-1\), where \(n\) is again the dimension of the \(\tilde{E}\)-vector space \(N\). Consider the chain of inclusions \[M=T_{0}\subset T_{1}\subset\cdots\subset T_{d}. \tag{3.10}\] We now give a series of rather combinatorial remarks which will be of key importance for the proof of Proposition 3.19 later. _Remark 3.11_.: For any \(i=1,\ldots,d\) the lattices \(T_{i-1}\) and \(\tau_{i-1}\) have the same index in \(T_{i}\). This follows from the fact that they are both contained in \(T_{i}\), by definition, and since \(\tau\) has slopes zero, they have the same volume. By the second isomorphism theorem for modules, it also follows that the index of the inclusion \(T_{i}\subset T_{i+1}\) is the same as that of the inclusion \(T_{i}\cap\tau_{i}\subset T_{i}\). _Remark 3.12_.: There is an index \(0\leq k\leq d\) such that \[M=T_{0}\subset^{2}\cdots\subset^{2}T_{k}\subset^{1}\cdots\subset^{1}T_{d}. \tag{3.13}\] Indeed, let \(k\) be the minimal integer such that \(T_{k-1}\subset^{2}T_{k}\subset^{1}T_{k+1}\), with the convention that if all inclusions have index \(1\) or \(2\), we simply say \(k=0\), respectively \(k=d\). Assume \(0<k<d\); we show by induction that for all \(k\leq i<d\) the index of \(T_{i}\) in \(T_{i+1}\) is one. For \(i=k\) this is just the definition of \(k\). Assume \(k<i<d\). By induction, we have \(T_{i-1}\subset^{1}T_{i}\) and by Remark 3.11 this implies \(\tau_{i-1}\subset^{1}T_{i}\). We know that \[\tau_{i-1}\subset T_{i}\cap\tau_{i}\subsetneq T_{i},\] where the second inclusion is proper as \(i<d\) and therefore \(T_{i}\) is not \(\tau\)-stable. 
Since \(\tau_{i-1}\) has index \(1\) in \(T_{i}\) we have that \(\tau_{i-1}=T_{i}\cap\tau_{i}\subset^{1}T_{i}\). By the previous remark we conclude that \(T_{i}\subset^{1}T_{i+1}\), which concludes the proof of (3.13). Let \(k\) be as above, then we claim that \[\tau_{i-1}=T_{i}\cap\tau_{i}\quad\text{if $i\neq k$},\] \[\tau_{k-1}\subset^{1}T_{k}\cap\tau_{k}\subset^{1}T_{k}.\] We have already proved the case \(i>k\). For \(i<k\) we have \(T_{i-1}\subset^{2}T_{i}\subset^{2}T_{i+1}\). Then by Remark 3.11 and the first inclusion it follows \(\tau_{i-1}\subset^{2}T_{i}\). By the same remark and the second inclusion we also have \(T_{i}\cap\tau_{i}\subset^{2}T_{i}\), from which we deduce equality. At step \(k\) we have \(T_{k-1}\subset^{2}T_{k}\subset^{1}T_{k+1}\). From the first inclusion we obtain \(\tau_{k-1}\subset^{2}T_{k}\), while from the second inclusion and Remark 3.11 it follows \(T_{k-1}\cap\tau_{k-1}\subset^{1}T_{k}\). _Remark 3.14_.: From the inclusions \(\pi\tau(M)\subset M\subset\pi^{-1}\tau(M)\) in the definition (3.6) of \(\mathcal{V}(\mathbb{F})\) it follows that \(\pi T_{2}=\pi M+\pi\tau(M)+\pi\tau^{2}(M)\subset\tau(M)\). As in the proof of [11, Prop. 4.2] we deduce that for \(i\geq 2\) \[T_{i}=(M+\tau(M)+\cdots+\tau^{i}(M))\] \[=(M+\tau(M)+\tau^{2}(M))+\tau(M+\tau(M)+\tau^{2}(M))+\cdots+\tau^{i-2}( M+\tau(M)+\tau^{2}(M))\] \[=T_{2}+\tau(T_{2})+\cdots+\tau^{i-2}(T_{2})\] \[\subset\pi^{-1}\tau(M)+\cdots+\pi^{-1}\tau^{i-1}(M)\] \[\subset\pi^{-1}\tau_{i-2}.\] So for any \(2\leq i\leq d\) we have \(\pi T_{i}\subset\tau_{i-2}\subset T_{i-1}\cap\tau_{i-1}\). In particular, it follows that \(\pi T_{d}\subset T_{d-1}\). Since \(T_{d}\) is \(\tau\)-stable we have \[\pi T_{d}\subset\bigcap_{m\in\mathbb{Z}}\tau^{m}(T_{d-1}).\] By Remark 3.12 we know that for \(k<i<d\) the intersection \(T_{i}\cap\tau_{i}\) coincides with \(\tau_{i-1}\). After applying this recursively to the previous equation we obtain \[\pi T_{d}\subset\bigcap_{m\in\mathbb{Z}}\tau^{m}(T_{k})\subset T_{k}\cap\tau_ {k}\subset T_{k}. \tag{3.15}\] Since \(\tau_{k-1}\subset^{1}T_{k}\cap\tau_{k}\) it is in general not true that \(\pi T_{d}\subset\tau_{k-1}\). However, by the previous discussion we know that \(\pi T_{k}\subset\tau_{k-1}\) hence we can at least say that \(\pi^{2}T_{d}\subset\tau_{k-1}\) or equivalently, by \(\tau\)-stability, \(\pi^{2}T_{d}\subset T_{k-1}\). By \(\tau\)-stability, again, \[\pi^{2}T_{d}\subset\bigcap_{m\in\mathbb{Z}}\tau^{m}(T_{k-1}).\] Again we can apply Remark 3.12 recursively since for \(i<k\) we still have \(T_{i}\cap\tau_{i}=\tau_{i-1}\). We can then conclude that \[\pi^{2}T_{d}\subset\bigcap_{m\in\mathbb{Z}}\tau^{m}(M)\subset M. \tag{3.16}\] _Remark 3.17_.: If \(k=0\) or \(k=d\) we know by Remark 3.12 that for all \(i\) the intersection \(T_{i}\cap\tau_{i}\) coincides with \(\tau_{i-1}\). Therefore, when we apply this to (3.15) we obtain \(\pi T_{d}\subset M\). If \(d=k+1\) then arguing as in the second part of the previous remark we obtain \(\pi T_{d}\subset M\). Note that these are not the only possible cases, one may still have \(\pi T_{d}\subset M\) even if \(0<k<d-2\). In order to prove the next proposition we need one more observation concerning \(\tau\)-stable lattices in \(N\). 
**Lemma 3.18**.: _Let \(\mathcal{L}\) be a \(\tau\)-stable \(\mathcal{O}_{\tilde{E}}\)-lattice in \(N\), then \(\mathcal{L}\) has a basis consisting of \(\tau\)-stable elements._ Proof.: By the isomorphism \(C\otimes_{E}\breve{E}\xrightarrow{\sim}N\) given above it follows that \(N\) has a \(\tau\)-stable basis. Let \(\varLambda\) be the \(\tau\)-stable lattice spanned by such a basis. Since \(\mathcal{L}\) is an \(\mathcal{O}_{\tilde{E}}\)-lattice in \(N\), there is an integer \(i\) such that \(\pi^{i}\varLambda\subset\mathcal{L}\). It follows that \(\mathcal{L}\) contains at least one element that is \(\tau\)-stable. We show by induction that \(\mathcal{L}\) has a basis consisting of \(\tau\)-stable elements. Suppose \(N\) has dimension one. Up to multiplication by powers of the uniformizer \(\pi\), we can assume that there is an element \(v\in\mathcal{L}\) that is \(\tau\)-stable and such that if \(av\in\mathcal{L}\) for some \(a\in\breve{E}\), then \(a\in\mathcal{O}_{\tilde{E}}\). We show that \(v\) generates \(\mathcal{L}\). Again, observe that there is an integer \(i\) such that \(\pi^{i}\mathcal{L}\subset\mathcal{O}_{\tilde{E}}\cdot v\). Therefore, for any element \(l\in\mathcal{L}\) there is an integer \(j\) such that \(l=a\pi^{j}v\) for some \(a\in\mathcal{O}_{\tilde{E}}^{\times}\). By our choice of \(v\), the coefficient \(a\pi^{j}\) has to be an integer, hence \(\mathcal{L}\subset\mathcal{O}_{\tilde{E}}\cdot v\subset\mathcal{L}\), which concludes the proof for the one-dimensional case. Suppose now that \(N\) has dimension \(n+1\geq 2\) and let \(\mathcal{L}=\tau(\mathcal{L})\) be a lattice in \(N\). We can again find a \(\tau\)-stable element \(v\in\mathcal{L}\), and up to multiplication by powers of \(\pi\) we can assume that if \(av\in\mathcal{L}\) then \(a\in\mathcal{O}_{\tilde{E}}\). Consider the \(n\)-dimensional quotient space \(N/\breve{E}v\) and observe that \(\tau\) commutes with the quotient map as \(v\) is \(\tau\)-stable. It follows that the image of \(\mathcal{L}\) in this quotient is again a \(\tau\)-stable lattice and hence by induction it has a basis consisting of \(\tau\)-fixed elements. Lift this basis to \(\tau\)-stable elements \(\{e_{1},\ldots,e_{n}\}\) of \(N\), which is possible since \(N\) has a \(\tau\)-stable basis. Then we have that \(\mathcal{L}\) has a basis of the form \(\{a_{0}v,e_{1}-a_{1}v,\ldots e_{n}-a_{n}v\}\) for suitable \(a_{i}\in\tilde{E}\). By the choice of \(v\) it immediately follows that we can assume \(a_{0}=1\). If \(a_{i}\in\mathcal{O}_{\tilde{E}}\), then the corresponding \(\tau\)-stable vector \(e_{i}\) is already in \(\mathcal{L}\), and we can substitute it to \(e_{i}-a_{i}v\) in the basis of \(\mathcal{L}\). Assume that for some \(i\) the coefficient \(a_{i}\in\tilde{E}\) is not an integer. Observe that since \(\mathcal{L}\) is \(\tau\)-stable we have that \(\mathcal{L}\) contains the element \((e_{i}-a_{i}v)-\tau(e_{i}-a_{i}v)=(\sigma(a_{i})-a_{i})v\) for each \(i\). By definition of \(v\) it follows that \((\sigma(a_{i})-a_{i})\in\mathcal{O}_{\tilde{E}}\). We can then write \(a_{i}=b_{i}+c_{i}\) with \(c_{i}\in\mathcal{O}_{\tilde{E}}\) and \(b_{i}=\sigma(b_{i})\in\tilde{E}\). We can substitute \(e_{i}-a_{i}v\) in the basis of \(\mathcal{L}\) by the \(\tau\)-stable element \(e_{i}-b_{i}v\), which concludes the proof. 
**Proposition 3.19**.: _For any lattice \(M\) in \(\mathcal{V}(\mathbb{F})\) there is a unique minimal \(\mathcal{O}_{E}\)-lattice \(\Lambda(M)\subset C\) such that \(M\subset\Lambda(M)\otimes_{\mathcal{O}_{E}}\mathcal{O}_{\tilde{E}}\). Moreover, \(\Lambda(M)\) is a \(2\)-vertex lattice._ Proof.: Consider \(M\) in \(\mathcal{V}(\mathbb{F})\) and the corresponding lattice \(T_{d}\) as above. As in [12, Prop. 4.1] we define \(\Lambda(M):=T_{d}^{\tau}=T_{d}\cap C\). Since \(T_{d}\) is \(\tau\)-stable, by Lemma 3.18 it has a basis consisting of \(\tau\)-stable elements. It follows that \(\Lambda(M)\) is an \(\mathcal{O}_{E}\)-lattice in \(C\) and that \(T_{d}=\Lambda(M)\otimes_{\mathcal{O}_{E}}\mathcal{O}_{\tilde{E}}\). By definition of \(T_{d}\), it follows that \(\Lambda(M)\) is the minimal lattice in \(C\) which, after tensoring with \(\mathcal{O}_{\tilde{E}}\), contains \(M\). If \(d=0\) or \(d=1\) it directly follows from the definition of \(M\) that \(\pi T_{d}\subset M\cap\tau(M)\). If \(2\leq d\), by Remark 3.14 we know that \[\pi^{2}T_{d}\subset\bigcap_{l\in\mathbb{Z}}\tau^{l}(M)\subset M\cap\tau(M) \cap\cdots\cap\tau^{d}(M)=T_{d}^{\vee},\] where the last equality follows from the fact that \(M\) is self-dual and that \(\tau\) commutes with taking duals, as it has slopes zero. This proves that \(\Lambda(M)\) is a \(2\)-vertex lattice. _Remark 3.20_.: Observe that \(\Lambda(M)\) is a vertex lattice if and only if \(\pi T_{d}\subset M\). Indeed, if this is the case then arguing as above we obtain \(\pi T_{d}\subset M\cap\tau(M)\cap\cdots\cap\tau^{d}(M)=T_{d}^{\vee}\). Conversely, if \(\pi T_{d}\subset T_{d}^{\vee}\), since \(T_{d}^{\vee}\) is contained in \(M\) we have that \(\pi T_{d}\subset M\). Note that if \(\Lambda\) is a vertex lattice and \(\Lambda(M)\subset\Lambda\), it follows that \(\Lambda(M)\) is a vertex lattice as well. Indeed, if \(\Lambda(M)\subset\Lambda\) by taking duals and by definition of (2-) vertex lattice, we have that \[\pi\Lambda(M)\subset\pi\Lambda\subset\Lambda^{\vee}\subset\Lambda(M)^{\vee} \subset\Lambda(M)\subset\Lambda.\] Let \(\Lambda\) be a \(2\)-vertex lattice; we denote \[\mathcal{V}_{\Lambda}(\mathbb{F}) =\{M\in\mathcal{V}(\mathbb{F})\mid\Lambda(M)\subset\Lambda\}\] \[\mathcal{V}_{\Lambda}^{\circ}(\mathbb{F}) =\{M\in\mathcal{V}(\mathbb{F})\mid\Lambda(M)=\Lambda\}.\] We recall some results from [12, Sec. 3] about the set of vertex lattices in order to compare them to the behavior of \(2\)-vertex lattices. For \(n\) even and non-split form, and for odd \(n\) let \(\mathscr{L}\) denote the set of vertex lattices. If \(n\) is even and the form is split, which is the only case where vertex lattices of type \(n\) exist, we let \(\mathscr{L}\) be the set of vertex lattices of type different from \(n-2\). In both cases, we give \(\mathscr{L}\) the structure of a simplicial complex as follows. We say that two vertex lattices \(\Lambda_{1}\) and \(\Lambda_{2}\), at least one of which is of type \(\leq n-2\), are neighbors if \(\Lambda_{1}\subset\Lambda_{2}\) or vice versa. For two vertex lattices both of type \(n\), we say that they are neighbors if their intersection is a vertex lattice of type \(n-2\). Then an \(r\)-simplex of \(\mathscr{L}\) is a subset of \(r\) vertex lattices which are pairwise neighbors. Let \(\operatorname{SU}(C)\) be the special unitary group of \((C,h)\), _i.e._ the subgroup of linear transformations of \(C\) preserving the Hermitian form \(h\) and having determinant one. As remarked in [12, Sec. 
3] there is an action of \(\operatorname{SU}(C)(\mathbb{Q}_{p})\) on \(\mathscr{L}\) which preserves the simplicial complex structure we just defined. **Proposition 3.21**.: _Keep notation as above._ 1. _[_14_, Prop. 3.4]_ _There is a_ \(\operatorname{SU(C)}(\mathbb{Q}_{\mathrm{p}})\)_-equivariant isomorphism between_ \(\mathscr{L}\) _and the Bruhat-Tits simplicial complex of_ \(\operatorname{SU(C)}\) _over_ \(\mathbb{Q}_{p}\)_. Moreover,_ \(\mathscr{L}\) _is connected._ 2. _[_14_, Prop. 4.3, 6.7]_ _Let_ \(\Lambda_{1}\) _and_ \(\Lambda_{2}\) _be two vertex lattices in C. Then_ \(\mathcal{V}_{\Lambda_{1}}\subset\mathcal{V}_{\Lambda_{2}}\) _if and only if_ \(\Lambda_{1}\subset\Lambda_{2}\) _and equality holds if and only if the two lattices are also equal. It follows_ \[\mathcal{V}_{\Lambda}=\bigsqcup_{\Lambda^{\prime}\subset\Lambda}\mathcal{V}_{ \Lambda^{\prime}}^{\circ}\] _and every summand is non-empty._ 3. _[_14_, Prop. 4.2]_ _The intersection_ \(\mathcal{V}_{\Lambda_{1}}\cap\mathcal{V}_{\Lambda_{2}}\) _is non-empty if and only if_ \(\Lambda_{1}\cap\Lambda_{2}\) _is a vertex lattice, in which case it coincides with_ \(\mathcal{V}_{\Lambda_{1}\cap\Lambda_{2}}\)_._ For \(2\)-vertex lattices the situation is more complicated, and there is not a full analogue of the results above, compare Remark 3.23 below. First, we need to recall the _Jordan splitting_ for lattices in the Hermitian space \(C\). It is proved in [11, Prop. 4.3] that any \(\mathcal{O}_{E}\)-lattice in \(C\) has a canonical decomposition as a direct sum of modular lattices in possibly smaller-dimensional Hermitian subspaces. Moreover, this decomposition is compatible with taking duals, _i.e._ if \(L\) is a lattice in \(C\) with Jordan splitting \[L=\bigoplus_{1\leq\lambda\leq t}L_{\lambda},\] with each \(L_{\lambda}\) modular, then its dual \(L^{\vee}\) has Jordan splitting \(L^{\vee}=\bigoplus_{1\leq\lambda\leq t}(L_{\lambda})^{\vee}\). Indeed, observe that the dual of an \(m\)-modular lattice is by definition \((-m)\)-modular. **Proposition 3.22**.: _Consider the set of \(2\)-vertex lattices, i.e. the set of \(\mathcal{O}_{E}\)-lattices \(\Lambda\) in \(C\) such that \(\pi^{2}\Lambda\subset\Lambda^{\vee}\subset\Lambda\)._ 1. _The set of_ \(2\)_-modular lattices is in bijection with the set of vertex lattices of type_ \(0\)_, hence with the_ \(0\)_-simplices of the Bruhat-Tits building of_ \(\operatorname{SU(C)}\) _over_ \(\mathbb{Q}_{p}\)_._ 2. _Every_ \(2\)_-vertex lattice is contained in some, possibly non-unique,_ \(2\)_-modular lattice. Hence,_ \[\mathcal{V}(\mathbb{F})=\bigcup_{\Lambda\in\{2\text{-modular}\}}\mathcal{V}_{ \Lambda}(\mathbb{F}),\] _and for every_ \(2\)_-modular lattice_ \(\Lambda\) _already the set_ \(\mathcal{V}_{\Lambda}^{\circ}(\mathbb{F})\) _is non-empty._ Proof.: Let \(\Lambda\) be a \(2\)-modular lattice. Thus, we have \(\pi^{2}\Lambda=\Lambda^{\vee}\subset\pi\Lambda\subset\Lambda\). Observe that \((\pi\Lambda)^{\vee}=\pi^{-1}(\Lambda^{\vee})=\pi^{-1}(\pi^{2}\Lambda)=\pi\Lambda\), which means that \(\pi\Lambda\) is a self-dual vertex lattice, that is a vertex lattice of type \(0\). Conversely, given a vertex lattice \(L\) of type \(0\), the lattice \(\pi^{-1}L\) satisfies \((\pi^{-1}L)^{\vee}=\pi L^{\vee}=\pi L=\pi^{2}(\pi^{-1}L)\), hence it is a \(2\)-modular lattice. If \(L\) is a \(2\)-vertex lattice, which means \(\pi^{2}L\subset L^{\vee}\), the summands appearing in its Jordan decomposition can only be \(0,1\) or \(2\)-modular lattices. 
Therefore, it is enough to prove that every \(0\) or \(1\)-modular lattice is contained in a \(2\)-modular lattice. If \(L\) is \(0\)-modular, then consider \(\pi^{-1}L\), which we have already seen is a \(2\)-modular lattice, and it contains \(L\). If \(\pi L=L^{\vee}\), by the connectedness of the simplicial complex \(\mathscr{L}\) and its bijection with the Bruhat-Tits building for \(\operatorname{SU}(C)(\mathbb{Q}_{p})\) as recalled in the previous proposition, we know that \(L\) contains a self-dual lattice \(\pi L\subset L_{0}^{\vee}=L_{0}\subset L\). Then the \(2\)-modular lattice \(\pi^{-1}L_{0}\) contains \(L\). The non-emptiness of the set \(\mathcal{V}_{\Lambda}^{\circ}\) will actually follow from the results of Section 5, in particular from Lemma 5.8 and 5.13. _Remark 3.23_.: We have seen that there is a bijection between the set of \(2\)-modular lattices and of \(0\)-modular lattices. One could ask if there is a bijection between the set of generic \(2\)-vertex lattices and vertex lattices, along the lines of the proposition above. It is true, for example, that for a vertex lattice \(L\), if \(L\) is not \(1\)-modular, one obtains a \(2\)-vertex lattice by taking \(\pi^{-1}L^{\vee}\). The converse does however not work. Given a \(2\)-vertex lattice \(\Lambda\) (that is of course not a vertex lattice), we would have to consider \(L=(\pi\Lambda)^{\vee}=\pi^{-1}\Lambda^{\vee}\). Its dual, which is \(\pi\Lambda\), is contained in \(L\), since \(\pi^{2}\Lambda\subset\Lambda^{\vee}\) and therefore \(L^{\vee}=\pi\Lambda\subset\pi^{-1}\Lambda^{\vee}=(\pi\Lambda)^{\vee}=L\). However, it is not true in general that \(L^{\vee}=\pi\Lambda\supset\pi L=\pi(\pi\Lambda)^{\vee}=\Lambda^{\vee}\). For example, consider a \(2\)-vertex lattice with Jordan decomposition \(\Lambda=\Lambda_{1}\oplus\Lambda_{2}\) where \(\Lambda_{1}\) is a \(2\)-modular lattice and \(\Lambda_{2}\) is a \(0\)-modular (hence self-dual) lattice. Then \(\Lambda^{\vee}=\pi^{2}\Lambda_{1}\oplus\Lambda_{2}\) and it is not contained in \(\pi\Lambda\). This is one of the reasons why, unlike [14, Sec. 4], we are not going to attempt at a stratification of \(\mathcal{V}_{\Lambda}\) in terms of sets \(\mathcal{V}_{\Lambda^{\prime}}^{\circ}\) for smaller \(2\)-vertex lattices \(\Lambda^{\prime}\). The other main reason is that it does not seem to be feasible to describe one single such stratum in terms of Deligne-Lusztig varieties, as we are going to note in the next section, for example in Remark 4.21. ## 4. Deligne-Lusztig varieties for the symplectic and orthogonal group In this section we recall some facts about (generalized) Deligne-Lusztig varieties and focus on three families of varieties for the symplectic and orthogonal group. Their relevance will become clear in the next section. ### Reminder on Deligne-Lusztig Varieties Deligne-Lusztig varieties were first introduced in [10]. Here, as in the original paper, we give a description in terms of their \(\mathbb{F}\)-valued points. We also follow the notation of [14, Sec. 5] and the references in there. Let \(G\) be a connected reductive group over a finite field \(\mathbb{F}_{q}\). Let \(T\subset B\subset G\) be respectively a maximal torus defined over \(\mathbb{F}_{q}\) and a Borel subgroup over \(\mathbb{F}_{q}\) containing it. Fix an algebraic closure \(\mathbb{F}\) of \(\mathbb{F}_{q}\). Let \(W\) be the Weyl group \(N_{G}(T)(\mathbb{F})/T(\mathbb{F})\). Denote by \(\varPhi\) the Frobenius on \(G(\mathbb{F})\). 
Consider the _relative position map_ \[\operatorname{inv}\colon G/B\times G/B\to W\] which sends a pair \((g_{1},g_{2})\) to the unique element \(w\in W\) such that \(g_{1}^{-1}g_{2}\in BwB\). For \(w\in W\) the corresponding _Deligne-Lusztig variety_ is \[X_{B}(w)=\{g\in G/B\mid\operatorname{inv}(g,\varPhi(g))=w\}.\] Deligne-Lusztig varieties can be related to Schubert varieties via the local model diagram by Gortz and Yu [1, 5.2]. Consider the quotient map \(\pi\colon G\to G/B\) and denote by \(L\) its composition with the Lang map \(g\mapsto g^{-1}\varPhi(g)\) \[G/B\stackrel{{\pi}}{{\leftarrow}}G\xrightarrow{L}G/B.\] Then we have that Deligne-Lusztig varieties and Schubert cells are smoothly equivalent to each other under these maps \[\pi^{-1}(X_{B}(w))=L^{-1}(BwB/B).\] It follows that \(X_{B}(w)\) is smooth, of pure dimension \(\ell(w)\) and that the singularities of the closure \(\overline{X_{B}(w)}\) are smoothly equivalent to the singularities of the Schubert variety \(\overline{BwB}/B\), compare [1, 5.2]. The closure \(\overline{X_{B}(w)}\) is stratified by Deligne-Lusztig varieties for smaller elements in the Bruhat order on \(W\) as follows \[\overline{X_{B}(w)}=\bigsqcup_{w^{\prime}\leq w}X_{B}(w^{\prime}). \tag{4.1}\] This is a consequence of the analogue closure relations for Schubert cells and the local model diagram, compare [1, Sec. 5] for a detailed proof. In the next sections we will also be interested in some _generalized_ Deligne-Lusztig varieties, which are defined as the analogue in a partial flag variety. More precisely, let \(\varDelta=\{\alpha_{1},\dots,\alpha_{n}\}\) be the set of simple roots associated to the datum \((B,T)\). Recall that to each simple root \(\alpha_{i}\) corresponds a simple reflection \(s_{i}\) in the Weyl group. Let \(I\) be a subset of the simple roots. We denote by \(W_{I}\) the subgroup of \(W\) generated by the simple reflections corresponding to \(I\) and by \(P_{I}\) the standard parabolic subgroup \(BW_{I}B\). The partial flag variety \(G/P_{I}\) parametrizes then parabolic subgroups of type \(I\). Again, one can define a relative position map \[\operatorname{inv}\colon G/P_{I}\times G/P_{I}\to W_{I}\backslash W/W_{I},\] and for a class \(w\in W_{I}\backslash W/W_{I}\) the corresponding generalized Deligne-Lusztig variety \[X_{P_{I}}(w)=\{g\in G/P_{I}\mid\operatorname{inv}(g,\Phi(g))=w\}.\] We recall a result by Bonnafe and Rouquier [1, Thm. 2] concerning irreducibility. **Theorem 4.2**.: _Let \(I\subset\Delta\) and \(w\in W_{I}\backslash W/W_{I}\). The corresponding generalized Deligne-Lusztig variety \(X_{P_{I}}(w)\) is irreducible if and only if \(W_{I}w\) is not contained in any proper \(\Phi\)-stable standard parabolic subgroup of \(W\)._ Moreover, by the results of [1, Sec. 5] we can see that \(X_{P_{I}}(w)\) is equidimensional of dimension \[\dim(X_{P_{I}}(w))=\ell_{I}(w)-\ell(w_{I}). \tag{4.3}\] Here \(w_{I}\) is the longest element in the subgroup \(W_{I}\) and \(\ell_{I}(w)\) denotes the maximal length of an element in the double coset \(W_{I}wW_{I}\). We aim to give a description of the closure of a generalized Deligne-Lusztig variety analogous to that in (4.1). To do so, we have to study the set \(W_{I}\backslash W/W_{I}\). By [1, Prop. 2.4.4] there is a system of representatives of \(W_{I}\backslash W/W_{I}\), which we denote by \({}^{I}W^{I}\) and consists of a minimal length element in each double coset. Such a minimal length element is actually unique by [1, Prop. 
4.22a], and for every element \(y\in W\) there is a decomposition \(y=z_{I}xz_{I}^{\prime}\) with \(x\in{}^{I}W^{I}\) and \(z_{I},z_{I}^{\prime}\in W_{I}\) such that \(\ell(y)=\ell(z_{I})+\ell(x)+\ell(z_{I}^{\prime})\). Before we can prove the analogue for generalized Deligne-Lusztig varieties of the closure relations (4.1), we need the following combinatorial results. **Lemma 4.4**.: _For \(x_{1},x_{2}\) in the system of minimal length representatives \({}^{I}W^{I}\) the following are equivalent_ 1. \(x_{1}\leq x_{2}\) _in the Bruhat order on_ \(W\)_,_ 2. _there are elements_ \(y_{1}\leq y_{2}\) _such that_ \(y_{i}\in W_{I}x_{i}W_{I}\)_,_ 3. _for every_ \(y_{1}\in W_{I}x_{1}W_{I}\) _there exists_ \(y_{2}\in W_{I}x_{2}W_{I}\) _such that_ \(y_{1}\leq y_{2}\)_._ Proof.: The implications (i) \(\Rightarrow\) (ii) and (iii)\(\Rightarrow\) (ii) are clear. The implication (ii) \(\Rightarrow\) (i) is proved in [1, Prop. 4.22c]. Assume (i) holds and fix \(y_{1}\in W_{I}x_{1}W_{I}\). Consider the factorization \(y_{1}=z_{I}x_{1}z_{I}^{\prime}\) such that \(\ell(y_{1})=\ell(z_{I})+\ell(x_{1})+\ell(z_{I}^{\prime})\), as given in [1, Prop. 4.22a]. Let \(z_{I}=s_{1}\cdots s_{q}\) be a reduced expression for \(z_{I}\). If \(\ell(s_{q}x_{2})=\ell(x_{2})+1\) since \(x_{1}\leq x_{2}\) we have \(s_{q}x_{1}\leq s_{q}x_{2}\). Otherwise, by the so-called _lifting property_ of the Bruhat order, compare [1, Prop. 2.2.7], we have \(s_{q}x_{1}\leq x_{2}\). By induction on the length of \(z_{I}\), we obtain an element \(y_{2}^{\prime}\in W_{I}x_{2}\) such that \(z_{I}x_{1}\leq y_{2}^{\prime}\). By repeating the same construction on the right with a reduced expression of \(z_{I}^{\prime}\) we obtain an element \(y_{2}\in W_{I}x_{2}W_{I}\) such that \(y_{1}=z_{I}x_{1}z_{I}^{\prime}\leq y_{2}\). The following result is proved in [1] and allows us to move between generalized Deligne-Lusztig varieties for two different parabolic subgroups. **Lemma 4.5**.: _[_1_, Eq. 2]_ _Let \(I\subset J\) be two subsets of simple reflections in \(W\) and \(P_{I}\subset P_{J}\) the corresponding standard parabolic subgroups. Let \(f_{IJ}:G/P_{I}\to G/P_{J}\) be the morphism of varieties that sends a parabolic subgroup of type \(I\) to the unique parabolic of type \(J\) containing it._ _Let \(w\in W\) and \(X_{P_{J}}(w)\) the corresponding generalized Deligne-Lusztig variety. Then its preimage under \(f_{IJ}\) is the union of Deligne-Lusztig varieties_ \[f_{IJ}^{-1}(X_{P_{J}}(w))=\bigcup_{W_{I}xW_{\Phi(I)}\subset W_{J}wW_{\Phi(J)}}X_ {P_{I}}(x).\] We are now ready to prove the analogue of the closure relations 4.1 for generalized Deligne-Lusztig varieties. **Lemma 4.6**.: _Let \(P_{I}\) be the standard parabolic subgroup of type \(I\), with \(I\) a \(\Phi\)-stable subset of simple reflections, and \(w\in{}^{I}W^{I}\). The closure in \(G/P_{I}\) of the generalized Deligne-Lusztig variety \(X_{P_{I}}(w)\) satisfies_ \[\overline{X_{P_{I}}(w)}=\bigcup_{w^{\prime}\in{}^{I}W^{I},w^{\prime}\leq w}X_ {P_{I}}(w^{\prime}).\] Proof.: We consider the morphism of projective varieties \(f:G/B\to G/P_{I}\) which maps a Borel subgroup to the unique parabolic subgroup of type \(I\) containing it. This map is surjective by definition of parabolic subgroups. 
Since \(f\) is surjective and, as a morphism between projective varieties, it is closed, we have \[\overline{X_{P_{I}}}=\overline{f(f^{-1}(X_{P_{I}}(w)))}=f(\overline{f^{-1}(X_ {P_{I}}(w))}).\] Moreover, the preimage under \(f\) of any generalized Deligne-Lusztig variety satisfies \[f^{-1}(X_{P_{I}}(w))=\bigcup_{x\in W_{I}wW_{I}}X_{B}(x).\] This follows by setting \(I=\emptyset\) in Lemma 4.5. Since the union on the right runs over a finite set, by the closure relations (4.1) for classical Deligne-Lusztig varieties, we have \[\overline{f^{-1}(X_{P_{I}}(w))}=\overline{\bigcup_{x\in W_{I}wW_{I}}X_{B}(x)} =\bigcup_{x\in W_{I}wW_{I}}\overline{X_{B}(x)}=\bigcup_{x\in W_{I}wW_{I}} \bigcup_{x^{\prime}\leq x}X_{B}(x^{\prime}).\] By Lemma 4.4 there is a bijection of sets \[\{x^{\prime}\in W\mid x^{\prime}\leq x,\text{for some }x\in W_{I}wW_{I}\} \longleftrightarrow\{x^{\prime}\in W_{I}yW_{I}\mid y\in{}^{I}W^{I},y\leq w\}.\] Putting these observations together, we conclude that the closure of \(X_{P_{I}}(w)\) is \[\overline{X_{P_{I}}(w)} =f(\overline{f^{-1}(X_{P_{I}}(w))})=f\big{(}\bigcup_{x\in W_{I}wW _{I}}\bigcup_{x^{\prime}\leq x}X_{B}(x^{\prime})\big{)}=f\big{(}\bigcup_{ \begin{subarray}{c}y\in{}^{I}W^{I}\\ y\leq w\end{subarray}}\bigcup_{y^{\prime}\in W_{I}yW_{I}}X_{B}(y^{\prime}) \big{)}\] \[=f\big{(}\bigcup_{\begin{subarray}{c}y\in{}^{I}W^{I}\\ y\leq w\end{subarray}}f^{-1}(X_{P_{I}}(y))\big{)}=\bigcup_{\begin{subarray}{c}y \in{}^{I}W^{I}\\ y\leq w\end{subarray}}X_{P_{I}}(y).\] The remainder of this chapter is dedicated to the study of some families of Deligne-Lusztig varieties which will be relevant in the sequel. In particular, we are going to decompose some generalized Deligne-Lusztig varieties in terms of other such varieties for smaller parabolic subgroups. The strategy was inspired to us by reading the proofs of [14, Sec. 4] and [17, Sec. 5], and it is based on the morphism introduced in Lemma 4.5 and the following observation. **Lemma 4.7**.: _With notation as in Lemma 4.5 above, suppose the morphism \(f_{IJ}:G/P_{I}\to G/P_{J}\) induces a bijection between the closed points of \(f_{IJ}^{-1}(X_{P_{J}}(w))=\bigcup_{W_{I}xW_{\Phi(I)}\subset W_{J}wW_{\Phi(J)} }X_{P_{I}}(x)\) and \(X_{P_{I}}(w)\). Then \(f_{IJ}\) induces an isomorphism between these two varieties._ Proof.: First observe that \(f_{IJ}:G/P_{I}\to G/P_{J}\) is a smooth morphism. Indeed, both flag varieties are smooth, as they are homogeneous spaces for \(G\). The fibers of this morphism are all isomorphic to \(P_{J}/P_{I}\), hence they are again smooth, as homogeneous spaces for \(P_{J}\) and all have the same dimension. By so-called _miracle flatness_, see for example [12, B.9], this map is flat with smooth fibers, hence smooth. Recall that the base-change of a smooth map is smooth, therefore, the morphism \(f_{X}\) defined as the base change of \(f_{IJ}\) along the following diagram is smooth. Here the vertical arrows are the just immersions of the generalized Deligne Lusztig varieties in the corresponding flag varieties. By hypothesis, we know that \(f_{X}\) gives a bijection between the sets of closed points. By [12, Rem. 12.16], to prove that \(f_{X}\) is quasi-finite, it is enough to prove that it has finite fibers on \(k\)-valued points, for any algebraically closed field \(k\). Since it is injective on closed points, this is clearly the case, hence the morphism \(f_{X}\) is quasi-finite. Recall that a smooth morphism of finite type is etale if and only if it is smooth and quasi-finite. 
It is then enough to prove that \(f_{X}\) is surjective and universally injective. Indeed, since \(f_{X}\) is etale, universally injective implies that it is an open immersion. Since an open immersion is an isomorphism onto its image, if \(f_{X}\) is surjective we are done. Recall that universally injective is equivalent to the diagonal morphism being bijective on \(k\)-valued points for any field \(k\). Since \(f_{X}\) is a morphism between projective schemes, it is proper, hence the diagonal morphism is a closed immersion and therefore it is already injective on \(k\)-valued points. Moreover, for a scheme of finite type over an algebraically closed field, as in our case, the set of closed points is very dense, see [12, Prop. 3.35]. Therefore, there is no proper closed subscheme containing all closed points. It follows that we can test if the diagonal morphism is surjective on closed points, which is equivalent to the map being injective. Last, by the same argument, \(f_{X}\) is surjective since it is surjective on closed points. ### Some Deligne-Lusztig varieties for the symplectic group In this section we study a family of Deligne-Lusztig varieties that is the analogue of the one analyzed in [13, Sec. 5], and contains it as a proper subset. We follow here their notation. In particular, we aim to find a stratification as in _loc.cit._ that will be related to the decomposition over the admissible set studied in the last section of this paper. Let \(V\) be a vector space of dimension \(2m\) over \(\mathbb{F}_{p}\), endowed with a skew-symmetric form \(\langle\,\ \rangle\). We fix a basis \(e_{1},\dots,e_{2m}\) of \(V\) such that \[\langle e_{i},e_{2m+1-j}\rangle=\delta_{i,j}\ \ \ \ i,j=1,\dots,m.\] Let \(T\subset B\subset\operatorname{Sp}(V)=G\) be the torus of diagonal matrices in \(\operatorname{Sp}_{2m}\) and the Borel subgroup of upper triangular matrices. Then the simple reflections generating the Weyl group \(W\) can be enumerated as follows * for \(1\leq i\leq m-1\) the reflection \(s_{i}\) exchanges \(e_{i}\) with \(e_{i+1}\) and \(e_{2m-i}\) with \(e_{2m+1-i}\), * the reflection \(s_{m}\) exchanges \(e_{m}\) with \(e_{m+1}\). We say that a subspace \(U\) of \(V\) is isotropic if it is contained in its orthogonal space with respect to the symplectic form. The maximal dimensional isotropic subspaces are called Lagrangian subspaces and have dimension \(m\). As remarked in [13, Sec. 5.2], if \(P\) is the Siegel parabolic, _i.e._ the standard parabolic corresponding to the reflections \(\{s_{1},\dots,s_{m-1}\}\), then the flag variety \(G/P\) parametrizes the Lagrangian subspaces of \(V\). In particular, we are interested in the subvariety \(S_{V}\) of \(G/P\) given by \[S_{V}=\{U\subset V,\ \text{Lagrangian}\mid\dim(U\cap\varPhi(U))\geq m-2\}.\] Observe that this can be considered as the analogue in signature \((2,n-2)\) of the variety defined in _loc.cit._, and contains it as a proper closed subvariety. **Lemma 4.8**.: \(S_{V}\) _can be identified with the closure of the generalized Deligne-Lusztig variety \(X_{P}(s_{m}s_{m-1}s_{m})\) in \(G/P\). In particular, \(S_{V}\) is normal with isolated singularities._ Proof.: If \(U\in S_{V}\) then the relative position \(\operatorname{inv}(U,\varPhi(U))\) is either the identity, the class of \(s_{m}\) or of \(s_{m}s_{m-1}s_{m}\) in \(W_{0}\backslash W/W_{0}\), where \(W_{0}\) denotes the subgroup of the Weyl group corresponding to \(P\), thus generated by \(\{s_{1},\dots,s_{m-1}\}\). 
It follows that \(S_{V}\) is the disjoint union \[S_{V}=X_{P}(1)\sqcup X_{P}(s_{m})\sqcup X_{P}(s_{m}s_{m-1}s_{m}). \tag{4.9}\] Observe now that the identity and \(s_{m}\) are the only minimal length representatives in \(W_{0}\backslash W/W_{0}\) smaller than \(s_{m}s_{m-1}s_{m}\) in the Bruhat order. By Lemma 4.6 this proves the first claim. As in _loc.cit._, the second statement follows from Gortz and Yu's local model diagram and the fact that generalized Schubert varieties are normal with isolated singularities. By the discussion in [12, Prop. 5.5] we also know that the union \(X_{P}(1)\sqcup X_{P}(s_{m})\) corresponds to the Lagrangian subspaces \(U\) in \(S_{V}\) such that \(\dim(U\cap\varPhi(U))\geq m-1\). #### 4.2.1. The six-dimensional case We construct a stratification of \(S_{V}\) which will be relevant especially in the study of the admissible set in Section 7. In this paper we restrict to the case \(n=6\), but a similar stratification can be defined for any dimension. We consider the following parabolic subgroups * \(P_{3}=P\), the Siegel parabolic, it corresponds to the reflections \(\{s_{1},s_{2}\}\). * \(P_{2}\), the standard parabolic corresponding to the reflection \(\{s_{1}\}\). It is the stabilizer of the partial isotropic flag \(\langle e_{1},e_{2}\rangle\subset\langle e_{1},e_{2},e_{3}\rangle\). * \(P_{2}^{\prime}\), the standard parabolic corresponding to \(\{s_{2}\}\). It is the stabilizer of \(\langle e_{1}\rangle\subset\langle e_{1},e_{2},e_{3}\rangle\). * \(B\) the Borel subgroup, it can be identified with the stabilizer of the complete isotropic flag \(\langle e_{1}\rangle\subset\langle e_{1},e_{2}\rangle\subset\langle e_{1},e_{ 2},e_{3}\rangle\). In order to give a stratification of \(S_{V}\) we follow the approach of [12, Sec. 5]. In particular, we recursively show that the restriction of the quotient maps \(G/P_{i}\to G/P_{i-1}\) for \(P_{i}\subset P_{i-1}\) gives a bijection on closed points. By Lemma 4.7, this will produce isomorphisms \(X_{P_{i-1}}(w)\cong X_{P_{i}}(w_{1})\sqcup X_{P_{i}}(w_{2})\) for suitable \(w_{1},w_{2}\) depending on \(w\). **Lemma 4.10**.: _There is a decomposition of \(S_{V}\) as disjoint union of locally closed subvarieties_ \[\begin{split} S_{V}=& X_{P}(1)\sqcup X_{P_{2}}(s_{ 3})\sqcup X_{B}(s_{3}s_{2})\sqcup X_{P_{2}^{\prime}}(s_{3}s_{2}s_{3})\ \sqcup\\ & X_{B}(s_{3}s_{2}s_{1})\sqcup X_{B}(s_{3}s_{2}s_{3}s_{1})\sqcup X _{B}(s_{3}s_{2}s_{3}s_{1}s_{2}),\end{split} \tag{4.11}\] _and this decomposition is a stratification such that the closure of each stratum is given by the union of the strata with smaller dimension. The variety \(S_{V}\) is irreducible of dimension \(5\)._ Proof.: In [12, Prop. 5.5] a stratification of \(X_{P}(1)\sqcup X_{P}(s_{3})\) is already given, namely as the union of locally closed subvarieties \[X_{P}(1)\sqcup X_{P}(s_{3})\cong X_{P}(1)\sqcup X_{P_{2}}(s_{3})\sqcup X_{B}( s_{3}s_{2})\sqcup X_{B}(s_{3}s_{2}s_{1}). \tag{4.12}\] Each of the four generalized Deligne-Lusztig varieties appearing on the right-hand side parametrizes isotropic flags of the form \[U\cap\varPhi(U)\cap\dots\cap\varPhi^{i}(U)\subset\dots\subset U\cap\varPhi(U)\subset U\] for \(i=0,1,2,3\), respectively, and such that the \((3-i)\)-dimensional subspace \(U\cap\varPhi(U)\cap\cdots\cap\varPhi^{i}(U)\) is \(\varPhi\)-stable. It follows that the irreducible components of \(X_{P}(1)\), \(X_{P_{2}}(s_{3})\) and \(X_{B}(s_{3}s_{2})\) are indexed over the \(\varPhi\)-stable subspaces \(W\subset V\) of dimension \(3,2\) and \(1\), respectively. 
Similarly, we want to construct a stratification of the remaining subvariety \(X_{P}(s_{3}s_{2}s_{3})\) as disjoint union of locally closed subspaces. First, we want to prove that there is a decomposition \[X_{P}(s_{3}s_{2}s_{3})\cong X_{P^{\prime}_{2}}(s_{3}s_{2}s_{3})\sqcup X_{P^{ \prime}_{2}}(s_{3}s_{2}s_{3}s_{1}),\] and that \(X_{P^{\prime}_{2}}(s_{3}s_{2}s_{3}s_{1})\) is open and dense in this union. By Lemma 4.5 we know that \(X_{P^{\prime}_{2}}(s_{3}s_{2}s_{3})\sqcup X_{P^{\prime}_{2}}(s_{3}s_{2}s_{3}s _{1})\) is the preimage of \(X_{P}(s_{3}s_{2}s_{3})\) under the morphism \(G/P^{\prime}_{2}\to G/P\). Therefore, by Lemma 4.7, it is enough to show that this morphism induces a bijection on closed points. Let \(k\) be an algebraically closed field. We know that the \(k\)-points of \(X_{P}(s_{3}s_{2}s_{3})\) are Lagrangian subspaces \(U\subset V_{k}\) such that \(\dim(U\cap\varPhi(U))=3-2=1\). Therefore, we can consider the partial isotropic flag \(U\cap\varPhi(U)\subset^{2}U\), which is a \(k\)-point of \(G/P^{\prime}_{2}\). It belongs to either \(X_{P^{\prime}_{2}}(s_{3}s_{2}s_{3})(k)\) or \(X_{P^{\prime}_{2}}(s_{3}s_{2}s_{3}s_{1})(k)\) depending on whether \(U\cap\varPhi(U)\) is stable under the Frobenius or not. This defines a map between the \(k\)-points of \(X_{P}(s_{3}s_{2}s_{3})\) and \(X_{P^{\prime}_{2}}(s_{3}s_{2}s_{3})\sqcup X_{P^{\prime}_{2}}(s_{3}s_{2}s_{3}s _{1})\). This map is the inverse on closed points of the map \(G/P^{\prime}_{2}(k)\to G/P(k)\) which sends a flag \(U_{1}\subset U\) to its second subspace. By Lemma 4.7, it follows that the restriction of the quotient map gives the desired isomorphism. The subvariety \(X_{P^{\prime}_{2}}(s_{3}s_{2}s_{3})\) is open and dense in the union above by Lemma 4.6. Our goal is to obtain a decomposition of \(S_{V}\) which we can later relate to the simplicial complex \(\mathscr{L}\) of the previous section and to the admissible set of Section 7. To do so, we need to further decompose the open subvariety \(X_{P^{\prime}_{2}}(s_{3}s_{2}s_{3}s_{1})\). Consider again the map \(G/B\to G/P^{\prime}_{2}\) which on \(k\)-points sends a complete flag \(U_{1}\subset U_{2}\subset U_{3}\) to the partial flag \(U_{1}\subset U_{3}\) obtained by forgetting its middle term. By Lemma 4.5 we know that the preimage of \(X_{P^{\prime}_{2}}(s_{3}s_{2}s_{3}s_{1})\) under this map is \(X_{B}(s_{3}s_{2}s_{3}s_{1})\sqcup X_{B}(s_{3}s_{2}s_{3}s_{1}s_{2})\). Again, by Lemma 4.7, it is enough to show that this map induces a bijection between the sets of closed points. To do so, we construct its inverse (as a map of sets). We claim that the desired map is obtained by sending a partial isotropic flag \(U\cap\varPhi(U)\subset U\) in \(X_{P^{\prime}_{2}}(s_{3}s_{2}s_{3}s_{1})(k)\) to the complete flag \[U\cap\varPhi(U)\subset U\cap(\varPhi(U)\cap\varPhi^{2}(U))^{\vee}\subset U.\] Indeed, let \(U\cap\varPhi(U)\subset^{2}U\) be a partial isotropic flag in \(X_{P^{\prime}_{2}}(s_{3}s_{2}s_{3}s_{1})(k)\), we can assume that it has this form by the previous construction on closed points. We have already observed that partial flags \(U\cap\varPhi(U)\subset^{2}U\) in \(X_{P^{\prime}_{2}}(s_{3}s_{2}s_{3}s_{1})(k)\) satisfy \(U\cap\varPhi(U)\cap\varPhi^{2}(U)=0\). This means that the one-dimensional subspace \(\varPhi(U)\cap\varPhi^{2}(U)\) is not contained in \(U\). Consider the subspace \(U\cap(\varPhi(U)\cap\varPhi^{2}(U))^{\vee}\), where the exponent denotes the orthogonal subspace with respect to the alternating form on \(V\). 
We can compute its dimension as follows \[\dim(U\cap(\varPhi(U)\cap\varPhi^{2}(U))^{\vee}) =6-\dim((U\cap(\varPhi(U)\cap\varPhi^{2}(U))^{\vee})^{\vee})\] \[=6-\dim(U^{\vee}+(\varPhi(U)\cap\varPhi^{2}(U)))\] \[=6-\dim(U+(\varPhi(U)\cap\varPhi^{2}(U)))=2,\] where we use the fact that \(U\) is Lagrangian, hence it coincides with its orthogonal, and has dimension \(3\), and the fact that the \(1\)-dimensional space \((\varPhi(U)\cap\varPhi^{2}(U))\) is not contained in \(U\). Therefore, the flag above is actually complete. It follows from Lemma 4.7 that the base change to \(X_{P^{\prime}_{2}}(s_{3}s_{2}s_{3}s_{1})\) of the quotient morphism \(G/B\to G/P^{\prime}_{2}\) is an isomorphism \[X_{P^{\prime}_{2}}(s_{3}s_{2}s_{3}s_{1})\cong X_{B}(s_{3}s_{2}s_{3}s_{1})\sqcup X _{B}(s_{3}s_{2}s_{3}s_{1}s_{2}).\] Since \(S_{V}\) is the closure in \(G/P\) of \(X_{P}(s_{3}s_{2}s_{3})\) and the latter contains \(X_{B}(s_{3}s_{2}s_{3}s_{1}s_{2})\) as an open and dense subset (by the previous decomposition and the closure relations 4.1), we deduce that \(S_{V}\) is irreducible and of dimension \(5\) We show that the stratification of \(S_{V}\) given above has good _hereditary_ properties in the sense of [11, Prop. 5.7]. Roughly speaking, this means that the strata of \(S_{V}\) that are not irreducible, can be identified with a union of varieties of the form \(S_{V^{\prime}}\) for suitable smaller-dimensional, symplectic spaces \(V^{\prime}\). The next proposition is proved in the same way as [11, Prop. 5.7]. For completeness, we recall here the main ideas of the proof. **Lemma 4.13**.: _Denote \(S_{0}=X_{P}(1)\), \(S_{1}=X_{P_{2}}(s_{3})\) and \(S_{2}=X_{B}(s_{3}s_{2})\sqcup X_{P^{\prime}_{2}}(s_{3}s_{2}s_{3})\)._ 1. _The irreducible components of_ \(S_{i}\) _are in bijection with the_ \(\varPhi\)_-stable isotropic subspaces of_ \(V\) _of dimension_ \(3-i\)_._ 2. _Let_ \(W\) _be such an isotropic subspace. The irreducible component_ \(X_{W}\) _of_ \(S_{i}\) _corresponding to_ \(W\) _by (i) is a Deligne-Lusztig variety for the symplectic group_ \(\operatorname{Sp}(W^{\vee}/W)\) _of rank_ \(3-i\)_. The closure of_ \(X_{W}\) _in_ \(S_{i}\) _is isomorphic to_ \(S_{W^{\vee}/W}\)_, the variety defined in the same way as_ \(S_{V}\) _but for the symplectic vector space_ \(W^{\vee}/W\)_._ Proof.: As in [11, Prop. 5.7] we observe that the generalized Deligne-Lusztig varieties appearing in \(S_{i}\) parametrize isotropic flags \[U_{3-i}\subset U_{3-i+1}\subset\cdots\subset U_{3},\] where \(U_{3-i}\) is \(\varPhi\)-stable. Then \(U_{3-i}\) is a \(\mathbb{F}_{p}\)-rational isotropic subspace of \(V\) of dimension \(3-i\). For \(i=0,1\), we already know by [11, Prop. 5.7] that the subvariety \(X_{W}\) of points of \(S_{i}\) such that \(U_{3-i}\) above is equal to a fixed subspace \(W\) can be identified with the Deligne-Lusztig variety for \(\operatorname{Sp}(W^{\vee}/W)\) and a Coxeter element. In case \(i=2\) the subvariety \(X_{W}\) can be identified with the union of two Deligne-Lusztig varieties for \(\operatorname{Sp}(W^{\vee}/W)\), one for the Coxeter element \(s_{3}s_{2}\) and one for the element \(s_{3}s_{2}\) in the Weyl subgroup of type \(C_{2}\) generated by \(s_{2},s_{3}\). In all three cases, such elements have full support in the corresponding Weyl groups, hence by Theorem 4.2 the subvarieties \(X_{W}\) are irreducible. 
Last, as remarked in _loc.cit._, for a \(\varPhi\)-stable subspace \(W\) of dimension \(i\), the closure of \(X_{W}\) in \(S_{V}\) is \[\overline{X_{W}}=\{U\in S_{V},W\subset U\},\] which can be identified with the closed variety \(S_{W^{\vee}/W}\) by sending \(U\) to its image in the quotient \(W^{\vee}/W\). ### Some Deligne-Lusztig varieties for the orthogonal group In this section, following the notation of [14, Sec. 2] we introduce two other families of Deligne-Lusztig varieties that will be relevant in the next sections. Let \(V\) be an \(n\)-dimensional \(\mathbb{F}\)-vector space with a fixed \(\mathbb{F}_{p}\)-structure. Denote with \(\varPhi\) again its Frobenius morphism. Let \((\,\ ):V\times V\to\mathbb{F}\) be a non-degenerate symmetric bilinear form on \(V\), such that \((\varPhi(x),\varPhi(y))=(x,y)^{p}\). We say that a subspace \(U\) of \(V\) is isotropic if it is contained in its orthogonal with respect to the symmetric form. A maximal isotropic subspace of \(V\) has dimension \(\lfloor\frac{n}{2}\rfloor\). If the dimension of \(V\) is even, we say that the form is _split_ if there exists a maximal \(\varPhi\)-stable isotropic subspace, which has then dimension \(\frac{n}{2}\), otherwise the form is called _non-split_ and a maximal \(\varPhi\)-stable isotropic subspace has dimension \(\frac{n}{2}-1\). As in _loc.cit._ we fix a Borel subgroup of \(\operatorname{SO}(V)\) corresponding to an isotropic flag of length \(\lfloor\frac{n-1}{2}\rfloor\). Recall that if the dimension of \(V\) is even, the correspondence between parabolic subgroups of \(\operatorname{SO}(V)\) and isotropic flags in \(V\) is slightly more involved than, for example, for the symplectic group, compare [10, App. T] and the references there. Roughly speaking, the usual map which sends a flag to its stabilizer is a bijection onto the set of parabolic subgroups of \(\operatorname{SO}(V)\) if and only if we restrict to isotropic flags where subspaces of dimension \(\frac{n}{2}\) and \(\frac{n}{2}-1\) do not appear together. In the next sections we will be interested in the following family of generalized Deligne-Lusztig varieties for the special orthogonal group \(\operatorname{SO}(V)\). **Definition 4.14**.: _[_16_, Def. 2]_ Given an integer \(a\geq 1\) consider the locally closed subscheme \(Y_{a}\) of the projective space \(\mathbb{P}(V)\) defined by the homogeneous equations \[(x,\Phi^{i}(x))=0\text{ for }0\leq i\leq a-1,\text{ and }(x,\Phi^{a}(x))\neq 0.\] We also consider the variety \(Y_{\infty}\) defined by the equations \((x,\Phi^{i}(x))=0\) for all \(i\geq 0\). **Lemma 4.15**.: _[_16_, Lemma 3]_ _Let_ \[a_{0}=\begin{cases}\frac{n}{2}-1&\text{ if }\dim(V)\text{ is even and the form is split}\\ \frac{n}{2}&\text{ if }\dim(V)\text{ is even and the form is non-split}\\ \frac{n-1}{2}&\text{ if }\dim(V)\text{ is odd}.\end{cases}\] _Then \(Y_{a}=\emptyset\) for any \(a>a_{0}\). Moreover, \(Y_{a_{0}}\) can be identified with the Deligne-Lusztig variety \(X_{B}(w)\) for some \(\Phi\)-Coxeter element, respectively in the non-split case with the union \(X_{B}(w)\cup X_{B}(\Phi(w))\). Here a \(\Phi\)-Coxeter element is an element of \(W\) that is obtained as the product of one reflection for each \(\Phi\)-orbit in \(W\)._ We fix some more notation. Assume first that \(V\) has even dimension \(n=2d\). 
We fix a basis \(e_{1},\dots,e_{d},f_{1},\dots,f_{d}\) such that \[(e_{i},e_{j})=(f_{i},f_{j})=0,\quad(e_{i},f_{j})=\delta_{i,j}.\] Moreover, if the form is split we can assume that all the basis vectors are fixed by \(\Phi\), otherwise we can assume that \(\Phi\) exchanges \(e_{d}\) with \(f_{d}\) and fixes the other vectors, compare [14, App. T]. Let \(T\subset B\subset G=\operatorname{SO}(V)\) denote the diagonal torus and the Borel of upper triangular matrices in the orthogonal group. Then the simple reflections generating the Weyl group can be enumerated as follows * For \(1\leq i\leq d-1\) the reflection \(t_{i}\) exchanges \(e_{i}\) with \(e_{i+1}\) and \(f_{i}\) with \(f_{i+1}\). * The reflection \(t_{d}\) exchanges \(e_{d-1}\) with \(f_{d}\) and \(e_{d}\) with \(f_{d-1}\). If the form is split, the action of \(\Phi\) on the Weyl group is trivial, otherwise, \(\Phi\) exchanges the reflection \(t_{d-1}\) with \(t_{d}\). Suppose now that \(V\) has odd dimension \(n=2d+1\), then there is a basis \(e_{0},e_{1},\dots,e_{d},f_{1},\dots,f_{d}\) of \(V\) such that \[(e_{i},e_{j})=(f_{i},f_{j})=0,\quad(e_{i},f_{j})=\delta_{i,j},\quad(e_{0},e_{ 0})=1.\] The Weyl group is generated in this case by the reflections \(t_{1},\cdots t_{d-1}\) defined as in the case \(n=2d\) while the reflection \(t_{d}\) only exchanges \(e_{d}\) with \(f_{d}\). The action of the Frobenius on \(W\) is trivial. We study the variety \(R_{V}\) in the projective space \(\mathbb{P}(V)\) given by \[R_{V}=\{x\in\mathbb{P}(V)\mid(x,x)=(x,\Phi(x))=0\}=Y_{\infty}\sqcup\bigsqcup_ {a\geq 2}^{a_{0}}Y_{a}, \tag{4.16}\] where the varieties \(Y_{a}\) are those of Definition 4.14. As in the previous section, we want to show that the decomposition above is actually a stratification. To do so we need first to fix some notation. If the dimension of \(V\) is \(2d\) or \(2d+1\), consider for \(1\leq i\leq d-2\) the standard parabolic subgroup \(P_{i}\) of \(\operatorname{SO}(V)\) corresponding to the subset of simple reflections \(I_{i}=\{t_{i+1},\dots,t_{d}\}\) of \(W\). Observe that each subset \(I_{i}\) is \(\Phi\)-stable. We also set \(P_{d-1}=P_{d}=B\). In other words, for \(i\leq d-1\) the parabolic \(P_{i}\) is the stabilizer of the standard partial isotropic flag of length \(i\) \[\langle e_{1}\rangle\subset\langle e_{1},e_{2}\rangle\subset\cdots\subset \langle e_{1},e_{2},\dots,e_{i}\rangle.\] We consider the following elements in the Weyl group. * For \(2\leq a\leq a_{0}\) we set \(w_{a}=t_{1}t_{2}\cdots t_{d-1}t_{d}t_{d-2}t_{d-3}\cdots t_{a}\), with the convention that \(w_{a_{0}}\) is the \(\Phi\)-Coxeter element of Lemma 4.15. * If the dimension is even and the form is split, we set \(w_{\infty}=t_{1}\cdots t_{d-1}\), otherwise we let \(w_{\infty}=t_{1}\cdots t_{d-2}\). **Lemma 4.17**.: _The variety \(R_{V}\) can be identified with the closure of the generalized Deligne-Lusztig variety \(X_{P_{1}}(t_{1})\) in \(G/P_{1}\). In particular, it is normal with isolated singularities._ Proof.: By definition \(R_{V}\) parametrizes isotropic lines \(l\) in \(V\) such that \(l+\Phi(l)\) is an isotropic subspace. Therefore, the relative position \(\operatorname{inv}(l,\Phi(l))\) is either the identity or the class of \(t_{1}\) in \(W_{I_{1}}\backslash W/W_{I_{1}}\). Hence, we obtain a decomposition as union of an open and closed subset \[R_{V}=X_{P_{1}}(1)\sqcup X_{P_{1}}(t_{1}), \tag{4.18}\] and we can conclude with Lemma 4.6. The second statement follows again from Gortz and Yu's local model diagram. 
**Lemma 4.19**.: _For \(1\leq i\leq a_{0}\) the subset \(R_{i}=Y_{\infty}\sqcup\bigsqcup_{a_{0}\prec i}^{a_{0}}Y_{a},\) is closed in \(R_{V}\) and can be identified with the closure of the generalized Deligne-Lusztig variety \(X_{P_{i+1}}(w_{i+1})\), which is isomorphic to \(Y_{i+1}\). In particular, \(Y_{2}\) is open and dense in \(R_{V}\), hence \(R_{V}\) is irreducible of dimension \(2d-3\)._ Proof.: By the decomposition (4.18) of \(R_{V}\) and the generalized closure relations of Lemma 4.6 we have that \(X_{P_{1}}(t_{1})\) is open and dense in \(R_{V}\). Let \(l\) be a closed point of \(X_{P_{1}}(t_{1})\subset R_{V}\), that is an isotropic line in \(V\). We can consider the isotropic flag \(l\subset l+\Phi(l)\). This defines a closed point in \(X_{P_{2}}(t_{1})\) if \(l+\Phi(l)\) is \(\Phi\)-stable, otherwise in \(X_{P_{2}}(t_{1}t_{2})\) if \(l+\Phi(l)+\Phi^{2}(l)\) is isotropic, or in \(X_{P_{2}}(w_{2})\), if \(\Phi^{2}(l)\) is not orthogonal to \(l\). Again this map is the inverse on closed points of the base change to \(X_{P_{1}}(t_{1})\) of the projection \(G/P_{1}\to G/P_{2}\). By Lemma 4.7 it follows that we have decomposed \(R_{V}\) as \[R_{1}=R_{V}=X_{P_{1}}(1)\sqcup X_{P_{1}}(t_{1})\cong X_{P_{1}}(1)\sqcup X_{P_ {2}}(t_{1})\sqcup X_{P_{2}}(t_{1}t_{2})\sqcup X_{P_{2}}(w_{2}).\] By the generalized closure relations of Lemma 4.6\(X_{P_{2}}(w_{2})\) is then open and dense in \(X_{P_{1}}(t_{1})\) and therefore in \(R_{V}\). Observe that the image of \(X_{P_{2}}(w_{2})\) under the quotient map \(G/P_{2}\to G/P_{1}\) is \(Y_{2}\). This can again be tested on \(k\)-valued points, for any algebraically closed field \(k\). It follows that we have an isomorphism \(X_{P_{2}}(w_{2})\cong Y_{2}\). We can conclude that \[R_{2}=R_{1}\setminus Y_{2}\cong X_{P_{1}}(1)\sqcup X_{P_{2}}(t_{1})\sqcup X_{ P_{2}}(t_{1}t_{2}),\] which is closed in \(R_{1}\), and it contains \(X_{P_{2}}(t_{1}t_{2})\) as an open subset. Assume that we have a decomposition \(R_{i}=X_{P_{1}}(1)\sqcup\bigsqcup_{j=2}^{i}X_{P_{j}}(t_{1}\cdots t_{j-1})\sqcup X _{P_{i}}(t_{1}\cdots t_{i})\) with \(X_{P_{i}}(t_{1}\cdots t_{i})\) open in \(R_{i}\). Observe that the closed points of \(X_{P_{i}}(t_{1}\cdots t_{i})\) correspond to isotropic flags of the form \[l\subset l+\Phi(l)\subset\cdots\subset l+\Phi(l)+\cdots+\Phi^{i-1}(l)\] such that \(\Phi^{i}(l)\) is orthogonal to \(l\). Again we consider the base change to \(R_{i}\) of the quotient map \(G/P_{i+1}\to G/P_{i}\). By Lemma 4.7 we only have to show that this gives a bijection between the sets of closed points. We can construct its inverse (as map of sets) by sending a flag of length \(i\) in \(X_{P_{i}}(t_{1}\cdots t_{i})(k)\), for an algebraically closed field \(k\) to the isotropic flag of length \(i+1\) obtained by appending the isotropic subspace \(l+\Phi(l)+\cdots+\Phi^{i}(l)\). This defines a closed point in \(X_{P_{i+1}}(t_{1}\cdots t_{i})\) if this subspace of dimension \(i+1\) is \(\Phi\)-stable, a point in \(X_{P_{i+1}}(t_{1}\cdots t_{i+1})\) if \(\Phi^{i+2}(l)\) is orthogonal to \(l\), or otherwise in \(X_{P_{i+1}}(w_{i+1})\). The latter is open in \(R_{i}\) by Lemma 4.6. Observe that its image under the composition \(G/P_{i+1}\to G/Pi\to G/P_{1}\) is the subvariety \(Y_{i}\) defined above. Again this can be checked on closed points. 
Last, \(R_{i+1}=R_{i}\smallsetminus Y_{i}\) is the union \[R_{i+1}=R_{i}\smallsetminus Y_{i}=X_{P_{1}}(1)\sqcup\bigsqcup_{j=2}^{i}X_{P_{j }}(t_{1}\cdots t_{j-1})\sqcup X_{P_{i+1}}(t_{1}\cdots t_{i})\sqcup X_{P_{i+1}} (t_{1}\cdots t_{i+1}),\] and by Lemma 4.6\(X_{P_{i+1}}(t_{1}\cdots t_{i+1})\) is open in it, and we can conclude by induction. With Theorem 4.2 we can compute the dimension of \(X_{P_{2}}(w_{2})=Y_{2}\), from which the last statement follows. _Remark 4.20_.: Observe that by the previous lemma, for \(i=a_{0}\) we obtain that \(Y_{\infty}\) is isomorphic to the closure of \(X_{B}(w_{\infty})\). Moreover, from the proof it follows that if the dimension is odd or the form is split, the variety \(Y_{\infty}\) is isomorphic to the union of generalized Deligne-Lusztig varieties \(\bigsqcup_{i=1}^{d-1}X_{P_{i}}(t_{1}\cdots t_{i-1})\), otherwise to the union \(\bigsqcup_{i=1}^{d-2}X_{P_{i}}(t_{1}\cdots t_{i-1})\). The different index appearing in these unions is due to the fact that if the form is non-split there are no isotropic subspaces of dimension \(d\). _Remark 4.21_.: Observe that each closed stratum \(R_{i}\subset R_{V}\) is irreducible as it is the closure of the generalized Deligne-Lusztig variety \(X_{P_{a}}(w_{a})\), which is irreducible by Theorem 4.2. It follows that the stratification of \(R_{V}\) we have just found does not have as good hereditary properties as that of \(S_{V}\). In other words, unlike the stratification of \(S_{V}\), see Lemma 4.13, the strata \(R_{i}\) of \(R_{V}\) cannot be interpreted as a variety of the form \(R_{V^{\prime}}\) for some smaller vector space \(V^{\prime}\). Moreover, given a line \(l\) in \(R_{V}\) denote by \(T_{l}\) the minimal \(\Phi\)-stable subspace of \(V\) containing \(l\). Then \(l\) belongs to the stratum \(Y_{a}\) of \(R_{V}\) if the maximal length of an isotropic chain in \(T_{l}\) is at least \(a\), which however carries little information on \(T_{l}\) or its dimension. One can only say that the set of lines \(l\in Y_{a}\) such that \(T_{l}=V\) defines an open and therefore dense subscheme in \(Y_{a}\), as also remarked in [10, Lem. 6]. We study one last family of Deligne-Lusztig varieties for the orthogonal group, which will be relevant for the analysis of the non-split case in the next sections. Roughly speaking, these new varieties are a _dual version_ of the varieties \(Y_{a}\), as instead of isotropic lines, we consider isotropic subspaces of dimension \(d-1\). For all \(0\leq i\leq d-1\) we consider * \(\mathtt{P}_{i}\) the parabolic subgroup of \(G=\operatorname{SO}(V)\) corresponding to the subset of simple reflections \(\mathtt{I}_{i}=\{t_{1},\ldots,t_{d-2-i}\}\). In other words, \(\mathtt{P}_{i}\) is the stabilizer of the standard isotropic flag of length \(i+1\): \(\langle e_{1},\ldots,e_{d-1-i}\rangle\subset\cdots\subset\langle e_{1}, \ldots,e_{d-2}\rangle\subset\langle e_{1},\ldots,e_{d-1}\rangle\). In particular \(\mathtt{P}_{d-2}=\mathtt{P}_{d-1}=B\). * \(u_{i}=t_{d-1}\cdots t_{d-i}\) with the convention that \(u_{0}=1\) and \(u_{1}=t_{d-1}\). In particular, if the form is non-split \(u_{d-1}=t_{d-1}\cdots t_{1}\) is a \(\Phi\)-Coxeter element, and it is the inverse of \(w_{a_{0}}\), the \(\Phi\)-Coxeter element of Lemma 4.15. 
Consider the subvariety \(Q_{V}\) of the partial flag variety \(\operatorname{SO}(V)/\mathtt{P}_{0}\) parametrizing the isotropic subspaces \(U\) of \(V\) of dimension \(d-1\) such that \(U+\Phi(U)\) is isotropic \[Q_{V}=\{U\subset V\mid\dim(U)=d-1,U+\Phi(U)\subset U^{\perp}\cap\Phi(U)^{ \perp}\}.\] We can give an analogous stratification for \(Q_{V}\) as we did for \(R_{V}\) above or \(S_{V}\) in the previous section. **Lemma 4.22**.: _The variety \(Q_{V}\) is the closure in \(\operatorname{SO}(V)/\mathtt{P}_{0}\) of the generalized Deligne-Lusztig variety \(X_{\mathtt{P}_{0}}(t_{d-1})\). There is a stratification_ \[Q_{V}=\bigsqcup_{i=0}^{d-1}Z_{i}\] _where each stratum \(Z_{i}\) parametrizes \((d-1)\)-dimensional isotropic subspaces \(U\) of \(V\) such that \(U+\Phi(U)\) is isotropic and \(i\) is the smallest index such that \(U\cap\Phi(U)\cap\cdots\cap\Phi^{i}(U)\) is \(\Phi\)-stable._ _Moreover, each subvariety \(Z_{i}\) can be identified with the (generalized) Deligne-Lusztig variety \(X_{\mathtt{P}_{i}}(u_{i})\). In particular, \(Z_{d-1}\cong X_{B}(u_{d-1})\) or in the non-split case \(Z_{d-1}\cong X_{B}(u_{d-1})\cup X_{B}(\Phi(u_{d-1}))\), and it is open and dense in \(Q_{V}\), from which it follows that \(Q_{V}\) is pure of dimension \(d-1\). In particular, in the non-split case \(Q_{V}\) has exactly two irreducible components._ Proof.: The strategy of the proof is the same as in the proof of Lemma 4.10 and 4.19. First, we observe that if \(U\) is a \((d-1)\)-dimensional isotropic subspace of \(V\) such that \(U+\varPhi(U)\) is again isotropic, then either \(U\) is \(\varPhi\)-stable (observe that this is possible also when the form is non-split), or \(U+\varPhi(U)\) has dimension \(d\). In other words the relative position \(\operatorname{inv}(U,\varPhi(U))\) is either the identity or the class of \(t_{d-1}\) in \(W_{\mathbb{I}_{0}}\backslash W/W_{\mathbb{I}_{0}}\). Hence, we have \[Q_{V}=X_{\mathbb{P}_{0}}(1)\sqcup X_{\mathbb{P}_{0}}(t_{d-1}),\] and by Lemma 4.6\(X_{\mathbb{P}_{0}}(t_{d-1})\) is open and dense in \(Q_{V}\). It is clear that \(Z_{0}=X_{\mathbb{P}_{0}}(1)\). Consider an isotropic subspace \(U\) that is a closed point of \(X_{\mathbb{P}_{0}}(t_{d-1})\). Since \(U+\varPhi(U)\) has dimension \(d\), the intersection \(U\cap\varPhi(U)\) is an isotropic subspace of \(U\) of dimension \(d-2\). Again we obtain a map on closed points \(X_{\mathbb{P}_{0}}(t_{d-1})\to G/\mathbb{P}_{1}\) by sending \(U\) to the partial isotropic flag \(U\cap\varPhi(U)\subset U\). This flag defines a closed point of \(X_{\mathbb{P}_{1}}(t_{d-1})\) if \(U\cap\varPhi(U)\) is \(\varPhi\)-stable, otherwise a point of \(X_{\mathbb{P}_{1}}(t_{d-1}t_{d-2})\). Again, this is the inverse on closed points of the map given by the base change to \(X_{\mathbb{P}_{0}}(t_{d-1})\) of the quotient map \(G/\mathbb{P}_{1}\to G/\mathbb{P}_{0}\). Therefore, by Lemma 4.7 there is an isomorphism \[X_{\mathbb{P}_{0}}(t_{d-1})\cong X_{\mathbb{P}_{1}}(t_{d-1})\sqcup X_{\mathbb{ P}_{1}}(t_{d-1}t_{d-2}).\] In particular, we can check on closed points that the image of \(X_{\mathbb{P}_{1}}(t_{d-1})\) under this isomorphism is \(Z_{1}\). We know by Lemma 4.6 that the subvariety \(X_{\mathbb{P}_{1}}(t_{d-1}t_{d-2})\) is open and dense in \(X_{\mathbb{P}_{0}}(t_{d-1})\) and therefore in \(Q_{V}\). One can then use induction as in the proof of Lemma 4.19. 
Observe that by Theorem 4.2 the Deligne-Lusztig variety \(X_{B}(t_{d-1}\dots t_{1})\) is irreducible if and only if the action of the Frobenius map \(\varPhi\) on the Weyl group is non-trivial, that is only if the dimension of \(V\) is even and the form is non-split. Otherwise, \(u_{d-1}\) is contained in the non-trivial \(\varPhi\)-stable subgroup of \(W\) generated by \(t_{1},\dots,t_{d-1}\). It follows that if the form is non-split \(Z_{d-1}\), and consequently \(Q_{V}\), has two irreducible components. _Remark 4.23_.: A key observation which we will need in the proof of Theorem 1.2 is the existence of a morphism from \(Z_{d-1}\) into a flag variety. Assume the form is non-split and consider the isomorphism given in the previous proposition \(Z_{d-1}\cong X_{B}(u_{d-1})\cup X_{B}(\varPhi(u_{d-1}))\). Then there is an immersion \(X_{B}(u_{d-1})\cup X_{B}(\varPhi(u_{d-1}))\to G/B\), where \(G\) is the orthogonal group. As we have recalled above, \(G=\operatorname{SO}(V)\) acts on \(V\) and \(B\) is the stabilizer of the standard isotropic flag. It follows that \(G/B\) is locally the orbit of the standard isotropic flag under the action of \(G\). As in the construction of the Grassmannian, compare [10, Sec. 8.4], one sees that \(G/B\) is actually isomorphic to the orbit space of the isotropic flag. Let \(\mathcal{F}l(V)\) be then the projective variety parametrizing flags of subspaces of \(V\) of the form \(U_{1}\subset U_{2}\subset\dotsb U_{d-1}\) where the dimension of each subspace \(U_{i}\) is \(i\). By sending an isotropic flag to itself as a point of \(\mathcal{F}l(V)\) we obtain an immersion \(G/B\to\mathcal{F}l\). By precomposing with the immersion \(X_{B}(u_{d-1})\cup X_{B}(\phi(u_{d-1}))\to G/B\) we obtain the desired morphism. ## 5. Pointwise decomposition of \(\bar{\mathcal{N}}^{0}_{2,6}\) In this section we study the \(k\)-valued points of \(\bar{\mathcal{N}}^{0}_{2,6}\) for any algebraically closed field \(k\) containing \(\mathbb{F}\). This serves as preparation for the description of the irreducible components of the reduced scheme underlying \(\bar{\mathcal{N}}^{0}_{2,6}\). From now on, we fix \(n=6\) and \(s=2\) and drop the subscript from the notation \(\bar{\mathcal{N}}^{0}_{2,6}\). We extend the Hermitian form \(h\) on \(C\) to a sesquilinear form on \(C\otimes_{\mathbb{Q}_{p}}W(k)_{\mathbb{Q}}\) by setting \(h(v\otimes a,w\otimes b)=a\sigma(b)h(v,w)\). Similarly, using the relation (3.3) between the Hermitian and alternating form on \(C\), we can extend the alternating form on \(C\) to an alternating form on \(C\otimes_{\mathbb{Q}_{p}}W(k)_{\mathbb{Q}}\), which we denote again with angled brackets. By the same arguments as in (3.6) we have a bijection between the \(k\)-valued points of \(\bar{\mathcal{N}}^{0}\) and the set of \(\mathcal{O}_{E}\otimes_{\mathbb{Z}_{p}}W(k)\)-lattices \[\mathcal{V}(k)=\{M\subset C\otimes_{\mathbb{Q}_{p}}W(k)_{\mathbb{Q}}\mid M^{ \vee}=M,\pi\tau(M)+\pi M\subset M\cap\tau(M),M\subset^{\leq 2}(M+\tau(M))\}. \tag{5.1}\] Observe that we have reformulated the condition \(\pi\tau(M)\subset M\subset\pi^{-1}\tau(M)\) of (3.6) in an equivalent way, which will be useful in the sequel. ### The set \(\mathcal{V}_{\varLambda}(k)\) for a vertex lattice \(\varLambda\) Let \(\varLambda\) be a vertex lattice of type \(2m\) in \(C\), recall that \(m\leq 3\) if the form is split, otherwise \(m\leq 2\). The strategy is the same as [14, Sec. 6], with a few modifications due to the different signature, _i.e._ to the different index in (5.1). 
For an algebraically closed field \(k\) containing \(\mathbb{F}\) we denote by \(\varLambda_{k}\) the \(\mathcal{O}_{E}\otimes_{\mathbb{Z}_{p}}W(k)\)-lattice \(\varLambda\otimes_{\mathbb{Z}_{p}}W(k)\). Since \(\varLambda\) is a vertex lattice, if a self-dual lattice \(M\) is contained in \(\varLambda_{k}\), then \(\pi\varLambda_{k}\subset\varLambda_{k}^{\vee}\subset M\subset\varLambda_{k}\). Moreover, by \(\tau\)-stability of \(\varLambda_{k}\), and consequently of \(\pi\varLambda_{k}\), we have that \[\pi M+\pi\tau(M)\subset\pi\varLambda_{k}\subset M\cap\tau(M).\] Therefore, if \(M\subset\varLambda_{k}\), the inclusion in the middle of definition (5.1) of \(\mathcal{V}(k)\) is always satisfied and we can omit it, compare also [14, Cor. 6.3]. It follows that for a vertex lattice \(\varLambda\) \[\mathcal{V}_{\varLambda}(k)=\{M\in\mathcal{V}(k)\mid M\subset\varLambda_{k} \}=\{M\subset\varLambda_{k}\mid M=M^{\vee},M\subset^{\leq 2}(M+\tau(M))\}.\] As in _loc.cit._ we consider the \(2m\)-dimensional \(\mathbb{F}_{p}\)-vector space \(V=\varLambda/\varLambda^{\natural}=\varLambda/\varLambda^{\natural}\) and the corresponding \(k\)-vector space \(V_{k}=V\otimes_{\mathbb{F}_{p}}k=\varLambda_{k}/\varLambda_{k}^{\vee}\). One can define an alternating form on \(V\) as follows. For \(x,y\in V\) with lifts \(x^{\prime},y^{\prime}\) in \(\varLambda\), we let \(\langle x,y\rangle_{V}\) be the image of \(p\langle x^{\prime},y^{\prime}\rangle\) in \(\mathbb{F}_{p}\). This form can then be extended \(k\)-linearly to \(V_{k}\). Since \(\varLambda^{\vee}\) is the dual of \(\varLambda\) with respect to the alternating form, the form just defined on \(V_{k}\) is a well-defined, non-degenerate and alternating bilinear form, see [14, Lem. 6.4] for a detailed proof. Moreover, as remarked in _loc.cit._, by the isomorphism \(\hat{C}\otimes_{\mathbb{Q}_{p}}W(\mathbb{F})_{\mathbb{Q}}\cong N\) given in Section 3, the map \(\tau\) on \(\varLambda\) induces the identity on \(V\) and the Frobenius on \(V_{k}\). The following result is proved in the same way as [14, Lem. 6.5]. For completeness, we recall here the main ideas of the proof. **Lemma 5.2**.: _The map \(M\mapsto M/\varLambda_{k}^{\vee}\) induces a bijection between \(\mathcal{V}_{\varLambda}(k)\) and the set of \(k\)-valued points of the generalized Deligne-Lusztig variety \(S_{V}\) defined in Section 4.2._ Proof.: The fact that \(M\) is self-dual is equivalent to its image \(U\) under the quotient map being a Lagrangian subspace of the sympletic space \(V_{k}\). Similarly, \(M\) having index at most \(2\) in \(M+\tau(M)\) is equivalent to its image \(U\) satisfying \(U\cap\Phi(U)\subset^{\leq 2}U\), from which it follows that \(U\) is a point of \(S_{V}\). Conversely, consider a Lagrangian subspace \(U\) in \(S_{V}\). Its preimage under the quotient map \(\varLambda_{k}\to V_{k}\) is a self-dual lattice \(M\) contained in \(\varLambda_{k}\), such that \(M\subset^{\leq 2}M+\tau(M)\). ### The set \(\mathcal{V}_{\varLambda}(k)\) for a \(2\)-modular lattice \(\varLambda\) Fix a \(2\)-modular lattice \(\varLambda\) in \(C\), that is an \(\mathcal{O}_{E}\)-lattice satisfying \(\pi^{2}\varLambda=\varLambda^{\vee}\subset\varLambda\). Recall that in this case \(\pi\varLambda\) is self-dual. 
As in the previous case, for an algebraically closed field \(k\) containing \(\mathbb{F}\) we consider the lattice \(\varLambda_{k}=\varLambda\otimes_{\mathbb{Z}_{p}}W(k)\) in \(C\otimes_{\mathbb{Q}_{p}}W(k)_{\mathbb{Q}}\) and the set of \(\mathcal{O}_{E}\otimes_{\mathbb{Z}_{p}}W(k)\)-lattices \[\mathcal{V}_{\varLambda}(k)=\{M\subset\varLambda_{k}\mid M=M^{\vee},\pi M+ \pi\tau(M)\subset M\cap\tau(M),M\subset^{\leq 2}(M+\tau(M))\}.\] Observe that for \(M\in\mathcal{V}_{\varLambda}(k)\), if \(\pi\varLambda_{k}\subset M\) the two lattices coincide by self-duality. Therefore, in general \(\pi\varLambda_{k}\not\subset M\). It follows that, unlike in the previous case, the inclusion \(\pi M+\pi\tau(M)\subset M\cap\tau(M)\) in the definition of \(\mathcal{V}_{\varLambda}(k)\) above does not follow from \(M\subset\varLambda_{k}\), and is therefore not redundant. As a first consequence, we are going to see that in the analogue of Lemma 5.2 for \(2\)-modular lattices we loose surjectiveness. As above, we consider the \(\mathbb{F}_{p}\)-vector space \(V=\varLambda/\varLambda^{\vee}\) and its base change \(V_{k}\). Observe that since \(\varLambda\) is \(2\)-modular \(V\) has dimension \(2n=12\). Again, the alternating form on \(\varLambda\) induces an alternating form on \(V\) that can be extended \(k\)-linearly to \(V_{k}\). **Lemma 5.3**.: _For a \(2\)-modular lattice \(\varLambda\), the map \(\mathcal{V}_{\varLambda}(k)\to S_{V}(k)\) sending \(M\) to \(M/\varLambda_{k}^{\vee}\) is injective but not surjective._ Proof.: The first claim is proved as in Lemma 5.2. If \(M\in\mathcal{V}_{\varLambda}(k)\), we have \(\varLambda_{k}^{\vee}\subset M^{\vee}=M\subset\varLambda_{k}\), therefore, the map is clearly injective. By definition of the form on \(V_{k}\), if \(M\) is a self-dual lattice, then its image is a Lagrangian subspace of \(V_{k}\). Similarly, since, as we have remarked, the map \(\tau\) induces the Frobenius \(\varPhi\) on \(V_{k}\), the index of \(M\) in \(M+\tau(M)\) is equal to the codimension of its image \(U\) in \(U+\varPhi(U)\). Therefore, \(M\) is sent to a point of \(S_{V}(k)\). Observe that the action of \(\pi\) on \(\varLambda\) induces a linear map \(\bar{\pi}:V_{k}\to V_{k}\) of rank \(6\). Indeed, since \(\varLambda^{\vee}=\pi^{2}\varLambda\), the image of the map \(\bar{\pi}\) is the six-dimensional subspace \(\overline{\pi\varLambda}_{k}=\pi\varLambda_{k}/\pi^{2}\varLambda_{k}\subset V _{k}\). Moreover, \(\overline{\pi\varLambda}_{k}\) is also the kernel of \(\bar{\pi}\). As we have already observed, \(\pi\varLambda_{k}\) is a self-dual, \(\tau\)-stable lattice, hence \(\overline{\pi\varLambda}_{k}\) is a \(\varPhi\)-stable Lagrangian subspace of \(V_{k}\). Consider now \(\overline{\mathcal{L}}\), a \(\varPhi\)-stable Lagrangian complement of \(\overline{\pi\varLambda}_{k}\) in \(V_{k}\). For example, one can take the base change to \(k\) of any Lagrangian complement of the image of \(\pi\varLambda\) in \(V\). Clearly, \(\overline{\mathcal{L}}\) belongs to \(S_{V}(k)\). Since \(\overline{\mathcal{L}}\cap\overline{\pi\varLambda}_{k}=0\), when we lift it to a \(W(k)\)-lattice \(\mathcal{L}\subset\varLambda_{k}\), we have that \(\mathcal{L}\cap\pi\varLambda_{k}=\pi^{2}\varLambda_{k}\). Moreover, since \(\overline{\pi\varLambda}_{k}\) is both the kernel and image of \(\bar{\pi}\), we have that \(\bar{\pi}(\overline{\mathcal{L}})=\overline{\pi\varLambda}_{k}\). 
It follows that \(\pi\mathcal{L}=\pi\varLambda_{k}\), which is not contained in \(\mathcal{L}\), so \(\mathcal{L}\) is not an \(\mathcal{O}_{E}\otimes_{\mathbb{Z}_{p}}W(k)\)-lattice, hence it does not belong to \(\mathcal{V}_{\varLambda}(k)\). Our goal is now to find a description in terms of Deligne-Lusztig varieties of the image of the map \(\mathcal{V}_{\varLambda}(k)\to S_{V}(k)\) above. Recall that the vector space \(C\) carries also a symmetric form, which is related to the alternating form by the formula \((x,y)=\langle\pi x,y\rangle\). As we have seen in Section 3, the duals of an \(\mathcal{O}_{E}\otimes W(k)\)-lattice \(M\) with respect to the two forms satisfy \(M^{\perp}=\pi^{-1}M^{\vee}\). In particular, if \(M\) is self-dual with respect to the alternating form, we have that \(M^{\perp}=\pi^{-1}M\). Hence, any lattice \(M\in\mathcal{V}_{\varLambda}(k)\) is contained in its dual with respect to the symmetric form. Similarly, observe that the condition \(\pi M+\pi\tau(M)\subset M\cap\tau(M)\) is equivalent to \[M+\tau(M)\subset\pi^{-1}(M\cap\tau(M))=\pi^{-1}(M+\tau(M))^{\vee}=(M+\tau(M)) ^{\perp},\] and we can reformulate the definition of \(\mathcal{V}_{\varLambda}(k)\) as \[\mathcal{V}_{\varLambda}(k)=\{M\subset\varLambda_{k}\mid M=M^{\vee},M+\tau(M )\subset(M+\tau(M))^{\perp},M\subset^{\leq 2}(M+\tau(M))\}. \tag{5.4}\] This reformulation turns out to be particularly useful for describing the image of the map of Lemma 5.3. Consider the six-dimensional \(\mathbb{F}_{p}\)-vector space \(W=\varLambda/\pi\varLambda\) and its base change \(W_{k}=W\otimes_{\mathbb{F}_{p}}k=\varLambda_{k}/\pi\varLambda_{k}\). We endow \(W\) with a symmetric bilinear form by setting \((x,y)\) as the image in \(\mathbb{F}_{p}\) of \(p(x^{\prime},y^{\prime})\) for two lifts \(x^{\prime},y^{\prime}\) in \(\varLambda\). We also extend this form \(k\)-linearly to \(W_{k}\). **Lemma 5.5**.: _The bilinear form on \(W_{k}\) defined above is well-defined, symmetric and non-degenerate._ Proof.: First, observe that for two elements \(x,y\in\varLambda\) the value of the bilinear form \(p(x,y)=p\langle\pi x,y\rangle=\langle\pi x,y^{-1}\pi^{2}y\rangle\) is in \(\mathbb{Z}_{p}\), since \(\eta^{-1}\pi^{2}y\in\pi^{2}\varLambda=\varLambda^{\vee}\), hence it makes sense to consider its image in \(\mathbb{F}_{p}\). Since \(\varLambda\) is a \(2\)-modular lattice we have \(\varLambda^{\perp}=\pi^{-1}\varLambda^{\vee}=\pi^{-1}(\pi^{2}\varLambda)=\pi\varLambda\). Hence, if \(x^{\prime}\in\pi\varLambda\), we have \((x^{\prime},y^{\prime})\in\mathbb{Z}_{p}\) for every \(y^{\prime}\in\varLambda\), and therefore the image of \(p(x^{\prime},y^{\prime})\) in \(\mathbb{F}_{p}\) is \(0\). This proves that the form is well-defined on the quotient \(W=\varLambda/\pi\varLambda\) and therefore on \(W_{k}\). It is also clear that it is symmetric. Assume there is an element \(x^{\prime}\in\varLambda\) such that for all \(y^{\prime}\in\varLambda\) the image of \(p(x^{\prime},y^{\prime})\) is zero in \(\mathbb{F}_{p}\). This means that \((x^{\prime},y^{\prime})\in\mathbb{Z}_{p}\) for all \(y^{\prime}\in\varLambda\), and therefore, \(x^{\prime}\in\varLambda^{\perp}=\pi\varLambda\). This proves that the form on \(W\), and consequently on \(W_{k}\) is non-degenerate. As we have already observed, the image of \(\pi\varLambda_{k}\) in \(V_{k}\) is a \(\varPhi\)-stable Lagrangian. Therefore, the quotient map \(V_{k}\to V_{k}/\pi\varLambda=W_{k}\) commutes with the Frobenius on \(V_{k}\) and \(W_{k}\). 
It follows that \(\tau\) induces again the identity on \(W\) and the Frobenius \(\varPhi\) on \(W_{k}\). Since \(W_{k}\) is a six-dimensional \(k\)-vector space endowed with a symmetric form, it is a natural question to ask whether it is split, _i.e._ whether there is a \(\varPhi\)-stable maximal isotropic subspace. **Lemma 5.6**.: _The symmetric form on \(W_{k}\) is split if and only if the Hermitian form \(h\) on \(C\) is split._ Proof.: In [14, Lemma 3.3] it is proved that the Hermitian form on the \(n\)-dimensional space \(C\) is split if and only if \(C\) contains a vertex lattice of type \(n\), that is, if and only if there is an \(\mathcal{O}_{E}\)-lattice \(\mathcal{L}\subset C\) such that \(\mathcal{L}^{\vee}=\pi\mathcal{L}\) or equivalently, such that \(\mathcal{L}^{\perp}=\mathcal{L}\). Since \(\pi\Lambda\) is self-dual, it is itself a vertex lattice of type \(0\). By the correspondence of [14, Prop. 3.4] between vertex lattices and the Bruhat-Tits simplicial complex of \(\mathrm{SU}(\mathrm{C})(\mathbb{Q}_{\mathrm{p}})\), if the form is split there exists a vertex lattice \(\mathcal{L}\) of maximal type containing \(\pi\Lambda\). Therefore, the Hermitian form \(h\) on \(C\) is split if and only if there is a vertex lattice of type \(n=6\) containing \(\pi\Lambda\). If such a vertex lattice \(\mathcal{L}\) exists, then from the fact that \(\mathcal{L}=\mathcal{L}^{\perp}\) and the definition of the orthogonal form on \(W_{k}\) it follows that the image of \(\mathcal{L}_{k}\) in \(W_{k}\) is a \(\tau\)-stable, isotropic subspace. Moreover, from the inclusions \(\pi\mathcal{L}=\mathcal{L}^{\vee}\subset\pi\Lambda\subset\mathcal{L}\) it follows that \(\pi\Lambda\) has index \(n/2=3\) in \(\mathcal{L}\). Therefore, the \(\Phi\)-stable isotropic subspace given by the image of \(\mathcal{L}_{k}\) in \(W_{k}\) has maximal dimension \(3\), and hence the form on \(W_{k}\) is split. On the other hand, if there is a \(\Phi\)-stable maximal isotropic subspace \(L\) in \(W_{k}\), we can lift it to a \(\tau\)-stable \(\mathcal{O}_{E}\otimes W(k)\)-lattice \(\pi\Lambda_{k}\subset^{3}\mathcal{L}\subset\Lambda_{k}\). Moreover, since \(L=L^{\perp}\), by the same argument as in the proof of Lemma 5.7 below we have that \(\mathcal{L}^{\perp}=\mathcal{L}\). By Lemma 3.18, since \(\mathcal{L}=\tau(\mathcal{L})\), it has a \(\tau\)-stable basis. Hence, we can consider the set of its \(\tau\)-fixed points \(\mathcal{L}^{\tau}\) and obtain a vertex lattice of type \(n=6\) in \(C\), from which it follows that the Hermitian form on \(C\) is split. Our goal now is to describe the points in \(\mathcal{V}_{\Lambda}(k)\) in terms of points of a Deligne-Lusztig variety for the orthogonal group \(\mathrm{SO}(W_{k})\). The first step in this direction is the following observation. **Lemma 5.7**.: _The map \(M\mapsto(M+\pi\Lambda_{k})/\pi\Lambda_{k}\) induces a map from \(\mathcal{V}_{\Lambda}(k)\) to the set_ \[\{U\subset W_{k}\mid U+\Phi(U)\subset(U+\Phi(U))^{\perp},U\subset^{\leq 2}U+ \Phi(U)\}.\] Proof.: First observe that the quotient map \(q:\Lambda_{k}\to W_{k}\) is compatible with taking the dual (respectively the orthogonal) with respect to the symmetric forms on both sides. 
Indeed, if \(M\subset\Lambda_{k}\) is a lattice in \(\mathcal{V}_{\Lambda}(k)\) with image \(U\subset W_{k}\), then by definition of the form on \(W_{k}\), the orthogonal space of \(U\) satisfies \[U^{\perp}=\{x\in\Lambda_{k}\mid p(x,y)\in pW(k),\text{for all }y\in q^{-1}(U)\}/ \pi\Lambda_{k}.\] This means that \(U^{\perp}\) is the image in \(W_{k}\) of the lattice \((M+\pi\Lambda_{k})^{\perp}=M^{\perp}\cap\Lambda_{k}\) with respect to the symmetric form on \(C\otimes W(k)_{\mathbb{Q}}\). It follows \[(U+\Phi(U))^{\perp} =U^{\perp}\cap\Phi(U)^{\perp}=q(M^{\perp}\cap\Lambda_{k})\cap q( \tau(M)^{\perp}\cap\Lambda_{k})\] \[=q(\pi^{-1}M\cap\Lambda_{k})\cap q(\pi^{-1}\tau(M)\cap\Lambda_{k})\] \[\supset q(\pi^{-1}M\cap\pi^{-1}\tau(M)\cap\Lambda_{k})\] \[\supset q(M+\tau(M)+\pi\Lambda_{k})=U+\Phi(U),\] where the second inclusion follows from (5.1). Observe that the set appearing in Lemma 5.7 above as the image of the quotient map resembles now the description of the \(k\)-valued points of some Deligne-Lusztig variety for the orthogonal group. What is still missing is the information on the dimension of the image \(U\) of \(M\) in \(W_{k}\). For example, if we restrict to \(\dim(U)=1\) we obtain the points of the generalized Deligne-Lusztig variety \(R_{W}\) introduced in 4.3, while for \(\dim(U)=2\) we recover the points of the variety \(Q_{W}\) of 4.3. We let \(\mathcal{V}_{\Lambda}^{(i)}(k)\) denote the subset of lattices \(M\in\mathcal{V}_{\Lambda}(k)\) such that \(\pi\Lambda_{k}\subset^{i}M+\pi\Lambda_{k}\). **Lemma 5.8**.: _The restriction of the map \(M\mapsto(M+\pi\Lambda_{k})/\pi\Lambda_{k}\) induces a surjective map_ \[\mathcal{V}_{\Lambda}^{(1)}(k)=\{M\in\mathcal{V}_{\Lambda}(k)\mid\pi\Lambda_{ k}\subset^{1}M+\pi\Lambda_{k}\}\longrightarrow R_{W}(k)\] _onto the \(k\)-valued points of the generalized Deligne-Lusztig variety \(R_{W}\) of Section 4.3 with fibers equal to \(\mathbb{A}^{1}(k)\)._ Proof.: By Lemma 5.7 above, if \(M\in\mathcal{V}_{A}^{(1)}(k)\) is mapped to a line \(l\) in \(W_{k}\), then \(l\) and \(l+\Phi(l)\) are both isotropic and therefore \(l\) is a point in the variety \(R_{W}(k)\) defined in the previous section. Observe that the map \(M\mapsto M+\pi\Lambda_{k}/\pi\Lambda_{k}\) factors through the map \(\mathcal{V}_{A}(k)\to S_{V}(k),M\mapsto M/\pi^{2}\Lambda_{k}\). As we have seen in Lemma 5.3 this latter map is injective but not surjective. In particular, its image is the proper subset \(S_{V\pi}\) of \(S_{V}(k)\) consisting of Lagrangian subspaces \(U\subset V_{k}\) such that \(\overline{\pi}(U)+\overline{\pi}(\Phi(U))\subset U\cap\Phi(U)\), where \(\overline{\pi}\) denotes again the rank-6 linear map on \(V_{k}\) induced by the action of \(\pi\) in \(\Lambda\). It is then enough to prove the statement for the map \(S_{V\pi}\to R_{W}\) induced by the quotient map \(q:V_{k}\to W_{k}\). Fix a Lagrangian complement \(\mathcal{L}\) of \(\overline{\pi\Lambda}\) in \(V\), that is a Lagrangian subspace of \(V\) such that \(V=\mathcal{L}\oplus\overline{\pi\Lambda}\). Then we can identify \(W\) with \(\mathcal{L}\) and a line \(l\in R_{W}(k)\) with a line \(l\) in \(\mathcal{L}_{k}\). Via the isomorphism \(W_{k}\cong\mathcal{L}_{k}\) we can define a symmetric form on \(\mathcal{L}_{k}\). By definition it satisfies \((v_{1},v_{2})=\langle v_{1},\overline{\pi}(v_{2})\rangle=-\langle\overline{ \pi}(v_{1}),v_{2}\rangle\) for all \(v_{1},v_{2}\in\mathcal{L}_{k}\). 
Recall that the restriction of \(\overline{\pi}\) induces a linear isomorphism between \(\mathcal{L}_{k}\) and \(\overline{\pi\Lambda}_{k}\). Consider a line \(l\in R_{W}\) and its preimage \(N=q^{-1}(l)=l\oplus\overline{\pi\Lambda}_{k}\subset V_{k}\) with orthogonal \(N^{\vee}\) with respect to the alternating form. Observe that since \(\overline{\pi\Lambda}_{k}\) is Lagrangian we have \(N^{\vee}=l^{\vee}\cap\overline{\pi\Lambda}_{k}\subset^{1}\overline{\pi\Lambda}_{k}\subset^{1}N=l\oplus\overline{\pi\Lambda}_{k}\). Let \(L\neq\overline{\pi\Lambda}_{k}\) be a six-dimensional subspace of \(V_{k}\) such that \[N^{\vee}\subset^{1}L\subset^{1}N. \tag{5.9}\] Clearly \(L+\overline{\pi\Lambda}_{k}=N\) is mapped by \(q\) to \(l\). We show that \(L\) is in \(S_{V\pi}\). By definition of \(R_{W}\) we have that \(l+\Phi(l)\) is isotropic with respect to the symmetric form. In other words we have that \(\langle l,\overline{\pi}(l)\rangle=\langle l,\overline{\pi}(\Phi(l))\rangle=\langle\Phi(l),\overline{\pi}(\Phi(l))\rangle=0\). This means that \(\overline{\pi}(l)+\overline{\pi}(\Phi(l))\subset l^{\vee}\cap\overline{\pi\Lambda}_{k}=N^{\vee}\subset L\). Similarly, \(\overline{\pi}(l)+\overline{\pi}(\Phi(l))\subset\Phi(l)^{\vee}\cap\overline{\pi\Lambda}_{k}=\Phi(N)^{\vee}=\Phi(N^{\vee})\subset\Phi(L)\). Here the Frobenius commutes with taking the orthogonal, that is we have the equality \(\Phi(N)^{\vee}=\Phi(N^{\vee})\), because \(k\) is algebraically closed, hence \(\Phi\) preserves dimensions, and clearly \(\Phi(N^{\vee})\subset\Phi(N)^{\vee}\). We can conclude that \(\overline{\pi}(L)+\overline{\pi}(\Phi(L))=\overline{\pi}(l)+\overline{\pi}(\Phi(l))\subset N^{\vee}\cap\Phi(N^{\vee})\subset L\cap\Phi(L)\). It remains then to prove that \(L\in S_{V}\). Complete a basis of \(N^{\vee}\subset^{1}L\) to a basis of \(L\), in other words find an element \(x\in L\) such that \(L=\langle x\rangle\oplus N^{\vee}\). We already know that \(N^{\vee}\) is contained in its orthogonal \(N\) with respect to the alternating form. Since \(x\in L\subset N\) then \(\langle x,N^{\vee}\rangle=0\), hence \(L\) is isotropic and has dimension 6, from which it follows that it is Lagrangian. Consider \(L+\Phi(L)=\langle x,\Phi(x)\rangle+N^{\vee}+\Phi(N)^{\vee}\). Since \(N^{\vee}\subset^{1}\overline{\pi\Lambda}_{k}\) and \(\overline{\pi\Lambda}_{k}\) is \(\Phi\)-stable, we have that \(L+\Phi(L)\subset\langle x,\Phi(x)\rangle+\overline{\pi\Lambda}_{k}\) which has dimension at most eight, from which we can conclude that \(L\in S_{V\pi}\). We have proved that every subspace \(L\neq\overline{\pi\Lambda}_{k}\) such that \(N^{\vee}\subset^{1}L\subset^{1}N=l\oplus\overline{\pi\Lambda}_{k}\) is a preimage of \(l\) in \(S_{V\pi}\). It follows that the map \(\mathcal{V}_{\Lambda}^{(1)}(k)\to R_{W}(k)\) is surjective, and its fibers are in bijection with the \(k\)-points of \(\mathbb{P}(N/N^{\vee})\backslash\{\overline{\pi\Lambda}_{k}\}\) which we can identify with \(\mathbb{A}^{1}(k)\). **Lemma 5.10**.: _If the Hermitian form on \(C\) is split, the subset of lattices \(M\in\mathcal{V}_{\Lambda}(k)\) whose associated lattice \(\Lambda(M)\) is not a vertex lattice is contained in \(\mathcal{V}_{\Lambda}^{(1)}(k)\). In particular, it is the preimage of \(R_{W}\setminus Y_{\infty}\) under the map of Lemma 5.8._ Proof.: Fix \(M\in\mathcal{V}_{\Lambda}(k)\) and let \(U\) be the image in \(W_{k}\) of \(M+\pi\Lambda_{k}\). We argue by cases on the possible dimension of \(U\). 
By the previous lemma we know that \(U\) and \(U+\Phi(U)\) are isotropic, hence have dimension at most 3. Therefore, if \(U\) has dimension 3, it is \(\Phi\)-stable. This is possible as the form is split. It follows that \(M+\pi\Lambda_{k}\) is \(\tau\)-stable and contains \(M\). Moreover, it satisfies \((M+\pi\Lambda_{k})^{\vee}=M\cap\pi\Lambda_{k}\supset\pi M+\pi^{2}\Lambda_{k}\), which means that its intersection with \(C\) is a vertex lattice of type 6. By minimality, it contains \(\Lambda(M)\), which is then a vertex lattice itself, by Remark 3.20. Suppose that \(U\) has dimension 2. If \(U\) is \(\Phi\)-stable, then arguing as in the previous case, \(\Lambda(M)\) is a vertex lattice. Suppose that \(U\) is not \(\Phi\)-stable. Then since \(U+\Phi(U)\) is isotropic and properly contains \(U\), it has dimension 3. Consider the inclusion \(U\cap\Phi(U)\subset^{1}U\). If the intersection \(U\cap\Phi(U)\) is \(\Phi\)-stable, then we can consider the 4-dimensional space given by the quotient \(W^{\prime}_{k}=(U\cap\varPhi(U))^{\perp}/U\cap\varPhi(U)\). The symmetric form on \(W_{k}\) induces a well-defined, non-degenerate symmetric form on this space, and \(\varPhi\) induces again the Frobenius. In particular, the symmetric form on the quotient space \(W^{\prime}_{k}\) is still split (consider for example the image of a maximal \(\varPhi\)-stable isotropic subspace of \(W_{k}\) containing \(U\cap\varPhi(U)\)). The image of \(U\) in the quotient \(W^{\prime}_{k}\) is then an isotropic line \(l\) such that \(l+\varPhi(l)\) is isotropic. Since for a split, \(4\)-dimensional symmetric space the parameter \(a_{0}\) defined in Lemma 4.15 is \(1\), it follows that \(l+\varPhi(l)\) is \(\varPhi\)-stable. Therefore, \(U+\varPhi(U)\) is an isotropic \(\varPhi\)-stable subspace of \(W_{k}\). Its preimage \(\mathcal{L}\) in \(\varLambda_{k}\) is then a \(\tau\)-stable lattice containing \(M\) and such that \(\mathcal{L}\subset\mathcal{L}^{\perp}=\pi^{-1}\mathcal{L}^{\vee}\), hence \(\mathcal{L}^{\tau}\) and consequently \(\varLambda(M)\) are vertex lattices. Suppose now that the image \(U\) of \(M\) in \(W_{k}\) has dimension \(2\) and \(U\cap\varPhi(U)\) is not \(\varPhi\)-stable. Since the latter is one-dimensional, there is a vector \(v\in U\) such that \(U\cap\varPhi(U)=\langle\varPhi(v)\rangle\). Since \(U\cap\varPhi(U)\) is not \(\varPhi\)-stable, we have that \(\varPhi(v)\) and \(\varPhi^{2}(v)\) are linearly independent. The same holds then for \(v\) and \(\varPhi(v)\), so we have that \(U=\langle v,\varPhi(v)\rangle\) and \(U+\varPhi(U)=\langle v,\varPhi(v),\varPhi^{2}(v)\rangle\). Since the latter is isotropic, we have that \(v\) is orthogonal to \(\varPhi(v)\) as well as to \(\varPhi^{2}(v)\). Again by Lemma 4.15 we know that \(a_{0}=2\) for a split six-dimensional symmetric space, so it follows that \(U+\varPhi(U)\) is \(\varPhi\)-stable and isotropic. We can deduce as above that \(\varLambda(M)\) is a vertex lattice. This proves that if \(\varLambda(M)\) is not a vertex lattice, the image of \(M\) in \(W_{k}\) has dimension one, hence \(M\in\mathcal{V}^{(1)}_{\varLambda}(k)\). Last, since the largest isotropic subspace of \(W_{k}\) has dimension \(3\), observe that if \(M\in\mathcal{V}^{(1)}_{\varLambda}(k)\) is sent to a line \(l\in Y_{\infty}\), it means that \(l+\varPhi(l)+\varPhi^{2}(l)\) is \(\varPhi\)-stable. 
Then we can argue as in the beginning of this proof to see that \(\langle l+\varPhi(l)+\varPhi^{2}(l)\rangle+\overline{\pi}\varLambda_{k}\) lifts to a vertex lattice containing \(M\). Conversely, if \(l\in R_{W}(k)\setminus Y_{\infty}\) then \(l+\varPhi(l)+\varPhi^{2}(l)\) is not isotropic. On the other hand, the image of a vertex lattice in \(W_{k}\) is isotropic, hence it cannot contain \(l+\varPhi(l)+\varPhi^{2}(l)\) and hence it cannot contain \(M\). In the non-split case, as we are going to see, there are lattices \(M\in\mathcal{V}^{(2)}_{\varLambda}(k)\) whose associated lattice \(\varLambda(M)\) is not a vertex lattice. This is essentially a consequence of the different possible values of the parameter \(a_{0}\) introduced in Lemma 4.15. **Lemma 5.11**.: _If the Hermitian form on \(C\) is non-split, for every \(2\)-modular lattice \(\varLambda\)_ \[\mathcal{V}_{\varLambda}(k)=\{\pi\varLambda_{k}\}\sqcup\mathcal{V}^{(1)}_{ \varLambda}(k)\sqcup\mathcal{V}^{(2)}_{\varLambda}(k),\] _and the restriction of the map \(M\mapsto(M+\pi\varLambda_{k})/\pi\varLambda_{k}\) induces a surjective map_ \[\mathcal{V}^{(2)}_{\varLambda}(k)\longrightarrow Q_{W},\] _to the generalized Deligne-Lusztig variety \(Q_{W}\) of Section 4.3._ Proof.: Fix \(M\in\mathcal{V}_{\varLambda}(k)\) and let \(U\) denote its image in \(W_{k}\). Again we argue by cases on the dimension of \(U\). By Lemma 5.7 we know that \(U\) and \(U+\varPhi(U)\) are isotropic subspaces of \(W_{k}\). This already excludes the case \(\dim(U)=3\) as this would imply \(U=\varPhi(U)\), a contradiction to the fact that the symmetric form on \(W_{k}\) is non-split. Suppose now that \(U\) has dimension \(2\). Recall that the \(k\)-valued points of the variety \(Q_{W}\) are isotropic subspaces \(U\) of \(W_{k}\) of dimension \(2\) and such that \(U+\varPhi(U)\) is isotropic, too. Then by Lemma 5.7 it is clear that \(\mathcal{V}^{(2)}_{\varLambda}(k)\) is mapped to a point in \(Q_{W}(k)\). We show that this map is surjective. Consider the subset \(S_{V\pi}\) of \(S_{V}(k)\) as in the proof of Lemma 5.8. Fix again a Lagrangian complement \(\mathcal{L}\) of \(\overline{\pi\varLambda}\) in \(V\), which we identify with \(W\). Let \(U\subset\mathcal{L}_{k}\) be a \(2\)-dimensional subspace in \(Q_{W}(k)\), we show how to construct a preimage of \(U\) in \(S_{V\pi}(k)\), which means a preimage in \(\mathcal{V}^{(2)}_{\varLambda}(k)\). Consider the subspace \(N=U\oplus\overline{\pi\varLambda}_{k}\) and its orthogonal \(N^{\vee}\) with respect to the alternating form. Let \(L\) be the six-dimensional subspace \(L=U\oplus N^{\vee}\), then clearly \(L\) is sent to \(U\) by the quotient map \(V_{k}\to W_{k}\). We prove that \(L\in S_{V\pi}\). Since \(U\) is contained in the Lagrangian subspace \(\mathcal{L}\), it is an isotropic subspace, that is \(U\subset U^{\vee}\). Moreover, \(U\subset N\), from which it follows that \(\langle U,N^{\vee}\rangle=0\), and we can conclude that \(L=U\oplus N^{\vee}\) is Lagrangian. We need to prove that \(L+\varPhi(L)\) has dimension at most eight. Observe that since \(U+\varPhi(U)\) has dimension at most \(3\), we have \(\dim(N+\varPhi(N))=\dim((U+\varPhi(U))\oplus\overline{\pi\Lambda}_{k})\leq 9\) from which it follows, by \(\dim(N)=8\), that \(\dim(N\cap\varPhi(N))\geq 7\). By taking duals and observing, as in the proof of Lemma 5.8, that \(\varPhi(N^{\vee})=\varPhi(N)^{\vee}\) we obtain \(\dim(N^{\vee}+\varPhi(N^{\vee}))=12-\dim(N\cap\varPhi(N))\leq 5\). 
Hence, we conclude that \(L+\varPhi(L)=(U+\varPhi(U))\oplus(N^{\vee}+\varPhi(N^{\vee}))\) has dimension at most \(3+5=8\), hence \(L\) belongs to \(S_{V}(k)\). The fact that \(L\) belongs to \(S_{V\pi}\), that is \(\overline{\pi}(L)+\overline{\pi}(\varPhi(L))\subset L\cap\varPhi(L)\) follows from the fact that \(U+\varPhi(U)\) is isotropic with respect to the symmetric form on \(\mathcal{L}_{k}\) and by the same argument as in the proof of Lemma 5.8. This proves that the map \(\mathcal{V}_{\Lambda}^{(2)}(k)\to Q_{W}(k)\) is surjective. **Lemma 5.12**.: _Recall the stratification \(Q_{W}=\bigsqcup_{i=0}^{2}Z_{i}\) of Lemma 4.22. The map of Lemma 5.11 above sends a lattice \(M\in\mathcal{V}_{\Lambda}^{(2)}(k)\) to_ 1. _a point in_ \(Z_{0}(k)\) _if and only if_ \(\Lambda(M)\) _is a vertex lattice, moreover, in this case there is another_ \(2\)_-modular lattice_ \(\Lambda^{\prime}\) _such that_ \(M\in\mathcal{V}_{\Lambda^{\prime}}^{(1)}(k)\)_,_ 2. _a point in_ \(Z_{1}(k)\) _if and only if_ \(\Lambda(M)\) _is not a vertex lattice and there is another_ \(2\)_-modular lattice_ \(\Lambda^{\prime}\) _such that_ \(M\in\mathcal{V}_{\Lambda^{\prime}}^{(1)}(k)\)_,_ 3. _a point in_ \(Z_{2}(k)\) _if and only if_ \(\Lambda(M)\) _is not a vertex lattice and for every_ \(2\)_-modular lattice_ \(\Lambda^{\prime}\) _containing_ \(\Lambda(M)\)_, we have that_ \(M\in\mathcal{V}_{\Lambda^{\prime}}^{(2)}(k)\)_._ _In particular, this means that there exist lattices \(M\in\mathcal{V}_{\Lambda}^{(2)}(k)\) such that \(\Lambda(M)\) is not a vertex lattice._ Proof.: (i) Recall that in the definition of the stratification of \(Q_{W}=\bigsqcup_{i=0}^{2}Z_{i}\) given in Lemma 4.22 the closed points of \(Z_{0}\) correspond to isotropic, \(2\)-dimensional \(\varPhi\)-stable subspaces \(U\) of \(W_{k}\). Then we can argue as in the proof of Lemma 5.10 to see that if \(M\) is sent to \(Z_{0}\), then \(\Lambda(M)\) is a vertex lattice. Conversely, as we have seen in the proof of Lemma 5.10 the image in \(W_{k}\) of a vertex lattice \(L\subset\Lambda\) is always an isotropic subspace with respect to the symmetric form. Indeed, observe that \((L_{k}+\pi\Lambda_{k})^{\vee}=(L_{k})^{\vee}\cap\pi\Lambda_{k}\supset\pi \Lambda_{k}\), which means that the image of \(L_{k}+\pi\Lambda_{k}\) in \(W_{k}\) is an isotropic subspace. Let \(U\) be a point in \(Z_{1}(k)\sqcup Z_{2}(k)\), then \(U+\varPhi(U)\) is not \(\varPhi\)-stable, as it has dimension \(3\) and the form is non-split. Hence, \(U+\varPhi(U)+\varPhi^{2}(U)\) has dimension at least \(4\) so cannot be isotropic. Since the image of \(\Lambda(M)\) in \(W_{k}\) is \(\varPhi\)-stable and hence contains \(U+\varPhi(U)+\varPhi^{2}(U)\), it cannot be isotropic. Therefore, in this case \(\Lambda(M)\) cannot be a vertex lattice. This proves the first statement. Let \(M\) be in the preimage of \(Z_{0}\) and let \(L_{2}=(M+\pi\Lambda)^{\tau}\), which by the discussion above we know is a vertex lattice and since \(\pi\Lambda_{k}\subset^{2}M+\pi\Lambda_{k}\) it has type \(4\). Clearly, \(\pi\Lambda\), which is a vertex lattice of type \(0\), is contained in \(L_{2}\). Recall the simplicial complex \(\mathscr{L}\) of vertex lattices introduced in Proposition 3.21. In the non split case, \(\pi\Lambda\) and \(L_{2}\) are both vertices of this complex, which we know is connected and isomorphic to the Bruhat-Tits building for \(\mathrm{SU}(C)(\mathbb{Q}_{p})\). It follows that we can find a vertex lattice \(L_{1}\) of type \(2\) such that \(\pi\Lambda\subset^{1}L_{1}\subset^{1}L_{2}\). 
Consider the \(\mathbb{F}_{p}\)-vector space \(L_{1}/L_{1}^{\vee}\), then the image of \(\pi\Lambda\) in this quotient is a Lagrangian subspace of dimension \(1\). Consider a Lagrangian complement of \(\pi\Lambda\) in \(L_{1}/L_{1}^{\vee}\). Its preimage in \(C\) is again a self-dual \(\mathcal{O}_{E}\)-lattice contained in \(L_{1}\), hence by Proposition 3.22 we can identify it with a lattice of the form \(\pi\Lambda_{1}\) for some \(2\)-modular lattice \(\Lambda_{1}\). Moreover, since its image modulo \(L_{1}^{\vee}\) is a Lagrangian complement of \(\pi\Lambda\) we have \(L_{1}=\pi\Lambda+\pi\Lambda_{1}\). We show that if \(M\neq\pi\Lambda_{1}\otimes W(k)\) then \(M\in\mathcal{V}_{\Lambda_{1}}^{(1)}(k)\). Suppose this is not the case and \(M\in\mathcal{V}_{\Lambda_{1}}^{(2)}(k)\), which means \(\pi\Lambda_{1}\subset^{2}M+\pi\Lambda_{1}\), here we omit the subscript \(k\) for better readability. Since \(\pi\Lambda_{1}\subset L_{2}=M+\pi\Lambda\) it follows that \(M+\pi\Lambda_{1}\subset M+\pi\Lambda\), and since both contain \(M\) with index two, this inclusion is actually an equality. Let \(U\) be the image of \(M\) in \(V_{k}\) and consider the chain of subspaces in \(V_{k}\) \[\overline{\pi\Lambda_{1}}\subsetneq\overline{\pi\Lambda_{1}}+\overline{\pi \Lambda}\subsetneq U+\overline{\pi\Lambda}_{1}=U+\overline{\pi\Lambda},\] obtained as the image in \(V\) of the chain of inclusions of lattices \(\pi\Lambda_{1}\subset^{1}L_{1}=\pi\Lambda+\pi\Lambda_{1}\subset^{1}L_{2}=M+\pi \Lambda_{1}=M+\pi\Lambda\). Observe that the inclusions remain proper in \(V_{k}\) as \(\pi\Lambda_{1}\subset\Lambda\) and by duality \(\pi^{2}\Lambda\subset\pi\Lambda_{1}\). Since the image \(\overline{\pi\Lambda_{1}}\) of \(\pi\Lambda_{1}\) in \(V_{k}\) is contained with codimension \(2\) in \(U+\overline{\pi\Lambda_{1}}\) we can find two vectors \(u_{1},u_{2}\in U\) such that \(U+\overline{\pi\Lambda_{1}}=\langle u_{1},u_{2}\rangle\oplus\overline{\pi \Lambda_{1}}\). Moreover, by the inclusions above, we can actually choose these two vectors such that \(u_{2}\in\overline{\pi\Lambda}\cap U\). However, since \(U+\overline{\pi\Lambda_{1}}=U+\overline{\pi\Lambda}\) and all these spaces are Lagrangian, by taking the orthogonal on both sides \(U\cap\overline{\pi\Lambda_{1}}=U\cap\overline{\pi\Lambda}\). This means that \(u_{2}\in\overline{\pi\Lambda_{1}}\) which leads to a contradiction. (ii) Consider now \(M\in\mathcal{V}_{\Lambda}^{(2)}(k)\) with image \(U\) in \(Z_{1}(k)\). Denote by \(L\) the image of \(M\) in \(V\). By the definition of \(Z_{1}\) in Lemma 4.22, we know that \(U\cap\varPhi(U)\) is a one-dimensional, \(\varPhi\)-stable subspace of \(W_{k}\). Let \(x\in\mathcal{L}_{k}\cong W_{k}\) be a \(\varPhi\)-stable element such that \(U\cap\varPhi(U)=\langle x\rangle\) and let \(x_{L}\) be a lift in \(L\). Since \(x_{L}\) is \(\varPhi\)-stable modulo \(\overline{\pi\Lambda}_{k}\), there is an element \(\pi\lambda\in\overline{\pi\Lambda}_{k}\) such that \(\varPhi(x_{L})=x_{L}+\pi\lambda\). Consider the seven-dimensional subspace \(N_{x}=\langle x\rangle\oplus\overline{\pi\Lambda}_{k}\) and its orthogonal \(N_{x}^{\vee}\subset\overline{\pi\Lambda}_{k}\). Observe that \(N_{x}\) is \(\varPhi\)-stable. As we have observed several times in this section, \(\varPhi(N_{x}^{\vee})=\varPhi(N_{x})^{\vee}\), and therefore \(N_{x}^{\vee}\) is \(\varPhi\)-stable as well. 
We show that \(X=\langle x_{L}\rangle\oplus N_{x}^{\vee}\) is the space we are looking for, that is, it corresponds to another \(2\)-modular lattice \(\Lambda^{\prime}\) such that \(M\in\mathcal{V}_{\Lambda^{\prime}}^{(1)}\). First, observe that \(X\) is Lagrangian, since \(N_{x}^{\vee}\subset^{1}X\subset^{1}N_{x}\) and hence we can argue as in the proof of Lemma 5.8. We need to prove that \(X\) is \(\varPhi\)-stable. First, we note that since \(N_{x}=U\cap\varPhi(U)\oplus\overline{\pi\Lambda}_{k}=(L+\overline{\pi \Lambda}_{k})\cap(\varPhi(L)+\overline{\pi\Lambda}_{k})\) it follows that \(N_{x}^{\vee}=(L\cap\overline{\pi\Lambda}_{k})+(\varPhi(L)\cap\overline{\pi \Lambda}_{k})\) as all summands appearing are Lagrangian. Complete \(x\) to a basis \(\{x,y\}\) of \(U\subset\mathcal{L}_{k}\) and denote by \(y_{L}\) an element in \(L\) such that \(L=\langle x_{L},y_{L}\rangle\oplus(L\cap\overline{\pi\Lambda}_{k})\). Then we have \[L+\varPhi(L)=\langle x_{L},y_{L},\varPhi(y_{L})\rangle\oplus(\langle\pi \lambda\rangle+(L\cap\overline{\pi\Lambda}_{k})+(\varPhi(L)\cap\overline{\pi \Lambda}_{k}))=\langle x_{L},y_{L},\varPhi(y_{L})\rangle\oplus(\langle\pi \lambda\rangle+N_{x}^{\vee}).\] Since the image of \(\langle x_{L},y_{L},\varPhi(y_{L})\rangle\) in \(W_{k}\) is \(U+\varPhi(U)\) and has dimension \(3\), these elements are linearly independent. Moreover, we already know that \(N_{x}^{\vee}\) has dimension \(5\). Since \(L\in S_{V}\), the dimension of \(L+\varPhi(L)\) cannot be larger than eight, so we have that \(\pi\lambda\in N_{x}^{\vee}\). We conclude that \[\varPhi(X)=\langle\varPhi(x_{L})\rangle\oplus\varPhi(N_{x}^{\vee})=\langle x _{L}+\pi\lambda\rangle\oplus N_{x}^{\vee}=\langle x_{L}\rangle\oplus N_{x}^{ \vee}=X.\] It remains to prove that \(X\) can be lifted to a lattice in \(C\otimes W(k)_{\mathbb{Q}}\). Since \(U=\langle x,y\rangle\) is isotropic with respect to the symmetric form on \(W_{k}\cong\mathcal{L}_{k}\) we have that \(\overline{\pi}(U)\subset U^{\vee}\cap\overline{\pi\Lambda}_{k}\subset\langle x \rangle^{\vee}\cap\overline{\pi\Lambda}_{k}=N_{x}^{\vee}\). It follows that \(\overline{\pi}(X)=\overline{\pi}(x)\subset\overline{\pi}(U)\subset N_{x}^{ \vee}\subset X\). In particular \(X\in S_{V\pi}\), hence we can lift it to a \(\tau\)-stable, self-dual \(W(k)\otimes\mathcal{O}_{E}\)-lattice \(X\). Since \(\overline{\pi}(L)\subset X\) we have \(\pi M\subset X\). By Proposition 3.22 we know that \(\Lambda^{\prime}=\pi^{-1}X^{\tau}\) is a \(2\)-modular lattice, which then contains \(\Lambda(M)\). Moreover, since \(X\cap L\subset^{1}L\), we have that \(\pi\Lambda^{\prime}_{K}\subset^{1}M+\pi\Lambda^{\prime}_{k}\), in other words \(M\in\mathcal{V}_{\Lambda^{\prime}}^{(1)}(k)\). It remains to prove the "if" part of the second statement. Suppose \(M\) is a lattice in \(\mathcal{V}(k)\) such that there are two \(2\)-modular lattices \(\Lambda_{1,2}\subset C\) such that \(M\subset\Lambda_{1}\cap\Lambda_{2}\) and \(M\subset^{i}M+\pi\Lambda_{i}\), (here we omit the subscript \(k\) to ease the notation). Moreover, \(\Lambda(M)\) is not a vertex lattice. We want to prove that \(M\) is mapped to a point in \(Z_{1}(k)\) by the map \(M\mapsto M+\pi\Lambda_{2}\) associated to \(\Lambda_{2}\). By definition of \(Z_{1}\) this is equivalent to showing that \((M+\pi\Lambda_{2})\cap(\tau(M)+\pi\Lambda_{2})\) is \(\tau\)-stable. It suffices to prove that \(\pi\Lambda_{1}+\pi\Lambda_{2}\subset M+\pi\Lambda_{2}\). 
Indeed, if this is the case, then by \(\tau\)-stability \(\pi\Lambda_{1}+\pi\Lambda_{2}\subset(M+\pi\Lambda_{2})\cap(\varPhi(M)+\pi\Lambda_{2})\). Since \(M+\pi\Lambda_{2}\) is not \(\tau\)-stable (otherwise \(\Lambda(M)\) would be a vertex lattice), it follows that the inclusion above is an equality. Consider the inclusions \[M\subset M+(\pi\Lambda_{1}\cap\Lambda_{2})\subset M+\pi\Lambda_{1}.\] Since the index of \(M\) in \(M+\pi\Lambda_{1}\) is \(1\) we have that one of the inclusions above is actually an equality. If \(M=M+(\pi\Lambda_{1}\cap\Lambda_{2})\) or equivalently \(\pi\Lambda_{1}\cap\Lambda_{2}\subset M\), by taking duals on both sides we have \(M\subset\pi\Lambda_{1}+\pi^{2}\Lambda_{2}\). The latter is a \(\tau\)-stable lattice such that \(\pi\Lambda_{1}\cap\Lambda_{2}=(\pi\Lambda_{1}+\pi^{2}\Lambda_{2})^{\vee}\subset\pi\Lambda_{1}+\pi^{2}\Lambda_{2}\), where the inclusion follows from the fact that \(\pi^{2}\Lambda_{1,2}\subset M\subset\Lambda_{1,2}\). Therefore, in this case \(M\) is contained in a \(\tau\)-stable vertex lattice, which contradicts the assumption on \(\Lambda(M)\). It follows that the second inclusion above is an equality, that is \(M+\pi\Lambda_{1}=M+(\pi\Lambda_{1}\cap\Lambda_{2})\subset\Lambda_{2}\), from which it follows that \(\pi\Lambda_{1}\subset\Lambda_{2}\). Observe that by taking duals we also have \(\pi^{2}\Lambda_{2}\subset\pi\Lambda_{1}\), from which we conclude that \(\pi\Lambda_{1}+\pi\Lambda_{2}\subset\Lambda_{1}\cap\Lambda_{2}\). Observe that \(\pi\Lambda_{1}+\pi\Lambda_{2}\) is a \(\tau\)-stable vertex lattice. Finally, consider the inclusions \[M\cap\pi\Lambda_{1}\subset(M\cap\pi\Lambda_{1})+(\pi\Lambda_{1}\cap\pi\Lambda_{2})\subset\pi\Lambda_{1}.\] Again by the fact that \(M\cap\pi\Lambda_{1}\) has index one in \(\pi\Lambda_{1}\), one of the inclusions above is an equality. If the first inclusion is an equality, which is equivalent to \(\pi\Lambda_{1}\cap\pi\Lambda_{2}\subset M\cap\pi\Lambda_{1}\subset M\), then by taking duals we have \(M\subset\pi\Lambda_{1}+\pi\Lambda_{2}\), and we have just observed that the latter is a \(\tau\)-stable vertex lattice, which contradicts the assumption on \(\Lambda(M)\). It follows that the second inclusion is an equality, hence \(\pi\Lambda_{1}\subset(M\cap\pi\Lambda_{1})+(\pi\Lambda_{1}\cap\pi\Lambda_{2})\subset M+\pi\Lambda_{2}\) and therefore \(\pi\Lambda_{2}\subsetneq\pi\Lambda_{1}+\pi\Lambda_{2}\subsetneq M+\pi\Lambda_{2}\). Here the first inclusion is proper as \(\Lambda_{1,2}\) are distinct, while the second is because \(M+\pi\Lambda_{2}\) is not \(\tau\)-stable. Since \(\pi\Lambda_{1}+\pi\Lambda_{2}\) is a \(\tau\)-stable lattice contained in \(M+\pi\Lambda_{2}\), it follows that its image modulo \(\pi\Lambda_{2}\) is a \(\Phi\)-stable subspace contained in the image \(U\) of \(M\). By \(\Phi\)-stability it is contained in the intersection \(U\cap\Phi(U)\), which has dimension \(1\). It follows that \(U\cap\Phi(U)\) is \(\Phi\)-stable and therefore \(M\) is sent to a point of \(Z_{1}(k)\). (iii) The last statement follows directly from the previous two. **Lemma 5.13**.: _We denote by \(\mathcal{V}^{(2)^{\circ}}_{\Lambda}(k)\) the preimage of \(Z_{2}(k)\), that is the set of lattices \(M\in\mathcal{V}^{(2)}_{\Lambda}(k)\) such that \(\Lambda(M)\) is not a vertex lattice and \(M\in\mathcal{V}^{(2)}_{\Lambda^{\prime}}\) for every \(2\)-modular lattice \(\Lambda(M)\subset\Lambda^{\prime}\). 
Then the restriction of the map of Lemma 5.11 induces a surjective map_ \[\mathcal{V}^{(2)^{\circ}}_{\Lambda}(k)\longrightarrow Z_{2},\] _with fibers equal to \(\mathbb{A}^{2}(k)\)._ Proof.: It remains to study the fibers of the map \(\mathcal{V}^{(2)^{\circ}}_{\Lambda}(k)\longrightarrow Z_{2}(k)\). Recall the isomorphism of Lemma 4.22 between \(Z_{2}\) and the union of Deligne-Lusztig variety \(X_{B}(t_{2}t_{1})\cup X_{B}(t_{3}t_{1})\). On closed points it gives a bijection \(U\mapsto U\cap\Phi(U)\subset U\), compare Lemma 4.22. Fix \(U\in Z_{2}\), and let \(l=U\cap\Phi(U)\subset\mathcal{L}_{k}\). We have already seen that \(L=U\oplus N^{\vee}\) is a preimage in \(S_{V\pi}\) (or equivalently it produces a preimage in \(\mathcal{V}^{(2)}_{\Lambda}\)) of \(U\). Fix a basis \(l=\langle u\rangle\) and let \(l^{\prime}=\langle u+\pi\lambda\rangle\) be another lift of \(l\) in \(V_{k}\). If \(L^{\prime}\in S_{V\pi}\) is another preimage of \(U\) and contains \(l^{\prime}\), then it is of the form \[L^{\prime}=\langle u+\pi\lambda,\Phi^{-1}(u)+\pi\lambda_{2}\rangle\oplus N^{ \vee},\] for some \(\pi\lambda_{2}\) in the \(2\)-dimensional space \(\overline{\pi\Lambda}/N^{\vee}\). Consider \[L^{\prime}+\Phi(L^{\prime}) =\langle u+\pi\lambda,\Phi^{-1}(u)+\pi\lambda_{2},\Phi(u)+\Phi(\pi \lambda),u+\Phi(\pi\lambda_{2})\rangle\oplus(N^{\vee}+\Phi(N^{\vee}))\] \[=\langle u+\pi\lambda,\Phi^{-1}(u)+\pi\lambda_{2},\Phi(u)+\Phi( \pi\lambda)\rangle\oplus(\langle\pi\lambda-\Phi(\pi\lambda_{2})\rangle+N^{ \vee}+\Phi(N^{\vee})).\] Since \(L^{\prime}+\Phi(L^{\prime})\) has to have dimension eight, and its subspace \[\langle u+\pi\lambda,\Phi^{-1}(u)+\pi\lambda_{2},\Phi(u)+\Phi(\pi\lambda) \rangle\oplus(N^{\vee}+\Phi(N^{\vee}))\] already has dimension \(3+5=8\) we have that \(\Phi((\pi\lambda_{2}))\in\pi\lambda+(N^{\vee}+\Phi(N^{\vee}))\), which means that \(\pi\lambda_{2}\) belongs to the one-dimensional (affine) subspace \(\Phi^{-1}(\pi\lambda)+(N^{\vee}+\Phi^{-1}(N^{\vee}))/N^{\vee}\subset\overline{ \pi\Lambda}_{k}/N^{\vee}\). Moreover, since \(L^{\prime}\) has to be Lagrangian, we have to impose another linear condition \[\langle\pi\lambda,\Phi^{-1}(u)\rangle=\langle\pi\lambda_{2},u\rangle.\] Observe that \(N^{\vee}+\Phi(N^{\vee})\subset\langle u\rangle^{\vee}\cap\overline{\pi\Lambda }_{k}\) as \(u\) is contained in both the Lagrangian spaces \(L\) and \(\Phi(L)\). By comparing dimension we have equality and since \(\Phi^{-1}(N^{\vee})\) is not contained in \(N^{\vee}+\Phi(N^{\vee})\) it follows that \(\Phi^{-1}(N^{\vee})\) is not orthogonal to \(u\). Therefore, the linear condition on above is non-trivial and determines a unique point in \(\varPhi^{-1}(\pi\lambda)+(N^{\vee}+\varPhi^{-1}(N^{\vee}))/N^{\vee}\subset\overline {\pi\Lambda}_{k}/N^{\vee}\). It follows that a preimage \(L^{\prime}\) of \(U\) is uniquely determined by how it lifts the subspace \(U\cap\varPhi(U)\), that is by a unique element in the \(2\)-dimensional space \(\overline{\pi\Lambda}_{k}/N^{\vee}\cong\mathbb{A}^{2}(k)\). ## 6. Geometry of \(\bar{\mathcal{N}}^{0}\) In this section we study the irreducible components of the reduced scheme underlying \(\bar{\mathcal{N}}^{0}\). Recall that the irreducible components of the analogous scheme for \(\operatorname{GU}(1,n-1)\) are indexed over the set of vertex lattices of maximal type, as proved in [10], and are isomorphic to generalized Deligne-Lusztig varieties. 
In our case, if the form is split, we are going to see that in addition to components analogous to those of _loc.cit._, a second type of irreducible components appears. These components originate from \(2\)-modular lattices, and are universally homeomorphic to line bundles over a Deligne-Lusztig variety. In the non-split case, we are going to prove that there are again two types of irreducible components, which both originate from \(2\)-modular lattices and are universally homeomorphic to line bundles over a generalized Deligne-Lusztig variety, respectively to the closure of vector bundles of rank \(2\) over a classical Deligne-Lusztig variety of Coxeter type. ### The subscheme \(\mathcal{N}_{\Lambda}\) Let \(k\) be a perfect field containing \(\mathbb{F}\). Let \(\Lambda\) be a vertex lattice or a \(2\)-modular lattice in \(C\). We write again \(\Lambda_{k}\) for \(\Lambda\otimes_{\mathbb{Z}_{p}}W(k)\). We first define the closed subfunctor \(\mathcal{N}_{\Lambda}\) of \(\bar{\mathcal{N}}^{0}\) associated to \(\Lambda\), whose \(\mathbb{F}\)-points are in bijection with \(\mathcal{V}_{\Lambda}(\mathbb{F})=\{M\in\mathcal{V}(\mathbb{F})\mid\Lambda(M )\subset\Lambda\}\). The construction is similar to that of [10, Sec. 6] and [11, Sec. 4], we recall here the main ideas and point out the differences, that are due to the fact that now we have to consider \(2\)-modular lattices as well. **Lemma 6.1**.: _Let \(\Lambda^{+}=\Lambda_{\mathbb{F}}\) and \(\Lambda^{-}=\Lambda_{\mathbb{F}}^{\vee}\). They correspond to two \(p\)-divisible groups \(X_{\Lambda^{\pm}}\) with quasi-isogenies \(\rho_{\Lambda^{\pm}}:X_{\Lambda^{\pm}}\to\mathbb{X}\)._ Proof.: If \(\Lambda\) is a vertex lattice, it is proved in [10, Lem. 6.1] that both \(\Lambda^{+}\) and \(\Lambda^{-}\) are stable under \(\pi,F,V\) from which the claim follows by Dieudonne theory. If \(\Lambda\) is a \(2\)-modular lattice, we know that \(\Lambda\) is \(\pi\)- and \(\tau\)-stable, and consequently so is \(\Lambda^{+}\). We then have \[F\Lambda^{+}=pV^{-1}\Lambda^{+}=\pi^{2}V^{-1}\Lambda^{+}=\pi\tau\Lambda^{+}= \pi\Lambda^{+}\subset\Lambda^{+}.\] Similarly, \(V\Lambda^{+}=\pi\tau^{-1}\Lambda^{+}=\pi\Lambda^{+}\subset\Lambda^{+}\). As in the proof of [10, Lem. 6.1], we observe that for \(x\in\Lambda^{-}=(\Lambda^{+})^{\vee}\) and \(y\in\Lambda\) we have \(\langle Fx,y\rangle=\langle x,Vy\rangle^{\sigma}\) which is integral, since \(Vy\) is again in \(\Lambda^{+}\), as we have just seen. Therefore, \(Fx\in\Lambda^{-}\). In the same way one proves that \(\Lambda^{-}\) is stable under \(V\). To prove that it is stable under \(\pi\) it is enough to recall that \(\Lambda^{-}=\pi^{2}\Lambda^{+}\), as \(\Lambda\) is a \(2\)-modular lattice, and the statement follows from \(\pi\)-stability of \(\Lambda^{+}\). Recall \(N\), the rational Dieudonne module of \(\mathbb{X}\). From the inclusions \(\Lambda^{\pm}\subset C\otimes_{\mathbb{Q}_{p}}W(\mathbb{F})_{\mathbb{Q}}\cong N\), again by means of Dieudonne theory, we obtain the quasi-isogenies to \(\mathbb{X}\). As in [10, Sec. 6] we define the subfunctor \(\widetilde{\mathcal{N}}_{\Lambda}\) of \(\bar{\mathcal{N}}^{0}\) consisting of the tuples \((X,\rho,\lambda,\iota)\) over an \(\mathbb{F}\)-scheme \(S\), such that \(\rho_{X,\Lambda^{+}}:=(\rho_{\Lambda^{+}})_{S}^{-1}\circ\rho\) or equivalently \(\rho_{X,\Lambda^{-}}:=\rho^{-1}\circ(\rho_{\Lambda^{-}})_{S}\) is an isogeny. 
Observe that, as in _loc.cit._, the quasi-isogenies above fit into a commutative diagram in which \(\lambda\) is the isogeny induced by the duality of lattices \(\Lambda^{-}=(\Lambda^{+})^{\vee}=(\Lambda^{+})^{\sharp}\) and \({}^{t}\rho_{\Lambda^{-}}\) is the dual of the quasi-isogeny \(\rho_{\Lambda^{+}}\). Since, by definition of \(\bar{\mathcal{N}}^{0}\), the height of \(\rho_{X}\) is zero, it follows that the height of the isogenies \(\rho_{X,\Lambda^{\pm}}\) is half of the type \(t(\Lambda)\) of \(\Lambda\), _i.e._ of the index of \(\Lambda^{\vee}\subset\Lambda\). The next lemma is proved in the same way as [10, Lem. 6.2] and [11, Lem. 4.2], as the arguments there do not make use of the fact that \(\varLambda\) is a vertex lattice, or that the extension \(\mathbb{Q}_{p}\subset E\) is (un)ramified. **Lemma 6.2**.: _The functor \(\widetilde{\mathcal{N}}_{\varLambda}\) is representable by a projective \(\mathbb{F}\)-scheme, and it is a closed subscheme of \(\bar{\mathcal{N}}^{0}\)._ Denote by \(\mathcal{N}_{\varLambda}\) the reduced scheme underlying \(\widetilde{\mathcal{N}}_{\varLambda}\). Our goal is to extend to a morphism of schemes the bijection, respectively surjection, we have described in the previous section between the \(k\)-valued points of \(\mathcal{N}_{\varLambda}\) and the Deligne-Lusztig varieties \(S_{V}\), respectively \(R_{W}\) or \(Q_{W}\). The first step in this direction is given by defining a morphism of \(\widetilde{\mathcal{N}}_{\varLambda}\) into a Grassmannian variety. As in the previous section we let \(V\) denote the \(t(\varLambda)\)-dimensional vector space \(V=\varLambda^{+}/\varLambda^{-}\). Consider the Grassmannian functor \(\operatorname{Grass}(V)\) parametrizing subspaces of \(V\); sending a tuple \((X,\rho,\lambda,\iota)\) to \(E(X)\coloneqq\operatorname{Ker}(D(\rho_{X,\varLambda^{-}}))\) defines a morphism of projective \(\mathbb{F}\)-schemes from \(\widetilde{\mathcal{N}}_{\varLambda}\) to \(\operatorname{Grass}(V)\). **Proposition 6.4**.: _Let \(\varLambda\) be a vertex lattice in \(C\) and denote by \(\mathcal{N}_{\varLambda}\) the reduced scheme underlying \(\widetilde{\mathcal{N}}_{\varLambda}\). The map \(f:\mathcal{N}_{\varLambda}\to S_{V}\) that sends \((X,\lambda,\rho,\iota)\) to \(E(X)\) is a universal homeomorphism of projective schemes._ Proof.: Since \(f\) is a morphism of projective schemes, it suffices to prove that it is universally injective, surjective and finite, hence in particular universally closed. Universally injective is equivalent to the diagonal morphism being a bijection on \(k\)-valued points for any field \(k\). Since a morphism of projective schemes is proper, hence separated, the diagonal morphism is already injective as it is a closed immersion. Moreover, for a scheme \(X\) of finite type over an algebraically closed field \(k\), the set of \(k\)-valued points is very dense in \(X\), see [10, Prop. 3.35]. Therefore, a closed subscheme \(Y\subset X\) coincides with \(X\) if and only if it contains all \(k\)-valued points. This means that surjectivity of the diagonal \(\varDelta_{f}\), which is equivalent to injectivity of \(f\), can be tested on \(k\)-points for \(k\) algebraically closed. Last, a morphism is finite if and only if it is proper and quasi-finite. By [10, Rem. 12.16] it is sufficient to check that the map has finite fibers on \(k\)-valued points, with \(k\) algebraically closed, which is already implied by being injective. Then we can conclude with Lemma 5.2. Recall that if \(\varLambda\) is a \(2\)-modular lattice, the action of \(\pi\) induces a linear map \(\overline{\pi}\) of rank \(6\) on the \(12\)-dimensional vector space \(V=\varLambda^{+}/\varLambda^{-}=\varLambda^{+}/\pi^{2}\varLambda^{+}\). The image and kernel of \(\overline{\pi}\) both coincide with the \(6\)-dimensional subspace \(\overline{\pi}\varLambda\) given by the image of \(\pi\varLambda\) in \(V\). 
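For orientation, this rank statement can be checked directly: multiplication by \(\pi\) induces an isomorphism \(\varLambda^{+}/\pi\varLambda^{+}\xrightarrow{\sim}\pi\varLambda^{+}/\pi^{2}\varLambda^{+}\), so on \(V=\varLambda^{+}/\pi^{2}\varLambda^{+}\) one has
\[\overline{\pi}^{2}=0,\qquad\operatorname{im}(\overline{\pi})=\ker(\overline{\pi})=\overline{\pi}\varLambda,\qquad\dim_{\mathbb{F}}\overline{\pi}\varLambda=\dim_{\mathbb{F}}\varLambda^{+}/\pi\varLambda^{+}=6.\]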
Consider then the closed subscheme \(S_{V\pi}\) of the variety \(S_{V}\), given by the Lagrangian subspaces \(U\in S_{V}\) such that \(\overline{\pi}(U)+\overline{\pi}(\varPhi(U))\subset U\) (observe that this is equivalent to the condition \(\overline{\pi}(U)+\overline{\pi}(\varPhi(U))\subset U\cap\varPhi(U)\) originally given in the definition of \(S_{V\pi}\)). Recall that \(S_{V\pi}\) has already been introduced in the proof of Lemma 5.8, where we have proven that it is the image of the map \(\mathcal{V}_{\varLambda}(k)\to S_{V}(k)\) for \(k\) an algebraically closed field. **Proposition 6.5**.: _Let \(\varLambda\) be a \(2\)-modular lattice in \(C\) and denote by \(\mathcal{N}_{\varLambda}\) the reduced scheme underlying \(\widetilde{\mathcal{N}}_{\varLambda}\). The map \(f:\mathcal{N}_{\varLambda}\to S_{V\pi}\) that sends \((X,\lambda,\rho,\iota)\) to \(E(X)\coloneqq\operatorname{Ker}(D(\rho_{X,\varLambda^{-}}))\) is a universal homeomorphism of projective schemes._ Proof.: As in the proof of Proposition 6.4, since we are working with reduced projective schemes over the algebraically closed field \(\mathbb{F}\) it is enough to check that the map on \(k\)-valued points is a bijection, for any algebraically closed field \(k\). Then we can conclude with Lemma 5.8. The remainder of this section is dedicated to combining the results on the geometric points of \(\widetilde{\mathcal{N}}_{\varLambda}\) proved in the previous section, see Lemmas 5.2 and 5.8, with the construction of Proposition 6.5 of the universal homeomorphism \(f\) onto the variety \(S_{V\pi}\). Our goal is to obtain a description of the irreducible components of \(\bar{\mathcal{N}}_{\mathrm{red}}^{0}\) in terms of Deligne-Lusztig varieties. Again the split and non-split cases are rather different and deserve to be treated separately.

### Irreducible components in the split case

Assume that the Hermitian form on \(C\) is split. As we have seen in Lemma 5.10, if a lattice \(M\in\mathcal{V}(k)\) is not contained in a vertex lattice, then it is contained in a \(2\)-modular lattice \(\varLambda\) such that \(\pi\varLambda_{k}\subset^{1}M+\pi\varLambda_{k}\). These two cases will correspond to two types of irreducible components of \(\bar{\mathcal{N}}_{\mathrm{red}}^{0}\). _Remark 6.6_.: We have already seen that if \(\mathcal{L}\) is a vertex lattice, then \(\mathcal{N}_{\mathcal{L}}\) is universally homeomorphic to the generalized Deligne-Lusztig variety \(S_{V}\). Let \(\mathcal{L}\) be a vertex lattice of type \(6\) in \(C\), which exists as we are considering the split case. Then \(V=\mathcal{L}/\mathcal{L}^{\vee}\) is a symplectic \(\mathbb{F}\)-vector space of dimension \(6\). It follows from Lemma 4.10 that \(\mathcal{N}_{\mathcal{L}}\) is irreducible and has dimension \(5\). Moreover, it contains the open and dense subscheme \(\mathcal{N}_{\mathcal{L}}^{\circ}\) corresponding to the open stratum \(X_{B}(s_{3}s_{2}s_{1})\sqcup X_{B}(s_{3}s_{2}s_{3}s_{1})\sqcup X_{B}(s_{3}s_{2}s_{3}s_{1}s_{2})\) in the stratification of \(S_{V}\) given in (4.11). In terms of lattices, \(\mathcal{N}_{\mathcal{L}}^{\circ}\) corresponds to those lattices \(M\) in \(N\) such that \(\varLambda(M)=\mathcal{L}\). A similar stratification holds for lattices of smaller type, too. In particular, by (4.11) for any vertex lattice \(\mathcal{L}\), the corresponding scheme \(\mathcal{N}_{\mathcal{L}}\) contains \(\mathcal{N}_{\mathcal{L}}^{\circ}\) as an open and dense subscheme. 
Moreover, by Lemma 4.13 and Proposition 6.4 the closure of \(\mathcal{N}_{\mathcal{L}}^{\circ}\) is the union of the subschemes \(\mathcal{N}_{\mathcal{L}^{\prime}}^{\circ}\) for all vertex lattices \(\mathcal{L}^{\prime}\subset\mathcal{L}\). Let now \(\varLambda\) be a \(2\)-modular lattice in \(C\) and denote again by \(\mathcal{N}_{\varLambda}\) the reduced scheme underlying \(\widetilde{\mathcal{N}}_{\varLambda}\). In Lemma 5.10 we have seen that if \(M\) is a lattice in \(\mathcal{V}_{\varLambda}(k)\) such that the corresponding minimal \(\tau\)-stable lattice \(\varLambda(M)\) is not a vertex lattice, then the index of \(\pi\varLambda_{k}\) in \(M+\pi\varLambda_{k}\) is \(1\). Therefore, we are interested in the closed subscheme of \(S_{V\pi}\) given by \[S_{V\pi}^{\leq 1}(R)\coloneqq\{U\in S_{V\pi}(R)\mid U+\overline{\pi\varLambda}_{R} \text{ is a direct summand of }V_{R}\text{ with }\operatorname{rk}(\overline{\pi\varLambda}+U)\leq 7\}, \tag{6.7}\] where \(\overline{\pi\varLambda}_{R}\) is the image of \(\pi\varLambda_{R}\) in \(V_{R}=\varLambda_{R}/\pi^{2}\varLambda_{R}\). Consider the open subscheme \(S_{V\pi}^{(1)}\) defined by the condition on the rank being an equality. We denote by \(\mathcal{N}_{\varLambda}^{\leq 1}\) and \(\mathcal{N}_{\varLambda}^{(1)}\) their schematic preimage in \(\mathcal{N}_{\varLambda}\) under the morphism \(f\). Since \(f\) is a universal homeomorphism, \(\mathcal{N}_{\varLambda}^{\leq 1}\) is closed in \(\mathcal{N}_{\varLambda}\) and contains \(\mathcal{N}_{\varLambda}^{(1)}\) as an open subscheme. **Lemma 6.8**.: _The subscheme \(\mathcal{N}_{\varLambda}^{(1)}\) is open and dense in \(\mathcal{N}_{\varLambda}^{\leq 1}\)._ Proof.: Observe that the complement of \(S_{V\pi}^{(1)}\) in \(S_{V\pi}^{\leq 1}\) consists only of the point \(\overline{\pi\varLambda}\). It follows that the complement of \(\mathcal{N}_{\varLambda}^{(1)}\) in \(\mathcal{N}_{\varLambda}^{\leq 1}\) consists only of the \(p\)-divisible group \(X_{\pi\varLambda}\) corresponding via Dieudonne theory and Lemma 6.1 to the lattice \(\pi\varLambda_{\mathbb{F}}\). Our goal is to show that it belongs to the closure of \(\mathcal{N}_{\varLambda}^{(1)}\). Let \(\mathcal{L}\subset C\) be a vertex lattice of type \(2\) containing \(\pi\varLambda\). Such a lattice exists since \(\pi\varLambda\) is a vertex lattice of type \(0\) and the simplicial complex \(\mathscr{L}\) of vertex lattices is connected, compare Proposition 3.21. Then, using the fact that \(\pi\varLambda\) is self-dual and by definition of vertex lattices we have \(\pi\mathcal{L}\subset\mathcal{L}^{\vee}\subset\pi\varLambda\subset^{1} \mathcal{L}\), from which follows that \(\mathcal{L}\subset\varLambda\). Observe that if \(M\in\mathcal{V}_{\mathcal{L}}^{\circ}(k)\), where \(k\) is an algebraically closed field, we have that \(M\in\mathcal{N}_{\varLambda}^{(1)}(k)\). Indeed, given such a lattice \(M\), we know that \(\varLambda(M)=\mathcal{L}\) and therefore \(M\) is not \(\tau\)-stable. Since both \(M\) and \(\pi\varLambda_{k}\) are self-dual lattices, if \(M\) is contained in \(\pi\varLambda\) then it is equal to it. Since \(M\) is not \(\tau\)-stable, this is not possible. Hence, we have inclusions \(\pi\varLambda_{k}\subsetneq\pi\varLambda_{k}+M\subset\mathcal{L}_{k}\) and since \(\pi\varLambda_{k}\) has index \(1\) in \(\mathcal{L}\) the claim follows. It follows that there is an inclusion of reduced schemes \(\mathcal{N}_{\mathcal{L}}^{\circ}\subset\mathcal{N}_{\varLambda}^{(1)}\). 
By Remark 6.6 we know that \(\mathcal{N}_{\pi\varLambda}\) is contained in the closure of \(\mathcal{N}_{\mathcal{L}}^{\circ}\), hence its only element \(X_{\pi\varLambda}\) belongs to the closure of \(\mathcal{N}_{\varLambda}^{(1)}\). We briefly recall the _universal vector bundle on the Grassmannian_; for more details we refer to [10, Ex. 11.9]. Let \(\operatorname{Grass}_{m}(W)\) be the Grassmannian variety parametrizing subspaces of dimension \(m\) in a given vector space \(W\). Then the universal vector bundle over \(\operatorname{Grass}_{m}(W)\) is a locally trivial vector bundle of rank \(m\). Its \(k\)-valued points, for any field \(k\), consist of pairs \((U,v)\), where \(U\) is a subspace belonging to \(\operatorname{Grass}_{m}(W)\) and \(v\) is a vector in \(U\). Roughly speaking, one identifies the fiber of the universal vector bundle over a subspace \(U\) with \(U\) itself. In this section we are in particular interested in the universal line bundle \(\mathcal{O}(1)\) over the projective space \(\mathbb{P}(W)\), where \(W\) denotes again the six-dimensional \(\mathbb{F}\)-vector space \(\varLambda/\pi\varLambda\). We also consider \[\mathcal{H}^{1}\coloneqq\mathscr{H}om(\mathcal{O}(1),\mathcal{O}(-1))=\mathcal{O}(-2). \tag{6.9}\] In order to study \(S_{V\pi}^{(1)}\) we first need to study the subscheme \(\mathcal{S}^{(1)}\) of \(\operatorname{Grass}(V)\) parametrizing the Lagrangian subspaces \(U\subset V\) such that the dimension of the subspace \(\overline{\pi\varLambda}+U\) is equal to \(7\). In other words, \(\mathcal{S}^{(1)}\) is the intersection of the Lagrangian Grassmannian \(\mathcal{L}\operatorname{Grass}(V)\) with a Schubert cell. It is also clear that \(S_{V\pi}^{(1)}\) is a closed subscheme of \(\mathcal{S}^{(1)}\). **Lemma 6.10**.: _The quotient map \(q:V=\varLambda/\pi^{2}\varLambda\longrightarrow W=\varLambda/\pi\varLambda\) induces a morphism from \(\mathcal{S}^{(1)}\) to the line bundle \(\mathcal{H}^{1}\) over \(\mathbb{P}(W)\)._ Proof.: We first observe that the quotient map \(q\) induces a map \(\mathcal{S}^{(1)}\to\mathbb{P}(W)\). This follows directly from the definition of \(\mathcal{S}^{(1)}\) as intersection of the Lagrangian Grassmannian and a Schubert cell. An \(R\)-point \(U\in\mathcal{S}^{(1)}(R)\) is sent by \(q\) to the direct summand \((U+\overline{\pi\varLambda}_{R})/\overline{\pi\varLambda}_{R}\) of \(W_{R}\). If \(R\to R^{\prime}\) is a morphism of \(\mathbb{F}\)-algebras, since \(\overline{\pi}\varLambda_{R}\) is a free submodule of \(V_{R}\) the quotient by \(\overline{\pi}\varLambda_{R}\) commutes with the tensor product \(\cdot\otimes_{R}R^{\prime}\). In other words we have \[(U\otimes_{R}R^{\prime}+\overline{\pi}\varLambda_{R^{\prime}})/\overline{\pi}\varLambda_{R^{\prime}}=((U+\overline{\pi}\varLambda_{R})\otimes_{R}R^{\prime})/\overline{\pi}\varLambda_{R^{\prime}}=\big((U+\overline{\pi}\varLambda_{R})/\overline{\pi}\varLambda_{R}\big)\otimes_{R}R^{\prime},\] which proves that the map induced by \(q\) commutes with base change by \(\mathbb{F}\)-algebras. It follows that \(q\) induces a morphism of projective \(\mathbb{F}\)-schemes \(\mathcal{S}^{(1)}\to\mathbb{P}(W)\). Our aim is to construct a morphism of \(\mathbb{P}(W)\)-schemes \(\mathcal{S}^{(1)}\to\mathcal{H}^{1}\). By Yoneda's Lemma, it is enough to give a map \(\mathcal{S}^{(1)}(R)\to\mathcal{H}^{1}(R)\) for every \(\mathbb{F}\)-algebra \(R\), and then prove that this map commutes with tensor products, compare [1, Cor. 4.7]. 
In other words, our goal is to associate to any Lagrangian \(U\) in \(\mathcal{S}^{(1)}(R)\) that is sent by \(q\) to \(l\in\mathbb{P}(W)\) an \(R\)-linear map \(\psi_{U}:l\to l^{*}\). We have already seen this in the proof of Lemma 5.10 for \(R=k\), however, the proof there requires fixing a basis for \(l\), which may not exist in general, for example when \(R\) is not a local ring. We give here another construction that is independent of the choice of a basis. Fix a Lagrangian complement \(\mathcal{L}\) of \(\overline{\pi}\varLambda\), _i.e._ a Lagrangian subspace of \(V\) such that \(V=\mathcal{L}\oplus\overline{\pi}\varLambda\). Observe that for any \(\mathbb{F}\)-algebra \(R\), since the form on \(V_{R}\) is just the \(R\)-linear extension of that of \(V\), the tensor product \(\mathcal{L}_{R}\) remains a Lagrangian complement of \(\overline{\pi}\varLambda_{R}\) in \(V_{R}\). We identify \(\mathcal{L}\cong W\). Let \(l\in\mathbb{P}(\mathcal{L})(R)\) for an \(\mathbb{F}\)-algebra \(R\). As in the previous section we denote by \(N=l\oplus\overline{\pi}\varLambda_{R}\) its preimage in \(V_{R}\) under \(q\) and by \(N^{\vee}=l^{\vee}\cap\overline{\pi}\varLambda_{R}\) its orthogonal. Since \(N^{\vee}\subset\overline{\pi}\varLambda_{R}\subset N\) the subspace \[U_{0}=l\oplus N^{\vee}\] is a submodule of \(V_{R}\) that is sent to \(l\) by the quotient map \(q\). Observe that for any \(x\in l\) and any \(v+\pi\lambda\in l\oplus\overline{\pi}\varLambda_{R}\), since \(l\) is contained in the Lagrangian \(\mathcal{L}\), we have \(\langle x,v+\pi\lambda\rangle=\langle x,\pi\lambda\rangle\). It follows that the orthogonal of \(U_{0}\) satisfies \[U_{0}^{\vee}=l^{\vee}\cap N=l^{\vee}\cap(l\oplus\overline{\pi}\varLambda_{R}) =l\oplus(l^{\vee}\cap\overline{\pi}\varLambda_{R})=l\oplus N^{\vee}=U_{0},\] from which it follows that \(U_{0}\) is Lagrangian. We claim that \(U_{0}\) is a direct summand of \(V_{R}\). Indeed, since \(l\) is a direct summand of \(\mathcal{L}_{R}\) it is also a direct summand of \(V_{R}=\mathcal{L}_{R}\oplus\pi\varLambda_{R}\). It follows that \(N=l\oplus\overline{\pi}\varLambda_{R}\) is a direct summand of \(V_{R}\), for example one can take as complement the complement of \(l\) in \(\mathcal{L}_{R}\). Let \(Q\subset\mathcal{L}_{R}\) denote such a complement. Observe that since the alternating form is non-degenerate, we have that \(V_{R}^{\vee}=\{0\}\). Since \(Q\) is a complement of \(N\), we have \(V_{R}=N+Q\) and \(\{0\}=N\cap Q\). By taking the duals of these equalities we obtain \(\{0\}=N^{\vee}\cap Q^{\vee}\) and \(V_{R}=N^{\vee}+Q^{\vee}\). It follows that \(N^{\vee}\) is a direct summand of \(V_{R}\) and \(Q^{\vee}\) is a complement. Observe that since \(\overline{\pi}\varLambda_{R}\subseteq N\) by taking duals we have \(N^{\vee}\subset\overline{\pi}\varLambda_{R}\). We want to show that \(N^{\vee}\) is also a direct summand of \(\overline{\pi}\varLambda_{R}\). Let \(\pi\lambda\in\overline{\pi}\varLambda_{R}\). By the previous observation, there exist unique \(n\in N^{\vee}\) and \(q\in Q^{\vee}\) such that \(\pi\lambda=n+q\). Since \(N^{\vee}\subset\overline{\pi}\varLambda_{R}\) it follows that \(q\in Q^{\vee}\cap\overline{\pi}\varLambda_{R}\). Then \(Q^{\vee}\cap\overline{\pi}\varLambda_{R}\) is the complement of \(N^{\vee}\) in \(\overline{\pi}\varLambda_{R}\). Then \(U_{0}=l\oplus N^{\vee}\) is a direct summand of \(V_{R}\), for example, we can take as a complement the submodule \(Q+Q^{\vee}\cap\overline{\pi}\varLambda_{R}\). 
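To keep track of the ranks involved (all the modules above are direct summands of \(V_{R}\), so their ranks behave as dimensions do over a field), the construction gives
\[\operatorname{rk}N=1+6=7,\qquad\operatorname{rk}N^{\vee}=12-7=5,\qquad\operatorname{rk}U_{0}=\operatorname{rk}l+\operatorname{rk}N^{\vee}=1+5=6=\tfrac{1}{2}\operatorname{rk}V_{R},\]
consistent with \(U_{0}\) being Lagrangian.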
Let \(U\) be a Lagrangian subspace of \(V_{R}\) such that \(q(U)=l\) and consider the linear map \(\phi_{U}\) obtained as the composition of the canonical isomorphisms \[U_{0}/N^{\vee}=U_{0}/(U_{0}\cap\overline{\pi}\varLambda_{R})\xrightarrow{\sim}(U_{0}+\overline{\pi}\varLambda_{R})/\overline{\pi}\varLambda_{R}=l=(U+\overline{\pi}\varLambda_{R})/\overline{\pi}\varLambda_{R}\xrightarrow{\sim}U/(U\cap\overline{\pi}\varLambda_{R})=U/N^{\vee}.\] This induces a morphism between submodules of \(N/N^{\vee}\) \[\psi_{U}:U_{0}/N^{\vee}\longrightarrow\overline{\pi}\varLambda_{R}/N^{\vee},\qquad u\mapsto u-\phi_{U}(u).\] Observe that there is an \(R\)-linear map from the module \(\overline{\pi}\varLambda_{R}/N^{\vee}\) into the dual space \(l^{*}=\operatorname{Hom}_{R}(l,R)\) given by the alternating form \[\overline{\pi}\varLambda_{R}/N^{\vee}\longrightarrow l^{*},\qquad x\mapsto\big(v\mapsto\langle v,x^{\prime}\rangle\big),\] where \(x^{\prime}\in\overline{\pi\Lambda}_{R}\) is any lift of \(x\). Indeed, we have already observed that \(l^{\vee}\cap\overline{\pi\Lambda}_{R}=N^{\vee}\), from which it follows that the map above is well-defined and injective (in particular bijective when \(R\) is a field). It follows that we can identify the map \(\psi_{U}:l\cong U_{0}/N^{\vee}\to\overline{\pi\Lambda}_{R}/N^{\vee}\hookrightarrow l^{*}\) with an element of \(\operatorname{Hom}_{R}(l,l^{*})\). The assignment \(U\mapsto(l=q(U),\psi_{U}:l\to l^{*})\) gives the desired map of sets \(\mathcal{S}^{(1)}(R)\to\mathcal{H}^{1}(R)\). Since we have been working exclusively with projective, hence flat, modules (as direct summands of free modules), all quotients considered above commute with base change to another \(\mathbb{F}\)-algebra \(R\to R^{\prime}\). It follows that the map \(U\mapsto\psi_{U}\) commutes with base change, too, and therefore induces a morphism of projective \(\mathbb{F}\)-schemes. Our next step is to restrict the morphism constructed in the previous lemma to the closed subscheme \(S^{(1)}_{V\pi}\) of \(\mathcal{S}^{(1)}\). We have already seen in Section 5 that there is a bijection between the closed points of \(S^{(1)}_{V\pi}\) and those of the restriction of the line bundle \(\mathcal{H}^{1}\) to the variety \(R_{W}\subset\mathbb{P}(W)\). Recall that the latter has been defined in 4.16 and is the closure of some generalized Deligne-Lusztig variety for the orthogonal group. **Lemma 6.11**.: _The morphism of Lemma 6.10 induces an isomorphism from \(S^{(1)}_{V\pi}\) to the restriction of \(\mathcal{H}^{1}\) to the variety \(R_{W}\subset\mathbb{P}(W)\) studied in Section 4.3. It follows that \(S^{(1)}_{V\pi}\) is normal, irreducible and of dimension \(4\)._ Proof.: As \(\mathcal{S}^{(1)}\to\mathcal{H}^{1}\) is a morphism of projective schemes it is proper. We first show it is a monomorphism. Suppose \(U_{1},U_{2}\in\mathcal{S}^{(1)}(R)\) are both sent by \(q\) to \(l\in\mathbb{P}(W_{R})\), and that we also have \(\psi_{U_{1}}=\psi_{U_{2}}\in\operatorname{Hom}_{R}(l,l^{*})\). Then by definition of \(\psi_{U_{i}}\), the quotients \(U_{i}/N^{\vee}\) coincide with the image of \(U_{0}/N^{\vee}\) under \(\operatorname{id}-\psi_{U_{i}}\), where \(U_{0}\) is the fixed submodule constructed in the proof of Lemma 6.10. It follows that \(U_{1}/N^{\vee}=U_{2}/N^{\vee}\) as submodules of \(N/N^{\vee}\). Since \(U_{1},U_{2}\) are both contained in the preimage \(N=l+\overline{\pi\Lambda}_{R}\) of \(l\) and are Lagrangian, they both contain the orthogonal \(N^{\vee}\), hence \(U_{1}=U_{2}\). 
The morphism \(\mathcal{S}^{(1)}\to\mathcal{H}^{1}\) is then injective on \(R\)-points for any \(\mathbb{F}\)-algebra \(R\). Therefore, it is a proper monomorphism, and by Zariski's main theorem it is a closed immersion, compare [1, Cor. 12.92, Prop. 12.94]. We restrict this closed immersion to the reduced closed subscheme \(S^{(1)}_{V\pi}\hookrightarrow\mathcal{S}^{(1)}\). In the previous section, see the proof of Lemma 5.8, we have seen that this morphism induces a bijection between the \(k\)-valued points of \(S^{(1)}_{V\pi}\) and those of the restriction of \(\mathcal{H}^{1}\) to the variety \(R_{W}\), for any algebraically closed field \(k\). Since we are working with reduced schemes it follows that the closed immersion \(S^{(1)}_{V\pi}\to\mathcal{H}^{1}_{|R_{W}}\) is actually an isomorphism. The remaining properties follow from the corresponding statement for \(R_{W}\), see Lemma 4.17, and the fact that line bundles preserve normality and irreducibility, while they increase the dimension by one. The following result is an immediate consequence of the previous lemma, the definition of \(\mathcal{N}^{(1)}_{\Lambda}\) and the fact that it is dense in \(\mathcal{N}^{\leq 1}_{\Lambda}\) by Lemma 6.8. **Corollary 6.12**.: \(\mathcal{N}^{(1)}_{\Lambda}\) _is universally homeomorphic to a locally trivial line bundle over \(R_{W}\). It follows that its closure \(\mathcal{N}^{\leq 1}_{\Lambda}\) is irreducible and has dimension \(4\)._ We are now ready to prove the first part of Theorem 1.2. **Proposition 6.13**.: _Assume that the Hermitian form over \(C\) is split. Then \(\bar{\mathcal{N}}^{0}_{\mathrm{red}}\) has irreducible components of two types._ 1. _For every maximal vertex lattice_ \(\mathcal{L}\)_, there is an irreducible component_ \(\mathcal{N}_{\mathcal{L}}\)_, which is universally homeomorphic to the generalized Deligne-Lusztig variety_ \(S_{V}\) _for the symplectic group_ \(\mathrm{Sp}_{6}\) _and has dimension_ \(5\)_._ 2. _For every_ \(2\)_-modular lattice_ \(\Lambda\)_, there is an irreducible component_ \(\mathcal{N}^{\leq 1}_{\bar{\Lambda}}\)_. It contains the dense subscheme_ \(\mathcal{N}^{(1)}_{\Lambda}\)_, which is universally homeomorphic to a locally trivial line bundle over the generalized Deligne-Lusztig variety_ \(R_{W}\)_. These components have dimension_ \(4\)_._ Proof.: We have seen in the previous section that for \(k\) algebraically closed, if a lattice \(M\in\mathcal{V}(k)\) is not contained in a vertex lattice, then \(\pi\Lambda_{k}\subset^{1}M+\pi\Lambda_{k}\) for some \(2\)-modular lattice \(\Lambda\). Therefore, the union of the subsets \(\mathcal{N}_{\mathcal{L}}\) for \(\mathcal{L}\) running over the set of vertex lattices of maximal type, together with \(\mathcal{N}^{\leq 1}_{\bar{\Lambda}}\) for \(\Lambda\) running over the set of \(2\)-modular lattices, contains \(\bar{\mathcal{N}}^{0}_{\mathrm{red}}\). Again we are using the fact that a reduced scheme over \(\mathbb{F}\) is determined by its closed points. For a maximal vertex lattice \(\mathcal{L}\), we have seen that the irreducible scheme \(\mathcal{N}_{\mathcal{L}}\) contains the open and dense subscheme corresponding to the stratum \(X_{B}(s_{3}s_{2}s_{1})\sqcup X_{B}(s_{3}s_{2}s_{3}s_{1})\sqcup X_{B}(s_{3}s_{2 }s_{3}s_{1}s_{2})\) in the decomposition (4.11) of \(S_{V}\). 
Its \(k\)-points correspond to those lattices \(M\in\mathcal{V}(k)\) such that \(\Lambda(M)=\mathcal{L}\) and which are therefore not contained in \(\mathcal{N}_{\mathcal{L}^{\prime}}\) for any other maximal vertex lattice \(\mathcal{L}^{\prime}\). Similarly, observe that the irreducible subscheme \(\mathcal{N}^{(1)}_{\Lambda}\) contains an open and dense subscheme whose \(k\)-points are the lattices \(M\) such that \(\Lambda(M)=\Lambda\). This subscheme corresponds to the dense subvariety of the Deligne-Lusztig variety \(Y_{a_{0}}\) introduced in the discussion of Remark 4.21. Its \(k\)-valued points are therefore not contained in any \(\mathcal{N}^{\leq 1}_{\bar{\Lambda}^{\prime}}\) for any other \(2\)-modular lattice \(\Lambda^{\prime}\) nor in \(\mathcal{N}_{\mathcal{L}}\) for a maximal vertex lattice \(\mathcal{L}\). Since for any vertex lattice there is a \(2\)-modular lattice containing it, we need to check if for some vertex lattice \(\mathcal{L}\) the corresponding component \(\mathcal{N}_{\mathcal{L}}\) is contained in the union of the components \(\mathcal{N}^{\leq 1}_{\Lambda}\). Since the dimension of any \(\mathcal{N}_{\mathcal{L}}\) is \(5\) and that of any \(\mathcal{N}^{\leq 1}_{\bar{\Lambda}}\) is \(4\), this is not possible, hence we can conclude that these are exactly the irreducible components of \(\bar{\mathcal{N}}^{0}_{\mathrm{red}}\). The following result will be relevant in the next section for a comparison with the decomposition given by the set of admissible elements on the generalized affine Deligne-Lusztig variety \(X(\mu,b)\) associated to our problem. Recall that we have proven in Lemma 4.13 that the variety \(S_{V}\) has a stratification in terms of varieties \(S_{V^{\prime}}\) for smaller dimensional symplectic vector spaces \(V^{\prime}\). Similarly, we have seen in Lemma 4.19 that \(R_{W}\) has a stratification in terms of the generalized Deligne-Lusztig varieties \(Y_{a}\) of Definition 4.14. In particular, since in this section \(W\) is a split orthogonal space of dimension \(6\), by Lemma 4.15\(R_{W}\) has only two strata \(R_{W}=Y_{\infty}\sqcup Y_{2}\). **Corollary 6.14**.: _The irreducible components of \(\bar{\mathcal{N}}^{0}_{\mathrm{red}}\) are stratified as follows._ 1. _For_ \(\mathcal{L}\) _a vertex lattice in_ \(C\) _of type_ \(6\)_, the corresponding irreducible component_ \(\mathcal{N}_{\mathcal{L}}\) _can be decomposed as_ \[\mathcal{N}_{\mathcal{L}}=\bigsqcup_{\mathcal{L}^{\prime}\subset\mathcal{L}} \mathcal{N}^{\circ}_{\mathcal{L}^{\prime}},\] _where the union runs over the vertex lattices of_ \(C\) _contained in_ \(\mathcal{L}\) _and each_ \(\mathcal{N}^{\circ}_{\mathcal{L}^{\prime}}\) _is universally homeomorphic to the generalized Deligne-Lusztig variety_ \(S_{V^{\prime}}\)_, with_ \(V^{\prime}\) _the symplectic vector space_ \(\mathcal{L}^{\prime}/\mathcal{L}^{\prime\vee}\)_. The strata are then given by the union over the vertex lattices of a fixed type and the closure of each stratum is given by the strata corresponding to smaller type._ _._ 2. _For_ \(\Lambda\) \(a\) \(2\)_-modular lattice in_ \(C\)_, the corresponding irreducible component_ \(\mathcal{N}_{\Lambda}^{\leq 1}\) _can be decomposed as_ \[\mathcal{N}_{\Lambda}^{\leq 1}=\mathcal{N}_{\Lambda}^{(0)}\sqcup\mathcal{N}_{ \Lambda,\infty}\sqcup\mathcal{N}_{\Lambda,2},\] _and the closure of each stratum is the union of the strata preceding it. 
Here,_ \(\mathcal{N}_{\Lambda}^{(0)}\) _is defined in an analogous way as_ \(\mathcal{N}_{\Lambda}^{(1)}\) _and its only point is the_ \(p\)_-divisible group_ \(X_{\pi\Lambda}\) _associated to the lattice_ \(\pi\Lambda\otimes_{\mathcal{O}_{E}}W(\mathbb{F})\) _contained in_ \(N\)_. The other two strata are universally homeomorphic to the restriction of the line bundle over_ \(R_{W}\) _to its strata_ \(R_{W}=Y_{\infty}\sqcup Y_{2}\)_. In particular the closed subscheme_ \(\mathcal{N}_{\Lambda}^{(0)}\sqcup\mathcal{N}_{\Lambda,\infty}\) _is contained in the union of the irreducible components_ \(\mathcal{N}_{\mathcal{L}}\)_, for all vertex lattices_ \(\mathcal{L}\subset\Lambda\)_._ Proof.: The first statement follows from the universal homeomorphism \(f:\mathcal{N}_{\mathcal{L}}\longrightarrow S_{V}\) and the stratification of \(S_{V}\) proved in Lemma 4.13. Recall that the irreducible components of the smaller dimensional strata of \(S_{V}\) are indexed over the isotropic subspaces \(U\) of \(V=\mathcal{L}/\mathcal{L}^{\vee}\), and that the component corresponding to \(U\) is again a generalized Deligne-Lusztig variety \(S_{V^{\prime}}\) for \(V^{\prime}=U^{\vee}/U\). One can then observe that the isotropic subspaces of \(V\) are in bijection with the vertex lattices \(\mathcal{L}^{\prime}\) of \(C\) that are contained in \(\mathcal{L}\). For a \(2\)-modular lattice \(\Lambda\), it is clear that \(\mathcal{N}_{\Lambda}^{\leq 1}=\mathcal{N}_{\Lambda}^{(0)}\sqcup\mathcal{N}_{ \Lambda}^{(1)}\). In particular, \(\mathcal{N}_{\Lambda}^{(0)}\) is the preimage under \(f\) of \(S_{V\pi}^{(0)}\), the closed subscheme of \(S_{V\pi}^{\leq 1}\) consisting of Lagrangian submodules \(U\) such that the rank of \(U+\overline{\pi\Lambda}_{R}\) is \(6\), which is equivalent to \(U=\overline{\pi\Lambda}_{R}\). Observe that \(\pi\Lambda\otimes W(\mathbb{F})\) is a \(\tau\)-stable, self-dual lattice, hence it belongs to \(\mathcal{N}_{\mathcal{L}}(\mathbb{F})\) for some vertex lattice of type \(6\) contained in \(\Lambda\), and it corresponds to a \(p\)-divisible group \(X_{\pi\Lambda}\in\mathcal{N}_{\Lambda}(\mathbb{F})\). The open and dense subscheme \(\mathcal{N}_{\Lambda}^{(1)}\) is universally homeomorphic by the previous proposition to a line bundle over \(R_{W}\). Then the stratification follows from the decomposition of \(R_{W}\) given in Lemma 4.19. We have seen in Lemma 5.10 that the closed points of \(\mathcal{N}_{\Lambda}^{(1)}\) that are mapped by \(q\) into \(Y_{\infty}\) correspond to lattices \(M\) such that \(\Lambda(M)\) is a vertex lattice, from which the last statement follows. _Remark 6.15_.: If we compare the previous proposition with the analogous results [14, Prop. 6.6] for signature \((1,n-1)\), we see that irreducible components homeomorphic to Deligne-Lusztig varieties for the symplectic group appear in both cases. However, the existence of a second family of irreducible components, those homeomorphic to the line bundle, is a new specific feature of signature \((2,4)\). _Remark 6.16_.: Another difference to signature \((1,n-1)\) is that the intersection pattern is now quite hard to describe in terms of the Bruhat-Tits building. For example, even if we know there is a point \(M\in\mathcal{N}_{\Lambda_{1}}^{(1)}(k)\cap\mathcal{N}_{\Lambda_{2}}^{(1)}(k)\), for two distinct \(2\)-modular lattices, it is in general not true that the whole fiber over the image of \(M\) in \(R_{W}\) is contained in the intersection. 
Indeed, this would be the case if and only if \(\pi\Lambda_{1}\subset\Lambda_{2}\), which is not true in general. On the other hand, for vertex lattices, the intersection pattern can also be described in our case in terms of the Bruhat-Tits building for \(\operatorname{SU}(C)(\mathbb{Q}_{p})\) by Proposition 3.21 and the previous corollary. The intersection of two components corresponding to different types of lattices is also not easy to describe. Recall the decomposition of the closed subvariety \(Y_{\infty}\) of \(R_{W}\) given in Remark 4.20. For dimension \(6\) and split symmetric form, \(Y_{\infty}\) can be decomposed as a union of three strata \[Y_{\infty}=X_{P_{1}}(1)\sqcup X_{P_{2}}(t_{1})\sqcup X_{B}(t_{1}t_{2}).\] As we have seen in Lemma 4.19 the closed points of \(X_{B}(t_{1}t_{2})\) are those \(l\in\mathbb{P}(W)(k)\) such that \(l+\Phi(l)+\Phi^{2}(l)\) is \(\Phi\)-stable and has dimension \(3\). By the discussions in the previous chapter, see in particular the proof of Lemma 5.10, these points correspond to lattices \(M\in\mathcal{N}_{\Lambda}^{(1)}(k)\) such that \(\Lambda(M)\) is a vertex lattice of type \(6\). This means that for some vertex lattice \(\mathcal{L}\) of type \(6\), the intersection \(\mathcal{N}_{\varLambda}^{\leq 1}\cap\mathcal{N}_{\mathcal{L}}^{\circ}\) is non-empty, but by dimension reasons it does not contain the whole stratum \(\mathcal{N}_{\mathcal{L}}^{\circ}\). Similarly, the subvariety of \(Y_{\infty}\) that in Lemma 4.19 is identified with \(X_{P_{2}}(t_{1})\) corresponds to those lattices \(M\) such that \(\varLambda(M)\) is a vertex lattice of type 4. Therefore, for some vertex lattices \(\mathcal{L}\) of type 4 the intersection of the stratum \(\mathcal{N}_{\mathcal{L}}^{\circ}\) with \(\mathcal{N}_{\varLambda}^{(1)}\) is non-empty. In particular, this intersection is contained in a subscheme of \(\mathcal{N}_{\varLambda,\infty}\) universally homeomorphic to the restriction of the line bundle \(\mathcal{H}^{1}\) to the subvariety \(X_{P_{2}}(t_{1})\) of \(Y_{\infty}\). This subscheme then has dimension 2, while the stratum \(\mathcal{N}_{\mathcal{L}}^{\circ}\) has dimension 3, compare Lemma 4.13, and is therefore not contained in \(\mathcal{N}_{\varLambda}^{(1)}\). On the other hand, we have seen in the proof of Lemma 6.8 that if \(\mathcal{L}\) is a vertex lattice of type 2 such that \(\pi\varLambda\subset\mathcal{L}\) then \(\mathcal{N}_{\mathcal{L}}\) is contained in \(\mathcal{N}_{\varLambda}^{\leq 1}\). _Remark 6.17_.: As we have already observed, for a vertex lattice \(\mathcal{L}\), the stratum \(\mathcal{N}_{\mathcal{L}}^{\circ}\) corresponding to lattices \(M\) such that \(\varLambda(M)=\mathcal{L}\) is open and dense in \(\mathcal{N}_{\mathcal{L}}\). It is interesting to notice that for a 2-modular lattice \(\varLambda\), the subscheme \(\mathcal{N}_{\varLambda}^{\circ}\) is open and dense in \(\mathcal{N}_{\varLambda}^{(1)}\), but it is not dense in the whole \(\mathcal{N}_{\varLambda}\), as \(\mathcal{N}_{\varLambda}^{\leq 1}\) is closed in \(\mathcal{N}_{\varLambda}\). This, together with the fact that the intersection pattern is harder to describe, and that the stratification of Corollary 6.14 of the irreducible components does not extend to a stratification of the whole \(\bar{\mathcal{N}}_{\mathrm{red}}^{0}\) in terms of Deligne-Lusztig varieties, can all be seen as consequences of the fact that the underlying group-theoretical datum is not fully Hodge-Newton decomposable. 
### Irreducible components in the non-split case Assume now that the Hermitian form on \(C\) is non-split. We have seen in the previous section, compare Lemma 5.11, that any lattice \(M\) in \(\mathcal{V}(k)\) is contained in some 2-modular lattice \(\varLambda_{k}\) such that \(\pi\varLambda_{k}\subset^{\leq 2}M+\pi\varLambda_{k}\). We are going to see in this section that there are two families of irreducible components of \(\bar{\mathcal{N}}_{\mathrm{red}}^{0}\) and this time both are indexed over the set of 2-modular lattices. Roughly speaking, these components are characterized by the index of the inclusion \(\pi\varLambda_{k}\subset^{\leq 2}M+\pi\varLambda_{k}\). Again our strategy is to use the universal homeomorphism \(f:\mathcal{N}_{\varLambda}\longrightarrow S_{V\pi}\) and the results on closed points of the previous section to describe the irreducible components of \(\bar{\mathcal{N}}_{\mathrm{red}}^{0}\) in terms of Deligne-Lusztig varieties. Observe that for a 2-modular lattice \(\varLambda\subset C\) the corresponding subscheme \(\mathcal{N}_{\varLambda}\) contains the closed subscheme \(\mathcal{N}_{\varLambda}^{\leq 1}\) defined in the same way as in the split case. Observe that the proofs of Lemma 6.11 and its corollary do not make use of the fact that the Hermitian form over \(C\) is split, therefore one can show in the same way the following result. **Lemma 6.18**.: _Let \(\mathcal{N}_{\varLambda}^{(1)}\) be the preimage under the universal homeomorphism \(f\) of the locally closed subscheme \(S_{V\pi}^{(1)}\) defined as in the split case. Then \(\mathcal{N}_{\varLambda}^{(1)}\) is universally homeomorphic to the restriction of the line bundle \(\mathcal{H}^{1}\) to the variety \(R_{W}\). It follows that its closure \(\mathcal{N}_{\varLambda}^{\leq 1}\) is irreducible and has dimension \(4\)._ Similarly to the split case, we also consider the open subscheme \(S_{V\pi}^{(2)}\) of \(S_{V\pi}\) given by \[S_{V\pi}^{(2)}(R)\coloneqq\{U\in S_{V\pi}(R)\mid U+\overline{\pi\varLambda}_{R} \text{ is a direct summand of }V_{R}\text{ with }\operatorname{rk}(\overline{\pi\varLambda}_{R}+U)=8\}. \tag{6.19}\] We denote by \(\mathcal{N}_{\varLambda}^{(2)}\) its preimage under the universal homeomorphism \(f\). It is again an open subscheme of \(\mathcal{N}_{\varLambda}\). Recall that by Lemma 5.11 all lattices in \(\mathcal{N}_{\varLambda}(k)\) are such that \(\pi\varLambda_{k}\subset^{\leq 2}M+\pi\varLambda_{k}\), and since \(f\) is a bijection on closed points, we have that all Lagrangian subspaces \(U\) in \(S_{V\pi}(k)\) satisfy \(\dim(U+\overline{\pi\varLambda}_{k})\leq 8\). However, as we are going to see in Proposition 6.24, \(\mathcal{N}_{\varLambda}^{(2)}\) is not dense in \(\mathcal{N}_{\varLambda}\), and its closure will be one type of irreducible components of this scheme. The other irreducible component will be \(\mathcal{N}_{\varLambda}^{\leq 1}\). Again, let \(W\) be the six-dimensional \(\mathbb{F}\)-vector space given by \(\varLambda/\pi\varLambda\). Recall that it is endowed with a non-split symmetric form. In Section 4.3 we have studied the variety \(Q_{W}\subset\operatorname{Grass}_{2}(W)\). Recall that it is the closure of some generalized Deligne-Lusztig variety for the non-split orthogonal group of rank 6. Moreover, we have proven in Lemma 4.22 that there is a stratification \(Q_{W}=Z_{0}\sqcup Z_{1}\sqcup Z_{2}\). 
Since the form is non-split, the open dense subvariety \(Z_{2}\) is isomorphic to the union \(X_{B}(t_{2}t_{1})\sqcup X_{B}(\varPhi(t_{2}t_{1}))\) and has therefore two irreducible components. **Lemma 6.20**.: _The map \(S^{(2)}_{V\pi}\to\operatorname{Grass}_{2}(W)\) induced by \(q:V\to W=V/\overline{\pi\Lambda}\) is a morphism of projective schemes. It sends \(S^{(2)}_{V\pi}\) to the projective scheme \(Q_{W}\) of Section 4.3._ Proof.: The fact that the map induced by \(q\) is a morphism of projective schemes is proved in the same way as in Lemma 6.10. In order to find the image of \(S^{(2)}_{V\pi}\) under this map, it is enough to consider its closed points. Then the statement follows from Lemma 5.11. Denote by \(S^{(2)^{\circ}}_{V\pi}\) the preimage of the open subscheme \(Z_{2}\subset Q_{W}\) under the morphism of Lemma 6.20. As in the split case, our next goal is to construct a morphism from \(S^{(2)^{\circ}}_{V\pi}\) to a vector bundle over \(Z_{2}\). We have seen in Remark 4.23 that there is a morphism \(Z_{2}\cong X_{B}(t_{2}t_{1})\sqcup X_{B}(t_{3}t_{1})\to\mathcal{F}l(W)\), where \(\mathcal{F}l(W)\) is the partial flag variety parametrizing flags of the form \(U_{1}\subset U_{2}\) with \(\dim(U_{i})=i\). Consider now the maps \(\pi_{i}:\mathcal{F}l(W)\to\operatorname{Grass}_{i}(W)\) sending a flag to its term of dimension \(i\). We denote by \(\mathcal{U}_{i}\) the pullback of the universal vector bundle on \(\operatorname{Grass}_{i}(W)\) along the map \(\pi_{i}\). Then we consider the locally trivial vector bundle \(\mathcal{H}^{2}\) on \(\mathcal{F}l(W)\), of rank \(\operatorname{rk}(\mathcal{U}_{1})\cdot\operatorname{rk}(\mathcal{U}_{2}^{*})=1\cdot 2=2\), obtained as the homomorphism bundle \[\mathcal{H}^{2}\coloneqq\mathscr{H}om(\mathcal{U}_{1},\mathcal{U}_{2}^{*}). \tag{6.21}\] **Lemma 6.22**.: _The morphism \(g:S^{(2)^{\circ}}_{V\pi}\overset{q}{\longrightarrow}Z_{2}\to\mathcal{F}l(W)\) induces a universal homeomorphism from \(S^{(2)^{\circ}}_{V\pi}\) to the pullback along \(Z_{2}\to\mathcal{F}l(W)\) of the rank-\(2\) vector bundle \(\mathcal{H}^{2}\)._ Proof.: As in the proof of Lemma 6.11, we start with defining maps \(S^{(2)^{\circ}}_{V\pi}(R)\to\mathcal{H}^{2}(R)\) for any \(\mathbb{F}\)-algebra \(R\). Let \(U\in S^{(2)^{\circ}}_{V\pi}(R)\) and denote by \(l\subset T\) its image in \(\mathcal{F}l(W)\). Observe that by definition of \(g\) above we have that \(T=q(U)\), where \(q:V_{R}\to W_{R}=V_{R}/\overline{\pi\Lambda}_{R}\). We fix again a Lagrangian complement \(\mathcal{L}\) of \(\overline{\pi\Lambda}\) in \(V\), and we identify \(W\) and \(\mathcal{L}\), so that we can consider the image of \(U\) as a flag \(l\subset T\subset\mathcal{L}\). Consider \(N=T\oplus\overline{\pi\Lambda}_{R}\) and its orthogonal \(N^{\vee}=T^{\vee}\cap\overline{\pi\Lambda}_{R}\). Then, as in the proof of Lemma 6.10 one shows that the submodule \(U_{0}=T\oplus N^{\vee}\) is a Lagrangian direct summand of \(V_{R}\), and it is sent by \(q\) to \(T\in Z_{2}\) and therefore by \(g\) to \(l\subset T\). Observe again that since \(U\) is Lagrangian and it is contained in \(N\), we have \(N^{\vee}\subset U\subset N\). It follows that the preimage under \(q\) of \(l\) in \(U\) is a submodule of the form \(l_{U}\oplus N^{\vee}\) for a rank-one submodule \(l_{U}\subset U\) such that \(l_{U}\subset l\oplus\overline{\pi\Lambda}_{R}\). Since \(N=T\oplus\overline{\pi\Lambda}_{R}=U+\overline{\pi\Lambda}_{R}\) it follows that \(N^{\vee}=U\cap\overline{\pi\Lambda}_{R}\), as \(U\) and \(\overline{\pi\Lambda}_{R}\) are Lagrangian direct summands. 
Therefore, the intersection of \(U\) with the preimage in \(V_{R}\) of \(l\) satisfies \(U\cap(l\oplus\overline{\pi\Lambda}_{R})=l_{U}\oplus N^{\vee}\), and is then again a direct summand of \(V\). We have an isomorphism of submodules of \(N/N^{\vee}\) given by the second isomorphism theorem \[\phi_{U}:(l\oplus N^{\vee})/N^{\vee}\cong l\cong(l_{U}\oplus N^{\vee})/N^{\vee}=(U\cap(l\oplus\overline{\pi\Lambda}_{R}))/N^{\vee}\] which gives again a morphism of submodules of \(N/N^{\vee}\) \[\psi_{U}:l\longrightarrow\overline{\pi\Lambda}_{R}/N^{\vee},\qquad v\mapsto v-\phi_{U}(v).\] By the same arguments as in the proof of Lemma 6.10, there is an injective morphism \[\overline{\pi\Lambda}_{R}/N^{\vee}\longrightarrow T^{*},\qquad x\mapsto\big(v\mapsto\langle v,x^{\prime}\rangle\big),\] where \(x^{\prime}\in\overline{\pi\Lambda}_{R}\) is any lift of \(x\) (the pairing does not depend on this choice, since \(N^{\vee}\subset T^{\vee}\)), and which is an isomorphism when \(R\) is a field. Again, we have been working only with direct summands of the free module \(V_{R}\), hence with projective modules. Therefore, taking quotients and intersections commutes with tensoring with another \(\mathbb{F}\)-algebra \(R\to R^{\prime}\). It follows that the map \(S^{(2)^{\circ}}_{V\pi}(R)\longrightarrow\mathcal{H}^{2}(R)\) sending a Lagrangian \(U\) to \((g(U)=l\subset T,\psi_{U}:l\to T^{*})\) commutes with tensor product by \(\mathbb{F}\)-algebras and hence it gives a morphism of projective \(\mathbb{F}\)-schemes \(S^{(2)^{\circ}}_{V\pi}\rightarrow\mathcal{H}^{2}\). The morphism so constructed is compatible with the projections to \(\mathcal{F}l(W)\). It follows that there is a morphism from \(S^{(2)^{\circ}}_{V\pi}\) to the rank-\(2\) vector bundle \(\mathcal{H}^{2}_{|Z_{2}}\) obtained as the pullback \(\mathcal{H}^{2}\times_{\mathcal{F}l(W)}Z_{2}\). Last, as we have already seen, to prove that a projective morphism is a universal homeomorphism it suffices to show that it induces a bijection on the sets of \(k\)-valued points, for any algebraically closed field \(k\). Then we can conclude with Lemma 5.11. _Remark 6.23_.: Observe that the morphism above is a universal homeomorphism but not an isomorphism. Indeed, the construction of Lemma 5.11 giving the bijection on closed points involves taking the Frobenius, and therefore cannot be extended to a generic \(\mathbb{F}\)-algebra \(R\). We denote by \(\mathcal{N}^{(2)^{\circ}}_{\varLambda}\) the open subscheme of \(\mathcal{N}^{(2)}_{\varLambda}\) defined as the preimage of \(S^{(2)^{\circ}}_{V\pi}\) under the universal homeomorphism \(f\). Observe that by the previous lemma, \(\mathcal{N}^{(2)^{\circ}}_{\varLambda}\) has two irreducible components corresponding to the two irreducible components of \(Z_{2}\cong X_{B}(t_{2}t_{1})\sqcup X_{B}(t_{3}t_{1})\). We can now conclude the proof of Theorem 1.2. **Proposition 6.24**.: _Assume the Hermitian form over \(C\) is non-split. Then \(\bar{\mathcal{N}}^{0}_{\mathrm{red}}\) has irreducible components of two types._ 1. _For every_ \(2\)_-modular lattice_ \(\varLambda\) _there is an irreducible component_ \(\mathcal{N}^{\leq 1}_{\varLambda}\)_. It contains the dense subscheme_ \(\mathcal{N}^{(1)}_{\varLambda}\)_, which is universally homeomorphic to a locally trivial line bundle over the generalized Deligne-Lusztig variety_ \(R_{W}\)_._ 2. _For every_ \(2\)_-modular lattice_ \(\varLambda\) _there are two irreducible components contained in_ \(\overline{\mathcal{N}^{(2)^{\circ}}_{\varLambda}}\)_. 
_Each of them is the closure of one of the irreducible components of the open subscheme_ \(\mathcal{N}^{(2)^{\circ}}_{\varLambda}\) _and is universally homeomorphic to a rank-_\(2\) _vector bundle over the classical Deligne-Lusztig variety_ \(X_{B}(t_{2}t_{1})\)_, respectively_ \(X_{B}(\varPhi(t_{2}t_{1}))=X_{B}(t_{3}t_{1})\)_._ _It follows that \(\bar{\mathcal{N}}^{0}_{\mathrm{red}}\) is pure of dimension \(4\)._ Proof.: By Lemma 6.18 and by Lemma 6.22 we know that \(\mathcal{N}^{\leq 1}_{\varLambda}\) and the two components of the closure of \(\mathcal{N}^{(2)^{\circ}}_{\varLambda}\) are irreducible and have dimension \(4\). From this it also follows that \(\mathcal{N}^{\leq 1}_{\varLambda}\) is not contained in the closure of \(\mathcal{N}^{(2)^{\circ}}_{\varLambda}\). Moreover, we have seen in Lemma 5.12 that a point \(M\in\mathcal{N}_{\varLambda}(k)\) is either contained in \(\mathcal{N}^{(2)^{\circ}}_{\varLambda}(k)\) or there exists another \(\varLambda^{\prime}\) such that \(M\in\mathcal{N}^{\leq 1}_{\varLambda^{\prime}}(k)\). Again, since we are working with reduced schemes over the algebraically closed field \(\mathbb{F}\), this implies that \(\bar{\mathcal{N}}^{0}_{\mathrm{red}}\) is contained in the union \(\bigcup_{\varLambda}\mathcal{N}^{\leq 1}_{\varLambda}\sqcup\bigsqcup_{\varLambda}\mathcal{N}^{(2)^{\circ}}_{\varLambda}\) running over all \(2\)-modular lattices \(\varLambda\). Last, observe that each subscheme \(\mathcal{N}^{(1)}_{\varLambda}\), respectively \(\mathcal{N}^{(2)^{\circ}}_{\varLambda}\), contains the preimage of the open and dense subset of \(R_{W}\), respectively \(Z_{2}\), mentioned in Remark 4.21. In other words, it contains the open subscheme whose \(k\)-valued points correspond to lattices \(M\) such that \(\varLambda(M)=\varLambda\) and therefore are not contained in any other subscheme \(\mathcal{N}_{\varLambda^{\prime}}\). _Remark 6.25_.: Observe that in the proof of Lemma 6.5, which is the key ingredient for the proof in the split case of Theorem 1.2, we adopt the same strategy as [14, Sec. 6] to construct a map from the Rapoport-Zink space into the Grassmannian variety \(\operatorname{Grass}(V)\). It is stated in _loc.cit._ that this gives an isomorphism between \(\bar{\mathcal{N}}^{0}_{\operatorname{red}}\) and the closed subvariety \(S_{V}\) of the Grassmannian. The proof of [14, Prop. 6.7] relies on the previous result [13, Thm. 4.8], which however is only true up to a Frobenius twist, as noted by R. Chen\({}^{1}\). It seems that the Frobenius twist does not really affect the map in the ramified case, so that one still expects to have an isomorphism. This is still open and probably requires a careful analysis of the corresponding Zink's windows for displays. On the other hand, our construction of the homeomorphism of Lemma 6.22, on which the proof of Theorem 1.2 for the non-split case is based, involves the relative Frobenius morphism and hence is not an isomorphism. Footnote 1: Private communication with R. Chen, M. Rapoport and T. Wedhorn. _Remark 6.26_.: The following observations will be relevant in the next section for a comparison with the decomposition given by the set of admissible elements on the generalized affine Deligne-Lusztig variety \(X(\mu,b)\). Recall the stratification of \(R_{W}\) given in Lemma 4.19. In the non-split case there are three strata \(R_{W}=Y_{\infty}\sqcup Y_{3}\sqcup Y_{2}\). 
It follows from Lemma 6.18 that \(\mathcal{N}^{(1)}_{\varLambda}\) has a stratification \[\mathcal{N}^{(1)}_{\varLambda}=\mathcal{N}_{\varLambda,\infty}\sqcup\mathcal{N}_{\varLambda,3}\sqcup\mathcal{N}_{\varLambda,2},\] where each stratum is universally homeomorphic to a line bundle over the corresponding stratum of \(R_{W}\). Moreover, the closure of each stratum is the union of the ones preceding it. Consider a vertex lattice \(\mathcal{L}\subset\varLambda\). Since the form is non-split, \(\mathcal{L}\) has type at most \(4\). By Proposition 6.4 the corresponding closed subscheme \(\mathcal{N}_{\mathcal{L}}\) is universally homeomorphic to the generalized Deligne-Lusztig variety \(S_{V}\) for the symplectic group of rank \(\leq 4\). Moreover, it has a stratification in terms of vertex lattices of smaller type as we have already seen in Corollary 6.14. Recall that the \(k\)-valued points of \(\mathcal{N}_{\varLambda,\infty}\) correspond to lattices \(M\) such that \(\varLambda(M)\) is a vertex lattice and \(\pi\varLambda_{k}\subset^{\leq 1}M+\pi\varLambda_{k}\). We show that for every vertex lattice \(\mathcal{L}\) there is a \(2\)-modular lattice \(\varLambda\) such that \(\mathcal{N}^{\circ}_{\mathcal{L}}\subset\mathcal{N}^{\leq 1}_{\varLambda}\). If \(\varLambda(M)\) has type \(0\), then by Proposition 3.22 \(\pi^{-1}\varLambda(M)\) is a \(2\)-modular lattice and by Lemma 6.8 \(M\) belongs to \(\mathcal{N}^{\leq 1}_{\pi^{-1}\varLambda(M)}\). We have also seen in the proof of Lemma 6.8 that if \(\mathcal{L}\) has type \(2\) and contains \(\pi\varLambda\), then \(\mathcal{N}^{\circ}_{\mathcal{L}}\subset\mathcal{N}^{(1)}_{\varLambda}\). Observe that by the correspondence between the complex of vertex lattices \(\mathscr{L}\) and the Bruhat-Tits building for \(\operatorname{SU}(C)(\mathbb{Q}_{p})\), for every vertex lattice of type \(2\), there is a self-dual lattice contained in it, which is equivalent by Proposition 3.22 to the existence of a \(2\)-modular lattice \(\varLambda\) such that \(\pi\varLambda\subset\mathcal{L}\). Therefore, \(\mathcal{N}^{\circ}_{\mathcal{L}}\subset\mathcal{N}_{\varLambda,\infty}\subset\mathcal{N}^{(1)}_{\varLambda}\) for a suitable \(2\)-modular lattice \(\varLambda\). Last, arguing as in the proof of the first part of Lemma 5.12 we can see that if \(\mathcal{L}\) is a vertex lattice of type \(4\) there is a \(2\)-modular lattice \(\varLambda\) such that \(\mathcal{N}_{\mathcal{L}}\subset\mathcal{N}_{\varLambda,\infty}\subset\mathcal{N}^{(1)}_{\varLambda}\). Let \(\mathtt{V}_{d}\) and \(\mathtt{M}\) denote respectively the set of vertex lattices of type \(d\) and of \(2\)-modular lattices in \(C\). Combining the previous observations we obtain a decomposition \[\bigsqcup_{\varLambda\in\mathtt{M}}\mathcal{N}^{\leq 1}_{\varLambda}=\bigsqcup_{\mathcal{L}\in\mathtt{V}_{0}}\mathcal{N}^{\circ}_{\mathcal{L}}\sqcup\bigsqcup_{\mathcal{L}\in\mathtt{V}_{2}}\mathcal{N}^{\circ}_{\mathcal{L}}\sqcup\bigsqcup_{\mathcal{L}\in\mathtt{V}_{4}}\mathcal{N}^{\circ}_{\mathcal{L}}\sqcup\bigsqcup_{\varLambda\in\mathtt{M}}\mathcal{N}_{\varLambda,3}\sqcup\bigsqcup_{\varLambda\in\mathtt{M}}\mathcal{N}_{\varLambda,2}. \tag{6.27}\] Moreover, by the previous discussion, the decomposition on the right is actually a stratification where the closure of each stratum is the union of the strata preceding it. _Remark 6.28_.: We also have a decomposition of \(\bar{\mathcal{N}}^{0}_{\operatorname{red}}\). 
\[\bar{\mathcal{N}}^{0}_{\operatorname{red}}=\bigsqcup_{\mathcal{L}\in\mathtt{V}_{0}}\mathcal{N}^{\circ}_{\mathcal{L}}\sqcup\bigsqcup_{\mathcal{L}\in\mathtt{V}_{2}}\mathcal{N}^{\circ}_{\mathcal{L}}\sqcup\bigsqcup_{\mathcal{L}\in\mathtt{V}_{4}}\mathcal{N}^{\circ}_{\mathcal{L}}\sqcup\bigsqcup_{\varLambda\in\mathtt{M}}\mathcal{N}_{\varLambda,3}\sqcup\bigsqcup_{\varLambda\in\mathtt{M}}\mathcal{N}_{\varLambda,2}\sqcup\bigsqcup_{\varLambda\in\mathtt{M}}\mathcal{N}_{\varLambda}^{(2)^{\circ}}. \tag{6.29}\] It is enough to check that the \(k\)-valued points of \(\bar{\mathcal{N}}^{0}_{\mathrm{red}}\), for \(k\) an algebraically closed field, are all contained in the union on the right. We have seen in Lemma 5.11 that every lattice \(M\in\bar{\mathcal{N}}^{0}(k)=\bar{\mathcal{N}}^{0}_{\mathrm{red}}(k)\) is contained in a \(2\)-modular lattice \(\varLambda_{k}\) and \(\pi\varLambda_{k}\subset^{\leq 2}M+\pi\varLambda_{k}\). If \(M\in\mathcal{N}^{(2)}_{\varLambda}\) but does not belong to \(\mathcal{N}^{(2)^{\circ}}_{\varLambda}\), it either belongs to \(\mathcal{N}_{\mathcal{L}}\) for some vertex lattice \(\mathcal{L}\) or to \(\mathcal{N}^{(1)}_{\varLambda^{\prime}}\) for another \(2\)-modular lattice \(\varLambda^{\prime}\), see Lemma 5.12. If \(M\in\mathcal{N}_{\varLambda,\infty}\), then by the same argument as in Corollary 6.14 there is a vertex lattice \(\mathcal{L}\subset\varLambda\) such that \(M\in\mathcal{N}_{\mathcal{L}}\). It would be interesting to give a description of the closure of \(\mathcal{N}^{(2)^{\circ}}_{\varLambda}\) and hence to prove or disprove that (6.29) is a stratification. This is also tightly related to the problem of describing the intersection pattern between components of type \(\mathcal{N}^{(2)^{\circ}}_{\varLambda}\) and components of type \(\mathcal{N}^{\leq 1}_{\varLambda}\). ## 7. Affine Deligne-Lusztig varieties ### Reminder on affine Deligne-Lusztig varieties Affine Deligne-Lusztig varieties were first introduced in [10]. In this section we collect some definitions and results before we present the group-theoretical datum associated to our problem. We follow the exposition in [11, Sec. 2] and [12, Sec. 1-3], and refer there for further details. Let \(F\) be a non-Archimedean local field, and denote by \(\tilde{F}\) the completion of its maximal unramified extension in a fixed algebraic closure \(\bar{F}\). The field \(F\) can have the same characteristic as its residue field, in which case it is a field of formal Laurent series \(F=\mathbb{F}_{q}(\!(t)\!)\), or it can have characteristic zero, _i.e._ it is a finite extension of \(\mathbb{Q}_{p}\). Fix a connected reductive group \(G\) over \(F\). We denote by \(\sigma\) both the Frobenius map on \(\tilde{F}\) and the map induced on \(G(\tilde{F})\). Let \(I\) be a \(\sigma\)-invariant Iwahori subgroup of \(G(\tilde{F})\). Let \(T\) be a maximal torus in \(G\) such that the alcove corresponding to \(I\) lies in the apartment of the Bruhat-Tits building attached to \(T\). To this data we attach the extended affine Weyl group \(\widetilde{W}=N_{T}(\tilde{F})/(T(\tilde{F})\cap I)\), where \(N_{T}\) is the normalizer of \(T\) in \(G\). In the following we often write \(w\in\widetilde{W}\) for both an element in the extended affine Weyl group and a representative in \(N_{T}(\tilde{F})\). 
Recall that fixing a special vertex in the base alcove gives a decomposition \(\widetilde{W}=X_{*}(T)_{\varGamma}\rtimes W_{0}\), where \(W_{0}=N_{T}(\tilde{F})/T(\tilde{F})\) is the finite Weyl group, \(X_{*}(T)\) is the coweight lattice of \(T\), and \(\varGamma\) denotes the Galois group \(\mathrm{Gal}(\tilde{F}/F^{\mathrm{un}})\). For a cocharacter \(\mu^{\vee}\) we denote by \(t^{\mu^{\vee}}\) the corresponding element in the extended affine Weyl group. The choice of the base alcove determines also a set \(\widetilde{\mathrm{S}}\) of simple affine reflections generating the affine Weyl group \(W_{a}\subset\widetilde{W}\). Both \(\widetilde{W}\) and \(\widetilde{\mathrm{S}}\) are equipped with an action of \(\sigma\). Denote by \(\varOmega\) the set of elements of \(\widetilde{W}\) normalizing the base alcove. Recall that the affine Weyl group \(W_{a}\) is an infinite Coxeter group. There is a decomposition \(\widetilde{W}=W_{a}\rtimes\varOmega\), which allows us to extend to \(\widetilde{W}\) the notion of length on \(W_{a}\) by setting to zero the length of any element in \(\varOmega\). Similarly, the Bruhat order can be extended from \(W_{a}\) to \(\widetilde{W}\) by setting \(w\tau\leq w^{\prime}\tau^{\prime}\) if and only if \(\tau=\tau^{\prime}\in\varOmega\) and \(w\leq w^{\prime}\) in \(W_{a}\). For any subset \(J\subset\widetilde{\mathrm{S}}\) we denote by \(W_{J}\) the subgroup of \(W_{a}\) generated by the reflections in \(J\) and by \({}^{J}\widetilde{W}\) the set of minimal length representatives for the cosets \(W_{J}\backslash\widetilde{W}\). Two elements \(b,b^{\prime}\in G(\tilde{F})\) are \(\sigma\)-conjugate if there exists \(g\in G(\tilde{F})\) such that \(b=g^{-1}b^{\prime}\sigma(g)\). Denote by \(B(G)\) the set of \(\sigma\)-conjugacy classes in \(G(\tilde{F})\). A class \([b]\in B(G)\) is completely determined by its Newton point \(\nu_{b}\in X_{*}(T)_{\operatorname{Q,dom}}^{\varGamma}\), and its image under the Kottwitz map \(\kappa:B(G)\to\pi_{1}(G)_{\varGamma}\), compare [11, 12] and [13]. Here the fundamental group \(\pi_{1}(G)\) is defined as the quotient of \(X_{*}(T)\) by the coroot lattice. We consider the restriction of \(\sigma\)-conjugation to \(\widetilde{W}\) and study the set of conjugacy classes \(B(\widetilde{W})\) with respect to this restricted action. It is proved in [11, Sec. 3] that the inclusion \(N_{T}\hookrightarrow G\) gives a surjection from the set \(B(\widetilde{W})\) of \(\sigma\)-conjugacy classes of \(\widetilde{W}\) to the set of \(\sigma\)-conjugacy classes \(B(G)\). This map becomes a bijection if we restrict it to classes in \(B(\widetilde{W})\) containing a \(\sigma\)-straight element of \(\widetilde{W}\), compare [11, Thm. 3.3]. Recall that an element \(w\in\widetilde{W}\) is said to be \(\sigma\)-straight if it satisfies \(\ell((w\sigma)^{n})=n\ell(w)\) for all integers \(n\). By [10, Lem. 1.1], this is equivalent to \(\ell(w)=\langle\nu_{w},2\rho\rangle\), where \(\rho\) denotes half the sum of all positive roots and \(\nu_{w}\) is the Newton point of \(w\). An example of \(\sigma\)-straight elements is given by \(\sigma\)-Coxeter elements, that are elements in \(\widetilde{W}\) given by the product of one reflection for each \(\sigma\)-orbit in \(\widetilde{\mathbb{S}}\), compare [11, Prop. 3.1]. For any \(w\in\widetilde{W}\) there is an integer \(n\) such that \[(w\sigma)^{n}=w\sigma(w)\cdots\sigma^{n}(w)=t^{\mu^{\vee}} \tag{7.1}\] for some cocharacter \(\mu^{\vee}\). 
Then the Newton point of \(w\) is the unique dominant element \(\nu_{w}\in X_{*}(T)\otimes\mathbb{Q}\) lying in the \(W_{0}\)-orbit of \(\frac{1}{n}\mu^{\vee}\). One can see that this does not depend on the choice of the exponent \(n\). Moreover, by [11, Lem 1.2] there is a bijection between the fundamental group \(\pi_{1}(G)_{\varGamma}\) and the subgroup of length zero elements \(\varOmega\). With this bijection one can identify the Kottwitz map on \(B(\widetilde{W})\) with the projection \(\widetilde{W}\to\varOmega\). For \(b\in G(\tilde{F})\) and \(w\in\widetilde{W}\) the corresponding _affine Deligne-Lusztig variety_ is defined as \[X_{w}(b)=\{g\in G(\tilde{F})/I\mid g^{-1}b\sigma(g)\in IwI\},\] where we are identifying the element \(w\) in the extended affine Weyl group with a representative in \(N_{T}(\tilde{F})\). In the following, we are going to study some so-called _fine affine Deligne-Lusztig varieties_, compare [1, Sec. 3.4]. First, let \(\mu^{\vee}\) be a minuscule coweight in \(X_{*}(T)_{\varGamma}\), the _admissible set_ associated to \(\mu^{\vee}\) is \[\operatorname{Adm}(\mu^{\vee})=\{w\in\widetilde{W}\mid w\leq t^{x(\mu^{\vee} )}\text{ for some }x\in W_{0}\}.\] Fix a subset \(J\subset\widetilde{\mathbb{S}}\) and denote by \(P_{J}\) the corresponding parahoric subgroup of \(G(\tilde{F})\). For \(w\in{}^{J}\widetilde{W}\) and \(b\in G(\tilde{F})\) the associated fine affine Deligne-Lusztig variety is \[X_{J,w}(b)=\{g\in G(\tilde{F})/P_{J}\mid g^{-1}b\sigma(g)\in P_{J}\cdot_{ \sigma}IwI\}.\] In other words, it is the image of the affine Deligne-Lusztig variety for \(I,b,w\) under the map \(G/I\to G/P_{J}\). For a minuscule cocharacter \(\mu\) we also consider the union \[X(\mu,b)_{J}=\bigcup_{w\in\operatorname{Adm}(\mu)}\{g\in G(\tilde{F})/P_{J} \mid g^{-1}b\sigma(g)\in P_{J}wP_{J}\}.\] The varieties appearing in the union above are called _coarse_ affine Deligne-Lusztig varieties, and it is proved in [1, Thm. 4.1.2] that \(X(\mu,b)_{J}\) can be actually written as a union of _fine_ affine Deligne-Lusztig varieties as follows \[X(\mu,b)_{J}=\bigsqcup_{w\in\operatorname{Adm}(\mu)\cap{}^{J}\widetilde{W}} \{g\in G(\tilde{F})/P_{J}\mid g^{-1}b\sigma(g)\in P_{J}\cdot_{\sigma}IwI\}= \bigsqcup_{w\in\operatorname{Adm}(\mu)\cap{}^{J}\widetilde{W}}X_{J,w}(b). \tag{7.2}\] The reason why we are interested in \(X(\mu,b)_{J}\) is that it naturally arises in the study of Rapoport-Zink spaces. One can associate to a Rapoport-Zink space a quadruple \((G,\mu,b,J)\), as explained in [11, Def. 3.8] and the corresponding union of affine Deligne-Lusztig varieties \(X(\mu,b)_{J}\) over \(F\) a mixed characteristic field. If the axioms of [11, Sec. 5] are satisfied, there is an isomorphism of perfect schemes \[\mathcal{N}^{0,\mathrm{perf}}\cong X(\mu,\mathrm{id})_{J},\] compare [11, Prop. 3.11] and [12, Sec.7]. The axioms of [11, Sec. 5] have been shown to hold for ramified unitary groups in odd dimension in [11, Prop. 0.4], but are still to be proven in even dimension. In any case, by [10] there is in general a bijection between the \(\mathbb{F}\)-valued points of \(\mathcal{N}^{0}\) and those of the corresponding \(X(\mu,b)_{J}\), again defined over a field of mixed characteristic. Before describing the group theoretical datum attached to our specific problem, we recall some more general results that we need in the sequel. We start with the reduction method a la Deligne and Lusztig as stated and proved in [10]. **Theorem 7.3**.: _[_10_, Prop. 
3.3.1]_ _Let \(w\in\widetilde{W}\), \(s\in\widetilde{\mathbb{S}}\) and \(b\in G(\breve{F})\) and assume \(F\) has equal characteristic._ 1. _If_ \(\ell(sw\sigma(s))=\ell(w)\) _then there is a universal homeomorphism_ \(X_{w}(b)\to X_{sw\sigma(s)}(b)\)_._ 2. _If_ \(\ell(sw\sigma(s))=\ell(w)-2\)_, then_ \(X_{w}(b)=X_{1}\sqcup X_{2}\)_, with_ \(X_{1}\) _open and universally homeomorphic to a Zariski-locally trivial_ \(\mathbb{G}_{m}\)_-bundle over_ \(X_{sw}(b)\)_, while_ \(X_{2}\) _is closed and universally homeomorphic to a Zariski-locally trivial_ \(\mathbb{A}^{1}\)_-bundle over_ \(X_{sw\sigma(s)}(b)\)_._ _If \(F\) has mixed characteristic the statements above still hold, provided one replaces \(\mathbb{G}_{m}\) and \(\mathbb{A}^{1}\) with their perfections._ Applying the reduction method repeatedly delivers a decomposition of an affine Deligne-Lusztig variety \(X_{w}(b)\) into pieces homeomorphic to sequences of one-dimensional fiber bundles over affine Deligne-Lusztig varieties for elements in the Weyl group that have minimal length in their \(\sigma\)-conjugacy class. Recall that for an element \(x\) of minimal length in its \(\sigma\)-conjugacy class the affine Deligne-Lusztig variety \(X_{x}(b)\) is non-empty if and only if \(x\in[b]\), see [12, Thm. 3.2]. For an element \(w\) in the affine Weyl group \(W_{a}\), we denote by \(\operatorname{supp}(w)\) the support of \(w\), _i.e._ the subset of affine reflections in \(\widetilde{\mathbb{S}}\) appearing in a reduced expression for \(w\). For \(w\tau\in\widetilde{W}=W_{a}\rtimes\Omega\) following [11, Sec. 4.3] we define the \(\sigma\)-support as \[\operatorname{supp}_{\sigma}(w\tau)=\bigcup_{n\in\mathbb{Z}}(\tau\sigma)^{n}( \operatorname{supp}(w)).\] For \(w\in\widetilde{W}\) and a subset \(J\) of the affine reflections \(\widetilde{\mathbb{S}}\) we define \(I(w,J,\sigma)\) to be the maximal subset of \(J\) that is stable under \(\operatorname{Ad}(w)\sigma\), where \(\operatorname{Ad}(w)\) is just the conjugation action of \(w\), compare [1, 3.1]. **Theorem 7.4**.: _[_10_, Thm. 4.1.2]_ _For any \(J\subset\widetilde{\mathbb{S}}\) and \(w\in{}^{J}\widetilde{W}\) and \(b\in G(\breve{F})\), the fine affine Deligne-Lusztig variety satisfies_ \[X_{J,w}(b)\cong\{gP_{I(w,J,\sigma)}\mid g^{-1}b\sigma(g)\in P_{I(w,J,\sigma)} wP_{I(w,J,\sigma)}\}.\] For \(b\in G(\breve{F})\) we consider the \(\sigma\)-centralizer \(\mathbb{J}_{b}=\{g\in G(\breve{F})\mid g^{-1}b\sigma(g)=b\}\), and its action on the affine Deligne-Lusztig variety \(X_{w}(b)\). Combining [10, Thm 4.1.1-2] we obtain the following result. **Theorem 7.5**.: _Let \(J\subset\widetilde{\mathbb{S}}\) and \(w\in{}^{J}\widetilde{W}\cap W_{a}\tau\) be such that \(W_{\operatorname{supp}_{\sigma}(w)\cup I(w,J,\sigma)}\) is finite. Then_ \[X_{J,w}(\tau)\cong\bigcup_{i\in\mathbb{J}_{\tau}/(\mathbb{J}_{\tau}\cap P_{ \operatorname{supp}_{\sigma}(w)\cup I(w,J,\sigma)})}iX_{I(w,J,\sigma)}(w),\] _where \(X_{I(w,J,\sigma)}(w)=\{g\in P_{\operatorname{supp}_{\sigma}(w)\cup I(w,J, \sigma)}/P_{J}\mid g^{-1}\tau\sigma(g)\in P_{I(w,J,\sigma)}wP_{I(w,J,\sigma)}\}\) is a classical Deligne-Lusztig variety in the partial flag variety \(P_{\operatorname{supp}_{\sigma}(w)\cup I(w,J,\sigma)}/P_{I(w,J,\sigma)}\)._ We conclude with two simple results on the non-emptiness pattern, which are surely known to experts, but for which we could not find any reference in the literature. **Proposition 7.6**.: _Let \(b\) be a basic element, that is \(b\in[\tau]\) for a length-zero element \(\tau\in\Omega\). 
Let \(w\tau\in\widetilde{W}=W_{a}\rtimes\Omega\) be a minimal length element in its \(\sigma\)-conjugacy class. Then \(X_{w\tau}(b)\neq\emptyset\) if and only if \(\operatorname{supp}_{\sigma}(w\tau)\) generates a finite subgroup of \(W_{a}\)._ Proof.: By [14, Thm. 2.3] there is a set \(J\subset\widetilde{\mathbb{S}}\), a \(\sigma\)-straight element \(x\in{}^{J}\widetilde{W}^{\sigma(J)}\), and an element \(u\) with \(\sigma\)-support in the finite subgroup \(\widetilde{W}_{J}\) such that \(w\tau=ux\) and \(\operatorname{Ad}(x)\sigma(J)=J\). By [14, Thm. 3.2]\(X_{ux}(b)\) is non-empty if and only if \(X_{x}(b)\) is non-empty. Since \(b\) is \(\sigma\)-conjugate to \(\tau\), \(X_{x}(b)\) is non-empty if and only if the same is true for \(X_{x}(\tau)\). For a \(\sigma\)-straight \(x\) the affine Deligne-Lusztig variety \(X_{x}(\tau)\) is non-empty, if and only if \(x\) is \(\sigma\)-conjugate to \(\tau\), compare [14, Prop. 4.5]. Since both \(x\) and \(\tau\) are \(\sigma\)-straight and \(\sigma\)-conjugate, \(x\) has length zero, too. As we have seen, the set of length-zero elements \(\varOmega\) is in bijection with the image of the Kottwitz map, which can then be identified with the projection \(\widetilde{W}\to\varOmega\). Since \(\sigma\)-conjugate elements have the same image under the Kottwitz map, if \(x\) and \(\tau\) are \(\sigma\)-conjugate and both have length zero they are actually equal. It follows that \(X_{w\tau}(\tau)\) is non-empty if and only if \(w\tau=u\tau\). This means that \(\operatorname{supp}_{\sigma}(w\tau)=\operatorname{supp}_{\sigma}(u\tau)\), which is finite by definition of \(u\) and \(x=\tau\). Assume \(\operatorname{supp}_{\sigma}(w\tau)\) is finite. Since the elements \((\tau\sigma)^{i}(w)\) belong to the finite subgroup of \(\widetilde{W}\) generated by \(\operatorname{supp}_{\sigma}(w)\), there is an integer \(n\) such that \(w(\tau\sigma)(w)\cdots(\tau\sigma)^{n}(w)=1\). It is easy to see that \((w\tau\sigma)^{n}=w(\tau\sigma)(w)\cdots(\tau\sigma)^{n}(w)\tau^{n}=\tau^{n}\). By the formula (7.1) above to compute the Newton point of elements of \(\widetilde{W}\), it follows that \(w\tau\) and \(\tau\) have the same Newton point. As we have seen, the Kottwitz map can be identified with the projection \(\widetilde{W}\to\varOmega\), from which it follows that \(\kappa(w\tau)=\kappa(\tau)\) and therefore \(w\tau\) and \(\tau\) are \(\sigma\)-conjugate. **Lemma 7.7**.: _If \(\ell(w)\leq 2\langle\nu_{w},\rho\rangle+1\) then \(w\) has minimal length in its \(\sigma\)-conjugacy class._ Proof.: Observe that \(\sigma\)-conjugation preserves the parity of the length. If for some \(v\in\widetilde{W}\) we have \(\ell(vw\sigma(v^{-1}))<\ell(w)\) it follows that \(\ell(vw\sigma(v)^{-1})\leq\ell(w)-2\leq 2\langle\nu_{w},\rho\rangle-1\) which is smaller than the length of a \(\sigma\)-straight element with same Newton point. By [14, Thm. 2.3] this is a contradiction. ### The group-theoretical datum associated to \(\operatorname{GU}(2,4)\) over a ramified prime As we have mentioned above, we can associate to our Rapoport-Zink space a group-theoretical datum \((W_{a},J,\sigma,\mu)\) and study the corresponding union of fine affine Deligne-Lusztig varieties \(X(\mu,b)_{J}\). As explained in [14, Ex. 2.2] the extended affine Weyl group associated to the ramified unitary group in even dimension \(2m\) is the same in both split and non-split case. 
It has affine Dynkin diagram of type \(BC_{m}\) (or \({}^{2}BC_{m}\) in the non-split case, which differs only in the orientation, a difference irrelevant for the Weyl group), as depicted below. By looking at the Dynkin diagram we immediately see that the subsets of reflections \(\widetilde{\mathbb{S}}\setminus\{s_{0}\}\) and \(\widetilde{\mathbb{S}}\setminus\{s_{1}\}\) generate two finite Weyl groups of type \(C_{m}\), while the reflections in \(\widetilde{\mathbb{S}}\setminus\{s_{m}\}\) generate a finite group of type \(D_{m}\). Following [14, Ex. 2.2] we observe that there is exactly one non-trivial symmetry of the Dynkin diagram, namely the transformation given by exchanging \(s_{0}\) and \(s_{1}\). It follows that the length-zero subset \(\varOmega\) consists of exactly two elements. The action of \(\sigma\) on the extended affine Weyl group is then given by the adjoint action of one of these two elements. If the form is split, the action of \(\sigma\) is trivial; if the form is non-split, the Frobenius is given by the action of the non-trivial element \(\tau\in\varOmega\). The choice of the subset \(J\) of affine simple reflections is determined by the level structure. As in [15] the level structure for our Rapoport-Zink space is given by the parahoric subgroup stabilizing a lattice in the vector space \(C\) which is self-dual with respect to the Hermitian form. By [14, Sec. 7.4] this parahoric level structure corresponds to the subset \(J=\{s_{0},s_{1},\dots,s_{m-1}\}\). Last, the cocharacter \(\mu^{\vee}\) corresponds to the choice of the signature. In our case \(\mu^{\vee}\) is then the fundamental coweight \(\omega_{2}^{\vee}\) corresponding to the simple root \(\alpha_{2}\). Observe that for \(m\geq 3\) the data \((BC_{m},\widetilde{\mathrm{S}}\setminus\{s_{m}\},1,\omega_{2}^{\vee})\) and \(({}^{2}BC_{m},\widetilde{\mathrm{S}}\setminus\{s_{m}\},\tau,\omega_{2}^{\vee})\) are not among those appearing in [1, Sec. 3]. This means that the corresponding union \(X(\omega_{2}^{\vee},1)_{J}\) of affine Deligne-Lusztig varieties is not fully Hodge-Newton decomposable, which is the main source of difference with the case studied in [14]. For example, we cannot expect a decomposition of \(X(\omega_{2}^{\vee},1)_{J}\) as a disjoint union of classical Deligne-Lusztig varieties, which matches our results in Section 6. #### 7.2.1. The split case Consider the group-theoretical datum \((BC_{3},J=\{0,1,2\},\mathrm{id},\omega_{2}^{\vee})\) associated to the group \(\mathrm{GU}(2,4)\) ramified over \(p\) and such that the Hermitian form on \(C\) is split. We first need to compute the admissible set and its representatives in \({}^{J}\widetilde{W}\). Let \(\mathrm{Adm}(\omega_{2}^{\vee})^{J}=\mathrm{Adm}(\omega_{2}^{\vee})\cap{}^{J}\widetilde{W}\) denote the set of minimal length representatives in \({}^{J}\widetilde{W}\) of the admissible elements. This set can be easily computed with the mathematical software SageMath [15]; the code can be found in Appendix C. 
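To give an idea of how such a computation can be organised, here is a rough SageMath-style sketch; it is only indicative, and the code actually used is the one in Appendix C. It assumes that the reduced words of the translation elements \(t^{x(\omega_{2}^{\vee})}\), \(x\in W_{0}\), are already known (this is the part carried out in Appendix C), that one may work inside the affine Weyl group itself (all admissible elements in the list below lie in \(W_{a}\)), and that SageMath's node labelling for affine type \(C_{3}\), which has the same Coxeter group as affine \(BC_{3}\), is matched to the conventions above.

```python
# Sketch only: the reduced words of the translations t^{x(omega_2)} must be
# supplied, and the node labelling may have to be permuted to agree with the
# conventions of this section.
def admissible_min_reps(translation_words, J=(0, 1, 2)):
    W = WeylGroup(['C', 3, 1], prefix='s')   # same Coxeter group as affine BC_3
    translations = [W.from_reduced_word(list(word)) for word in translation_words]
    # Adm(mu): all elements below some translation t^{x(mu)} in Bruhat order
    adm = set()
    for t in translations:
        adm.update(W.bruhat_interval(W.one(), t))
    # keep the minimal length representatives in ^J W, i.e. those w with
    # l(s_j w) > l(w) for every j in J
    def minimal_in_coset(w):
        return all((W.simple_reflection(j) * w).length() > w.length() for j in J)
    return sorted((w for w in adm if minimal_in_coset(w)), key=lambda w: w.length())
```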
We obtain \[\mathrm{Adm}(\omega_{2}^{\vee})^{J}= \{1,s_{3},s_{3}s_{2},s_{3}s_{2}s_{1},s_{3}s_{2}s_{3},s_{3}s_{2}s _{3}s_{1},s_{3}s_{2}s_{3}s_{1}s_{2},s_{3}s_{2}s_{0},s_{3}s_{2}s_{1}s_{0},\] \[s_{3}s_{2}s_{3}s_{0},s_{3}s_{2}s_{1}s_{0}s_{2},s_{3}s_{2}s_{3}s_{ 0}s_{2},s_{3}s_{2}s_{3}s_{1}s_{0},s_{3}s_{2}s_{3}s_{0}s_{2}s_{1},s_{3}s_{2}s_{3 }s_{1}s_{0}s_{2},\] \[s_{3}s_{2}s_{3}s_{1}s_{0}s_{2}s_{1},s_{3}s_{2}s_{3}s_{1}s_{2}s_{ 0},s_{3}s_{2}s_{3}s_{1}s_{0}s_{2}s_{0},s_{3}s_{2}s_{3}s_{1}s_{0}s_{2}s_{1}s_{0}\}.\] In the following proposition we show that the \(J\)-admissible elements can be grouped into three families, corresponding to three different behaviors of the affine Deligne-Lusztig variety \(X_{J,w}(1)\). **Proposition 7.8**.: _Consider the group theoretical datum \((BC_{3},J=\{0,1,2\},\mathrm{id},\omega_{2}^{\vee})\) associated to ramified \(\mathrm{GU}(2,4)\). Then \(w\in\mathrm{Adm}(\omega_{2}^{\vee})^{J}\) satisfies one of the following properties._ 1. \(w\) _has finite support in a subgroup of type_ \(C_{r}\) _of_ \(W_{a}\) _with_ \(r\leq 3\)_. In this case the affine Deligne-Lusztig variety_ \(X_{J,w}(1)\) _has irreducible components isomorphic to (generalized) Deligne-Lusztig varieties for the symplectic group_ \(\mathrm{Sp}_{2r}\)_. The set of irreducible components is in bijection with the set of vertex lattices of type_ \(2r\)_._ 2. \(w\) _has full_ \(\sigma\)_-support and can be reduced by one step of Deligne and Lusztig's reduction method. In this case the affine Deligne-Lusztig variety_ \(X_{J,w}(1)\) _has irreducible components universally homeomorphic to_ \(\mathbb{A}^{1}\)_-bundles over a classical Deligne-Lusztig variety for_ \(\mathrm{SO}_{6}\)_. The set of irreducible components of_ \(X_{J,w}(1)\) _is in bijection with the set of_ \(2\)_-modular lattices._ 3. \(w\) _has full_ \(\sigma\)_-support and minimal length in its_ \(\sigma\)_-conjugacy class, in which case_ \(X_{J,w}(1)\) _is empty._ Proof.: (i) We first consider the elements of \(\mathrm{Adm}(\omega_{2}^{\vee})^{J}\) with finite \(\sigma\)-support \[1,s_{3},s_{3}s_{2},s_{3}s_{2}s_{3},s_{3}s_{2}s_{1},s_{3}s_{2}s_{0},s_{3}s_{2}s _{3}s_{1},s_{3}s_{2}s_{3}s_{0},s_{3}s_{2}s_{3}s_{1}s_{2},s_{3}s_{2}s_{3}s_{0}s_ {2}.\] It is clear that their support generates a subgroup of type \(C_{r}\), compare also the Dynkin diagram above. As we have recalled in Theorem 7.5 the corresponding fine affine Deligne-Lusztig variety \(X_{J,w}(1)\) can be decomposed as a disjoint union of classical Deligne-Lusztig varieties for the group \(\mathrm{Sp}_{2r}\). Since the support of \(w\) generates the Weyl group \(C_{r}\), it satisfies the hypothesis of Theorem 4.2 and hence the corresponding (classical) Deligne-Lusztig variety is irreducible. By Theorem 7.5 the index set of the disjoint decomposition of \(X_{J,w}(1)\) depends on the set of reflections \(\mathrm{supp}_{\sigma}(w)\cup I(w,\sigma,J)\). If \(w=1\) the set \(\mathrm{supp}_{\sigma}(w)\cup I(1,\sigma,J)\) coincides with \(J\). If \(w=s_{3}\) it is \(\{s_{0},s_{1},s_{3}\}\). If the reflection \(s_{2}\) appears in a reduced expression of \(w\), then \(I(w,\sigma,J)\) is empty. Observe that the subset \(\mathrm{Adm}(\omega_{1}^{\vee})^{J}=\{1,s_{3},s_{3}s_{2},s_{3}s_{2}s_{1},s_{3}s_{ 2}s_{0}\}\), which corresponds to the admissible set for \(\mathrm{GU}(1,5)\), produces the same collection of sets \(\mathrm{supp}_{\sigma}(w)\cup I(w,\sigma,J)\). These were studied in [1, Ex. 7.4.1]. 
In particular, it is proved there that the index set \(\mathbbm{J}_{1}/\mathbbm{J}_{1}\cap P_{\mathrm{supp}_{\sigma}(w)\cup I(w,J, \sigma)}\) in the decomposition of \(X_{J,w}(1)\) is in bijection with the set of vertex lattices of type \(0,2,4\) or \(6\), respectively. These observations are summarized in the following table. \begin{tabular}{|l|l|l|} \hline Elements & \(\operatorname{supp}_{\sigma}(w)\cup I(w,\sigma,J)\) & Type \\ \hline 1 & \(J=\{s_{0},s_{1},s_{2}\}\) & 0 \\ \hline \(s_{3}\) & \(\{s_{0},s_{1},s_{3}\}\) & 2 \\ \hline \(s_{3}s_{2},\quad s_{3}s_{2}s_{3}\) & \(\{s_{2},s_{3}\}\) & 4 \\ \hline \(s_{3}s_{2}s_{1},\quad s_{3}s_{2}s_{3}s_{1},\quad s_{3}s_{2}s_{3}s_{1}s_{2}\) & \(\{s_{1},s_{2},s_{3}\}\) & 6 \\ \hline \(s_{3}s_{2}s_{0},\quad s_{3}s_{2}s_{3}s_{0},\quad s_{3}s_{2}s_{3}s_{0}s_{2}\) & \(\{s_{0},s_{2},s_{3}\}\) & 6 \\ \hline \end{tabular} Since \(\sigma=1\) we actually have two \(\sigma\)-stable subgroups of type \(C_{3}\) in \(W_{a}\), one is generated by \(\{s_{1},s_{2},s_{3}\}\) and the other by \(\{s_{0},s_{2},s_{3}\}\). This corresponds to the fact that if the form is split there are two orbits of self-dual lattices in \(C\), as remarked in [10, Ex. 7.4], and explains why the elements above come in pairs. Observe that the elements appearing in the list above are exactly the same elements in the stratification (4.11) of \(S_{V}\), and consequently in the stratification of the irreducible components of type \(\mathcal{N}_{\mathcal{L}}\) of Proposition 6.13. (ii) There is only one element in \(\operatorname{Adm}(\omega_{2}^{\vee})^{J}\) with full support that can be reduced via Deligne and Lusztig's method. Indeed, by conjugating \(s_{3}s_{2}s_{3}s_{1}s_{0}\) with \(s_{3}\) we obtain the shorter element \(s_{2}s_{1}s_{0}\) that is a Coxeter element for the finite subgroup of type \(D_{3}\) generated by \(\{s_{0},s_{1},s_{2}\}\). The other element produced by the reduction method is \(s_{3}s_{2}s_{1}s_{0}\) which is a \(\sigma\)-Coxeter element for \(W_{a}\), and it is therefore \(\sigma\)-straight with non-basic Newton point \((\frac{1}{2},\frac{1}{2},0)\). It follows that \(X_{s_{3}w}(1)\) is empty. By Theorem 7.4 the fine affine Deligne-Lusztig variety \(X_{J,w}(b)\) is isomorphic to the affine Deligne-Lusztig variety \(X_{w}(1)\), as \(I(J,\sigma,w)=\emptyset\). By the reduction method and the previous observations, the latter is universally homeomorphic to a line bundle over the affine Deligne-Lusztig variety \(X_{s_{3}ws_{3}}(1)\). Using Theorem 7.5, we obtain the disjoint decomposition of \(X_{s_{3}ws_{3}}(1)\) into classical Deligne-Lusztig varieties for \(\operatorname{SO}_{6}\). Again, since \(s_{3}ws_{3}=s_{2}s_{1}s_{0}\) has full support in the finite subgroup of type \(D_{3}\), the classical Deligne-Lusztig varieties \(X_{B}(s_{3}ws_{3})\) are irreducible. It follows that they are the irreducible components of \(X_{s_{3}ws_{3}}(1)\). Last, observe that \(\operatorname{supp}(s_{3}ws_{3})\cup I(s_{3}ws_{3},J,\sigma)=\{s_{0},s_{1},s_{ 2}\}=J\). We have seen in the proof of (i) that in this case the index set \(\mathbb{J}_{1}/\mathbb{J}_{1}\cap P_{\operatorname{supp}_{\sigma}(w)\cup I(w,J,\sigma)}\) of the decomposition of \(X_{s_{3}ws_{3}}(1)\) into classical Deligne-Lusztig varieties is in bijection with the set of vertex lattices of type \(0\). By Proposition 3.22 these are in bijection with the set of \(2\)-modular lattices. 
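The \(\sigma\)-conjugation used in part (ii) can be spelled out explicitly: here \(\sigma=\mathrm{id}\), and \(s_{3}\) commutes with both \(s_{0}\) and \(s_{1}\), since in the affine diagram \(s_{3}\) is joined only to \(s_{2}\). Hence

\[s_{3}\,(s_{3}s_{2}s_{3}s_{1}s_{0})\,s_{3}=s_{2}s_{3}s_{1}s_{0}s_{3}=s_{2}s_{3}s_{1}s_{3}s_{0}=s_{2}s_{3}s_{3}s_{1}s_{0}=s_{2}s_{1}s_{0},\]

which is a Coxeter element for the finite subgroup of type \(D_{3}\) generated by \(\{s_{0},s_{1},s_{2}\}\), as used above.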
(iii) Last, by Proposition 7.6, in order to prove that \(X_{J,w}(1)\) is empty for the remaining elements, it is enough to prove that these elements have minimal length in their \(\sigma\)-conjugacy classes. By the formula (7.1), we can compute their Newton points; the corresponding SageMath code can be found in Appendix C: \begin{tabular}{|l|c|} \hline Element & Newton point \\ \hline \(s_{3}s_{2}s_{1}s_{0}\) & \((\frac{1}{2},\frac{1}{2},0)\) \\ \hline \(s_{3}s_{2}s_{1}s_{0}s_{2}\) & \((1,0,0)\) \\ \hline \(s_{3}s_{2}s_{3}s_{1}s_{2}s_{0},\quad s_{3}s_{2}s_{3}s_{0}s_{2}s_{1}\) & \((\frac{2}{3},\frac{2}{3},\frac{2}{3})\) \\ \hline \(s_{3}s_{2}s_{3}s_{1}s_{0}s_{2}\) & \((1,0,0)\) \\ \hline \(s_{3}s_{2}s_{3}s_{1}s_{0}s_{2}s_{0},\quad s_{3}s_{2}s_{3}s_{1}s_{0}s_{2}s_{1}\) & \((1,\frac{1}{2},\frac{1}{2})\) \\ \hline \(s_{3}s_{2}s_{3}s_{1}s_{0}s_{2}s_{1}s_{0}\) & \((1,1,0)\) \\ \hline \end{tabular} Recall that for an affine Weyl group of type \(\widetilde{B}_{3}\), the half-sum of the positive roots \(\rho\) is \((\frac{5}{2},\frac{3}{2},\frac{1}{2})\). It is then straightforward to see that all elements in the list above, except for \(w=s_{3}s_{2}s_{3}s_{1}s_{0}s_{2}\), are \(\sigma\)-straight. Observe that the remaining element has length \(6\) and Newton point \((1,0,0)\). It then satisfies the hypothesis \(\ell(w)\leq 2\langle\nu_{w},\rho\rangle+1\) of Lemma 7.7 (here with equality), which implies that it has minimal length in its \(\sigma\)-conjugacy class. #### 7.2.2. The non-split case Consider now the group-theoretical datum \((^{2}BC_{3},J=\{0,1,2\},\sigma,\omega_{2}^{\vee})\) associated to the group \(\operatorname{GU}(2,4)\) over a ramified prime and with non-split Hermitian form on \(C\). Recall that in this case the Frobenius \(\sigma\) on the extended affine Weyl group exchanges the reflections \(s_{0}\) and \(s_{1}\). The admissible set \(\operatorname{Adm}(\omega_{2}^{\vee})^{J}\) does not depend on \(\sigma\), hence it coincides with the admissible set computed for the split case. The following proposition is the analogue of Proposition 7.8 for the non-split case. **Proposition 7.9**.: _Consider the group theoretical datum \((^{2}BC_{3},J=\{0,1,2\},\sigma,\omega_{2}^{\vee})\) associated to the non-split ramified group \(\operatorname{GU}(2,4)\). Then \(w\in\operatorname{Adm}(\omega_{2}^{\vee})^{J}\) satisfies one of the following properties._ 1. \(w\) _has_ \(\sigma\)_-support in a finite subgroup of_ \(W_{a}\) _of type_ \(C_{r}\)_, with_ \(r\leq 2\)_. In this case the affine Deligne-Lusztig variety_ \(X_{J,w}(1)\) _has irreducible components isomorphic to (generalized) Deligne-Lusztig varieties for the symplectic group_ \(\operatorname{Sp}_{2r}\)_. The set of irreducible components of_ \(X_{J,w}(1)\) _is in bijection with the set of vertex lattices of type_ \(2r\)_._ 2. \(w\) _has full_ \(\sigma\)_-support, and can be reduced with Deligne and Lusztig's method to an element with finite_ \(\sigma\)_-support in a subgroup of type_ \(D_{3}\) _of_ \(W_{a}\)_. In this case the affine Deligne-Lusztig variety_ \(X_{J,w}(1)\) _has irreducible components universally homeomorphic to vector bundles of rank_ \(1\) _or_ \(2\) _over a classical Deligne-Lusztig variety for_ \(\operatorname{SO}_{6}\)_. The set of irreducible components of_ \(X_{J,w}(1)\) _is in bijection with the set of_ \(2\)_-modular lattices._ 3. 
\(w\) _has full_ \(\sigma\)_-support and minimal length in its_ \(\sigma\)_-conjugacy class, in which case_ \(X_{J,w}(1)\) _is empty._ Proof.: (i) We first consider the elements of \(\operatorname{Adm}(\omega_{2}^{\vee})^{J}\) with finite \(\sigma\)-support \[1,s_{3},s_{3}s_{2},s_{3}s_{2}s_{3}.\] It is clear that their support generates a subgroup of type \(C_{r}\), compare also the Dynkin diagram above. By Theorem 7.5 the corresponding fine affine Deligne-Lusztig variety \(X_{J,w}(1)\) can be decomposed as a disjoint union of classical Deligne-Lusztig varieties for the group \(\operatorname{Sp}_{2r}\). Since the \(\sigma\)-support of \(w\) generates the Weyl group \(C_{r}\), it satisfies the hypothesis of Theorem 4.2 and hence the corresponding Deligne-Lusztig variety is irreducible. By Theorem 7.5 the index set of the decomposition of \(X_{J,w}(1)\) depends on the set of reflections \(\operatorname{supp}_{\sigma}(w)\cup I(w,\sigma,J)\). If \(w=1\) this coincides with \(J\); if \(w=s_{3}\) it is \(\{s_{0},s_{1},s_{3}\}\); otherwise it coincides with the support of \(w\), so it is \(\{s_{2},s_{3}\}\). Again by [1, Ex. 7.4.2] we know that the index set \(\mathbb{J}_{1}/\mathbb{J}_{1}\cap P_{\operatorname{supp}(w)\cup I(w,J,\sigma)}\) in the decomposition of \(X_{J,w}(1)\) is in bijection with vertex lattices of type \(0,2\) or \(4\), respectively. If we compare this with the first part of Proposition 7.8, we see that the elements corresponding to vertex lattices of type \(6\) are missing. This is due to the fact that if the Hermitian form on \(C\) is non-split, such vertex lattices do not exist, as we have recalled in Section 3. Last, observe that the elements appearing in the list above are exactly the same elements as in the stratification (4.11) of \(S_{V}\), and consequently in the stratification of the irreducible closed subschemes \(\mathcal{N}_{\mathcal{L}}\) as in (6.29). (ii) There are five elements in \(\operatorname{Adm}(\omega_{2}^{\vee})^{J}\) with full \(\sigma\)-support that can be reduced via Deligne and Lusztig's method, namely \[s_{3}s_{2}s_{3}s_{1},s_{3}s_{2}s_{3}s_{0},s_{3}s_{2}s_{3}s_{1}s_{0},s_{3}s_{2}s_{3}s_{1}s_{2}s_{0},s_{3}s_{2}s_{3}s_{0}s_{2}s_{1}.\] Indeed, by \(\sigma\)-conjugating the first three with \(s_{3}\) we obtain the shorter elements \(s_{2}s_{1}\), \(s_{2}s_{0}\) and \(s_{2}s_{1}s_{0}\), respectively. The first two are \(\sigma\)-Coxeter elements for the finite \(\sigma\)-stable subgroup of \(W_{a}\) of type \(D_{3}\) generated by \(\{s_{0},s_{1},s_{2}\}\); the third still has full \(\sigma\)-support in this subgroup. The three elements of the form \(ws_{3}\) produced by the reduction method are \(s_{3}s_{2}s_{1}\), \(s_{3}s_{2}s_{0}\) and \(s_{3}s_{2}s_{1}s_{0}\), respectively. The first two are \(\sigma\)-Coxeter elements for \(W_{a}\) and therefore \(\sigma\)-straight with non-basic Newton point \((\frac{1}{3},\frac{1}{3},\frac{1}{3})\). The latter has Newton point \((\frac{1}{2},\frac{1}{2},0)\) and length \(4\), so it is again \(\sigma\)-straight. For \(w\) one of the three elements \(\{s_{3}s_{2}s_{3}s_{1},s_{3}s_{2}s_{3}s_{0},s_{3}s_{2}s_{3}s_{1}s_{0}\}\), by Theorem 7.4 the fine affine Deligne-Lusztig variety \(X_{J,w}(b)\) is isomorphic to the affine Deligne-Lusztig variety \(X_{w}(1)\). By the reduction method the latter is then universally homeomorphic to a line bundle over the affine Deligne-Lusztig variety \(X_{s_{3}ws_{3}}(1)\). 
Using Theorem 7.5, we further obtain a disjoint decomposition of \(X_{s_{3}ws_{3}}(1)\) into classical Deligne-Lusztig varieties for \(\mathrm{SO}_{6}\). Again, since \(s_{3}ws_{3}\) has full support in the finite subgroup of type \(D_{3}\), the varieties \(X(s_{3}ws_{3})\) are irreducible. It follows that they are the irreducible components of \(X_{s_{3}ws_{3}}(1)\). Last, observe that \(\mathrm{supp}_{\sigma}(w)\cup I(w,J,\sigma)=\{s_{0},s_{1},s_{2}\}=J\). We have already seen that in this case the index set \(\mathbb{J}_{1}/\mathbb{J}_{1}\cap P_{J}\) in the decomposition of \(X_{s_{3}ws_{3}}(1)\) into classical Deligne-Lusztig varieties is in bijection with the set of \(2\)-modular lattices. Consider now \(w=s_{3}s_{2}s_{3}s_{1}s_{2}s_{0}\) and observe that it is \(\sigma\)-conjugate to \(s_{3}s_{2}s_{3}s_{0}s_{2}s_{1}\) by the length-zero element \(\tau\). Therefore, it is enough to study \(X_{J,w}(1)\) as the two are universally homeomorphic. The reduction method consists first of two length-preserving \(\sigma\)-conjugations, namely by \(s_{1}\) and \(s_{3}\). We obtain the element \(s_{2}s_{3}s_{1}s_{2}s_{3}s_{2}\) that can be reduced via \(\sigma\)-conjugation by \(s_{2}\) to the shorter element \(s_{3}s_{1}s_{2}s_{3}\). Another conjugation by \(s_{3}\) brings us to \(s_{1}s_{2}\), which has finite \(\sigma\)-support in a subgroup of type \(D_{3}\). It remains to check the other two elements produced by the two length-decreasing steps of the reduction method, namely \(s_{3}s_{1}s_{2}s_{3}s_{2}\) and \(s_{3}s_{1}s_{2}\). The latter is \(\sigma\)-Coxeter as we already remarked, hence \(\sigma\)-straight, so the corresponding affine Deligne-Lusztig variety is empty. We compute the Newton point of \(s_{3}s_{1}s_{2}s_{3}s_{2}\) by taking \(\sigma\)-powers and obtain \((\frac{1}{2},\frac{1}{2},0)\). Then we can see that this element satisfies the hypothesis of Lemma 7.7 and hence has minimal length in its \(\sigma\)-conjugacy class, so the corresponding affine Deligne-Lusztig variety is again empty. By the reduction method it follows that \(X_{w}(1)\) is universally homeomorphic to a \(2\)-dimensional vector bundle over \(X_{s_{1}s_{2}}(1)\), whose irreducible components are the classical Deligne-Lusztig varieties \(X_{B}(t_{2}t_{1})\) in the notation of Section 4. We have then obtained a decomposition analogous to the ones in Proposition 6.24 and Remark 6.28. (iii) Last, we need to prove that the remaining admissible elements have minimal length in their \(\sigma\)-conjugacy classes. Using SageMath, we first compute their Newton points:

\begin{tabular}{|l|l|} \hline Element & Newton point \\ \hline \(s_{3}s_{2}s_{1},s_{3}s_{2}s_{0}\) & \((\frac{1}{3},\frac{1}{3},\frac{1}{3})\) \\ \hline \(s_{3}s_{2}s_{1}s_{0}\) & \((\frac{1}{2},\frac{1}{2},0)\) \\ \hline \(s_{3}s_{2}s_{1}s_{0}s_{2}\) & \((1,0,0)\) \\ \hline \(s_{3}s_{2}s_{1}s_{3}s_{2},s_{3}s_{2}s_{0}s_{3}s_{2}\) & \((\frac{1}{2},\frac{1}{2},0)\) \\ \hline \(s_{3}s_{2}s_{3}s_{1}s_{0}s_{2}\) & \((1,0,0)\) \\ \hline \(s_{3}s_{2}s_{3}s_{1}s_{0}s_{2}s_{0},\quad s_{3}s_{2}s_{3}s_{1}s_{0}s_{2}s_{1}\) & \((1,0,0)\) \\ \hline \(s_{3}s_{2}s_{3}s_{1}s_{0}s_{2}s_{1}s_{0}\) & \((1,1,0)\) \\ \hline \end{tabular}

One can easily check that the elements in the first three rows, together with the last one, are \(\sigma\)-straight. They are followed by three elements that satisfy the hypothesis of Lemma 7.7 and therefore have minimal length in their \(\sigma\)-conjugacy classes. 
It remains to check the length \(7\) elements \(s_{3}s_{2}s_{3}s_{1}s_{0}s_{2}s_{0}\) and \(s_{3}s_{2}s_{3}s_{1}s_{0}s_{2}s_{1}\), which are \(\sigma\)-conjugate to each other by the length \(0\) element \(\tau\). Hence, it is enough to prove the statement for one of them. We can \(\sigma\)-conjugate \(w=s_{3}s_{2}s_{3}s_{1}s_{0}s_{2}s_{0}\) first by \(s_{2}\) and then by \(s_{0}\) to obtain \(w^{\prime}=s_{0}s_{2}s_{3}s_{2}s_{0}s_{3}s_{2}\), which still has length seven. Observe that the subword \(x=s_{0}s_{2}s_{3}s_{2}s_{0}\) of \(w^{\prime}\) is \(\sigma\)-straight. Indeed, it is obtained by \(\sigma\)-conjugation from \(s_{3}s_{2}s_{1}s_{0}s_{2}\), which we have already seen is \(\sigma\)-straight. Moreover, one can directly check that \(x\) fixes the reflections \(\{s_{2},s_{3}\}\). It follows that \(w^{\prime}=xs_{3}s_{2}\) is factored as the product of a straight element and an element of finite order fixed by \(x\), which by [14, Thm. 2.3] implies that \(w^{\prime}\) is in the class of \(x\) in \(B(G)\). Suppose \(w^{\prime}\) does not have minimal length in its \(\sigma\)-conjugacy class in \(B(\widetilde{W})\). Then, since \(\sigma\)-conjugation preserves the parity of the length, \(w^{\prime}\) is conjugate to an element of the same length as \(x\), which means to a \(\sigma\)-straight element with the same Newton point as \(w^{\prime}\). It follows that \(w^{\prime}\) has to be conjugate to \(x\), by the bijection of [14, Thm. 3.3] between conjugacy classes in \(B(G)\) and classes in \(B(\widetilde{W})\) containing a \(\sigma\)-straight element. Observe that the reflection \(s_{3}\) appears once in any reduced expression for \(x\) and twice in \(w^{\prime}\). Then these two cannot be \(\sigma\)-conjugate by the next lemma, and we can conclude that \(w^{\prime}\), and therefore \(w\), has minimal length in its \(\sigma\)-conjugacy class.

**Lemma 7.10**.: _Let \(W_{a}\) be the affine Weyl group of type \(\widetilde{B_{m}}\). Let \(n_{m}(w)\) be the number of times the reflection \(s_{m}\) appears in any reduced expression of \(w\). Then \(n_{m}(w)\) is well-defined and its parity is preserved by \(\sigma\)-conjugation._

Proof.: Recall that two reduced expressions for \(w\) are connected by a sequence of so-called _braid moves_, see [1, Thm. 3.3.1]. The only braid move involving the reflection \(s_{m}\) consists of substituting the subword \(s_{m}s_{m-1}s_{m}s_{m-1}\) with the subword \(s_{m-1}s_{m}s_{m-1}s_{m}\), and therefore it does not change the number of times \(s_{m}\) appears in an expression for \(w\). It follows that \(n_{m}(w)\) is well-defined. It is enough to prove the second statement for \(s_{i}w\sigma(s_{i})\) where \(s_{i}\) is a reflection in \(W_{a}\). If \(\ell(s_{i}w\sigma(s_{i}))=\ell(w)+2\), since \(s_{m}\) is fixed by \(\sigma\), the number \(n_{m}(s_{i}w\sigma(s_{i}))\) is either equal to \(n_{m}(w)\) or, if \(s_{i}=s_{m}\), it increases by \(2\), hence the parity is preserved. By the exchange property of Coxeter groups, see [1, Sec. 1.5], if \(\ell(s_{i}w)<\ell(w)\) then \(s_{i}w\) has a reduced expression obtained by deleting one reflection \(s_{j}\) from a reduced expression for \(w\). Moreover, in this case \(s_{j}\) and \(s_{i}\) are conjugate. By [1, Ex. 1.16], the only reflection conjugate to \(s_{m}\) is \(s_{m}\) itself, so the only case to consider is \(s_{i}=s_{m}\). If \(\ell(s_{m}ws_{m})=\ell(w)\), it means that multiplication on the left with \(s_{m}\) deletes one instance of \(s_{m}\) and the multiplication on the right replaces it. 
Therefore, the number of times \(s_{m}\) appears does not change. If \(\ell(s_{m}ws_{m})=\ell(w)-2\) it means that \(s_{m}\) gets deleted twice from a reduced expression of \(w\), and again parity is preserved. _Remark 7.11_.: In [14] the authors study a family of elements in \(\widetilde{W}\) called _of finite Coxeter part_. For such elements the corresponding affine Deligne-Lusztig varieties can be decomposed via the reduction method as iterated fibrations over classical Deligne-Lusztig varieties for Coxeter elements. The \(J\)-admissible elements we have just studied give fibrations over classical Deligne-Lusztig varieties, too, which are however not of Coxeter type, in general. For example the Deligne-Lusztig variety \(X_{B}(t_{1}t_{2}t_{3})\) for the non-split orthogonal group found in the proof of Proposition 6.24 is not of Coxeter type. ## Appendix A The Grobner basis of Proposition 2.7 We list here the polynomials of the Grobner basis \(G\) used in the proof of Proposition 2.7. To make the notation more readable and the lexicographic order more intuitive we have substituted the variables \(x_{ij}\) of Proposition 2.4 with the twenty-one letters of the Italian alphabet. The symmetric matrix \(X\) used in the proof of Proposition 2.7 becomes in the new notation the following matrix with entries in \(\mathbb{F}_{p}[a,b,\ldots,z]\) \[X=\left(\begin{array}{cccccccc}a&b&c&d&e&f\\ b&g&h&i&l&m\\ c&h&n&o&p&q\\ d&i&o&r&s&t\\ e&l&p&s&u&v\\ f&m&q&t&v&z\end{array}\right).\] The monomial order is then simply the usual alphabetical order. We list here the elements of the Grobner basis used in the proof of Proposition 2.7 and already divide them into the subsets \(G_{ij}\) according to Lemma 2.12. We also underline the distinguished generators used in the last part of the proof. 
\begin{tabular}{|l|l|} \hline \(G_{11}=G_{a}\) & \(a+g+n+r+u+z\) \\ \hline \(G_{12}=G_{b}\) & \(b^{2}+g^{2}+h^{2}+i^{2}+l^{2}+m^{2}\) \\ & \(bc+gh+hn+io+lp+mq\) \\ & \(bd+gi+ho+ir+ls+mt\) \\ & \(be+gl+hp+is+lu+mv\) \\ & \(bf+gm+hq+it+lv+mz\) \\ & \(bh-cg-cr-cu-cz+do+ep+fq\) \\ & \(bi+co-dg-dn-du-dz+es+ft\) \\ & \(bl+cp+ds-eg-en-er-ez+fv\) \\ & \(bm+cq+dt+ev-fg-fn-fr-fu\) \\ & \(bn+br+bu+bz-ch-di-el-fm\) \\ & \(bo^{2}+br^{2}+bs^{2}+bt^{2}-cio-dho+din-dir+du+diz-dls-dmt-eis-fit\) \\ & \(bop+brs+bsu+btv-clo-dhp-dis+dln+eiz-els-emt-fiv\) \\ & \(bqq+brt+btu+btz-cmo-dhq-dit+dmn-elt-fmt\) \\ & \(bos-bpr-dhs+dip+ehr-eio\) \\ & \(bot-bqr-dht+diq+fhr-fio\) \\ & \(bou-bps-dhu+dlp+ehs-elo\) \\ & \(bov-bqs-dhv+dmp+eiq-emo+fhs-fip\) \\ & \(boz-bqt-dhz+dmq+fht-fmo\) \\ & \(bp^{2}+bs^{2}+bu^{2}+bv^{2}-clp-dls-ehp-eis+eln+elr-elu+elz-emv-flv\) \\ & \(bpq+bst+buv+bvz-cmp-dms-ehq-eit-elv+emn+emr-fmv\) \\ & \(bpt-bqs-eht+eiq+fhs-fip\) \\ & \(bpv-bqu-ehv+elq+fhu-flp\) \\ & \(bpz-bqv-ehz+emq+fhv-fmp\) \\ & \(bq^{2}+bt^{2}+bv^{2}bz^{2}-cmq-dmt-emv-fhq-fit-flv+fmn+fmr+fmu-fmz\) \\ & \(bru-bs^{2}-diu+dls+eis-elr\) \\ & \(brv-bst-div+dms+eit-emr\) \\ & \(brz-bt^{2}-diz+dmt+fit-fmr\) \\ & \(bsv-btu-eiv+elt+fiu-fls\) \\ & \(bsz-btv-eiz+emt+fiv-fms\) \\ & \(buz-bv^{2}-elz+emv+flv-fmu\) \\ \hline \(G_{13}=G_{c}\) & \(c^{2}+h^{2}+n^{2}+o^{2}+p^{2}+q^{2}\) \\ & \(cd+hi+no+or+ps+qt\) \\ & \(ce+hl+np+os+pu+qv\) \\ & \(cf+hm+nq+ot+pv+qz\) \\ & \(cgi+cho+cir+cls+emt-dgh-dhn-dio-dlp-dmq\) \\ \hline \end{tabular} \[\begin{array}{l}\vspace{0.2cm}cgl+chp+clr+clr+clu+cmv+dhs-dlo-egh-ehn-ehr-elp-emq\\ cgm+chq+cmr+cmu+cmz+dht-dmo+ehv-emp-fgh-fhn-fhr-\\ fhu-fmq\\ chi+cno+cor+cps+cqt-dh^{2}-dn^{2}-do^{2}-dp^{2}-dq^{2}\\ chl+cnp+cpr+cpu+cqv+dns-dop-eh^{2}-en^{2}-enr-ep^{2}-eq^{2}\\ chm+cnq+cqr+cqu+cqz+dnt-doq+env-epq-fh^{2}-fn^{2}-fnr-fnu-fq^{2}\\ cho^{2}+chr^{2}+chs^{2}+cht^{2}-cino-cior-clpr-cmqr-dhno-dhor-2dhps-2dhqt+din^{2}+dio^{2}-dis^{2}- dit^{2}-diu^{2}-2div^{2}-diz^{2}+2dlop+dlrs+dlsu+\\ 2dmoq+dmrt+2dmtu+dmtz+ehpr+eirs+eisu+2eitv-elo^{2}-elr^{2}-els^{2}-elt^{2}+emr-2 emst+fhqr+firt+fitz+flrv-fmo^{2}-fmr^{2}-\\ 2fmru+fms^{2}-fmt^{2}\\ chop+chrs+chsu+chtv-clno-clor-clps-cmqs-dhnp-dhpr-dhpu-dhqv-dhqv-dhqv-dhqv-dhqv-dhq +dln^{2}+dlnr+dlp^{2}+dmpq-ehqt-eit^{2}-eiv^{2}-eiz^{2}+emq+emr+emr+emsv+emtz+ fhqs+fist+fivv+fivz-fmop-fmrs-fmsu-fmtv\\ choq+chrt+chtu+chttz-cnno-cmor-cmps-cmqt-dhnq-dhqr-dhqu-dhq-dhq-dhq-dhq-dhq+dhnq+ dmnr+dmp^{2}+dmp^{2}+dmq^{2}-elnt+eloq+emns-emop\\ dhq^{2}+chs^{2}+chu^{2}+chv^{2}-clnp-clpr-clpu-cmuq-dlns+dlop-ehnp-eibr-ephv-ephv- ephu-2ehqv-einq+eln^{2}+2elnr-elo^{2}+elp^{2}-elt^{2}-elv^{2}-elz^{2}+2empq+emst+emuv+emvz+fhuq+flst+fluv+f luv+flvz-fmp^{2}-fms^{2}-fmu^{2}-fmv^{2}\\ chpq+chst+chuv+chvz-cmnp-cmpr-cmpu-cmqv-dmns\\ +dmp-ehnq-ehqr-ehqu-ehqz-eint+eiqq-elnv+elpq\\ +emn^{2}+2emmr+emnu-emo^{2}+emq^{2}\\ chq^{2}+cht^{2}+chv^{2}+chz^{2}-cmnq-cmqr-cmuq-cmqz-dmnt+dmoq-emv+emvq-fhnq-fhfqr-fhuq- fhqz-fint+fioq-flnv+flpq+fmn^{2}+2fmnr+2fmnu-fmo^{2}-fmp^{2}+fmq^{2}\\ ci^{2}+co^{2}+cr^{2}+cs^{2}+ct^{2}-dhi-dno-dor-dps-dqt\\ cil+cop+crs+csu+ctv-ehi-eno-eor-eps-eqt\\ cim+coq+crt+ctu+ctz+eov-ept-fhi-fno-for-fou-fqt\\ cip-clo-dhp+dln+eho-ein\\ ciq-cmo-dhaq+dmn+fho-fin\\ cis-clr-dhs+dlo+ehr-eio\\ cit-cmr-dht+dmo+fhr-fio\\ ciu-cls-dhu+dlp+ehs-eip\\ civ-cms-dhv+dmp+fhs-fip\\ ciz-cmt-dhz+dmq+fht-fiq\\ cd^{2}+cp^{2}+cs^{2}+cu^{2}+cv^{2}-ehl-enp-eos-epu-eqv\\ clm+cpq+cst+cuv+cvz-fhl-fnp-fos-fpu-fqv\\ clq-cmp-ehq+emn+fhp-fln\\ clt-cms-eht+emo+fhs-flo\\ clv-cmu-ehv+emp+fhu-flp\\ clz-cmv-ehz+emq+fhv-flq\\ 
cm^{2}+cq^{2}+ct^{2}+cv^{2}+cz^{2}-fhm-fnq-fot-fpv-fqz\\ cos-cpr-dns+dop+enr-eo^{2}\\ cot-cqr-dnt+dodq+fnr-ffo^{2}\\ cou-cps-dnu+dp^{2}+ens-eop\\ cov-cqs-dnv+dpq+fns-fop\\ coz-cqt-dnz+dq^{2}+fnt-fou\\ cpt-cqs-ent+eoq+fns-fop\\ cpv-cqu-env+epq+fnu-fp^{2}\end{array}\] \begin{tabular}{|l|l|} \(cpz-cqv-enz+eq^{2}+fnv-fpq\) \\ \(cru-cs^{2}-dou+dps+eos-epr\) \\ \(crv-cst-dov+dqs+eot-eqr\) \\ \(crz-ct^{2}-doz+dqt+fot-fqr\) \\ \(csv-ctu-eov+ept+fou-fps\) \\ \(csz-ctv-eoz+eqt+fov-fqs\) \\ \hline \(\begin{array}{l}\vskip 6.0pt plus 2.0pt minus 2.0pt\\ \end{array}\) \\ \hline \(G_{14}=G_{d}\) & \(d^{2}+i^{2}+o^{2}+r^{2}+s^{2}+t^{2}\) \\ & \(de+il+op+rs+su+tv\) \\ & \(df+im+oq+rt+sv+tz\) \\ & \(dgl+dhp+dis+dlu+dmv-egi-eho-eir-els-emt\) \\ & \(dgm+dhaq+dit+dmu+dmz+eiv-ems-fgi-fho-fir-fiu-fmt\) \\ & \(dhl+dnp+dos+dpu+dqv-ehi-eno-eor-eps-eqt\) \\ & \(dhm+dnaq+dot+dq+dqz+eov-eqs-fhi-fno-for-fou-fqt\) \\ & \(dhop+dhrs+dhsu+dhtv-dinp-dios-dipu-diqv-eho^{2}-ehr^{2}-ehs^{2}-eht^{2}+eino+ eior+eips+eiqt\) \\ & \(dhoq+dhrt+dhtu+dhtz-ding-dot-diqu-diq-diq-zelot+elqr-fho^{2}-fhr^{2}-fhs^{2}-fht^{2} +fino+fior+fips+fiqt+flos-flpr\) \\ & \(dhp^{2}+dhs^{2}+dhu^{2}+dhv^{2}-dlnp-dos-dlpu-dmqu-ehop-ehrs-ehsu-ehtv-eiqv+elno+ elor+elps+elqt+emqs+fiqu-flqs\) \\ & \(dhpq+dhst+dhuv+dhvz-dmnp-dmos-dmpu-dmqv-einq-eiot-eiqu-eiqu-eiqz-elov+elqs+emno+ emor+emou+emqt-fhop-fhrs-fhsu-fhtv+finp+fios+fipu+fiqv\) \\ & \(dhq^{2}+dht^{2}+dhv^{2}+dhz^{2}-dmnq-dmot-dmqu-dmqz-emov+emqs-fhoq-fhrt-fhtu-fhtz- flov+flpt+fmno+fmor+2fmu-fmps+fmqt\) \\ & \(dil+dop+drs+dsu+dtv-ei^{2}-eo^{2}-er^{2}-es^{2}-et^{2}\) \\ & \(dim+doq+drt+du+dtz+erv-est-fi^{2}-fo^{2}-fr^{2}-fru-ft^{2}\) \\ & \(dip^{2}+dis^{2}+diu^{2}+div^{2}-dlop-dlrs-dlsu-dmtu-eiop-eirs-eisu-2eitv+elo^{2}+ elr^{2}+els^{2}+elt^{2}+emst+fitu-flst\) \\ & \(dipq+dist+diuv+divz-dmpo-dmrs-dmsu-dmtv-eioq-eirt-eitu-eitz-elrv+elst+emo^{2}+emr^{2}+ emru+emt^{2}\) \\ & \(diq^{2}+dit^{2}+div^{2}+diz^{2}-dmoq-dmrt-dmtu-dmtz-emrv+emst-fioq-firt-fitu-fitz- fdrv+flst+fmo^{2}+fmr^{2}+2fmru-fms^{2}+fmt^{2}\) \\ & \(dl^{2}+dp^{2}+ds^{2}+du^{2}+dv^{2}-eil-eop-ers-esu-etv\) \\ & \(dlm+dpq+dst+duv+dvz-fil-fop-frs-fsu-ftv\) \\ & \(dla-dmp-eiq+emo+fip-flo\) \\ & \(dlt-dms-eit+emr+fis-flr\) \\ & \(dlv-dmu-eiv+ems+fiu-fls\) \\ & \(dlz-dmv-eiz+emt+fiv-flt\) \\ & \(dm^{2}+dq^{2}+dt^{2}+dv^{2}+dz^{2}-fim-foo-frt-fsv-ftz\) \\ & \(dpt-dqs-eot+eqr+fos-fpr\) \\ & \(dpv-dqu-eov+eqs+fou-fps\) \\ & \(dpz-dqv-eoz+eqt+fov-fpt\) \\ & \(dsv-dtu-erv+est+fru-fs^{2}\) \\ & \(dsz-dtv-erz+et^{2}+frv-fst\) \\ & \(\vskip 6.0pt plus 2.0pt minus 2.0pt\) \\ \hline \(G_{15}=G_{e}\) & \(e^{2}+l^{2}+p^{2}+s^{2}+u^{2}+v^{2}\) \\ & \(ef+lm+pq+st+uv+vz\) \\ \hline \end{tabular} \[\begin{array}{l}|l \begin{tabular}{|l|l|} \hline \(G_{23}=G_{h}\) & \(\begin{array}{l}h^{2}o^{2}+h^{2}r^{2}+h^{2}s^{2}+h^{2}t^{2}-2hino-2hior-2hips-2hiqt+i^{2}n^{2}+i^{2}o^{2}-i^{2}s^{2} -\\ i^{2}t^{2}-i^{2}u^{2}-2i^{2}v^{2}-i^{2}z^{2}+2ilop+2ilrs+2ilsu+2iltv+2imoq+2imrt+ \\ 2imtu+2imtz-l^{2}o^{2}-l^{2}r^{2}-l^{2}s^{2}-l^{2}t^{2}+2lmrv-2lmst-m^{2}o^{2}-m^ {2}r^{2}-2m^{2}ru+m^{2}s^{2}-m^{2}t^{2}+n^{2}r^{2}+n^{2}t^{2}-2no^{2}r-2nops-2 noot+o^{4}+o^{2}p^{2}+o^{2}q^{2}-o^{2}u^{2}-2o^{2}v^{2}-o^{2}z^{2}+2opsu+2optv+2 oqtu+2oqtz-p^{2}s^{2}+2pqrv-4pqst-2q^{2}ru+2q^{2}s^{2}-q^{2}t^{2}-r^{2}u^{2}-2r^{2}v^{2}-r^{2}z^{2} +2rs^{2}u+4rstv+2rt^{2}z-s^{4}-2s^{2}t^{2}-s^{2}v^{2}-s^{2}z^{2}+2stuv+2stvz-t^{ 4}-t^{2}u^{2}-t^{2}v^{2}\\ h^{2}op+h^{2}rs+h^{2}su+h^{2}tv-hinp-hipr-hipu-hiqv-hlno-hlor-hlps-hlqt-\\ 
i^{2}ns+i^{2}op+iln^{2}+ilnr+ilp^{2}-ilt^{2}-ilt^{2}-ilv^{2}-ilz^{2}+impq+imst+ imuv+imvz+\\ lmoq+lmrt+lmsv+lmtz-m^{2}op-m^{2}rs-m^{2}su-m^{2}tv+n^{2}rs+n^{2}su+n^{2}tv-\\ no^{2}s-nopr-nopu-nogv-np^{2}s-npqt+o^{3}p+o^{2}su+o^{2}tv+opq^{2}-opru-\\ ops^{2}-opt^{2}-opv^{2}-opr^{2}-opr^{2}-opxv+oquv+oqvz+p^{2}rs+p^{2}tv+pqrt- pqtu+pqtz-\\ q^{2}tv-rsv^{2}-rsz^{2}+rtuv+rtvz+s^{2}tv-st^{2}u+st^{2}z-suz^{2}+sv^{2}z-t^{3}v+twvz-tv^{3} \\ h^{2}qq+h^{2}rt+h^{2}tu+h^{2}tz-hinq-hiqr-hiqu-hiqz-hmno-hmor-hmps-hmqt-i^{2}nt+i^{2} oq+imn^{2}+imn^{2}+imnr+imp^{2}+imq^{2}-l^{2}nt+l^{2}oiq+lmns-lmop+n^{2}rt+n^{2}tu+n^{2}tz-noq^{2}t- noqur-noqz-np^{2}t-nq^{2}t+o^{3}q+o^{2}tu+o^{2}tz+op^{2}q-2opst+oq^{3}-qqru- oqqrz+qqs^{2}-oqt^{2}+p^{2}rt+p^{2}tz-pqsz-pqtv+q^{2}rt+q^{2}sv\\ h^{2}p^{2}+h^{2}s^{2}+h^{2}u^{2}+h^{2}v^{2}-2hlnp-2hlpr-2hlpu-2hlqv-2ilns+2ilop+l^{2} n^{2}+2l^{2}nr-l^{2}o^{2}+l^{2}p^{2}-l^{2}t^{2}-l^{2}v^{2}-l^{2}z^{2}+2lmpq+2 lmst+2lmuv+2lmvz-m^{2}p^{2}-m^{2}s^{2}-m^{2}u^{2}-m^{2}v^{2}+n^{2}s^{2}+n^{2}u^{2}+n^{2}v^{2}-2 nops-2np^{2}u-2npqv+o^{2}p^{2}+o^{2}u^{2}+o^{2}v^{2}-2opsu-2 oqtu+p^{4}p^{2}+p^{2}s^{2}-p^{2}t^{2}-p^{2}z^{2}-2pqrv+4pqst+2pqvz+2q^{2}ru-2q^{2}s^{2}-q^{2}v^{2}-s^{ 2}z^{2}+2stuv+2stvz-t^{2}u^{2}-t^{2}v^{2}-u^{2}z^{2}+2uv^{2}z-v^{4}\) \(h^{2}pq+h^{2}st+h^{2}uv+h^{2}vz-hlnq-hlqr-hlqu-hlqz-hmnp-hmpr-hmpu-hmpu-hmpu-hmpu-hmpu-hmpu-hmpu- lintu-lioq-imns+imop-l^{2}nv+l^{2}pq+lmn^{2}+lmnr+lmnu-lmo^{2}+lmq^{2}+m^{2}st+n^{2}uv+n^{2}vz-nopt-noqs-np^{2}v-npqu-npqz- nq^{2}v+o^{2}pq+o^{2}uv+o^{2}vz-2optu-2oqtv+p^{3}q-p^{2}rv+2p^{2}st+p^{2}vz+pq^{3}+pqru- pqrz-pqs^{2}+pqt^{2}-pquz-pqv^{2}+q^{2}rv+q^{2}uv\) \(h^{2}q^{2}+h^{2}t^{2}+h^{2}v^{2}+h^{2}z^{2}-2hmnq-2hmqr-2hmq-2hmqz-2imnt+2imoq-2 lmnv+2lmpq+m^{2}n^{2}n^{2}+2m^{2}nr+2m^{2}nu-m^{2}o^{2}-m^{2}p^{2}+m^{2}q^{2}+n^{2}q^{2}+ \\ n^{2}t^{2}+n^{2}v^{2}+n^{2}z^{2}-2noot-2npqv-2nq^{2}z+o^{2}v^{2}+o^{2}z^{2}-2optv -2oqtz+p^{2}q^{2}+p^{2}t^{2}+p^{2}z^{2}-2pqvz+q^{4}+q^{2}t^{2}+q^{2}v^{2}\) \(hiop+hirs+hhiu-hito^{2}-hlr^{2}-hls^{2}-hlt^{2}-i^{2}np-i^{2}os-i^{2}pu-i^{2}qv+ ilno+ilor+ilps+ilqt+nors+nosu+notv-npr^{2}-nps^{2}-npt^{2}-o^{3}s+o^{2}pr-o^{2}pu-o^{2}qv+ op^{2}s+opt+orsu+ortv-os^{3}-ost^{2}-pr^{2}u+prs^{2}+pstv-pt^{2}u-qr^{2}v+qrst-qs^{2}v+qstu\) \(hioq+hirt+hitu+hitz-hmo^{2}-hmr^{2}-hms^{2}-hmt^{2}-i^{2}nq-i^{2}ot-i^{2}qu-i^{2}qz+ imno+imor+imps+imqt-l^{2}ot+l^{2}qr+lmos-lmpr+nort+notu+notz-notz-nqr^{2}-nqs^{2}- nqt^{2}-o^{3}t+o^{2}qr-o^{2}qu-o^{2}qz-op^{2}t+2opqs+oq^{2}t+ortu+ortz-os^{2}t-ot^{3}+ pstz-pt^{2}v-qr^{2}u-qr^{2}z+qrs^{2}+qrt^{2}-qs^{2}z+qstv\) \(hip^{2}+his^{2}+hiu^{2}+hiu^{2}+hiv^{2}-hlop-hlrs-hlsu-hltv-ilnp-ilos-ilpu-ilqv+ llov+l^{2}no+l^{2}or+l^{2}ps+l^{2}qt+nos^{2}+nou^{2}+nov^{2}-nprs-npsu-nptu-o^{2}ps+op^{2}r- op^{2}u-opqv+oru^{2}+orv^{2}-os^{2}u-ot^{2}u+p^{3}s+p^{2}qt-prsu-prtv+ps^{3}+pst^{2}+psv^{2}- ptuv-qrsv+qrtu-qsuv+qtu^{2}\) \(hipq+hist+hiuv+hiuv+hivz-hmop-hmrs-hmsu-hmtv-ilq-ilqz-l^{2}ov+l^{2}qs+lmno+lmor+lmmq+ lmqt+nost+nouv+novz-nqrs-nqsu-natv-o^{2}pt-op^{2}v+opqr-opqz+orv+orvz- ostu-ot^{2}v+p^{2}qs+pq^{2}t-prsv+ps^{2}t+psvz-ptv^{2}-qrsz+qst^{2}-qsuz+quv\) \\ \hline \end{tabular} \[\begin{array}{l}\vskip 6.0pt plus 2. 
\begin{tabular}{|l|l|} \(ilq^{2}+ilt^{2}+ilv^{2}+ilz^{2}-impq-imst-imuv-imvz-lmoq-lmrt-lmsv-lsmtz+m^{2}op+m^{2} rs+m^{2}su+m^{2}tv+opt^{2}+opv^{2}+opz^{2}-oqst-oquv-oqvz-pqrt-pqsv-pqtz+q^{2} rs+q^{2}su+q^{2}tv+rsv^{2}+rsz^{2}-rtuv-rtvz-s^{2}tv+st^{2}u-st^{2}z+suz^{2}-sv^{2}z+t^{3}v- tuvz+tv^{3}\) \\ \(ipt-iqs-lot+lqr+mos-mpr\) \\ \(ipv-iqu-lov+lqs+mou-mps\) \\ \(ipz-iqv-loz+lqt+mov-mpt\) \\ \(isv-itu-lrv+lst+mru-ms^{2}\) \\ \(isz-itv-lrz+lt^{2}+mrv-mst\) \\ \end{tabular} \begin{tabular}{|l|l|} \(iuz-iv^{2}-lsz+ltv+msv-mtu\) \\ \hline \(G_{25}=G_{l}\) & \(l^{2}q^{2}+l^{2}t^{2}+l^{2}v^{2}+l^{2}z^{2}-2lmpq-2lmst-2lmuv-2lmvz+m^{2}p^{2 }+m^{2}s^{2}+m^{2}u^{2}+m^{2}v^{2}+p^{2}t^{2}+p^{2}v^{2}+p^{2}z^{2}-2pqst-2 pquv-2pqvz+q^{2}s^{2}+q^{2}u^{2}+q^{2}v^{2}+s^{2}v^{2}+s^{2}z^{2}-2stuv-2stvz+t^{2}u^{2}+t^{2}v^{2}+u^{2}z^{2} -2uv^{2}z+v^{4}\) \\ \hline \(G_{33}=G_{n}\) & \(nru-ns^{2}-o^{2}u+2ops-p^{2}r\) \\ & \(nrv-nst-o^{2}v+opt+oqs-pqr\) \\ & \(nrz-nt^{2}-o^{2}z+2oqt-q^{2}r\) \\ & \(nsv-ntu-opv+oqu+p^{2}t-pqs\) \\ & \(nsz-ntv-opz+oqv+pqt-q^{2}s\) \\ & \(nuz-nv^{2}-p^{2}z+2pqv-q^{2}u\) \\ \hline \(G_{34}=G_{o}\) & \(osv-otu-prv+pst+qru-qs^{2}\) \\ & \(osz-otv-prz+pt^{2}qrv-qst\) \\ & \(ouz-ov^{2}-psz+ptv+qsv-qtu\) \\ \hline \(G_{44}=G_{r}\) & \(ruz-rv^{2}-s^{2}z+2stv-t^{2}u\) \\ \hline \end{tabular} ## Appendix B Code for Chapter 2 The following script can be run in SageMath [SD23] and produces the Grobner basis above together with the computations needed in the proof of Proposition 2.7. One can slightly modify the matrix in the beginning to adapt the code to higher dimension \(n\). We caution the reader that the function for computing the set of unlucky primes, in the sense of Proposition 2.14, is highly inefficient. Especially the last part of this code requires about one day running time on a laptop. # Define the polynomial ring, fix the lexicographic order and the matrix X R.< a,b,c,d,e,f,g,h,i,l,m,n,o,p,q,r,s,t,u,v,z> = PolynomialRing(QQ, 21, order= "lex") M = MatrixSpace(R,6,6) X = M([a,b,c,d,e,f,b,g,h,i,l,m,c,h,n,o,p,q,d,i,o,r,s,t,e,l,p,s,u,v, f,m,q,t,v,z]) # Define the ideal J L = [X.trace()] for row in X*X : for entry in row: if not entry in L: L.append(entry) for minor in X.minors(3): if not minor in L: L.append(minor) J = R.ideal(L) # Compute the Gr\"obner basis, the output is listed above grob = J.groebner_basis() # The following function takes two sets of polynomials F and G, # computes the matrices Z, Y, R such that G = Z.L, L = Y.G, R.G = 0, # inspects their coefficients and produces the set of coefficients # that are not 1 or -1. def unlucky_primes(F, G, ring): #F is a set of generators for the ideal and G is the Groebner basis over Q #we need to check the entries of three matrices Z,Y,R with F = Z.G; #G = Y.F; R matrix of syzygies. 
generators = list(F) Ideal_F = ring.ideal(generators) Ideal_G = ring.ideal(list(G)) unlucky = [] for poly in G: row_Z = poly.lift(Ideal_F) #this produces the entries of the matrix Z #on the line corresponding to poly in G for index in range(len(row_Z)): entry = row_Z[index] for coeff in entry.coefficients(): if not coeff == 1 and not coeff == -1: if coeff not in unlucky: unlucky.append(coeff) for poly in F: row_Y = poly.lift(Ideal_G) #this produces the entries of the matrix Y #on the line corresponding to poly in G for index in range(len(row_Y)): entry = row_Y[index] for coeff in entry.coefficients(): if not coeff == 1 and not coeff == -1: if coeff not in unlucky: Sy = Ideal_G.syzygy_module() for row in Sy: for entry in row: for coeff in entry.coefficients(): if not coeff == 1 and not coeff == -1: if coeff not in unlucky: unlucky.append(coeff) if not unlucky: print("There are no unlucky primes") return(unlucky) # We apply our function on the polynomials defining J and the Groebner basis # found above unlucky_primes(L, grob, R) #Output: [2] # We have to check now that the leading coefficients of the marked generators # are non zero divisors modulo J. This is done by comparing the division # ideal (J : lc) with J. # We start with the the leading coefficient of all but one marked generators # of degree one lc_1 = u*z -v^2 J == J.quotient(ideal(lc_1)) # Then the leading coefficient of the remaining marked generator # of degree one lc_2 = f J == J.quotient(ideal(lc_2)) # Last, the leading coefficient of the marked generator of degree two # which is not already monic lc_3 = q^2 + t^2 + v^2 + z^2 J == J.quotient(ideal(lc_3)) # Again, computing the division ideal requires computing a Groebner basis # for (xJ, lc(x-1)) hence we have to check again if there are unlucky primes. # We define a new polynomial ring obtained by adding the auxiliary variable x S.<x, a,b,c,d,e,f,g,h,i,l,m,n,o,p,q,r,s,t,u,v,z> = PolynomialRing(QQ, 22, order= "lex") division1 = [] division2 = [] division3 = [] we construct the ideal xJ (one for each leading coefficient) for poly in L: division1.append(x*poly) division2.append(x*poly) division3.append(x*poly) # We add the remaining polynomial lc*(x-1), compute a Groebner basis and # find the unlucky primes # This last part requires about a day running time on a laptop. division1.append(lc_1*x -lc_1) grob_division1 = S.ideal(division1).groebner_basis() print(unlucky_primes(division1, grob_division1, S)) # Output: [2, 6, 3] division2.append(lc_2*x -lc_2) grob_division2 = S.ideal(division2).groebner_basis() print(unlucky_primes(grob_division2, division2, S)) # Output: [2, 4, 3, 6] division3.append(lc_3*x -lc_3) grob_division3 = S.ideal(division3).groebner_basis() print(unlucky_primes(division3, grob_division3, S)) # Output: [2, -2] # Last, we need to check that the two discriminants of the polynomials # of degree two are not zero-divisors. # The discriminant of the quadratic polynomial in the variable l is delta1 = (m*p*q + m*s*t + m*u*v + m*v*z)^2 -( m^2*p^2 + m^2*s^2 + m^2*u^2 + m^2*v^2 + p^2*t^2 + p^2*v^2 + p^2*z^2 - 2*p*q*s*t - 2*p*q*u*v - 2*p*q*v*z + q^2*s^2 + q^2*u^2 + q^2*v^2 + s^2*v^2 + s^2*z^2 - 2*s*t*u*v - 2*s*t*v*z + t^2*u^2 + t^2*v^2 + u^2*z^2 - 2*u*v^2*z + v^4) # Observe that it is enough to check that it is not a zero divisor modulo # J_m. Recall that we need to add one auxiliary varible x to compute # the quotient ideal. 
J_m = S.ideal(J).elimination_ideal([m,n,o,p,q,r,s,t,u,v,z]) J_m == J_m.quotient(S.ideal(delta1)) # Output: True # Similarly, for the discriminant of the quadratic polynomial in f delta2 = m^2 + q^2 + t^2 + v^2 + z^2 J_g = S.ideal(J).elimination_ideal([g,h,i,l,m,n,o,p,q,r,s,t,u,v,z]) J_g == J_g.quotient(S.ideal(delta2)) # Output: True # Last, since we have computed two new Groebner bases, we have to check # for unlucky primes division1 = [] for poly in J_m.gens(): division1.append(x*poly) division1.append(delta1*x -delta1) grob_division1 = S.ideal(division1).groebner_basis() unlucky_primes(division1, grob_division1, S) Output: [2,-2, 3, -3, 4] division2 = [] for poly in J_g.gens(): division2.append(x*poly) division2.append(delta2*x -delta2) grob_division2 = S.ideal(division2).groebner_basis() unlucky_primes(grob_division2, division2, S) Interestingly for this last computation the number of unlucky primes # explodes: # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, # 59, 61, 67, 71, 73, 79, 101, 103, 107, 109, 113, 127, 131, 137, 167, # 173, 179, 193, 211, 223, 263, 283, 313, 359, 461, 809] ## Appendix C Code for Chapter 7 The following script can be run in SageMath [SD23] and produces the list of admissible elements for the group theoretical datum \((\widetilde{B}_{3},J=\{0,1,2\},\sigma,\omega_{2}^{\vee})\) studied in Section 7.2. The function newtonPoint also computes the Newton point of a given element in the extended affine Weyl group. We define the extended affine Weyl group we will be working with and # fix the cocharacter omega_2 = (1,1,0) and the non-trivial length-zero # element tau, which gives the action of the Frobenius in the non-split case E = ExtendedAffineWeylGroup(["B", 3, 1]) WF = E.WF() F = E.fundamental_group() b = E.lattice_basis() Wa = E.affine_weyl() omega_2 = PW0(b[2]) tau = F[1] # Here we compute the set Adm(omega_2)^J: first we find all elements # smaller in the Bruhat order than omega_2 or any conjugate via the # finite Weyl group, then take minimal length representatives # in the left coset W_JW compare = [] #contains all the t^{x(omega_2)} for x in WO: e = WF(x*omega_2*x^-1) if e in compare: continue compare.append(e) adm = [] #will contain all the elements smaller than t^{x(omega_2)} modulo W_J for w in compare: for i in range(w.length() + 1): for u in Wa.elements_of_length(i): if (WF(u)).bruhat_le(w): x = u.coset_representative([0,1,2], side = "left"). reduced_word() if WF.from_reduced_word(x) in adm: continue adm.append(WF.from_reduced_word(x)) print(adm) # The output is listed in Section 7.2 # The following function computes the Newton point of a given element in # the extended affine Weyl group. Observe that changing the parameter # coweight_space one can use it for other groups. The parameter tau # is a length-zero element whose adjoint action is the Frobenius R = RootSystem(['B',3]) coweight_space = R.coweight_space() def newtonPoint(w, tau, coweight_space): powers = w sigma = tau order = 1 while not powers.to_classical_weyl().is_one(): powers = powers*sigma*w*sigma^-1 sigma = sigma*tau order = order +1 newton = coweight_space(powers.to_translation_right()). 
to_dominant_chamber()/order return(newton) # We compute the Newton points of the admissible elements in the split case for w in adm: print(w) print(newtonPoint(w, WF.from_reduced_word([ ]), coweight_space)) print(" ") # Newton points of the admissible elements in the non-split case for w in adm: print(w) print(newtonPoint(w, tau, coweight_space)) print(" ") # The output is presented in Section 7.2
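As a quick illustration (this usage snippet is not part of the published script; it simply reuses the objects defined above within the same SageMath session), one can check a single entry of the split-case table from Section 7.2, say the element \(s_{3}s_{2}s_{1}s_{0}\):

```python
# Assumes E, WF, coweight_space and newtonPoint from the script above have already been defined.
w = WF.from_reduced_word([3, 2, 1, 0])                        # the admissible element s3*s2*s1*s0
nu = newtonPoint(w, WF.from_reduced_word([]), coweight_space)  # identity = trivial Frobenius (split case)
print(nu)
# This should reproduce the Newton point (1/2, 1/2, 0) recorded in the first row of the
# split-case table, printed in Sage's own basis for the coweight space.
```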
2305.19519
Two applications of stochastic thermodynamics to hydrodynamics
Recently, the theoretical framework of stochastic thermodynamics has been revealed to be useful for macroscopic systems. However, despite its conceptual and practical importance, the connection to hydrodynamics has yet to be explored. In this Letter, we reformulate the thermodynamics of compressible and incompressible Newtonian fluids so that it becomes comparable to stochastic thermodynamics and unveil their connections; we obtain the housekeeping--excess decomposition of entropy production rate (EPR) for hydrodynamic systems and find a lower bound on EPR given by relative fluctuation similar to the thermodynamic uncertainty relation. These results not only prove the universality of stochastic thermodynamics but also suggest the potential extensibility of the thermodynamic theory of hydrodynamic systems.
Kohei Yoshimura, Sosuke Ito
2023-05-31T03:01:39Z
http://arxiv.org/abs/2305.19519v3
# Geometric housekeeping-excess decomposition for hydrodynamic systems ###### Abstract We study a connection between classical hydrodynamics described deterministically by the Navier-Stokes equation and stochastic (and chemical) thermodynamics. In particular, we show that the minimum dissipation of a hydrodynamic system formulated by Helmholtz in 1868 can be seen as the housekeeping entropy production in the sense of the geometric decomposition (also known as Maes-Netocny decomposition) of entropy production that has been formulated in stochastic and chemical thermodynamics. We generalize the decomposition for hydrodynamic systems by identifying a conservative subspace suitable for them. The housekeeping entropy production evaluates how the boundary affects dissipation, while the excess one quantifies the system's nonstationarity in the sense of the material derivative. We discuss a universality of excess entropy production rate that it may not vanish in a steady state or yield an additional term when inertia is not negligible, regardless of the details of the system. _Introduction.--_The second law of thermodynamics is the most fundamental, universal restriction on what physical systems can do. This century, its detailed character has been revealed by stochastic thermodynamics in thermally fluctuating nonequilibrium systems, which can be classical or quantum, relying on entropy production as a critical quantity [1; 2]. Despite the significant development of our knowledge of entropy production and second law in such systems [3; 4; 5], application of the developed techniques to other types of systems is not well examined, except for deterministic chemical systems [6; 7]. Deterministic hydrodynamic systems described by the Navier-Stokes equation are among the least investigated subjects. Thermodynamics of such systems was once intensively studied in the last century [8], but it has yet to be considered from the viewpoint of stochastic thermodynamics. Nonetheless, a universal understanding of hydrodynamic systems as provided by thermodynamics is no less valuable than that of thermally fluctuating or chemical systems, because the Navier-Stokes equation governs many phenomena ranging from the motion of tiny cells [9; 10] to daily water usage and industrial water management [11]. In addition to the theoretical importance, entropy production has practical usefulness. For example, the development of computational fluid dynamics has enabled us to use it practically to evaluate the performance of hydraulic machines such as pumps and turbines [12]. However, the entropy production theory used there is almost identical to the one developed several decades ago. In this Letter, we report a connection between classical hydrodynamics and stochastic thermodynamics. We show that the minimum dissipation theorem first proved by Helmholtz in the nineteenth century [13] can be used to derive geometric housekeeping-excess decomposition of entropy production, which has recently been studied in the fields of stochastic and chemical thermodynamics [14; 15; 16; 17; 18; 19; 20]. Housekeeping-excess decomposition, which divides entropy production rate (EPR) into two parts called excess and housekeeping EPR, was proposed to recover the second law when it becomes less meaningful [21]. In a quasi-static process between nonequilibrium steady states, unlike equilibrium states, the increase of entropy diverges, so the second law becomes futile. 
Nevertheless, the excess EPR is expected to be finite to lead to an extended Clausius law [6; 22; 23; 24; 25]. Moreover, decomposing EPR enables us to know in more detail other kinds of trade-off relations between benefits, such as precision [4] or speed [5], and thermodynamic costs [15; 16; 17; 18; 5; 18; 26]. It is also essential that recent studies have recast the decomposition by focusing on geometrical structures of thermodynamic quantities to reveal its connection to optimal transport theory [14; 15; 16; 17]. However, how and whether we can meaningfully define an EPR decomposition for deterministic hydrodynamic systems still needs to be discovered. The situation is similar to chemical systems, where nonlinearity had made it impossible to define a decomposition generically until the geometric approach was developed [17; 18; 19]. In this Letter, we show that we can define an EPR decomposition for hydrodynamic systems, identifying the Helmholtz minimum dissipation as the housekeeping entropy production of a hydrodynamic system and the remainder as the excess one. Besides, comparing with other decompositions in other kinds of systems, we discuss that an anomaly of the excess term possesses a certain universality.

Figure 1: Comparison of decompositions. New results are highlighted in orange. The left figure explains the geometric understanding of the housekeeping–excess decomposition for stochastic or chemical systems. Here, thermodynamic force \(\mathbf{F}\), which gives total entropy production rate (EPR) \(\dot{\Sigma}\), is decomposed into its orthogonal projection onto the space of conservative forces \(\tilde{\nabla}\psi^{*}\) (conservative subspace) and the remainder (dashed arrow), which respectively provide the excess and the housekeeping EPR. We reveal the counterparts in deterministic hydrodynamic systems, where no EPR decomposition has been obtained. In particular, we identify that the counterpart of conservative forces is velocity fields that vanish at the boundary.

_Preliminaries._--We consider an incompressible fluid contained in a connected, bounded region \(\Omega\subset\mathbb{R}^{n}\) (\(n=2,3,\dots\)) whose shape and boundary \(\partial\Omega\) can move in time continuously [27]. We denote a velocity field as \(\mathbf{v}=(v_{i}(\mathbf{x}))_{i=1}^{n}\). We fix the frame of reference so that at least one point on \(\partial\Omega\) does not move [28]. The density of the fluid is set to \(\rho>0\). We introduce the stress tensor \(\sigma[\mathbf{v},p]\) as \(\sigma_{ij}[\mathbf{v},p]=-p\delta_{ij}+2\mu E_{ij}[\mathbf{v}]\), where \(p=p(\mathbf{x})\) is the pressure, \(\delta_{ij}\) is the Kronecker delta, \(\mu\) is the viscosity, and \(E_{ij}[\mathbf{v}]=\frac{1}{2}(\partial_{i}v_{j}(\mathbf{x})+\partial_{j}v_{i}(\mathbf{x}))\) is the strain-rate tensor. Then, the Navier-Stokes equation is given as \[\rho(\partial_{t}\mathbf{v}+\mathbf{v}\cdot\nabla\mathbf{v})=\nabla\cdot\sigma[\mathbf{v},p]+\rho\mathbf{f} \tag{1}\] with volume force \(\mathbf{f}\)[29]. We assume that incompressibility \(\nabla\cdot\mathbf{v}=0\) is always satisfied. The temperature is presumed to be homogeneous and set to unity. We also need to indicate a boundary condition to formulate a physical system. Hereafter, we generally write the boundary velocity field as \(\mathbf{v}_{\rm b}\).
We consider the Navier-Stokes equation with the boundary condition \(\mathbf{v}(\mathbf{x},t)=\mathbf{v}_{\rm b}(\mathbf{x})\) for \(\mathbf{x}\in\partial\Omega\) or in short, \(\mathbf{v}|_{\partial\Omega}=\mathbf{v}_{\rm b}\), at each time \(t\). As long as \(\int_{\partial\Omega}\mathbf{v}_{\rm b}\cdot d\mathbf{S}=0\) holds, any boundary condition is allowed (including fixed inflow/outflow boundary and no-slip boundary). When the Reynolds number is small, we may neglect the nonlinear term \(\mathbf{v}\cdot\nabla\mathbf{v}\), which we call the Stokes approximation [9; 29]. We call the equation \(\rho\partial_{t}\mathbf{v}=\nabla\cdot\sigma[\mathbf{v},p]\) obtained by the Stokes approximation and neglecting the volume force term in the Navier-Stokes equation, the Stokes equation [30]. If we further set the left-hand side to zero, we get the stationary Stokes equation \(\nabla\cdot\sigma[\mathbf{v},p]=\mathbf{0}\). It is known that its solution is unique given a boundary condition for \(\mathbf{v}\) if the condition \(\int_{\partial\Omega}\mathbf{v}_{\rm b}\cdot d\mathbf{S}=0\) is satisfied (for detail, see SM [31]). Pressure \(p\) is determined from \(\mathbf{v}\) uniquely up to an additive constant. Let us denote the space of the solutions of the stationary Stokes equation with the boundary condition unspecified as \(\mathrm{St}:=\{\mathbf{v}\mid\exists p,\,\nabla\cdot\sigma[\mathbf{v},p]=\mathbf{0}\}\). The local equilibrium assumption allows us to discuss entropy in the hydrodynamic system [8]. Once the entropy function is locally introduced, by using thermodynamic relations, we can prove that the entropy production rate (EPR) is \[\dot{\Sigma}[\mathbf{v}]=2\mu\int_{\Omega}\mathrm{tr}\left(E[\mathbf{v}]^{\mathsf{T}} E[\mathbf{v}]\right)dx, \tag{2}\] where \(dx\) is a volume element and \({}^{\mathsf{T}}\) indicates transposition. EPR \(\dot{\Sigma}[\mathbf{v}]\) includes both the entropy change in the fluid itself and that of the environment due to heat and matter exchange [8]. Therefore, its nonnegativity expresses the second law of thermodynamics. As we fixed the frame of reference, the equilibrium state is uniquely determined as \(\mathbf{v}=\mathbf{0}\), where the EPR vanishes. It is critical to regard the right-hand side of Eq. (2) as a metric of velocity fields. We define an inner product by \[\langle\mathbf{u},\mathbf{w}\rangle:=2\mu\int_{\Omega}\mathrm{tr}\left(E[\mathbf{u}]^{ \mathsf{T}}E[\mathbf{w}]\right)dx, \tag{3}\] and denote the induced norm as \(\|\cdot\|\). As a result, we can write the EPR as \(\dot{\Sigma}[\mathbf{v}]=\|\mathbf{v}\|^{2}\). Helmholtz proved that given a boundary velocity field \(\mathbf{v}_{\rm b}\), EPR takes its minimum at the solution of the stationary Stokes equation among all the velocity fields that satisfy \(\mathbf{v}|_{\partial\Omega}=\mathbf{v}_{\rm b}\)[13]. That is, for \(\mathbf{v}\), any velocity field, and \(\mathbf{v}^{\star}\in\mathrm{St}\), if \(\mathbf{v}|_{\partial\Omega}=\mathbf{v}^{\star}|_{\partial\Omega}\) holds, then \[\dot{\Sigma}[\mathbf{v}]\geq\dot{\Sigma}[\mathbf{v}^{\star}]. \tag{4}\] This fact is known as the Helmholtz minimum dissipation theorem. To be self-contained, we give its proof in SM [31]. _EPR decomposition._--Entropy production can be discussed in other systems, like Langevin systems, Markov jump processes [1], and chemical reaction networks [6]. 
In all cases, the EPR, which we denote \(\dot{\mathfrak{S}}\) to distinguish from \(\dot{\Sigma}[\mathbf{v}]\), can be given in the form \(\dot{\mathfrak{S}}=\mathbf{J}*\mathbf{F}\) with a certain product \(*\) between current \(\mathbf{J}\) and thermodynamic force \(\mathbf{F}\). Once a linear relation is established as \(\mathbf{J}=\mathsf{M}\mathbf{F}\), this formula can be rewritten as \(\dot{\mathfrak{S}}=\|\mathbf{F}\|_{\mathsf{M}}^{2}:=\mathbf{F}*\mathsf{M}\mathbf{F}\), which articulates the geometrical structure with \(\mathsf{M}\) the metric [20]. We may consider non-Euclidean geometry such as information geometry [18] or Hessian geometry [19], with other correspondences between \(\mathbf{J}\) and \(\mathbf{F}\). By definition, the EPR is the sum of the entropy change of the system and that of the environment, so the Clausius inequality should assure the nonnegativity of the EPR \(\dot{\mathfrak{S}}\geq 0\). On the other hand, when there are multiple reservoirs and, for example, continuous heat flow, the equality will never be attained. Oono and Paniconi proposed decomposing dissipation into a housekeeping part and an excess part so that an equality is attainable in the quasi-static limit [21]. The most prevalent way to define the housekeeping and the excess EPR was found by Hatano and Sasa for overdamped Langevin systems [22]. The definition was generalized to Markov jump systems [23] generically, and to a limited class of quantum systems and of chemical systems [6; 24; 32]. The definition embodies the original idea to attribute the housekeeping EPR (or heat) to the system's nonequilibriumness, and the excess EPR to its nonstationary dynamics. Decomposing EPR can tighten a thermodynamic bound called classical (or thermodynamic) speed limit with the excess EPR in Markov jump systems [5; 26]. Nonetheless, the definition does not work in dynamical systems without a physically meaningful unique steady state (fixed point), such as oscillatory chemical reaction networks and unstable hydrodynamic systems. Thus, tighter thermodynamic bounds are not available in those systems with Hatano-Sasa's way. A less famous but no less important method to decompose EPR was found by Maes and Netocny (MN) from the minimum entropy production principle [25]. It was proved to be equivalent to a geometric decomposition derived from optimal transport theory later [14; 15; 16]. Although the decomposition was defined for overdamped Langevin systems then, it has been generalized to Markov jump processes and deterministic chemical reaction networks [17; 18; 19]. This type of decomposition, which we call the MN decomposition, can be formulated as \[\begin{split}\dot{\mathfrak{E}}^{\rm{hk,MN}}&=\inf_{ \dot{\varphi}}\lVert\mathbf{F}-\tilde{\nabla}\psi\rVert_{\sf{M}}^{2},\\ \dot{\mathfrak{E}}^{\rm{ex,MN}}&=\inf_{\mathbf{F}^{ \prime}}\lVert\mathbf{F}^{\prime}\rVert_{\sf{M}}^{2}\quad\text{s.t.}\quad\dot{ \rho}=-\tilde{\nabla}\cdot(\mathsf{M}\mathbf{F}^{\prime}),\end{split} \tag{5}\] where \(\tilde{\nabla}\) is an abstract gradient operator, which acts as the usual nabla operator \(\nabla\) in Langevin systems [14; 15; 16; 25] and as a discrete counterpart in discrete systems [17; 18; 19]. Variable \(\rho\) is a distributional variable like a probability distribution or a concentration distribution. Hence, the constraint means that force \(\mathbf{F}^{\prime}\) should induce the same dynamics as \(\mathbf{F}\). 
The optimal potential \(\psi^{*}\) that gives the housekeeping EPR can be associated with the excess part as \(\dot{\mathfrak{E}}^{\rm{ex,MN}}=\lVert\tilde{\nabla}\psi^{*}\rVert_{\sf{M}}^{2}\). This comes from the orthogonality relation \(\langle\tilde{\nabla}\psi^{*},\mathbf{F}-\tilde{\nabla}\psi^{*}\rangle_{\sf{M}}=0\). Thus, the decomposition \(\dot{\mathfrak{S}}=\dot{\mathfrak{E}}^{\rm{hk,MN}}+\dot{\mathfrak{E}}^{\rm{ex,MN}}\) can be seen as a kind of Pythagorean theorem. This definition possesses several advantages, among which the most important ones here are the following two. First, it can be defined in nonlinear chemical systems that oscillate or have more than one fixed point, in which the Hatano-Sasa decomposition does not work. Consequently, it has become possible to tighten thermodynamic inequalities, not only speed limits but also thermodynamic uncertainty relations, in such systems [17; 18]. Second, it provides a clear geometrical viewpoint that is also physically insightful. In the first line of Eq. (5), the minimization can be seen as measuring the squared _distance_ between the actual force \(\mathbf{F}\) and the subspace of _conservative_ forces \(\tilde{\nabla}\psi\). When the force is conservative, the distance is zero and the system is ensured to relax to equilibrium. This property is compatible with the original idea that the housekeeping EPR arises when the steady state is out of equilibrium. Moreover, and more importantly, it suggests that we can generally define the housekeeping EPR by _measuring_ how far from being conservative the system is with a metric consistent with entropy production. With this approach, we will never need any steady state. Therefore, once the way to assess conservativeness and the entropic metric are given, the definition should work in any kind of system. _Main result._--Above we explained how to decompose entropy production, using a conservative subspace and the geometry deduced from entropy production, as summarized in Fig. 1. As for hydrodynamic systems, the geometry is given as Eq. (3), so we need to characterize the conservative subspace next. Let \(\mathrm{B}[\mathbf{v}_{\rm{b}}]\) be \(\left\{\mathbf{v}\mid\mathbf{v}|_{\partial\Omega}=\mathbf{v}_{\rm{b}}\right\}\), which is the space of velocity fields that are compatible with the given boundary condition. As mentioned earlier, the stationary Stokes equation has only one solution for each boundary condition. Therefore, the intersection between \(\mathrm{B}[\mathbf{v}_{\rm{b}}]\) and \(\mathrm{St}\) includes only one element, which we define as \(\mathbf{v}^{*}\) (see Fig. 2). Note \(\mathbf{v}^{*}\) is the same thing as \(\mathbf{v}^{\star}\), which provides the minimum dissipation in Eq. (4). It is known that \(\mathbf{v}^{*}\) and \(\mathbf{v}-\mathbf{v}^{*}\) are orthogonal with respect to the inner product \(\langle\cdot,\cdot\rangle\), which is the most important relationship in this Letter. Moreover, we can show that \(\mathbf{v}^{*}\) and \(\mathbf{v}-\mathbf{v}^{*}\) are respectively orthogonal to each element of \(\mathrm{B}[\mathbf{0}](\ni\mathbf{v}-\mathbf{v}^{*})\) and that of \(\mathrm{St}(\ni\mathbf{v}^{*})\). For their proofs, see SM [31]. We define a velocity field as being _conservative_ if it vanishes at the boundary. In other words, if a fluid is contained in a mechanically and materially isolated box whose boundary is static, it is conservative. This definition is analogous to that of other kinds of systems being conservative. The vanishing boundary condition guarantees that the hydrodynamic system relaxes to the equilibrium state, in the same way that a conservative stochastic or chemical system does. Therefore, space \(\mathrm{B}[\mathbf{0}]\) has a special meaning as the space of conservative velocity fields. This conservative subspace has an apparent intersection with \(\mathrm{St}\), the null (equilibrium) velocity field \(\mathbf{0}\), which turns out to be their unique intersection. Before going to the definition of decomposition, let us finally introduce one more space, \(\mathbf{v}+\mathrm{St}=\left\{\mathbf{v}+\mathbf{u}\mid\mathbf{u}\in\mathrm{St}\right\}\). This is the space of velocity fields that induce the same material derivative of momentum \(\rho\mathbf{v}\). Let \(\mathbf{v}\) be the solution of the Navier-Stokes equation with pressure \(p\) and define \(D/Dt:=\partial_{t}+\mathbf{v}\cdot\nabla\). Obviously, every \(\mathbf{v}^{\prime}\in\mathbf{v}+\mathrm{St}\) can be written as \(\mathbf{v}^{\prime}=\mathbf{v}+\mathbf{u}\) with \(\mathbf{u}\in\mathrm{St}\). If \(\mathbf{u}\) solves the stationary Stokes equation with pressure \(q\) as \(\nabla\cdot\sigma[\mathbf{u},q]=\mathbf{0}\), then we have \[\nabla\cdot\sigma[\mathbf{v}^{\prime},p+q]+\rho\mathbf{f}=\nabla\cdot\sigma[\mathbf{v},p]+\rho\mathbf{f}=\rho\frac{D\mathbf{v}}{Dt} \tag{6}\] because \(\sigma[\mathbf{v},p]\) is a linear function of the pair \((\mathbf{v},p)\). Therefore, \(\mathbf{v}^{\prime}\in\mathbf{v}+\mathrm{St}\) means that its stress tensor induces the "same" dynamics as \(\mathbf{v}\). The space \(\mathbf{v}+\mathrm{St}\) intersects \(\mathrm{B}[\mathbf{v}_{\rm{b}}^{\prime}]\) only once for any boundary condition \(\mathbf{v}_{\rm{b}}^{\prime}\)[33]. In particular, \(\mathbf{v}\) and \(\mathbf{v}-\mathbf{v}^{*}\) are the points of intersection with \(\mathrm{B}[\mathbf{v}_{\rm{b}}]\) and \(\mathrm{B}[\mathbf{0}]\), respectively. We write \(\mathbf{v}-\mathbf{v}^{*}\) as \(\mathbf{v}^{\dagger}\) to stress that it is a notable point. With the geometry (3) and the conservative subspace characterized by the boundary, we can define an EPR decomposition for hydrodynamic systems. We define the housekeeping and the excess EPR as \[\dot{\Sigma}^{\rm{hk}}[\mathbf{v}]:=\min_{\mathbf{v}^{\prime}\in\mathrm{B}[\mathbf{0}]}\dot{\Sigma}[\mathbf{v}-\mathbf{v}^{\prime}], \tag{7}\] \[\dot{\Sigma}^{\rm{ex}}[\mathbf{v}]:=\min_{\mathbf{v}^{\prime}\in\mathbf{v}+\mathrm{St}}\dot{\Sigma}[\mathbf{v}^{\prime}]. \tag{8}\] The housekeeping EPR is defined as the squared distance from the conservative subspace \(\mathrm{B}[\mathbf{0}]\), while the excess is the minimum EPR such that the velocity field does not change \(D\mathbf{v}/Dt\). This definition is analogous to the MN decomposition (5), as is compared in Fig. 1. Although the decomposition is defined variationally, we can write down the decomposition as \[\dot{\Sigma}^{\rm hk}[\mathbf{v}]=\dot{\Sigma}[\mathbf{v}^{*}],\quad\dot{\Sigma}^{\rm ex}[\mathbf{v}]=\dot{\Sigma}[\mathbf{v}^{\dagger}]. \tag{9}\] On the boundary, \(\mathbf{v}^{*}\) coincides with \(\mathbf{v}\), while \(\mathbf{v}^{\dagger}\) vanishes. On the other hand, \(\mathbf{v}^{*}\) is "stationary" in the sense of the Stokes equation, while \(\mathbf{v}^{\dagger}\) is as "nonstationary" as \(\mathbf{v}\). Consequently, we can understand that the housekeeping and the excess EPR respectively quantify how the boundary's movement and the purely internal, i.e., conservative, dynamics contribute to dissipation.
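To make the quantities in Eqs. (7)-(9) concrete, the following minimal numerical sketch (not part of the Letter; plain Python/NumPy, assuming a uniform 2D grid with fields indexed as field[y, x]) evaluates the entropic inner product of Eq. (3) from discretized velocity fields. Given the actual field \(\mathbf{v}\) and the stationary Stokes solution \(\mathbf{v}^{*}\) with the same boundary values (obtaining \(\mathbf{v}^{*}\) requires a Stokes solver and is not shown), it can be used to check the decomposition of Eq. (9) up to discretization error.

```python
import numpy as np

def strain_rate(v, dx):
    # E_ij = (d_i v_j + d_j v_i) / 2 for a 2D field v = (vx, vy) on a uniform grid.
    vx, vy = v
    dvx_dy, dvx_dx = np.gradient(vx, dx)   # arrays indexed [y, x]
    dvy_dy, dvy_dx = np.gradient(vy, dx)
    return dvx_dx, 0.5 * (dvx_dy + dvy_dx), dvy_dy   # (Exx, Exy, Eyy)

def epr_inner(u, w, dx, mu=1.0):
    # Discretized <u, w> = 2*mu * integral of tr(E[u] E[w]) dx, cf. Eq. (3).
    Exx_u, Exy_u, Eyy_u = strain_rate(u, dx)
    Exx_w, Exy_w, Eyy_w = strain_rate(w, dx)
    integrand = Exx_u * Exx_w + 2.0 * Exy_u * Exy_w + Eyy_u * Eyy_w
    return 2.0 * mu * integrand.sum() * dx**2

# With v (actual field) and v_star (Stokes solution with the same boundary values),
# Eq. (9) states, up to discretization error,
#   epr_inner(v, v, dx) ~ epr_inner(v_star, v_star, dx) + epr_inner(v_dag, v_dag, dx),
# where v_dag = (v[0] - v_star[0], v[1] - v_star[1]); the cross term
# epr_inner(v_star, v_dag, dx) should vanish by the orthogonality used in the proof.
```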
As is the case with the original MN decomposition [14; 15; 16; 25], the housekeeping and the excess EPR can be expressed in another variational manner as \[\dot{\Sigma}^{\rm hk}[\mathbf{v}]=\min_{\mathbf{v}^{\prime}\in\mathrm{B}[\mathbf{v}|_{\partial\Omega}]}\dot{\Sigma}[\mathbf{v}^{\prime}],\ \dot{\Sigma}^{\rm ex}[\mathbf{v}]=\min_{\mathbf{v}^{\prime}\in\mathrm{St}}\dot{\Sigma}[\mathbf{v}-\mathbf{v}^{\prime}]. \tag{10}\] These variational formulae are directly derived from the definition. The first equations of Eqs. (9) and (10) jointly suggest the Helmholtz minimum dissipation theorem, which states that the solution of the stationary Stokes equation provides a minimum dissipation among all velocity fields under a fixed boundary condition [13]. More than two hundred years after his birth, our consideration reveals that Helmholtz's finding was indeed the tip of the iceberg of geometric thermodynamics [20]. Finally, let us prove Eq. (9). Since \(\mathbf{v}^{*}\) is orthogonal to \(\mathrm{B}[\mathbf{0}]\), if \(\mathbf{v}^{\prime}\in\mathrm{B}[\mathbf{0}]\), then we have \[\|\mathbf{v}-\mathbf{v}^{\prime}\|^{2} =\|\mathbf{v}-\mathbf{v}^{\dagger}-(\mathbf{v}^{\prime}-\mathbf{v}^{\dagger})\|^{2}\] \[=\|\mathbf{v}^{*}\|^{2}+\|\mathbf{v}^{\prime}-\mathbf{v}^{\dagger}\|^{2}-2\langle\mathbf{v}^{*},\mathbf{v}^{\prime}-\mathbf{v}^{\dagger}\rangle\] \[=\|\mathbf{v}^{*}\|^{2}+\|\mathbf{v}^{\prime}-\mathbf{v}^{\dagger}\|^{2},\] where we used \(\mathbf{v}^{\prime}-\mathbf{v}^{\dagger}\in\mathrm{B}[\mathbf{0}]\). Therefore, the minimum in Eq. (7) is achieved by \(\mathbf{v}^{\prime}=\mathbf{v}^{\dagger}\), so \(\dot{\Sigma}^{\rm hk}[\mathbf{v}]=\dot{\Sigma}[\mathbf{v}^{*}]\). In the same manner, by considering \(\|\mathbf{v}^{\prime}\|^{2}\) for \(\mathbf{v}^{\prime}\in\mathbf{v}+\mathrm{St}\) and using the fact that \(\mathbf{v}^{\dagger}\) is orthogonal to \(\mathrm{St}\), we can show that Eq. (8) leads to \(\dot{\Sigma}^{\rm ex}[\mathbf{v}]=\dot{\Sigma}[\mathbf{v}^{\dagger}]\). _Discussion._--In summary, we have proposed a housekeeping-excess decomposition for deterministic hydrodynamic systems, for which no such decomposition was known. Our decomposition is more similar to Maes-Netocny's way [14; 15; 16; 25] than Hatano-Sasa's [22]. As with the MN decomposition, we rely on a geometric structure derived from entropy production and the notion of being conservative, which we introduced as a velocity field being zero on the boundary. In stochastic and chemical thermodynamics, the geometric technique has tightened thermodynamic bounds such as thermodynamic uncertainty relations and revealed a close connection between thermodynamics and optimal transport [20]. Although such bounds or relationships have yet to be obtained for hydrodynamic systems, the decomposition and the geometry we presented would be beneficial for discovering novel universal properties in such systems beyond the classical understanding [8]. Here, let us discuss a difference between our decomposition and the conventional understanding of housekeeping-excess decomposition, as proposed by Oono and Paniconi. Conventionally, the excess term is presumed to vanish in a steady state, but \(\dot{\Sigma}^{\rm ex}[\mathbf{v}]\) does not because it is \(D\mathbf{v}/Dt=\mathbf{0}\) (plus \(\mathbf{f}\) is a gradient force) rather than \(\partial_{t}\mathbf{v}=\mathbf{0}\) that leads to \(\dot{\Sigma}^{\rm ex}[\mathbf{v}]=0\).
These two conditions unconditionally coincide only when the system is described by the Stokes equation (or the configuration of the region is special [34]). Therefore, the equality will not be recovered even in the quasi-static limit, against the original philosophy of decomposition [21]. This difference has already been observed in other kinds of systems. The approach based on steady states can lead to a decomposition with unphysical negative values for underdamped Langevin systems [35] and Markov jump systems with odd variables [36]. On the other hand, if one chooses the geometric approach, the excess term may not vanish in a steady state if there are odd variables, as shown in [18]. We summarize and compare these systems in Table 1.

\begin{table} \begin{tabular}{c||c c c} & _Langevin_ & _Markov jump_ & _Hydrodynamic_ \\ \hline \hline Type 1 & Overdamped & w/o odd variables & Stokes eq. \\ \hline Type 2 & Underdamped & w/ odd variables & Navier–Stokes eq. \\ \hline \end{tabular} \end{table} Table 1: Comparison of systems and dynamics. In type-1 systems, the excess EPR vanishes in a steady state, while it does not, or an unphysical additional term appears in type-2 systems.

Figure 2: Important spaces and intersections. The blue vectors correspond to \(\mathbf{v}^{*}=\mathbf{v}-\mathbf{v}^{\dagger}\), whose squared distance provides the housekeeping EPR, while the orange ones correspond to \(\mathbf{v}^{\dagger}=\mathbf{v}-\mathbf{v}^{*}\), which defines the excess EPR. The housekeeping EPR reflects the distance between the given condition, which is represented by the boundary velocity \(\mathbf{v}_{\rm b}\), and the conservative subspace, which includes the equilibrium state \(\mathbf{0}\). On the other hand, the excess EPR is given by a conservative velocity field that provides the same value of \(D\mathbf{v}/Dt\) in the sense of the Navier–Stokes equation.

In the type-1 systems comprised of overdamped Langevin systems, Markov jump processes without odd variables, and hydrodynamic systems where the Stokes approximation is valid, the excess EPR vanishes in a steady state unconditionally, and no unphysical term appears. This is not the case with the type-2 systems consisting of underdamped Langevin systems, Markov jump processes with odd variables, and hydrodynamic systems where the Stokes approximation is invalid. This classification is reasonable because these two classes can be distinguished by whether inertia is negligible. Therefore, our result rather suggests that the less desirable trait has, however, an intriguing universality. Finally, we remark on future research. As mentioned, entropy production is currently attracting the attention of hydraulic researchers because it provides a physically well-founded measure of machinery's performance [11; 12]. However, it is an ad hoc quantity called "wall entropy production" that is used to estimate the boundary effect. Not only because our decomposition stems from the underlying geometric structure, but also because it is easy to calculate as it only requires the instantaneous velocity field, the housekeeping EPR is expected to play a role as a physically reasonable estimate of boundary influence. Beyond application to hydraulics, our decomposition may yield a novel understanding of a wide range of hydrodynamic phenomena, including turbulence [37]. In any case, a numerical investigation should be carried out to clarify how practically helpful the decomposition is. K. Y. thanks Artemy Kolchinsky, Ken Hiura, and Kazumasa A. 
Takeuchi, for their suggestive comments. K. Y. is supported by Grant-in-Aid for JSPS Fellows (Grant No. 22J21619). S. I. is supported by JSPS KAKENHI Grants No. 19H05796, No. 21H01560, and No. 22H01141, and UTEC-UTokyo FSI Research Grant Program.
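To make the geometric picture behind the proof of Eq. (9) concrete, here is a minimal finite-dimensional sketch (our illustration, not the authors' computation; all variable names are ours): velocity fields are replaced by vectors in \(\mathbb{R}^{n}\), the conservative space \(\mathrm{B}[\mathbf{0}]\) by a linear subspace, and the dissipation functional \(\dot{\Sigma}\) by a squared norm. The projection \(\mathbf{v}^{\dagger}\), the orthogonal remainder \(\mathbf{v}^{*}\), the additivity \(\dot{\Sigma}=\dot{\Sigma}^{\rm hk}+\dot{\Sigma}^{\rm ex}\), and the variational characterization of the housekeeping part then appear exactly as in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Finite-dimensional stand-in: a "velocity field" is a vector in R^8 and the
# dissipation functional is the squared Euclidean norm, Sigma[v] = ||v||^2.
n = 8
v = rng.normal(size=n)

# Stand-in for the conservative space B[0]: an arbitrary 3-dimensional subspace.
B0 = np.linalg.qr(rng.normal(size=(n, 3)))[0]   # orthonormal basis, shape (8, 3)

# v_dagger = projection of v onto B[0]; v_star = v - v_dagger is orthogonal to B[0].
v_dag = B0 @ (B0.T @ v)
v_star = v - v_dag

sigma = v @ v
sigma_hk = v_star @ v_star    # housekeeping part, ||v*||^2
sigma_ex = v_dag @ v_dag      # excess part, ||v_dagger||^2

# Pythagorean additivity of the decomposition.
assert np.isclose(sigma, sigma_hk + sigma_ex)

# Variational characterization: min over v' in B[0] of ||v - v'||^2 equals ||v*||^2,
# attained at v' = v_dagger (checked here against random trial points in B[0]).
trials = B0 @ rng.normal(size=(3, 1000))
distances = np.sum((v[:, None] - trials) ** 2, axis=0)
assert distances.min() >= sigma_hk - 1e-9
print(sigma, sigma_hk, sigma_ex)
```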
2309.13493
Structure of the probability mass function of the Poisson distribution of order $k$
The Poisson distribution of order $k$ is a special case of a compound Poisson distribution. For $k=1$ it is the standard Poisson distribution. Although its probability mass function (pmf) is known, what is lacking is a $visual$ interpretation, which a sum over terms with factorial denominators does not supply. Unlike the standard Poisson distribution, the Poisson distribution of order $k$ can display a maximum of $four$ peaks simultaneously, as a function of two parameters: the order $k$ and the rate parameter $\lambda$. This note characterizes the shape of the pmf of the Poisson distribution of order $k$. The pmf can be partitioned into a single point at $n=0$, an increasing sequence for $n \in [1,k]$ and a mountain range for $n>k$ (explained in the text). The ``parameter space'' of the pmf is mapped out and the significance of each domain is explained, in particular the change in behavior of the pmf as a domain boundary is crossed. A simple analogy (admittedly unrelated) is that of the discriminant of a quadratic with real coefficients: its domains characterize the nature of the roots (real or complex), and the domain boundary signifies the presence of a repeated root. Something similar happens with the pmf of the Poisson distribution of order $k$. As an application, this note explains the mode structure of the Poisson distribution of order $k$. Improvements to various inequalities are also derived (sharper bounds, etc.). New conjectured upper and lower bounds for the median and the mode are also proposed.
S. R. Mane
2023-09-23T23:05:17Z
http://arxiv.org/abs/2309.13493v1
# Structure of the probability mass function of the Poisson distribution of order \(k\) ###### Abstract The Poisson distribution of order \(k\) is a special case of a compound Poisson distribution. For \(k=1\) it is the standard Poisson distribution. Although its probability mass function (pmf) is known, what is lacking is a _visual_ interpretation, which a sum over terms with factorial denominators does not supply. Unlike the standard Poisson distribution, the Poisson distribution of order \(k\) can display a maximum of _four_ peaks simultaneously, as a function of two parameters: the order \(k\) and the rate parameter \(\lambda\). This note characterizes the shape of the pmf of the Poisson distribution of order \(k\). The pmf can be partitioned into a single point at \(n=0\), an increasing sequence for \(n\in[1,k]\) and a mountain range for \(n>k\) (explained in the text). The "parameter space" of the pmf is mapped out and the significance of each domain is explained, in particular the change in behavior of the pmf as a domain boundary is crossed. A simple analogy (admittedly unrelated) is that of the discriminant of a quadratic with real coefficients: its domains characterize the nature of the roots (real or complex), and the domain boundary signifies the presence of a repeated root. Something similar happens with the pmf of the Poisson distribution of order \(k\). As an application, this note explains the mode structure of the Poisson distribution of order \(k\). Improvements to various inequalities are also derived (sharper bounds, etc.). New conjectured upper and lower bounds for the median and the mode are also proposed. keywords: Poisson distribution of order \(k\), probability mass function, median, mode, Compound Poisson distribution, discrete distribution Msc: 60E05, 39B05, 11B37, 05-08 + Footnote †: journal: (internal report CC23-6) Introduction In two recent notes [1, 2], the author presented numerical results for the Poisson distribution of order \(k\)[3]. It is a variant (or extension) of the well-known Poisson distribution. We begin with its formal definition. **Definition 1.1**.: _The Poisson distribution of order \(k\) (where \(k\geq 1\) is an integer) and parameter \(\lambda>0\) is an integer-valued statistical distribution with the probability mass function (pmf)_ \[f_{k}(n;\lambda)=e^{-k\lambda}\sum_{n_{1}+2n_{2}+\cdots+kn_{k}=n}\frac{ \lambda^{n_{1}+\cdots+n_{k}}}{n_{1}!\ldots n_{k}!}\,,\qquad n=0,1,2\ldots \tag{1.1}\] For \(k=1\) it is the standard Poisson distribution. The Poisson distribution of order \(k\) is a special case of the compound Poisson distribution introduced by Adelson [4]. Although exact expressions for the mean and variance of the Poisson distribution of order \(k\) are known [5], exact results for its median and mode are difficult to obtain. What is lacking is a _visual_ interpretation of the pmf, which a formal mathematical sum such as eq. (1.1) does not supply. Unlike the standard Poisson distribution, the Poisson distribution of order \(k\) can display a maximum of _four_ peaks simultaneously, as a function of two parameters: the order \(k\) and the rate parameter \(\lambda\). This note characterizes the shape of the pmf of the Poisson distribution of order \(k\). The "parameter space" of the pmf is mapped out and the significance of each domain is explained, in particular the change in behavior of the pmf as a domain boundary is crossed. 
A simple analogy (admittedly unrelated) is that of the discriminant of a quadratic with real coefficients: its domains characterize the nature of the roots (real or complex), and the domain boundary signifies the presence of a repeated root. Something similar happens with the pmf of the Poisson distribution of order \(k\). As an application, this note explains the mode structure of the Poisson distribution of order \(k\). Improvements to various inequalities are derived (sharper bounds, etc.). New conjectured upper and lower bounds for the median and the mode are also proposed. The structure of this paper is as follows. Sec. 2 presents basic definitions and notation employed in this note. Improvements to some published inequalities are also presented (sharper bounds, etc.) Sec. 3 quantifies the structure of the pmf of the Poisson distribution of order \(k\). Secs. 4 and 5 present new conjectures for upper and lower bounds for the median and mode, respectively. Sec. 6 concludes. ## 2 Basic notation and definitions For later reference we define the parameter \(\kappa=k(k+1)/2\). We denote the mean by \(\mu\), the median by \(\nu\) and the mode by \(m\) (with pertinent subscripts, etc. to denote the dependence on \(k\) and \(\lambda\), see below). Philippou [5] derived that the mean is \(\mu_{k}(\lambda)=\kappa\lambda\) and the variance is \(\sigma_{k}^{2}(\lambda)=\frac{1}{6}k(k+1)(2k+1)\lambda\). For the median, we follow the exposition in [6]: if \(Y_{k,\lambda}\) is a random variable which is Poisson distributed with order \(k\) and parameter \(\lambda\), the median is defined as the smallest integer \(\nu\) such that \(P(Y_{k,\lambda}\leq\nu)\geq\frac{1}{2}\). With this definition, the median is unique and is always an integer. The mode is defined as the location(s) of the _global maximum_ of the probability mass function. It is known that the mode may not be unique. For the standard Poisson distribution with parameter \(\lambda\), the mode equals \(\lfloor\lambda\rfloor\) if \(\lambda\not\in\mathbb{N}\), but both \(\lambda-1\) and \(\lambda\) are modes if \(\lambda\in\mathbb{N}\). We adopt the following notation and definitions from [7]. 1. We work with \(h_{k}(n;\lambda)=e^{k\lambda}f_{k}(n;\lambda)\) (see [7]) and refer to it as the "scaled pmf" below. Observe from eq. (1.1) that \(h_{k}(n;\lambda)\) is a polynomial in \(\lambda\) with all positive coefficients. It has degree \(n\) and for \(n>0\) it has no constant term (also \(h_{k}(0;\lambda)=1\) and \(h_{k}(1;\lambda)=\lambda\) for all \(k\geq 1\)). Hence for fixed \(k\geq 1\) and \(n>0\), \(h_{k}(n;0)=0\) and \(h_{k}(n;\lambda)\) is a strictly increasing function of \(\lambda\) for \(\lambda>0\). 2. The parameter \(r_{k}\) is defined as the positive root of the equation \(h_{k}(k;\lambda)=1\). It was shown in [7] that \(r_{k}\) is unique and \(0<r_{k}<1\). 3. It was proved (Lemma 1 in [7]) that for fixed \(k\geq 2\) and \(\lambda>0\), the sequence \(\{h_{k}(n;\lambda),n=1,\ldots,k\}\) is strictly increasing, i.e. \(h_{k}(n-1;\lambda)<h_{k}(n;\lambda)\) for \(n=2,\ldots,k\). Hence only the last index \(k\) can be a mode. _An integer in the interval \([1,k-1]\) can never be a mode of the Poisson distribution of order \(k\)._ 4. It was proved (Lemma 3 in [7]) that for fixed \(k\geq 2\) and \(0<\lambda\leq r_{k}\), then \(h_{k}(k;\lambda)>h_{k}(k+1;\lambda)\). This makes \(h_{k}(k;\lambda)\) a local maximum in the histogram of the pmf, for sufficiently small values of \(\lambda\). 
We shall see this below, when plotting graphs of the histogram of the pmf. Note that the condition \(0<\lambda\leq r_{k}\) is sufficient _but not necessary_ to attain \(h_{k}(k;\lambda)>h_{k}(k+1;\lambda)\). By the term "double mode" we mean the distribution is bimodal, with joint modes at \(m_{1}\) and \(m_{2}\). For \(k=1\), the standard Poisson distribution, the integers \(m_{1}\) and \(m_{2}\) are always consecutive integers, but for \(k\geq 2\) this need not be so. Kwon and Philippou [7] tabulated a list of double modes for \(k=2,3,4\) and \(0<\lambda\leq 2\). It was shown in [1] that for any \(k\geq 2\), the Poisson distribution of order \(k\) has a denumerable infinity of double modes, consisting of pairs of consecutive integers. A major topic of this note is to characterize the mode structure for \(k\geq 2\), including double modes with non-consecutive integers. The existence of three or more joint modes is an open question. This note presents numerical evidence that the Poisson distribution of order \(k\) does not have three or more joint modes. The term "first double mode" signifies the first time (smallest value of \(\lambda\)) that the Poisson distribution of order \(k\) has a double mode. The mode values in this case are \(0\) and \(m>0\). The following notation was introduced in [2] for the first double mode. The nonzero mode value was denoted by \(\hat{m}_{k}\) and the corresponding value of \(\lambda\) was denoted by \(\hat{\lambda}_{k}\). In terms of this notation, Kwon and Philippou [7] showed that \(\hat{m}_{k}=k\) for \(k=2,3,4\). It was shown in [1] that \(\hat{m}_{k}=k\) for \(k=2,\ldots,14\) and \(\hat{m}_{k}>k\) for \(k\geq 15\). It was proved in [2] that \(\hat{m}_{k}\geq k\) for all \(k\geq 2\), but an exact formula for \(k\geq 15\) is not known to date. Nor is it proved that \(\hat{m}_{k}\) increases strictly with \(k\) (for \(k\geq 15\)) although numerical tests up to at least \(k=10^{4}\) have not found any exceptions [2]. We take the opportunity here to present improvements to various published inequalities (sharper bounds, etc.). The following inequality was derived in [2] \[1/\kappa\leq\hat{\lambda}_{k}\leq r_{k}<1\,. \tag{2.1}\] Recall that it was proved in [7] that for fixed \(k\geq 2\) and \(\lambda>0\), the sequence \(\{h_{k}(n;\lambda),n=1,\ldots,k\}\) is strictly increasing. It follows that the location of the first double mode is not less than \(k\), i.e. \(\hat{m}_{k}\geq k\). Next, the mode is bounded by the floor of the mean (Theorem 2.1 in [8], recall eq. (5.2)) and Philippou [5] showed that the value of the mean is \(\kappa\lambda\). Hence \[\hat{m}_{k}\leq\kappa\hat{\lambda}_{k}\,. \tag{2.2}\] We can deduce two inequalities from this information. **Proposition 2.1**.: _Using eq. (2.2) and \(\hat{\lambda}_{k}<1\), it follows that for all \(k\geq 2\),_ \[k\leq\hat{m}_{k}<\kappa\,. \tag{2.3}\] **Proposition 2.2**.: _Using eq. (2.2) and \(\hat{m}_{k}\geq k\), we deduce \(\kappa\hat{\lambda}_{k}\geq k\). Solving for \(\hat{\lambda}_{k}\) yields the inequality_ \[\hat{\lambda}_{k}\geq\frac{k}{\kappa}=\frac{2}{k+1}\,. \tag{2.4}\] **Remark 2.3**.: _Using eq. (2.4), we improve the inequalities in eq. (2.1) as follows_ \[\frac{2}{k+1}\leq\hat{\lambda}_{k}\leq r_{k}<1\,. \tag{2.5}\] **Remark 2.4**.: _Philippou [9] showed that the Poisson distribution of order \(k\) has a unique mode of zero if \(\lambda<1/\kappa=2/(k(k+1))\). Using eq. (2.4), we improve this bound to say the mode is uniquely zero if_ \[\lambda<\frac{2}{k+1}\,. 
\tag{2.6}\] _This is a sufficient but not necessary condition._ **Remark 2.5**.: _We can now prove the conjecture in [1] that if the median is zero then the mode is also zero._ Proof.: The median is zero if and only if \(\lambda\leq(\ln 2)/k\) (proved in [1]). Observe that for all \(k\geq 1\), \[\frac{\ln 2}{k}<\frac{2}{k+1}\,. \tag{2.7}\] Hence if \(\lambda\leq(\ln 2)/k\) (median is zero) then \(\lambda<2/(k+1)\) and from eq. (2.6) the mode is zero. ## 3 Structure of the probability mass function ### General remarks We shall plot graphs to investigate the structure of the Poisson distribution of order \(k\). To fix ideas, we plot the value of the scaled pmf \(h_{k}(n;\lambda)\), as opposed to the true pmf \(f_{k}(n;\lambda)\) in eq. (1.1). The prefactor \(e^{-k\lambda}\) does not alter the shape of the histogram and is therefore irrelevant to the analysis below. To demonstrate, the scaled pmf of the standard Poisson distribution is plotted in Fig. 1. 1. For \(\lambda<1\), e.g. \(\lambda=0.8\), the unique mode is zero and the pmf decreases monotonically. 2. When \(\lambda=1\), the first double mode is attained, at \(0\) and \(1\). 3. As the value of \(\lambda\) increases further, the pmf displays a single peak. Its location shifts to the right as the value of \(\lambda\) increases. As the value of \(\lambda\) increases, the location of the mode increases in unit steps. 4. If the value of \(\lambda\) is an integer, e.g. \(\lambda=4\), the peak has a flat top and the distribution is bimodal, with joint modes at \(\lambda-1\) and \(\lambda\). 5. If the value of \(\lambda\) is not an integer, e.g. \(\lambda=4.2\), the mode is unique, with value \(\lfloor\lambda\rfloor\). By contrast, Fig. 2 displays a plot of the scaled pmf of the Poisson distribution of order \(50\), for \(\lambda\simeq 0.10194\). There are _four_ peaks in the histogram, at (i) \(n=0\) (height \(=1\)), (ii) \(n=50\) (height \(\simeq 0.6698\)), which is a local maximum and not a mode, (iii) \(n=98\) (height \(\simeq 0.98358\)), which is also a local maximum and not a mode, (iv) \(n=113\) (height \(=1\)), which is a double mode along with \(0\). As the value of \(\lambda\) changes, the relative heights of the peaks change, and the mode structure will vary. For example, we know that for sufficiently small values \(0<\lambda<2/(k+1)\), the unique mode is \(0\). Our goal in this section is to understand the evolution of the structure of the scaled pmf of the Poisson distribution of order \(k\). As complicated as it appears, Fig. 2 does contain useful clues as to how to parameterize the scaled pmf. First, the point at \(n=0\) always has a height of \(1\) and is an invariant: it is the same for all \(k\) and \(\lambda\). Next, it was mentioned earlier that the points for \(1\leq n\leq k\) form a strictly increasing sequence (this was proved in Lemma 1 in [7]) and that sequence is visible in Fig. 2. It was also mentioned earlier that for sufficiently small \(\lambda\), the point at \(n=k\) is a local maximum (proved in Lemma 3 in [7]) and this fact is also visible in Fig. 2. The remaining points \(n>k\) exhibit a more or less "mountain range" appearance, similar to the standard Poisson distribution, except there are _two_ peaks. We shall refer to the region \(n>k\) as the "mountain range" region and speak of the "left peak" and the "right peak" below. However, be advised that the mountain range does not always have two peaks: that is why we need a "parameter map" to classify matters. 
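The structure just described is easy to inspect numerically. The following sketch is our own illustration (the function names `scaled_pmf` and `mountain_peaks` are ours, not from this note): it evaluates the scaled pmf \(h_{k}(n;\lambda)\) directly from Definition 1.1 by convolving the laws of \(jX_{j}\), where the \(X_{j}\) are i.i.d. Poisson(\(\lambda\)), and then lists the local maxima in the mountain-range region \(n>k\).

```python
import numpy as np

def scaled_pmf(k, lam, n_max):
    """h_k(n; lam) = e^{k*lam} f_k(n; lam) for n = 0..n_max, built by convolving
    the (rescaled) laws of j*X_j with X_j ~ Poisson(lam), j = 1..k; this mirrors
    eq. (1.1), with n_max simply truncating the support."""
    h = np.zeros(n_max + 1)
    h[0] = 1.0
    for j in range(1, k + 1):
        g = np.zeros(n_max + 1)
        term, m = 1.0, 0            # term = lam^m / m!, updated iteratively
        while j * m <= n_max:
            g[j * m] = term
            m += 1
            term *= lam / m
        h = np.convolve(h, g)[: n_max + 1]
    return h

def mountain_peaks(h, k):
    """Local maxima of the histogram in the mountain-range region n > k."""
    return [n for n in range(k + 1, len(h) - 1)
            if h[n] >= h[n - 1] and h[n] > h[n + 1]]

# k = 50 near lam ~ 0.102 should show the four-peak picture of Fig. 2:
# the point at n = 0, the local maximum at n = k, and two mountain peaks.
h = scaled_pmf(50, 0.102, 300)
print(h[50], mountain_peaks(h, 50))
```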
We wish to compose a "parameter map" to characterize the behavior of the scaled pmf of the Poisson distribution of order \(k\) as a function of the order \(k\) and the rate parameter \(\lambda\). For the dependence on \(k\), there are four cases: (i) \(k\in[2,3]\), (ii) \(k\in[4,14]\), (iii) \(k\in[15,41]\), (iv) \(k\geq 42\). In each case, we increase the value of \(\lambda\) continuously from zero and examine the structure of the scaled pmf, and the resulting consequences. ### Case \(k\in[2,3]\) We select \(k=3\) as a representative example, because for \(k=2\) the "increasing sequence" \(n=1,\ldots,k\) contains only two points and is not informative. The scaled pmf of the Poisson distribution of order 3 is plotted in Fig. 3 for selected values of \(\lambda\), increasing from top to bottom. 1. In the top panel, \(\lambda=0.4\). This value is so small that the mountain range region (\(n>k\)) is monotonically decreasing. The increasing sequence for \(n=1,\ldots,k\) is visible, as is the local maximum at \(n=k\), but it is not a global maximum. The unique mode is zero. 2. As the value of \(\lambda\) increases, the height of the local maximum at \(n=k\) reaches 1 and a double mode is attained. The joint modes are at 0 and \(k\). This is displayed in the second panel, where \(\lambda\simeq 0.601679\). The mountain range region (\(n>k\)) is monotonically decreasing, although one can discern that a peak is forming. 3. When the value of \(\lambda\) increases, the unique mode jumps from 0 to \(k\), i.e. an increase of more than one unit. The "first double mode" represents a boundary between two domains in the parameter space. 4. As the value of \(\lambda\) increases further, the mountain range develops a single peak, and its height rises to equal that of the point at \(n=k\). This is displayed in the third panel, where \(\lambda\simeq 0.9962\). _The joint modes are at \(k\) and \(k+2\) (for both cases \(k=2\) and \(k=3\))._ 5. When the value of \(\lambda\) increases, the unique mode jumps from \(k\) to \(k+2\), i.e. an increase of more than one unit. This is the "second double mode" and is a boundary between two domains in the parameter space. 6. As the value of \(\lambda\) increases further, the height of the peak of the mountain range exceeds that of the point at \(n=k\). The location of the single peak shifts rightwards and its height increases. The mode is determined solely by the location of the peak of the mountain range and increases in unit steps and takes all integer values \(\geq k+2\). This is displayed in the fourth panel, where \(\lambda=1.02\). The scaled pmf has a unique mode given by the height of the single mountain peak. The point at \(n=k\) is still a local maximum but plays no further role to determine the mode. 7. For a discrete (denumerably infinite) set of values of \(\lambda\), the mountain peak has a flat top and the distribution is bimodal. The joint modes consist of consecutive integers. This is displayed in the fifth panel, where \(\lambda\simeq 1.4293\). There is a double mode, at 7 and 8. It is _almost_ a triple mode, but the height of the point at \(n=6\) is a little lower. The point at \(n=k\) is no longer a local maximum. ### Case \(k\in[4,14]\) We select \(k=10\) as a representative example. The scaled pmf of the Poisson distribution of order 10 is plotted in Fig. 4 for selected values of \(\lambda\), increasing from top to bottom. 1. In the top panel, \(\lambda=0.2\). This value is so small that the mountain range region (\(n>k\)) is monotonically decreasing. 
The increasing sequence for \(n=1,\ldots,k\) is visible, as is the local maximum at \(n=k\), but it is not a global maximum. The unique mode is zero. 2. As the value of \(\lambda\) increases, the height of the local maximum at \(n=k\) reaches 1 and a double mode is attained. The joint modes are at 0 and \(k\). This is displayed in the second panel, where \(\lambda\simeq 0.31713\). The mountain range region (\(n>k\)) exhibits a left peak, but its height is less than 1 and is a local but not global maximum. 3. When the value of \(\lambda\) increases, the unique mode jumps from 0 to \(k\), i.e. an increase of more than one unit. The "first double mode" represents a boundary between two domains in the parameter space. 4. As the value of \(\lambda\) increases further, the height of the left peak rises and equals that of the point at \(n=k\). This is displayed in the third panel, where \(\lambda\simeq 0.36189\). The joint modes are at \(k\) and the location of the left mountain peak, say \(m_{\rm left}\). The value of \(m_{\rm left}\) depends on \(\lambda\) but is always at least \(k+2\). No explicit formula is yet known for \(m_{\rm left}\) when a double mode is attained with the point at \(n=k\), although the values can be tabulated for all \(k\in[4,14]\). 5. When the value of \(\lambda\) increases, the unique mode jumps from \(k\) to \(m_{\rm left}\), i.e. an increase of more than one unit. This is the "second double mode" and is a boundary between two domains in the parameter space. 6. As the value of \(\lambda\) increases further, the mode is determined by the location of the left mountain peak. The value of the mode increases in unit steps _and there are double modes consisting of pairs of consecutive integers_. However, the mountain range develops a second (right) peak and its height rises faster than that of the left peak and it catches up with the height of the left peak. 7. As the value of \(\lambda\) increases further, a double mode is attained, where the heights of the two mountain peaks are equal. This is displayed in the fourth panel, where \(\lambda\simeq 0.472694\). We denote the locations of the two mountain peaks by \(m_{\rm left}\) and \(m_{\rm right}\), respectively. _Very significant:_ the value of \(m_{\rm left}\) is _larger_ than that in the previous panel, i.e. the value of \(m_{\rm left}\) increases with \(\lambda\). The point at \(n=k\) is still a local maximum but plays no further role to determine the mode. Observe also that there is a local minimum between the two mountain peaks, i.e. they are separated by more than one unit and are not consecutive integers. 8. When the value of \(\lambda\) increases, the unique mode jumps from \(m_{\rm left}\) to \(m_{\rm right}\), which is an increase of more than one unit. _However, this is not the "third double mode" because the value of \(m_{\rm left}\) increased between the third and fourth panels, i.e. there were double modes consisting of pairs of consecutive _integers_. We may refer to this as the "third mode jump" (by which is meant an increase of the mode by more than one unit) and is a boundary between two domains in the parameter space. With this terminology, the first double mode is the _first mode jump_ and the second double mode is the _second mode jump_. 9. As the value of \(\lambda\) increases further, the height of the right mountain peak exceeds that of the left. The location of the right peak shifts rightwards and its height increases. 
The mode is determined solely by the location of the right mountain peak and increases in unit steps and takes all integer values from its value at the third mode jump upwards. 10. For a discrete (denumerably infinite) set of values of \(\lambda\), the right mountain peak has a flat top and the distribution is bimodal. The joint modes consist of consecutive integers. This is displayed in the fifth panel, where \(\lambda\simeq 0.5119\). There is a double mode at 24 and 25. The point at \(n=k\) is still a local maximum but is no longer relevant to determine the mode. ### Case \(k\in[15,41]\) We select \(k=20\) as a representative example. The scaled pmf of the Poisson distribution of order 20 is plotted in Fig. 5 for selected values of \(\lambda\), increasing from top to bottom. 1. In the top panel, \(\lambda=0.1\). This value is so small that the mountain range region (\(n>k\)) is monotonically decreasing. The increasing sequence for \(n=1,\ldots,k\) is visible, as is the local maximum at \(n=k\), but it is not a global maximum. The unique mode is zero. 2. As the value of \(\lambda\) increases, the mountain range develops a left peak and its height rises and catches up with that of the point at \(n=k\). This is displayed in the second panel, where \(\lambda\simeq 0.1899\). _Note that both heights are less than \(1\), hence they are not modes_. 3. As the value of \(\lambda\) increases further, the height of the left mountain peak rises faster than that of the point at \(n=k\) and it equals \(1\)_before the point at \(n=k\) does so_. This is displayed in the third panel, where \(\lambda\simeq 0.20333\). Hence the first double mode is _not_ at \(0\) and \(k\). _The point at \(n=k\) never plays a role to determine the mode_. The joint modes are at \(0\) and the location of the left mountain peak, say \(m_{\rm left}\). The value of \(m_{\rm left}\) depends on \(\lambda\) but is always at least \(k+2\). No explicit formula is yet known for \(m_{\rm left}\) when a double mode is attained with the point at \(n=0\), although the values can be tabulated for all \(k\in[15,41]\). 4. When the value of \(\lambda\) increases, the unique mode jumps from \(0\) to \(m_{\rm left}\), i.e. an increase of more than one unit. This is the "first double mode" (perhaps it is better to say the _first mode jump_) and is a boundary between two domains in the parameter space. 5. As the value of \(\lambda\) increases further, the mode is determined by the location of the left mountain peak. The value of the mode increases in unit steps _and there are double modes consisting of pairs of consecutive integers_. However, the mountain range develops a second (right) peak and its height rises faster than that of the left peak and it catches up with the height of the left peak. 6. As the value of \(\lambda\) increases further, a double mode is attained, where the heights of the two mountain peaks are equal. This is displayed in the fourth panel, where \(\lambda\simeq 0.24159\). We denote the locations of the two mountain peaks by \(m_{\rm left}\) and \(m_{\rm right}\), respectively. _Very significant:_ the value of \(m_{\rm left}\) is _larger_ than that in the previous panel, i.e. the value of \(m_{\rm left}\) increases with \(\lambda\). Observe also that there is a local minimum between the two mountain peaks, i.e. they are separated by more than one unit and are not consecutive integers. 7. 
When the value of \(\lambda\) increases, the unique mode jumps from \(m_{\rm left}\) to \(m_{\rm right}\), which is an increase of more than one unit. We refer to this as the "second mode jump" (by which is meant an increase of the mode by more than one unit) and is a boundary between two domains in the parameter space. 8. As the value of \(\lambda\) increases further, the height of the right mountain peak exceeds that of the left. The location of the right peak shifts rightwards and its height increases. The mode is determined solely by the location of the right mountain peak and increases in unit steps and takes all integer values from its value at the second mode jump upwards. 9. For a discrete (denumerably infinite) set of values of \(\lambda\), the right mountain peak has a flat top and the distribution is bimodal. The joint modes consist of consecutive integers. This is displayed in the fifth panel, where \(\lambda\simeq 0.3039\). There is a double mode at 55 and 56. ### Case \(k\geq 42\) We select \(k=50\) as a representative example. The scaled pmf of the Poisson distribution of order \(50\) is plotted in Fig. 6 for selected values of \(\lambda\), increasing from top to bottom. 1. In the top panel, \(\lambda=0.04\). This value is so small that the mountain range region (\(n>k\)) is monotonically decreasing. The increasing sequence for \(n=1,\ldots,k\) is visible, as is the local maximum at \(n=k\), but it is not a global maximum. The unique mode is zero. 2. As the value of \(\lambda\) increases, the mountain range develops a left peak and its height rises and catches up with that of the point at \(n=k\). This is displayed in the second panel, where \(\lambda\simeq 0.07822\). _Note that both heights are less than \(1\), hence they are not modes._ 3. As the value of \(\lambda\) increases further, the mountain range develops a right peak and its height rises and catches up with that of the left peak. This is displayed in the third panel, where \(\lambda\simeq 0.098\). _Note that both heights are less than \(1\), hence they are not modes._ 4. As the value of \(\lambda\) increases further, the height of the right mountain peak rises faster than that of the left peak and it equals \(1\)_before the left peak or the point at \(n=k\) does so._ This is displayed in the fourth panel, where \(\lambda\simeq 0.10194\). Hence the first double mode is _not_ at \(0\) and \(k\). _The point at \(n=k\) and the left mountain peak never play a role to determine the mode._ The joint modes are at \(0\) and the location of the right mountain peak, say \(m_{\rm right}\). The value of \(m_{\rm right}\) depends on \(\lambda\) but is always at least \(k+2\). No explicit formula is yet known for \(m_{\rm right}\) when a double mode is attained with the point at \(n=0\), although the values can be tabulated for all desired values \(k\geq 42\). 5. When the value of \(\lambda\) increases, the unique mode jumps from \(0\) to \(m_{\rm right}\), i.e. an increase of more than one unit. This is the "first double mode" (perhaps it is better to say the _first mode jump_) and is a boundary between two domains in the parameter space. 6. As the value of \(\lambda\) increases further, the mode is determined solely by the location of the right mountain peak. The location of the right peak shifts rightwards and its height increases. The mode increases in unit steps and takes all integer values from its value at the first mode jump upwards. _There is only one mode jump, for \(k\geq 42\)._ 7. 
For a discrete (denumerably infinite) set of values of \(\lambda\), the right mountain peak has a flat top and the distribution is bimodal. The joint modes consist of consecutive integers. This is displayed in the fifth panel, where \(\lambda\simeq 0.105\). There is a double mode at \(116\) and \(117\). ### Parameter map The shape of the pmf of the Poisson distribution of order \(k\) can be partitioned into three sections: (i) the single point at \(n=0\), (ii) the increasing sequence (\(n\in[1,k]\)), and (iii) the mountain range (\(n>k\)). It is by no means obvious from the formal sum in eq. (1.1) that such a partition exists. The increasing sequence is peculiar to the Poisson distribution of order \(k\) and does not exist for the standard Poisson distribution (\(k=1\)). It consists of terms where the number of summands equals \(n\), because \(n\in[1,k]\). The mountain range consists of terms where the number of summands is strictly less than \(n\), because \(n>k\). The mountain range is similar in character to the scaled pmf of the standard Poisson distribution. For sufficiently small \(\lambda>0\), it decreases monotonically, but for larger values of \(\lambda\) it exhibits a peak. It actually exhibits a maximum of _two_ peaks. The reason for this is not known: why not a single peak, and why not more than two peaks? The mode structure of the scaled pmf of the Poisson distribution of order \(k\) is determined by four parameters: (i) the single point at \(n=0\), whose height is always \(1\), (ii) the single point at \(n=k\), whose height depends on \(\lambda\), (iii) the left mountain peak \(m_{\rm left}\), and (iv) the right mountain peak \(m_{\rm right}\). For \(k=2\) and \(k=3\), there is only a single peak in the region \(n>k\). Both the location and the height of the mountain peaks depend on \(\lambda\). The evidence presented in this note suggests that the Poisson distribution of order \(k\) does not have three or more joint modes. The evidence is admittedly numerical, and cannot claim to be exhaustive, hence the existence of triple, etc. modes remains open. This is the parameter map: 1. For \(k\in[2,3]\), the following happens as the value of \(\lambda\) increases continuously from \(0\). 1. For sufficiently small \(\lambda>0\), the mode is zero. 2. As \(\lambda\) increases, the height of the point at \(n=k\) reaches \(1\) and there is a double mode at \(0\) and \(k\). 3. When this domain boundary is crossed, the mode jumps from \(0\) to \(k\). This is the first mode jump. 4. As \(\lambda\) increases, the height of the single peak in the mountain range catches up to the height of the point at \(n=k\) and there is a double mode at \(k\) and \(k+2\). 5. When this domain boundary is crossed, the mode jumps from \(k\) to \(k+2\). This is the second mode jump. 6. As \(\lambda\) increases further, the mode is determined by the single peak in the mountain range. The location of the single peak shifts rightwards and its height increases. The mode increases in unit steps and takes all integer values \(\geq k+2\). There is a denumerable infinity of double modes, consisting of pairs of consecutive integers. 2. For \(k\in[4,14]\), the following happens as the value of \(\lambda\) increases continuously from \(0\). 1. For sufficiently small \(\lambda>0\), the mode is zero. 2. As \(\lambda\) increases, the height of the point at \(n=k\) reaches \(1\) and there is a double mode at \(0\) and \(k\). 3. When this domain boundary is crossed, the mode jumps from \(0\) to \(k\). 
This is the first mode jump. 4. As \(\lambda\) increases, the height of the left mountain peak catches up to the height of the point at \(n=k\) and there is a double mode at \(k\) and \(m_{\text{left}}\). The value of \(m_{\text{left}}\) is not less than \(k+2\). 5. When this domain boundary is crossed, the mode jumps from \(k\) to \(m_{\text{left}}\). This is the second mode jump. 6. As \(\lambda\) increases further, the mode is determined by the left mountain peak, which shifts rightwards and its height increases. The mode increases in unit steps and there are some double modes, consisting of pairs of consecutive integers. 7. As \(\lambda\) increases further, the height of the right mountain peak catches up to the height of the left mountain peak and there is a double mode at \(m_{\text{left}}\) and \(m_{\text{right}}\). The values of \(m_{\text{left}}\) and \(m_{\text{right}}\) are never consecutive integers. 8. When this domain boundary is crossed, the mode jumps from \(m_{\text{left}}\) to \(m_{\text{right}}\). This is the third mode jump. 9. As \(\lambda\) increases further, the mode is determined by the right mountain peak, which shifts rightwards and its height increases. The mode increases in unit steps and takes all values from the third mode jump upwards. There is a denumerable infinity of double modes, consisting of pairs of consecutive integers. 3. For \(k\in[15,41]\), the following happens as the value of \(\lambda\) increases continuously from \(0\). 1. For sufficiently small \(\lambda>0\), the mode is zero. 2. As \(\lambda\) increases, the height of the left mountain peak catches up to the height of the point at \(n=k\), but their heights are less than \(1\). The mode is zero. 3. As \(\lambda\) increases further, the height of the left mountain peak reaches \(1\) and there is a double mode at \(0\) and \(m_{\text{left}}\). The value of \(m_{\text{left}}\) is not less than \(k+2\). The point at \(n=k\) plays no role to determine the mode structure. 4. When this domain boundary is crossed, the mode jumps from \(0\) to \(m_{\text{left}}\). This is the first mode jump. 5. As \(\lambda\) increases further, the mode is determined by the left mountain peak, which shifts rightwards and its height increases. The mode increases in unit steps and there are some double modes, consisting of pairs of consecutive integers. 6. As \(\lambda\) increases further, the height of the right mountain peak catches up to the height of the left mountain peak and there is a double mode \(m_{\text{left}}\) and \(m_{\text{right}}\). The values of \(m_{\text{left}}\) and \(m_{\text{right}}\) are never consecutive integers. 7. When this domain boundary is crossed, the mode jumps from \(m_{\text{left}}\) and \(m_{\text{right}}\). This is the second mode jump. 8. As \(\lambda\) increases further, the mode is determined by the right mountain peak, which shifts rightwards and its height increases. The mode increases in unit steps and takes all values from the second mode jump upwards. There is a denumerable infinity of double modes, consisting of pairs of consecutive integers. 4. For \(k\geq 42\), the following happens as the value of \(\lambda\) increases continuously from \(0\). 1. For sufficiently small \(\lambda>0\), the mode is zero. 2. As \(\lambda\) increases, the height of the left mountain peak catches up to the height of the point at \(n=k\), but their heights are less than \(1\). The mode is zero. 3. 
As \(\lambda\) increases further, the height of the right mountain peak catches up to the height of the left mountain peak but their heights are less than \(1\). The mode is zero. 4. As \(\lambda\) increases further, the height of the right mountain peak reaches \(1\) and there is a double mode at \(0\) and \(m_{\rm right}\). The value of \(m_{\rm right}\) is not less than \(k+2\). The point at \(n=k\) and the left mountain peak play no role to determine the mode structure. 5. When this domain boundary is crossed, the mode jumps from \(0\) to \(m_{\rm right}\). This is the first (and only) mode jump. 6. As \(\lambda\) increases further, the mode is determined by the right mountain peak, which shifts rightwards and its height increases. The mode increases in unit steps and takes all values from the first mode jump upwards. There is a denumerable infinity of double modes, consisting of pairs of consecutive integers. ### Excluded values We can now explain the "excluded values" tabulated in [2], i.e. integers which cannot be modes of the Poisson distribution of order \(k\). _The values tabulated in [2] are the integers which are skipped in the first, second or third mode jumps,_ and we can now explain the cause of those mode jumps. A mode jump greater than unity occurs only when the controlling parameter of the mode changes, e.g. from \(0\) to \(k\) or from the left peak to the right peak, etc. Only values \(k\in[4,14]\) display three mode jumps, and for \(k\geq 42\) there is only one mode jump. _But for all \(k\geq 2\) there is always a mode jump._ As noted in [2], the integers \(1\) and \(k+1\) are never modes of the Poisson distribution of order \(k\geq 2\). ## 4 Median It was proved in [1] that the median is zero if and only if \(\lambda\leq(\ln 2)/k\). Numerical studies reported in [1] also yielded the following expression for the median for \(\lambda\geq 1\). If \(n\in\mathbb{N}\) and \(n\geq\kappa\), set \(\lambda=n/\kappa\) (so \(\lambda\geq 1\)). Then the median is given by (eq. (3.1) in [1]) \[\nu_{k}(n/\kappa)=n-\left\lfloor\frac{k+4}{8}\right\rfloor. \tag{4.1}\] Note that \(\frac{\kappa}{k}\,\ln 2=\frac{1}{2}(k+1)\ln 2>\lfloor(k+4)/8\rfloor\) for all \(k\geq 1\). Hence we conjecture the following bounds for the value of the median in the intermediate zone \(\lambda\in(\frac{1}{k}\,\ln 2,\,1)\). **Conjecture 4.1**.: _For fixed \(k\geq 2\) and \(\lambda\in(\frac{1}{k}\,\ln 2,\,1)\), we claim the lower bound for the median is_ \[\nu_{k}(\lambda)\geq\max\bigl{\{}0,\lfloor\kappa\lambda\rfloor-\tfrac{1}{2}( k+1)\ln 2\bigr{\}}\,. \tag{4.2}\] _For an upper bound, we know from the numerical studies reported in [1] it is possible for the value of the median to exceed the mean, because \(\lfloor(k+4)/8\rfloor=0\) if \(k<4\). We propose the following upper bound for the median, where \(c_{0}=1\) if \(k\leq 3\) and \(c_{0}=0\) if \(k\geq 4\)._ \[\nu_{k}(\lambda)\leq\lfloor\kappa\lambda\rfloor+c_{0}\,. \tag{4.3}\] Monte Carlo scans using values \(2\leq k\leq 4\times 10^{4}\) and \(\lambda\in(\frac{1}{k}\,\ln 2,\,1)\) found no violations of the bounds in eqs. (4.2) or (4.3). Note the following: 1. From eq. (4.1), it is tempting to conjecture that a sharper upper bound is \(\nu_{k}(\lambda)\leq\lfloor\kappa\lambda\rfloor-\lfloor(k+4)/8\rfloor\) or possibly \(\nu_{k}(\lambda)\leq 1+\lfloor\kappa\lambda\rfloor-\lfloor(k+4)/8\rfloor\), but they are both false for \(\lambda<1\). 
An example is \(k=514\) and \(\lambda\simeq 0.0031619\), whence \(\kappa\lambda\simeq 418.4998\) and \(\lfloor\kappa\lambda\rfloor-\lfloor(k+4)/8\rfloor=418-64=354\), but the median value is \(367\). 2. A graph of the median is plotted in Fig. 7, for \(k=10\) and \(0<\kappa\lambda\leq 40\), whence \(\kappa=55\) and \(0<\lambda\leq 40/55=0.7272\dots\) The value of the median is plotted as the solid line. The lower bound from eq. (4.2) is plotted as the dotted line. The upper bound from eq. (4.3) is plotted as the dashed line. Observe that the upper bound equals the median at several places, indicating that it is a sharp upper bound. Observe also that the lower bound is essentially a straight line, and does not capture the curvature as the value of the median approaches zero for small values of \(\kappa\lambda\). This indicates that the expression for the lower bound can be improved. 3. The graph of the median in Fig. 7 looks hyperbolic, with a straight line asymptote for large values of \(\kappa\lambda\) (see eq. (4.1) for \(\lambda\geq 1\)). This suggests a better parameterization for the value of the median is possible in the interval \(\lambda\in(\frac{1}{k}\ln 2,\,1)\). The matter is left for future work. Given the above, we propose the following exact results and bounds for the median, for all \(k\geq 2\) and \(\lambda>0\). In all cases, we fix \(k\geq 2\) and the median is denoted by \(\nu_{k}(\lambda)\). 1. For \(\lambda\in(0,\frac{1}{k}\,\ln 2]\) the median is zero: \(\nu_{k}(\lambda)=0\). This was proved in [1]. 2. For \(\lambda\in(\frac{1}{k}\,\ln 2,\,1)\), the median is bounded via eqs. (4.2) and (4.3), where \(c_{0}=1\) if \(k\leq 3\) and \(c_{0}=0\) if \(k\geq 4\). \[\max\bigl{\{}0,\lfloor\kappa\lambda\rfloor-\tfrac{1}{2}(k+1)\ln 2\bigr{\}} \leq\nu_{k}(\lambda)\leq\lfloor\kappa\lambda\rfloor+c_{0}\,.\] (4.4) 3. For \(\lambda\geq 1\), the median is given by eqs. (3.1), (3.3) and (3.4) in [1] as follows. Let \(n\in\mathbb{N}\) and \(n\geq\kappa\), then for \(\kappa\lambda\in(\alpha_{k,n-1},\alpha_{k,n}]\) the median equals \(\nu_{k}(n/\kappa)\) as follows. \[\nu_{k}(n/\kappa) =n-\left\lfloor\frac{k+4}{8}\right\rfloor,\] (4.5a) \[\alpha_{k,n} =n+\text{frac}\Bigl{(}\frac{k+4}{8}\Bigr{)}+\frac{k}{8(2k+1)}+A_{ k,n}\;,\] (4.5b) \[A_{k,n} =\left(\frac{3\kappa}{349}+\frac{13}{1000}\right)\!\frac{1}{n}+ \frac{13}{1500}\biggl{(}\left\lfloor\frac{k+4}{8}\right\rfloor-3\biggr{)}\, \frac{\kappa}{n^{2}}+\cdots\] (4.5c) The expression for \(A_{k,n}\) is approximate, but the expressions for \(\nu_{k}(n/\kappa)\) and \(\alpha_{k,n}\) are otherwise conjectured to be exact. ## 5 Mode Recall \(\hat{m}_{k}\) is the value of the first double mode of the Poisson distribution of order \(k\), i.e. the distribution is bimodal, with modes at \(0\) and \(\hat{m}_{k}\) and \(\hat{\lambda}_{k}\) is the corresponding value of \(\lambda\). It was shown in [2] that \(\hat{\lambda}_{k}\) is a strictly decreasing function of \(k\) and also that \(\hat{m}_{k}\geq k\). The following asymptotic expression for the mode was conjectured in [1]. Let \(n\in\mathbb{N}\) and \(n\geq 2\kappa\) and set \(\lambda=n/\kappa\) (so \(\lambda\geq 2\)). Then the mode is given by (eq. (4.1) in [1]) \[m_{k}(n/\kappa)=n-\left\lfloor\frac{3k+5}{8}\right\rfloor. \tag{5.1}\] Theorem 2.1 in [8] states the following upper and lower bounds for the mode. \[\left\lfloor\kappa\lambda\right\rfloor-\kappa+1-\delta_{k,1}\leq m_{k}( \lambda)\leq\left\lfloor\kappa\lambda\right\rfloor. \tag{5.2}\] We conjecture an improved lower bound for the mode. 
**Conjecture 5.1**.: _For fixed \(k\geq 2\) and \(\lambda\in(\hat{\lambda}_{k},\,2)\) (so the value of the mode is nonzero), we propose the following as an improved lower bound for the mode_ \[m_{k}(\lambda)\geq\left\lfloor\kappa\lambda\right\rfloor-k\,. \tag{5.3}\] _The right-hand side is nonnegative and evidence will be presented below that it is a sharp lower bound._ Monte Carlo scans using values \(2\leq k\leq 4\times 10^{4}\) and \(0<\lambda<2\) found no violations of eq. (5.3). (Only cases where the value of the mode was positive were included in the scan, to satisfy the requirements of Conjecture 5.1.) Note the following: 1. From eq. (5.1), it is tempting to conjecture that a sharper upper bound is \(m_{k}(\lambda)\leq\left\lfloor\kappa\lambda\right\rfloor-\left\lfloor(3k+5) /8\right\rfloor\) or possibly \(m_{k}(\lambda)\leq 1+\left\lfloor\kappa\lambda\right\rfloor-\left\lfloor(3k+5) /8\right\rfloor\), but they are both false for \(\lambda<2\). An example is \(k=44\) and \(\lambda\simeq 0.114198\), whence \(\kappa\lambda\simeq 113.056\) and \(\left\lfloor\kappa\lambda\right\rfloor-\left\lfloor(3k+5)/8\right\rfloor=113 -17=96\), but the mode value is \(98\). 2. Note however that Monte Carlo scans have thus far failed to find an example where \(m_{k}(\lambda)>0\) and \(m_{k}(\lambda)\geq 3+\left\lfloor\kappa\lambda\right\rfloor-\left\lfloor(3k+5) /8\right\rfloor\). 3. A graph of the mode is plotted in Fig. 8, for \(k=10\) and \(0<\kappa\lambda\leq 40\), whence \(\kappa=55\) and \(0<\lambda\leq 40/55=0.7272\dots\) The value of the mode is plotted as the solid line. The lower bound from eq. (5.3) is plotted as the dotted line. The upper bound is \(\left\lfloor\kappa\lambda\right\rfloor\) (from [8]) and is plotted as the dashed line. 4. Unlike the median, the value of the mode does not always increase in unit steps. Observe in Fig. 7 that the value of the mode jumps from \(0\) to \(10\) and then from \(10\) to \(17\) and then from \(20\) to \(23\). These mode jumps were explained in Sec. 3. 5. In Fig. 7, it looks as if the lower bound equals the mode at \(\kappa\lambda=20\), but it does not. The value of the mode jumps from \(10\) to \(17\) at \(\kappa\lambda\simeq 19.91\), while the lower bound increases from \(9\) to \(10\) at \(\kappa\lambda=20\). 6. However, the case \(k=2\) demonstrates that the expression for the lower bound in eq. (5.3) is a sharp lower bound. A graph of the mode is plotted in Fig. 9, for \(k=2\) and \(0<\kappa\lambda\leq 8\), whence \(\kappa=3\) and \(0<\lambda\leq 8/3=2.666\dots\) The value of the mode is plotted as the solid line. The upper and lower bounds are plotted as the dashed and dotted lines, respectively. Both the mode and lower bound equal \(2\) at \(\kappa\lambda=4\). For \(\kappa\lambda=4.0238\) the value of the mode jumps to \(4\) but the lower bound remains at \(2\). Hence the mode and lower bound both equal \(2\) in a nonempty subset of the interval \(\kappa\lambda\in[4,4.0238)\). Given the above, we propose the following exact results and bounds for the mode, for all \(k\geq 2\) and \(\lambda>0\). In all cases, we fix \(k\geq 2\) and the mode is denoted by \(m_{k}(\lambda)\). 1. For \(\lambda\in(0,\hat{\lambda}_{k})\) the mode is zero: \(m_{k}(\lambda)=0\), by the definition of \(\hat{\lambda}_{k}\). 2. For \(\lambda=\hat{\lambda}_{k}\), there is a double mode at \(0\) and \(\hat{m}_{k}\) (this is the definition of \(\hat{m}_{k}\)). 3. 
For \(\lambda\in(\hat{\lambda}_{k},\,2)\), the mode is bounded as follows \[\lfloor\kappa\lambda\rfloor-k\leq m_{k}(\lambda)\leq\lfloor\kappa\lambda\rfloor\,.\] (5.4) The lower bound is based on numerical studies in this note. The upper bound is from [8]. 4. For \(\lambda\geq 2\), the mode is given by eqs. (4.1), (4.5) and (4.6) in [1] as follows. Let \(n\in\mathbb{N}\) and \(n\geq 2\kappa\), then for \(\kappa\lambda\in(\beta_{k,n-1},\beta_{k,n}]\) the mode equals \(m_{k}(n/\kappa)\) as follows. \[m_{k}(n/\kappa) =n-\left\lfloor\frac{3k+5}{8}\right\rfloor,\] (5.5a) \[\beta_{k,n} =n+\text{frac}\Big{(}\frac{3k+5}{8}\Big{)}+\frac{k-1}{8(2k+1)}+B_ {k,n}\,,\] (5.5b) \[B_{k,n} =\left(\frac{\kappa}{16+\frac{8}{9}}-\frac{1}{13+\frac{2}{3}} \right)\!\frac{1}{n}+\left\lfloor\frac{3k+5}{8}\right\rfloor\frac{3\kappa}{50n ^{2}}+\cdots\] (5.5c) The expression for \(B_{k,n}\) is approximate, but the expressions for \(m_{k}(n/\kappa)\) and \(\beta_{k,n}\) are otherwise conjectured to be exact. 5. Note that no general formula is yet known for \(\hat{m}_{k}\) or \(\hat{\lambda}_{k}\), although specific cases have been solved. It was stated in [1] that the asymptotic formula for the mode is applicable for values \(\lambda\geq 2\) (see eq. (5.1)). Just out of curiosity, let us plot the pmf for \(\lambda=2\) for the values \(k=3,10,20,50\) employed in Sec. 3. To plot them on the same scale, we scale all the curves to a peak height of 1 and on the horizontal axis we plot the value of \(n/\kappa\in[0,6]\). The curves are displayed in Fig. 10. The mean is \(\kappa\lambda=2\kappa\), hence the mean is at \(n/\kappa=2\) for all the curves. Observe that all the curves are smooth. Issues of a local maximum at \(n=k\), or a left peak, have faded out of the picture. The smoothness improves as \(k\) increases, which is expected. The peaks get narrower as \(k\) increases. We can explain this as follows. Recall the variance is \(\sigma_{k}^{2}(\lambda)=\frac{1}{6}k(k+1)(2k+1)\lambda\), hence in Fig. 10 the scaled peak width is (scaled standard deviation) \[\begin{split}\frac{\sigma_{k}(\lambda)}{\kappa}&= \sqrt{\frac{\frac{1}{6}k(k+1)(2k+1)\lambda}{\frac{1}{4}k^{2}(k+1)^{2}}}\\ &=\sqrt{\frac{2\lambda}{3}}\;\sqrt{\frac{2k+1}{k(k+1)}}\\ &=\sqrt{\frac{2\lambda}{3}}\;\sqrt{\frac{1}{k}+\frac{1}{k+1}}\;. \end{split} \tag{5.6}\] Hence the scaled peak width decreases as the value of \(k\) increases. We fixed \(\lambda=2\) in Fig. 10. Observe also that the peaks shift to the right as \(k\) increases. This is consistent with eq. (5.1). Set \(n=\kappa\lambda=\mu_{k}(\lambda)\) and divide by \(\kappa\) to obtain \[\begin{split}\frac{\mu_{k}(\lambda)-m_{k}(\lambda)}{\kappa}& =\frac{\lfloor(3k+5)/8\rfloor}{\frac{1}{2}k(k+1)}\\ &\leq\frac{(3k+5)/8}{\frac{1}{2}k(k+1)}\\ &=\frac{1}{4}\left(\frac{5}{k}-\frac{2}{k+1}\right).\end{split} \tag{5.7}\] The scaled difference between the mean and the mode decreases as the value of \(k\) increases. ## 6 Conclusion The major goal of this note was to characterize the structure of the probability mass function (pmf) of the Poisson distribution of order \(k\). The pmf can be partitioned into a single point at \(n=0\), an increasing sequence for \(n\in[1,k]\) and a mountain range for \(n>k\). That structure is by no means obvious from the formal definition of the pmf as a sum over terms with factorial denominators. The mode structure of the pmf was quantified. The locations of _and the reasons for_ the mode jumps (where the mode increases by more than one unit) were established. 
The "excluded values" tabulated in [2], i.e. integers which cannot be modes of the Poisson distribution of order \(k\), were explained as those integers which are skipped in the mode jumps. It was demonstrated that for all \(k\geq 2\), the Poisson distribution of order \(k\) has a denumerably infinite set of double modes, consisting of pairs of consecutive integers. Numerical evidence was presented that the Poisson distribution of order \(k\) does not have three or more joint modes. The opportunity was also taken to publish improvements to various inequalities (sharper bounds, etc.) and also to present new conjectured upper and lower bounds for the median and the mode of the Poisson distribution of order \(k\).
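The mode jumps and excluded values summarized above can be reproduced with a short scan. The sketch below is our own illustration (not code from this note; it repeats the `scaled_pmf` helper from the earlier sketch so that it runs on its own): for a fixed \(k\) it evaluates the pmf on a grid of \(\lambda\) values, tracks the location of the global maximum, and reports every increase of the mode by more than one unit; the integers skipped in these jumps are exactly the excluded values.

```python
import numpy as np

def scaled_pmf(k, lam, n_max):
    """Scaled pmf h_k(n; lam), built by convolving the laws of j*X_j, X_j ~ Poisson(lam)."""
    h = np.zeros(n_max + 1)
    h[0] = 1.0
    for j in range(1, k + 1):
        g = np.zeros(n_max + 1)
        term, m = 1.0, 0
        while j * m <= n_max:
            g[j * m] = term
            m += 1
            term *= lam / m
        h = np.convolve(h, g)[: n_max + 1]
    return h

def mode_jumps(k, lam_grid, n_max=400):
    """(lam, previous mode, new mode) wherever the mode increases by more than one unit."""
    jumps, prev = [], 0
    for lam in lam_grid:
        mode = int(np.argmax(scaled_pmf(k, lam, n_max)))
        if mode > prev + 1:
            jumps.append((round(float(lam), 4), prev, mode))
        prev = mode
    return jumps

# For k = 10 one expects three jumps (0 -> k, k -> m_left, m_left -> m_right);
# for k >= 42 only one (0 -> m_right).
print(mode_jumps(10, np.linspace(0.01, 0.6, 400)))
```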
2303.17876
WebQAmGaze: A Multilingual Webcam Eye-Tracking-While-Reading Dataset
We present WebQAmGaze, a multilingual low-cost eye-tracking-while-reading dataset, designed as the first webcam-based eye-tracking corpus of reading to support the development of explainable computational language processing models. WebQAmGaze includes webcam eye-tracking data from 600 participants of a wide age range naturally reading English, German, Spanish, and Turkish texts. Each participant performs two reading tasks composed of five texts each, a normal reading and an information-seeking task, followed by a comprehension question. We compare the collected webcam data to high-quality eye-tracking recordings. The results show a moderate to strong correlation between the eye movement measures obtained with the webcam compared to those obtained with a commercial eye-tracking device. When validating the data, we find that higher fixation duration on relevant text spans accurately indicates correctness when answering the corresponding questions. This dataset advances webcam-based reading studies and opens avenues to low-cost and diverse data collection. WebQAmGaze is beneficial to learn about the cognitive processes behind question-answering and to apply these insights to computational models of language understanding.
Tiago Ribeiro, Stephanie Brandl, Anders Søgaard, Nora Hollenstein
2023-03-31T08:18:30Z
http://arxiv.org/abs/2303.17876v3
# WebQAmGaze: A Multilingual Webcam Eye-Tracking-While-Reading Dataset ###### Abstract We create WebQAmGaze, a multilingual low-cost eye-tracking-while-reading dataset, designed to support the development of fair and transparent NLP models. WebQAmGaze includes webcam eye-tracking data from 332 participants naturally reading English, Spanish and German texts. Each participant performs two reading tasks composed of five texts, a normal reading and an information-seeking task. After preprocessing the data, we find that fixations on relevant spans seem to indicate correctness when answering the comprehension questions. Additionally, we perform a comparative analysis of the data collected to high-quality eye-tracking data. The results show a moderate correlation between the features obtained with the webcam-ET compared to those of a commercial ET device. We believe this data can advance webcam-based reading studies and open a way to cheaper and more accessible data collection. WebQAmGaze is useful to learn about the cognitive processes behind question answering (QA) and to apply these insights to computational models of language understanding. ## 1 Introduction Eye movement data is useful for natural language processing (NLP) models since it provides direct access to human language processing signals Mishra and Bhattacharyya (2018). Eye-tracking (ET) recordings can therefore be leveraged to augment NLP models by providing a human inductive bias Hollenstein et al. (2020) or to evaluate and analyze the inner workings of the models and increase their explainability Sood et al. (2020). Eye movement data recorded from natural reading has been used to improve models for various NLP tasks such as document summarizing tasks Xu et al. (2009), part of speech tagging Barrett et al. (2016), and named-entity recognition Hollenstein and Zhang (2019), among others. The gaze points from reading patterns are translated into engineered features, such as the number of fixations, duration of fixations, and number of saccades, which reflect the various stages of linguistic processing during language comprehension. However, these machine learning approaches rely on large text datasets and are thus often limited by the size and availability of existing eye-tracking datasets, which require expensive equipment and participants to be present in a lab to be collected. Dataset availability is also sparse, with most reading stimuli being in English. It is also common for datasets to provide a specific reading task since different gaze patterns are produced depending on the task participants are primed on, e.g. linguistic annotation or information-seeking Hollenstein et al. (2020); Malmaud et al. (2020). This means that features from specific datasets might be hard to transfer to a different domain or task. Moreover, it opens the question of whether eye movement data from normal reading or task-specific reading is more beneficial for NLP models. In light of these challenges, in this work, we present WebQAMGaze, a multilingual webcam eye-tracking-while-reading dataset tailored to be used not only for reading research and comparisons between high and low-quality gaze recordings but also in machine learning-based NLP applications. To enable a large-scale experiment setup, the data is collected through the crowd-sourcing platform _Amazon Mechanical Turk_ paired with open-source libraries such as _jsPsych_de Leeuw (2015) and _WebGazer_Papoutsaki et al. (2016). 
To ensure the adequacy of the text stimuli for their use for downstream NLP applications, we select texts in multiple languages (English, Spanish, and German) from an open-source question answering dataset. The WebQAmGaze data used and related experiment and analysis code are available online.1 Footnote 1: [https://github.com/tfnribeiro/WebQAMGaze](https://github.com/tfnribeiro/WebQAMGaze) For the data collection, we employ two experiment paradigms, a _normal reading_ (NR) task, where participants read a continuous text and answer a comprehension question on the next screen, and an _information-seeking_ (IS) task, where participants are presented with the question they have to answer before reading the text, followed by the text itself and the question again. Previous work has shown that information-seeking reading results in faster reading speed and higher omission rates, i.e., fewer words are fixated during such a search task when compared to normal reading (Hollenstein et al., 2020), which also leads to less alignment with NLP models (Eberle et al., 2022). By collecting eye-tracking data from both tasks, we provide the possibility to analyze this behavior also in webcam-quality recordings. We hypothesize that a higher number of fixations on the relevant target spans in the text will result in participants answering the questions correctly. With the WebQAMGaze dataset we introduce the first corpus of webcam eye-tracking for reading studies. The objectives of this new data collection are two-fold: First, we aim to investigate to what extent webcam-ET can be used for reading studies. To the best of our knowledge, we are the first to provide word-level gaze features for a webcam reading dataset. Second, we explore how this dataset can be leveraged for explainability in NLP models. ## 2 Related Work In this section, we discuss previous work in this area. We focus on the usage of eye-tracking data in natural language processing and recent progress in webcam-based eye-tracking, especially with respect to reading research. ### 2.1 Eye Movement Data for NLP When looking at ET collected in reading tasks, one can observe that readers move their eyes rapidly across the text and fixate on different words, often skipping words entirely, other times fixating them for longer and sometimes returning to previous words. These behaviors seem to be linked with cognitive-linguistic processes and reveal insight not only into the reader's comprehension of the text but also into some linguistic properties of the words that are fixated on. For these reasons, a variety of datasets have been created to study different properties (Mathias et al., 2021), such as _ZuCo_ (Hollenstein et al., 2020), _GECO_ (Cop et al., 2017) or _PROVO_ (Luke and Christian, 2018), each introducing different tasks and texts, which usually are decided based on the research question being answered. Tasks can range from self-paced reading of novels to existing NLP task-specific corpora, such as sentiment banks or question-answering datasets, where ET patterns might highlight certain linguistic patterns. In NLP tasks, ET has been successfully used for tasks such as part-of-speech tagging (Barrett et al., 2016), readability (Gonzalez-Garduno and Sogaard, 2017), sentiment analysis (Mishra et al., 2016; Barrett et al., 2018), and named-entity recognition (NER) (Hollenstein and Zhang, 2019), among others, and they all report significant statistical improvement over baselines without gaze features (Mathias et al., 2021). 
Nevertheless, Hollenstein et al. (2020) highlight the importance of high precision and accuracy for ET features in an NLP context. In addition, it advises against low-cost models, such as webcam ET, due to low sampling rates and degradation in precision, which would propagate to all downstream tasks performed. Moreover, it is still unclear what best way to employ the extracted features (e.g., as attention proxies or text features), and to which granularity (e.g., word or sentence level features) they should be used to obtain the best results given a task. ### Webcam-Based Eye-Tracking Low-cost video-based eye-tracking has been investigated for some time now (Ferhat and Vilarino, 2016; Papoutsaki, 2015), with libraries being publicly available such as TurkerGaze (Xu et al., 2015), implemented in Javascript, which allows it to run independently of any platform and to be easily incorporated into a webpage. Turkergaze, for example, aimed to explore using _Amazon Mechanical Turk_ to determine the saliency of objects in images and showed promising results already in 2015. **WebGazer**(Papoutsaki et al., 2016), was our chosen library, while similar to TurkerGaze, it is focused on providing real-time gaze prediction and is actively maintained with the latest release on the 28th of March 2022. More recent approaches include SearchGazer (Papoutsaki et al., 2017), which was developed for information retrieval and does not require any calibration, as it would automatically tune itself by using clicks from users. A few years later, it has been used for personalized text summarization Dubey et al. (2020), where gaze point density was used as a metric for sentence saliency. These approaches, however, focus on extracting heatmaps, which are then used as features for a model to help reduce the search space. These advances show promise in the technology with models achieving better performance than the base models lacking these features. ### Webcam-Based Eye-Tracking-While-Reading More particularly for webcam-eye-tracking in NLP, a recent study Lin et al. (2022) performs a direct comparison of data collected from a commercial eye-tracker to that of a webcam-based eye-tracking in a reading task and an accuracy task. For the latter, results showed that the webcam ET distinguishes movements on the horizontal axis (right/left) more clearly, as opposed to vertical ones. This can be a challenge to overcome, especially if text lines are close together. Nevertheless, the preliminary results are encouraging showing that differences between age groups can be identified by the webcam setup. Guan et al. (2022) use _WebGazer_ to perform a study on L2 English speakers (\(n=32\)) reading and investigate reading comprehension based on some engineered features, such as fixation counts on page and lines and regressions (going to a previous page). Their results show that these features were indicative of participants responding correctly. To the best of our knowledge, there is yet to be an extensive study of webcam eye-tracking in a natural reading setting, in order to extract linguistic features on the word level. For this reason, we hope to contribute to the research by comparing our results to those of datasets compiled using eye-tracking equipment at a lab, such as MECO Siegelman et al. (2022), and provide recommendations on how to collect data in such a setting. We also have a methodology similar to that employed in Malmaud et al. (2020), which we will describe and compare in the next section and in Section 5.3. 
## 3 The WebQAMGaze Dataset In this section, we describe the process of compiling WebQAMGaze, a data resource that provides both raw eye movement recordings as well as pre-processed gaze fixations across 332 participants reading texts in 3 different languages: English (EN), Spanish (ES) and German (DE). ### Experiment Design Our dataset is divided into two types of reading scenarios following the approach in Malmaud et al. (2020): A _normal reading_ task (NR), and an _information-seeking reading_ task (IS). In the NR task, the participants are instructed to read a text carefully at their own speed and to press the spacebar to proceed to the next screen where they are asked a comprehension question about the text. The question can be either True/False or an open question. In the IS task, the participants are first presented with the question they will need to find an answer to in the text. The text is then presented on the next screen and the participants are instructed to press the spacebar as soon as they find the answer in the text. Then they are shown the question once again and they have to type their answer into a free text field. There is no time control for reading the texts in either scenario. ### Reading Materials For the construction of this corpus, we use the freely available texts from the multilingual XQuAD dataset Artetxe et al. (2020) that can be used to test machine text comprehension models, meaning that the texts are accompanied by human annotations for question-answering. XQuAD is a subset of the SQuAD question-answering dataset Rajpurkar et al. (2016) that has been translated into other languages. It contains pairs of texts and questions, annotated with target spans and correct answers. We include these texts as they already contain a relevant span that can be comparable to a human rationale and allows us to compare the data we collect to investigate if the gaze information also reflects the spans that are annotated when the questions are responded to correctly. The data we collect can be used in the existing state-of-the-art approaches and compared directly and investigate how good the gaze data is in reflecting the human-rationales. We also include texts from the MECO corpus Siegelman et al. (2022). This dataset contains NR ET data coupled with reading comprehension questions for 13 different languages. Including texts from this dataset, will allow us to compare directly the features we extract to those collected by a commercial ET and reason about the quality of webcam-ET. We go into more detail in the approach used in Section 5.2. We collect data for languages in which the two datasets overlap: English, Spanish, German, Turkish, Greek, and Russian. ### Text Selection Criteria We include texts from both datasets according to the following criteria: _XQuAD Texts_: We select texts from the XQuAD question answering dataset that are at most 650 characters long. We do this to allow for fitting text into smaller screens and to avoid the experiment taking too long to complete. _MECO Texts_: The texts in the MECO corpus are significantly longer than those present in XQuAD and in order to include them in the experiment, we extend the total character limit to 1300. This results in the MECO texts having a slightly smaller font and spacing compared to the texts in XQuAD. We report the statistics along with some linguistic properties of the subsets of texts included in the WebQamGaze dataset in Table 1. 
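For concreteness, statistics of the kind reported in Table 1 can be computed with a few lines of Python using spaCy. The sketch below is our own illustration rather than the exact script used for the paper; the model names and the use of spaCy's sentence segmentation are assumptions.

```python
import spacy

def text_statistics(texts, model="en_core_web_sm"):
    """Token counts and average tokens per sentence for a list of texts.

    model: spaCy pipeline for the texts' language, e.g. "es_core_news_sm"
    or "de_core_news_sm" for Spanish and German (illustrative names).
    """
    nlp = spacy.load(model)
    token_counts, tokens_per_sentence = [], []
    for doc in nlp.pipe(texts):
        tokens = [t for t in doc if not t.is_space]
        sents = list(doc.sents)
        token_counts.append(len(tokens))
        if sents:
            tokens_per_sentence.append(len(tokens) / len(sents))
    return {
        "texts": len(texts),
        "tokens_min": min(token_counts),
        "tokens_max": max(token_counts),
        "tokens_mean": sum(token_counts) / len(token_counts),
        "tokens_per_sent_mean": sum(tokens_per_sentence) / len(tokens_per_sentence),
    }
```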
The number of tokens and sentence lengths are obtained using _spaCy_'s2 tokenizer. Footnote 2: [https://spacy.io/api/tokenizer/](https://spacy.io/api/tokenizer/) ### Crowd-Sourcing Setting To allow for the collection of data on the crowd-sourcing platform _Amazon Mechanical Turk_3, we set up a stack that uses _Heroku_4 to host our _jsPsych_ (de Leeuw, 2015) experiment and _psiTurk_5 (Gureckis et al., 2016) to handle the payment and the posting of the experiment on the _Amazon Mechanical Turk_ website. The code can be found online.6 The experiment sets are generated offline, consisting of a combination of existing plugins offered by _jsPsych_ extended with _WebGazer_ when needed. They are then hosted online on a _Heroku_ server. We then share this link with the workers from _Amazon Mechanical Turk_ so they can complete the experiment. For each batch, containing 10 texts, we collect up to 9 unique responses. The batches are then downloaded to a local machine to allow for data processing. Footnote 3: [https://www.mturk.com/](https://www.mturk.com/) Footnote 4: [https://www.heroku.com/](https://www.heroku.com/) Footnote 5: [https://github.com/NYUCCL/psiTurk](https://github.com/NYUCCL/psiTurk) To participate in the experiment, participants need to meet the following requirements: (1) be fluent in the language in which the texts are written; (2) be at least 18 years old; (3) be able to read English; (4) use a laptop/desktop device with a webcam available; (5) not have completed the same experiment before; (6) use a screen with a reported resolution of at least \(1280\times 720\) to ensure the stimuli are presented in a consistent way. Additionally, we have extra requirements in place to dissuade participants from not completing the task correctly: (1) we only accept HITs that have a correct response rate \(>50\%\); (2) workers must have a HIT approval rate of at least 95% (this is only valid when workers have more than 100 HITs completed); (3) workers must have at least 5 HITs approved in their account; and (4) the task must be completed in less than 45 minutes. We offer bonuses based on the number of correct answers as a further incentive to pay attention during the task. These bonuses are awarded according to the following criteria: **$1.00** if they have at least 60% correct answers, or **$2.00** if they have at least 75% correct answers. To increase the likelihood of recruiting native speakers, we restrict the regions to countries where the language of the experiment is the primary language: the US and Great Britain (for English); Spain, Mexico, Argentina, Colombia, Chile, Ecuador, Guatemala, Peru, and Venezuela (for Spanish); and Germany and Austria (for German). \begin{table} \begin{tabular}{l|c|r|r|r|r|r} \hline \hline **Language** & **Origin Dataset** & **Texts** & **Tokens (min)** & **Tokens (max)** & **Tokens (mean)** & **Tokens per Sent.** \\ \hline \multirow{2}{*}{English} & MECO & 4 & 184 & 218 & 203.5 & 24.70 \\ & XQuAD & 97 & 31 & 130 & 97.2 & 32.60 \\ \hline \multirow{2}{*}{German} & MECO & 2 & 178 & 192 & 185.0 & 20.73 \\ & XQuAD & 36 & 26 & 115 & 83.6 & 28.88 \\ \hline \multirow{2}{*}{Spanish} & MECO & 1 & 195 & 195 & 195.0 & 24.38 \\ & XQuAD & 64 & 35 & 131 & 98.7 & 34.47 \\ \hline \hline \end{tabular} \end{table} Table 1: Overview of the reading materials included in the WebQAmGaze dataset, including the original datasets the texts were extracted from, the number of individual texts, the text length as the number of tokens (min, max, mean), as well as the average number of tokens per sentence. 
The sets are corrected manually, with the exception of the MECO texts, where the procedure is automatic since there is a binary choice. For XQuAD, answers are corrected based on how close they are to the original span. For some questions, we considered other responses due to the question being ambiguous/unclear. We ignore typos, as long as the answers are interpretable with respect to the original question. ### Experiment Structure The experiment follows the procedure illustrated in Figure 1. Before the participants start, they are asked to accept a consent form informing them of the requirements of the experiment described above. They are then introduced to the two reading contexts and are asked to fill out a survey about their age, mother tongue, and fluency in the set's language. They are then told to close any unnecessary applications and are shown how to set up their screen to allow the best lighting conditions for _WebGazer_. On the next screen, _WebGazer_ asks permission to access the webcam and starts collecting data; further instructions are provided on how to use the eye-tracker via an image with the following instructions: (1) Make sure your face is centered and nothing is obstructing the camera. (2) Do not move or tilt your head. Use only your eye movements to perform the task. (3) Do not sit too far or too close to the screen. (4) The image cannot be too dark. The participants are then prompted to set their browser to full-screen to allow better calibration of _WebGazer_; they are also asked to keep their browser in this mode until the end of the experiment. They then proceed to perform an 11-point calibration followed by a 5-point validation step. Here, the accuracy score is calculated based on how many gaze points fall inside a \(100px\) radius for each point, and if the average accuracy is lower than \(60\%\), we ask participants to repeat the calibration step once more. The reported accuracy is the last average obtained from validation. The participants then start with the NR task, where they first read instructions and then complete 5 texts (1 MECO, 4 XQuAD), followed by the IS task, where first instructions and then 5 texts (5 XQuAD) are shown. We decide to perform the NR task first, as it is the most cognitively demanding task, followed by the easier IS task. A quick calibration (QC) step (5-point calibration) is done every 2nd trial. This means a QC is performed after the 2nd and 4th trial in the NR task and the 1st and 3rd trials in the IS task. The experiment then terminates by quitting full-screen mode and asking participants to submit the HIT on the _Amazon Mechanical Turk_ platform. Figure 1: Experiment structure for the WebQAmGaze data collection. Light blue boxes represent reading pages, purple indicates input from the participants, orange indicates _WebGazer_ calibration and validation trials, and white boxes represent the fixation cross-screen. Every second trial there is a quick calibration step, indicated by the yellow and orange boxes within the two reading scenarios. This means that there is a calibration after the 2nd and 4th trial in the NR task and the 1st and 3rd trials in the IS task. ### Stimulus Presentation Settings The stimuli are constructed through HTML and CSS. 
The CSS differs based on the dataset of origin for the text. We decided to use a common online font, "Open Sans", with a _word-spacing_ of 25 px and a _text-alignment_ to the left. The two configurations differ in _font-size_ (24 px/22 px) and _line-height_ (3 em/1.9 em) for XQuAD and MECO, respectively. This difference is due to MECO containing larger texts in comparison to those present in XQuAD. To generate the stimuli, we take screenshots of each of the texts at a resolution of \(1280\times 720\), which are then shown on the participant's screen. We pick this resolution for two reasons: (1) according to an online survey7 of screen resolutions, around \(77.2\%\) of users have the same or a larger resolution than the one specified, and (2) smaller resolutions would become too small to allow for the stimuli to be presented. This way, we can use the resolution to generate the stimuli and present the same resolution (and consequently the same spacing) to all participants. The image may be scaled in case the user's browser or OS applies a scaling factor. However, as long as the "projected" resolution is above the target, the image should be comparable. If the resolution is smaller than \(1280\times 720\), the participant is not allowed to continue. Footnote 7: [https://gs.statcounter.com/screen-resolution-stats/desktop/worldwide](https://gs.statcounter.com/screen-resolution-stats/desktop/worldwide) ## 4 Data Processing We first filter out participants who did not get approved based on their number of correct questions, which amounts to \(31.02\%\) of the total data collected. Out of these, we filter out \(4.37\%\) who experienced an error with _WebGazer_, resulting in either the targets or the gaze points not being stored correctly. We further remove \(9.59\%\) based on a sample rate \(<10\,Hz\), which is too low for linguistic processing, and \(1.52\%\) due to their average accuracy being \(0\%\). One participant is dropped due to low screen resolution. After filtering with these criteria, we obtain data from \(194\) participants out of the initial \(332\). All following experiments and analyses in this work are performed on this filtered dataset. ### Fixation Detection and Gaze Point Filtering We perform the following preprocessing steps on the obtained gaze points to transform them into fixation points. First, we subtract the image coordinates \((i_{x},i_{y})\): each gaze point \(g=(g_{x},g_{y},g_{t})\) is transformed into a resolution-independent gaze point \(g^{\prime}\): \[g^{\prime}=(g_{x}-i_{x},\,g_{y}-i_{y},\,g_{t}) \tag{1}\] where \(i_{x},i_{y}\) are the coordinates of the top-left corner of image \(i\) and \(g_{x},g_{y}\) is the estimate of the gaze location at time \(g_{t}\) in \(ms\). Before this transformation, all \(x,y\) coordinates refer to a location on the participant's screen. The set of all resolution-independent gaze points is defined as \(G^{\prime}\). Second, we merge the data points into fixations related to the reading process by defining a time window \(w=150ms\) and a radius \(r=32px\). For every point \(g^{\prime}_{i}=(x_{i},y_{i},t_{i})\in G^{\prime}\), we compute: \[f_{i}=(x,y,t_{i})=\frac{1}{N}\sum g^{\prime}_{j}(x,y) \tag{2}\] \[\begin{split}\forall j>i&\,:||g^{\prime}_{j}(x,y)-g^{\prime}_{i}(x,y)||\\ &\,<r\wedge g^{\prime}_{j}(t)-g^{\prime}_{i}(t)\leq w\end{split} \tag{3}\] where \(f_{i}\) is the new fixation point created and \(N\) is the total number of points summed. 
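To make the merging step concrete, the following is a minimal Python sketch of one possible reading of Eqs. (1)–(3), using the window \(w\) and radius \(r\) defined above. It is a greedy single-pass interpretation written for illustration only, not the released preprocessing code, and all function and variable names are ours; the duration computation and filtering described next would then be applied to its output.

```python
import numpy as np

def merge_gaze_to_fixations(gaze, img_x, img_y, r=32.0, w=150.0):
    """Greedy merge of raw gaze samples (x, y, t_ms) into fixation candidates."""
    g = np.asarray(gaze, dtype=float).copy()
    # Eq. (1): shift screen coordinates into image (stimulus) coordinates.
    g[:, 0] -= img_x
    g[:, 1] -= img_y

    fixations = []
    i = 0
    while i < len(g):
        xi, yi, ti = g[i]
        cluster = [g[i]]
        j = i + 1
        # Eq. (3): take consecutive points within radius r and time window w of point i.
        while j < len(g) and np.hypot(g[j, 0] - xi, g[j, 1] - yi) < r and (g[j, 2] - ti) <= w:
            cluster.append(g[j])
            j += 1
        cluster = np.array(cluster)
        # Eq. (2): average the spatial coordinates, keep the onset timestamp.
        fixations.append((cluster[:, 0].mean(), cluster[:, 1].mean(), ti))
        i = j
    return np.array(fixations)
```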
Both \(r\) and \(w\) are empirically defined parameters. We then calculate the fixation duration as the difference between the timestamps of two consecutive fixation points, so that \(d_{i}=t_{i}-t_{i-1}\), and set the first gaze point to have \(d_{0}=0\). Additionally, we remove any data points which fall outside the text target area, in this case the borders of the image containing the text, with a tolerance threshold of \(50px\) to allow for reading-related fixations that fall slightly outside of the text box due to lower accuracy. As a final step, we also remove any fixations which are shorter than \(50ms\), which removes points that the participants did not fixate, except the first gaze point \(g_{0}\). We pick this value as it seems to provide a good balance between filtering non-fixation points and not removing too much data. Lower values result in very little filtering due to the low frequency of _WebGazer_, and higher values can result in over-filtering. It is important to note that both of these steps will significantly impact all the downstream tasks with features derived from these fixations. We pick this method as a simple initial approach that does not make strong assumptions about the quality of the gaze data collected and provides a good starting point for analysis. ### Word Boundary Detection No software is available to automatically generate word boundaries in conjunction with _WebGazer_ experiments. For this reason, we have also included automatic word boundary detection to retrieve the individual word positions from the image file presented during the experiment. We use _pytesseract_8, a Python wrapper for the OCR engine (_libtesseract_), to retrieve word boundaries given the images used as stimuli for the experiment. Based on the detected bounding boxes, we create targets for each of the words by taking the original bounding boxes and expanding them to provide some margin of error without overlapping with other words. These values were tuned empirically based on the values set in the CSS when creating the images. The results can be seen in Figure 2. Footnote 8: [https://pypi.org/project/pytesseract/](https://pypi.org/project/pytesseract/) ## 5 Data Analysis ### Dataset Statistics We collected data from a total of 332 participants. After performing the filtering steps described in Section 4, we obtain data from 194 participants: 124 from the English data, 51 from the Spanish data, and 19 from the German data. We report the participants' age distribution (\(\mu\approx 35.19,\sigma\approx 12.07\)) and the sampling rate of _WebGazer_ (\(\mu\approx 24.93,\sigma\approx 5.54\)) in Figure 3. We also visualize, through box plots, _WebGazer_'s average accuracy after the last validation step and the time taken to complete the experiment (Figure 4). ### Comparison to High-Quality Recordings We further compare the recordings for the MECO texts with the original MECO eye-tracking data. The MECO dataset has been recorded in 13 different laboratories (one per language) around the world using EyeLink trackers with a sampling rate of 1000 Hz. Participants used a chin rest and a head restraint to minimize head movements. Thus, the data is expected to be of much higher quality than WebQAmGaze with respect to comparability between participants and accuracy. We compute the total reading time (TRT, the sum of all fixation durations on a word) and the number of fixations (nfix), both averaged across all participants and words. 
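As an illustration of how such word-level measures can be derived once fixations have been mapped to word targets, the sketch below aggregates a fixation table into per-word TRT and fixation counts. The column names and table layout are illustrative, not the released analysis code.

```python
import pandas as pd

def word_level_measures(fixations: pd.DataFrame, n_words: int) -> pd.DataFrame:
    """Aggregate fixations into per-word TRT and fixation counts.

    fixations: one row per fixation with columns 'word_id' (index of the fixated
    word) and 'duration' (ms); words that were never fixated do not appear here.
    """
    # nfix: every word starts at a count of 0, even if it was never fixated.
    counts = fixations.groupby("word_id").size()
    nfix = counts.reindex(range(n_words), fill_value=0).rename("nfix")
    # TRT: sum of fixation durations; unfixated words stay NaN and can be omitted.
    trt = fixations.groupby("word_id")["duration"].sum().rename("TRT")
    return pd.concat([nfix, trt], axis=1)
```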
Note that for TRT we only include the words that have been fixated, whereas for nfix we include all words, starting with a count of \(0\). Results can be found in Table 2. We also compute the Spearman correlation coefficients between relative fixations of both datasets. Total fixation time (TRT) per word is divided by the sum of all TRTs in the respective sentence to compute relative fixation duration for individual participants for each text, similar to Hollenstein and Beinborn (2021). Here, we also omit NaN values, i.e., words that have not been fixated in the entire dataset. In comparison to MECO, we see an increase in TRT of \(80-170\) ms and a slight increase in the number of fixations, in particular for English and Spanish. We assume this is caused by an overall lower accuracy when assigning fixations to individual words, as some fixations are aggregated to one word instead of being split across neighboring words. Figure 2: Example of the boundaries generated and data collected for XQuAD text: NikolaTesla, 5th paragraph. The image shows the different targets for _WebGazer_. In yellow (paragraph) is the target area of the text; in green, red, and purple are the relevant passages to answer questions for the XQuAD dataset. The legends for the relevant passages follow a naming convention of a_[NameOfParagraph]_[ParagraphN]_qa_[QuestionN]. For this set in particular, _mturk_EN_v10_, the question corresponds to **qa_0**: _What article was published in 1937?_, corresponding to the green highlight in the text. We see correlation values between both datasets mostly around \(0.55\), which is substantially higher than the correlation between MECO and first attention as shown in Brandl and Hollenstein (2022). The correlation for text \(1\) in German is lower than the other values; here, only 19 participants have been recorded. In Figure 5, we also show relative fixation patterns of two hand-picked English sentences for WebQAmGaze and MECO, where relative fixation has been averaged across participants. Both follow a similar pattern, although WebQAmGaze shows much higher deviations than MECO. The sentence on the right shows a high relative fixation for _such_, which might actually correspond to the prior word _vehicle_. In Figure 6, we further show TRT averaged across participants and tokens based on their word length, for all English words in WebQAmGaze and the corresponding MECO texts 3, 7, 11, and 12. We overall see an expected increase in TRT for longer words, which is in line with the well-studied word length effect in reading (Just and Carpenter, 1980). ### Towards Eye Movement Rationales for Explainable AI Current state-of-the-art machine learning models for language processing are still mostly black boxes. A rationale can be defined as a justification for a particular decision by a model. For example, in a question answering system, a rationale is the target text span relevant for finding the correct answer to a question. Explainable AI models aim to provide justifications for the decisions made by a model, often based on human annotations. Figure 3: Participants' age and WebGazer sampling frequency distribution. The bars in lighter colors show the full data before filtering. Figure 4: Box plots for the ROI validation and the total time taken to complete the experiment for the filtered data. Datasets such as ERASER DeYoung et al. (2020) provide a methodology that can be used to compare and evaluate how explainable different rationales are by introducing metrics that are comparable across different approaches. 
However, these rationale annotations are time-consuming and involve conscious task-solving. Therefore, they include human subjectiveness and biases (Chiang and Lee, 2022). We hypothesize that eye-tracking information can be used to extract rationales without the need for annotators to manually denote which spans are relevant. The conscious, time-consuming process can be replaced by gaze information collected while reading, a more efficient process grounded in human attention. As an initial validation of this hypothesis, we show that, in information-seeking reading, the fixations of a participant are more indicative of the correctness of their answers than in normal reading. #### 5.3.1 Significance Testing To test this hypothesis, we perform independent t-tests for each task (NR, IS), grouping participants by whether they responded correctly or incorrectly to the trial. We evaluate if there is a significant difference for various eye-tracking metrics, namely (1) fixations on target, (2) total fixations (the number of fixation points during the trial, not necessarily in the text), (3) target/total fixation ratio (ratio between fixations on target and total fixations), (4) TRT on the full text, (5) TRT on the target span, (6) TRT target/TRT text ratio (ratio between the TRT spent on the target span and the total TRT on the text), and (7) total trial time (the full time spent on a trial screen). The results are shown in Table 3. We obtain significant values for (2), (4), and (7) for IS. No significant differences are seen in NR for the fixation features, where only (7) is significant. Since NR did not guide participants in any way, this matches our expectations. \begin{table} \begin{tabular}{c|c|c|c||c|c||c} \hline \hline & & \multicolumn{2}{c||}{**MECO**} & \multicolumn{2}{c||}{**WebQAmGaze**} & \\ & **ID** & _TRT [ms]_ & _nfix_ & _TRT [ms]_ & _nfix_ & \(\rho\) \\ \hline \multirow{4}{*}{**EN**} & 3 & 244 & 1.18 & 411 & 2.17 & 0.52 \\ & 7 & 226 & 1.09 & 304 & 0.92 & 0.60 \\ & 11 & 260 & 1.22 & 340 & 1.50 & 0.55 \\ & 12 & 210 & 1.02 & 337 & 1.43 & 0.56 \\ \hline **ES** & 12 & 245 & 1.22 & 333 & 1.30 & 0.53 \\ \hline \multirow{2}{*}{**DE**} & 1 & 313 & 1.49 & 282 & 1.31 & 0.31 \\ & 12 & 301 & 1.46 & 298 & 1.18 & 0.55 \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison between MECO texts in WebQAmGaze and original MECO data. Mean total reading time (TRT) and number of fixations (nfix) averaged across all participants and words for different texts in English (EN), Spanish (ES), and German (DE). Note that we only include fixated words in the calculation for TRT. The last column shows the Spearman correlation coefficient between relative fixation averaged across participants in both datasets. Figure 5: Comparison between fixation patterns for MECO and WebQAmGaze on two individual sentences. Relative fixation has been calculated individually and then averaged across participants. Figure 6: Averaged total reading time (TRT) per word length for WebQAmGaze (all English sentences) and MECO (English texts that appear in WebQAmGaze). TRT is averaged across tokens and participants; the standard deviation is shown based on tokens after averaging across participants. For the IS task, we see that the features pertaining to the total time also show significance, meaning that in this task the eye-tracking features still show a significant difference between the correct and wrong groups. 
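For reference, the per-feature test described above can be sketched as follows, assuming a trial-level table with one row per trial, a boolean correctness flag, and one column per eye-tracking feature; the schema and names are illustrative and do not correspond to the exact analysis script.

```python
import pandas as pd
from scipy.stats import ttest_ind

def feature_pvalues(trials: pd.DataFrame, features: list) -> pd.Series:
    """Independent t-tests comparing correct vs. incorrect trials for each feature."""
    correct = trials[trials["correct"]]
    wrong = trials[~trials["correct"]]
    pvals = {}
    for feat in features:
        # Drop missing values, e.g., trials lacking TRT on text (feature (6)).
        a = correct[feat].dropna()
        b = wrong[feat].dropna()
        pvals[feat] = ttest_ind(a, b).pvalue
    return pd.Series(pvals, name="p_value")
```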
This finding is further confirmed by analyzing the averaged word TRT over all the trials, grouped by task (Figure 7). Not only do participants spend less time on each word on average in the IS task than in the NR task, but they also seem to spend slightly more time looking at words in the target region than on the rest of the text. This is in line with the results from Malmaud et al. (2020), demonstrated using a commercial eye-tracker in a lab setup. Figure 7: Average word fixation time (TRT) by task for words within and outside the target region. Lines on the bars represent the standard deviation for each value. Interestingly, in our dataset, it does appear that total trial time has a significant effect in both tasks. Looking at the total trial time means of the two groups in NR, the group that gives correct answers takes longer to respond (incorrect answers: \(\mu_{1}=34425\,ms\) vs. correct answers: \(\mu_{2}=40350\,ms\)), while in IS the reverse is true (incorrect answers: \(\mu_{1}=29094\,ms\) vs. correct answers: \(\mu_{2}=22964\,ms\)): a shorter total trial time goes along with correct answers. This lines up with our hypothesis, as in the NR task participants need to read attentively because they do not know which question will be asked, while in the IS task participants are incentivized to find only the relevant information. #### 5.3.2 Correctness Classification With these findings, we train a binary classifier to predict whether a correct answer has been given in a trial, using features (1)–(6) and the average word TRT inside and outside the target span. For training the classifier, trials without any fixations on the text are removed (\(\approx 1\%\) of total data, \(n=18\)). We then test three different classifiers from _sklearn_9: (i) SVC, a support vector machine classifier, (ii) Logistic Regression, and (iii) Random Forests. We also provide a random baseline classifier, which outputs the labels in the training data (0=incorrect answer, 1=correct answer) in a uniform manner. The parameters for all these classifiers are left to their defaults from _sklearn_. We split the data by task, NR (\(n=970\), \(0=433\), \(1=537\)) and IS (\(n=952\), \(0=186\), \(1=766\)), and use a shuffled train/test split of \(80/20\%\). As there are substantially more positive labels, especially for the IS task, we balance the labels by artificially up-sampling negative examples from the training split to augment the data, resulting in a balanced training set. Footnote 9: [https://scikit-learn.org/stable/](https://scikit-learn.org/stable/) We then train each of the classifiers on this augmented training set and evaluate them on the test set. We repeat this procedure \(10\) times with different seeds. We report the averaged results in Table 4. For the IS task, random forests yield \(Acc\approx 70.31\%\) and \(F1=70.42\%\); while the other classifiers perform slightly above the random baseline, they are considerably worse than random forests. However, for the NR task, while Log. Reg. obtains higher _Acc_ and _F1_, all classifiers perform very similarly to the baseline. This indicates that the features used are insufficient to discriminate whether a participant would answer correctly in this task. This might be due to the quality and selection of the features or due to the nature of the NR task. We experiment with adding two simple text features, _token count_ and _average token length_, for each text, and we see that these improve most results, raising the random forest performance by \(\approx 10\%\) in NR. 
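A minimal sketch of the correctness classification setup described in this subsection is given below, assuming a trial-level feature table with a binary 'correct' column; the up-sampling helper and column names are illustrative rather than the exact released pipeline.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.utils import resample

def run_correctness_classifier(trials: pd.DataFrame, feature_cols: list, seed: int = 0):
    """Train a random forest (sklearn defaults) to predict answer correctness."""
    X_train, X_test, y_train, y_test = train_test_split(
        trials[feature_cols], trials["correct"].astype(int),
        test_size=0.2, shuffle=True, random_state=seed)

    # Balance the training split by up-sampling the minority (incorrect) class.
    train = pd.concat([X_train, y_train], axis=1)
    minority = train[train["correct"] == 0]
    majority = train[train["correct"] == 1]
    minority_up = resample(minority, replace=True,
                           n_samples=len(majority), random_state=seed)
    train_bal = pd.concat([majority, minority_up])

    clf = RandomForestClassifier(random_state=seed)
    clf.fit(train_bal[feature_cols], train_bal["correct"])
    pred = clf.predict(X_test)
    return accuracy_score(y_test, pred), f1_score(y_test, pred, average="weighted")
```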
\begin{table} \begin{tabular}{l|l|l} \hline \hline & **NR** & **IS** \\ \hline (1) Fixation on target & 0.84 & 0.14 \\ (2) Total fixations & 0.15 & **0.02** \\ (3) Target/total fixation ratio & 0.59 & 0.48 \\ (4) TRT on text & 0.11 & **0.04** \\ (5) TRT on target & 0.81 & 0.12 \\ (6) TRT target/TRT text & 0.70 & 0.72 \\ (7) Total trial time & **0.001** & **9.46e-05** \\ \hline \hline \end{tabular} \end{table} Table 3: Significance testing. p-values obtained for the various independent t-tests performed separating the groups based on giving the correct response to a trial or not, aggregated over the NR and IS tasks. The total number of trials is \(n=1940\) (\(970\) for each task); for feature (6) the number is \(n=1922\), due to missing TRT on text for some participants. In bold, we highlight the significant values (\(p<0.05\)). Returning to our hypothesis, our results indicate that while information-seeking reading seems to provide reasonable proxies for human rationales, this does not seem to be the case for NR scenarios, where only with the addition of text features can classifiers perform better than a baseline. ## 6 Discussion We compile and share a new webcam eye-tracking dataset including reading data from 332 participants in three languages. The results obtained on the WebQAmGaze dataset reveal trends similar to those previously considered observable only with a commercial eye-tracker in a lab environment. We show promising results when comparing our data to that of a commercial setup, and gaze patterns in task-specific reading seem to align with the relevant passages. In this section, we highlight some of the methodological challenges encountered with WebQAmGaze as well as potential directions for future work. ### Methodological Challenges With the proposed WebQAmGaze data collection process, we describe a solution for presenting the text stimulus in a way that maintains consistency across the diverse landscape of possible computer setups. However, other factors prove to be more difficult to control. First of all, _WebGazer_ requires a combination of a good webcam and a computer with a good CPU and RAM, which is hard to control in our current setting. We attempt to counteract this challenge by using the reported sampling rate, which seems to be a good indicator of how well _WebGazer_ is functioning. Furthermore, we face the issue that while we perform multiple calibration steps throughout the experiment to ensure that we correct for possible head movements by participants, we are unable to validate the accuracy of the eye-tracking throughout the experiment, as this would add a considerable amount of time to the total experiment time and could cause a loss of interest or focus from the participants. It is also worth mentioning that _WebGazer_ itself might crash or run into issues while the experiment is ongoing. This can result in a loss of data for certain trials or, worse, crash the page where the participants are performing the experiment. Finally, and most importantly, it is difficult to know how engaged the participants are when performing the task. Nothing stops them from getting distracted by their surroundings, and we cannot verify how closely they follow the instructions given. The minimum number of correct answers required for payment is the only controlling factor. 
Nevertheless, our analysis shows that following the proposed filtering steps yields cleaner data. ### Data Quality Challenges The texts and questions we are using come from existing datasets, which have their own limitations. Namely, XQuAD Artetxe et al. (2020) contains open-field questions, which are sometimes formulated in an unclear manner. Furthermore, as these texts were originally collected from Wikipedia, they can contain typos or other text marks, which may cause distraction or confusion. The annotations of the target spans may be ambiguous or only partially correct, which sets an upper bound for human rationale prediction or the correctness classification presented in Section 5.3.2. On the other hand, these texts represent naturally-occurring language, which increases the ecological validity of the experiment. Further, the MECO Siegelman et al. (2022) corpus contains simple true/false comprehension questions, which in some cases can be answered by relying on common sense rather than knowledge acquired by carefully reading the text. \begin{table} \begin{tabular}{l|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{4}{c|}{**IS**} & \multicolumn{4}{c}{**NR**} \\ \cline{2-9} & \multicolumn{2}{c|}{Only ET Features} & \multicolumn{2}{c|}{With Text Features} & \multicolumn{2}{c|}{Only ET Features} & \multicolumn{2}{c}{With Text Features} \\ & Acc & F1 & Acc & F1 & Acc & F1 & Acc & F1 \\ Random & 49.5 (03.9) & 54.9 (03.9) & 49.5 (3.9) & 54.9 (3.9) & 49.7 (2.8) & 50.0 (2.6) & 49.7 (2.8) & 50.0 (2.6) \\ SVM & 53.5 (11.2) & 56.7 (10.3) & 57.2 (8.9) & 60.9 (7.8) & 50.6 (2.8) & 47.4 (3.5) & 55.5 (2.3) & 54.0 (2.3) \\ Log. Reg. & 55.8 (07.1) & 60.3 (06.4) & 53.6 (4.5) & 58.6 (4.5) & **52.1 (2.0)** & **51.8 (1.8)** & 55.1 (3.2) & 54.4 (3.4) \\ Rand. Forest & **70.3 (03.8)** & **70.4 (02.8)** & **73.4 (2.7)** & **72.5 (2.3)** & 49.6 (3.1) & 49.8 (3.1) & **59.2 (2.4)** & **59.3 (2.4)** \\ \hline \hline \end{tabular} \end{table} Table 4: Accuracy and weighted F1-Scores for different classifiers on predicting whether a correct answer was given in the IS and NR tasks. Results are averaged across 10 runs, with standard deviation in brackets, and compared to the random baseline. ### Future Work A potential path for improvement is the inclusion of better algorithms to estimate the fixation data. Our proposed approach is simple and performs well when compared to high-quality eye-tracking data. However, it can be improved further with more complex methods, such as correcting the gaze points by taking the line height, the distributions of existing gaze data, or the accuracy and sampling rate of _WebGazer_ into account when performing the fixation cleaning and merging. Finally, as described by Malmaud et al. (2020), the extracted gaze data could be converted into features that can be used in combination with language models, such as BERT Devlin et al. (2019), to investigate if that leads to more human-like reasoning. Given that our data is collected on texts which contain annotated spans, it is possible to analyze whether WebQAmGaze improves the performance and explainability of these models in a QA setting. ## 7 Conclusion We present a novel approach to collecting low-cost eye-tracking-while-reading data from webcam recordings. We compile the WebQAmGaze dataset, which is the first of its kind to include word-level eye movement features. 
We demonstrate that the data collected reflects linguistic patterns that have been corroborated by previous studies, namely in our comparison with high-quality eye-tracking recordings. We show that webcam eye-tracking can be used to predict the correctness of participants' responses in a task-specific context, paving the way to a more efficient collection of human rationales for explainable AI. Knowing where readers look can help to explain machine behavior in terms of human cognitive processes Ikhwantri et al. (2023). Lastly, the online crowd-sourcing approach presents added benefits from a wider population range and ease of access, both physical and in terms of hardware.
2309.12955
On Data Fabrication in Collaborative Vehicular Perception: Attacks and Countermeasures
Collaborative perception, which greatly enhances the sensing capability of connected and autonomous vehicles (CAVs) by incorporating data from external resources, also brings forth potential security risks. CAVs' driving decisions rely on remote untrusted data, making them susceptible to attacks carried out by malicious participants in the collaborative perception system. However, security analysis and countermeasures for such threats are absent. To understand the impact of the vulnerability, we break the ground by proposing various real-time data fabrication attacks in which the attacker delivers crafted malicious data to victims in order to perturb their perception results, leading to hard brakes or increased collision risks. Our attacks demonstrate a high success rate of over 86% on high-fidelity simulated scenarios and are realizable in real-world experiments. To mitigate the vulnerability, we present a systematic anomaly detection approach that enables benign vehicles to jointly reveal malicious fabrication. It detects 91.5% of attacks with a false positive rate of 3% in simulated scenarios and significantly mitigates attack impacts in real-world scenarios.
Qingzhao Zhang, Shuowei Jin, Ruiyang Zhu, Jiachen Sun, Xumiao Zhang, Qi Alfred Chen, Z. Morley Mao
2023-09-22T15:54:04Z
http://arxiv.org/abs/2309.12955v2
# On Data Fabrication in Collaborative Vehicular Perception: ###### Abstract Collaborative perception, which greatly enhances the sensing capability of connected and autonomous vehicles (CAVs) by incorporating data from external resources, also brings forth potential security risks. CAVs' driving decisions rely on remote untrusted data, making them susceptible to attacks carried out by malicious participants in the collaborative perception system. However, security analysis and countermeasures for such threats are absent. To understand the impact of the vulnerability, we break the ground by proposing various real-time data fabrication attacks in which the attacker delivers crafted malicious data to victims in order to perturb their perception results, leading to hard brakes or increased collision risks. Our attacks demonstrate a high success rate of over 86% on high-fidelity simulated scenarios and are realizable in real-world experiments. To mitigate the vulnerability, we present a systematic anomaly detection approach that enables benign vehicles to jointly reveal malicious fabrication. It detects 91.5% of attacks with a false positive rate of 3% in simulated scenarios and significantly mitigates attack impacts in real-world scenarios. ## 1 Introduction The perception system of connected and autonomous vehicles (CAVs) is safety-critical as its performance directly affects driving decisions [7, 8]. However, CAV's perception is confronted with the basic limitation that onboard sensors have limited sensing capabilities. For instance, LiDAR, the commonly adopted 3D sensor, cannot see through occlusions and may render low resolutions for far-away objects, leading to imperfect detection performance. Many recent efforts have proposed LiDAR-based collaborative perception algorithms [73, 79, 32, 87], where different nearby vehicles exchange perception information (e.g., raw sensor data or feature maps processed by neural networks) and perform object detection algorithms on the fused data. In terms of the accuracy of object detection, the approach significantly outperforms the traditional CAV collaboration [37, 38, 64] sharing simple GPS messages or object locations, as illustrated in related studies [73, 79]. CAV industry [2, 3, 4, 9, 14, 16, 81] also proposes solutions of collaborative perception and launch road testing across the globe. Although collaborative perception is evolving quickly towards maturity, it introduces a severe vulnerability to vehicle safety because the safety-critical perception algorithms now rely on sensor data or feature maps from remote untrusted vehicles. With the control of a remote vehicle via physical access to either software or hardware, an attacker can fabricate the data to share, aiming to inject fake object detection results into the view of victim vehicles and even mislead them to trigger accidents. However, the impact of such a severe data integrity threat has not been comprehensively evaluated. Existing studies of CAV security [61, 86] either focus on other scopes (_e.g._, physical sensor security [69, 30], network protocols [25, 80]) or assume a different threat model (_e.g.,_ single-vehicle perception [55, 69], object-sharing collaboration [23, 88]), thus existing mitigation methods are not effectively designed for the new threat. To bridge the gap, we propose a series of stealthy, targeted, and realistic attacks exploiting LiDAR-based collaborative perception in this study. 
Our proposed attacks can spoof or remove objects at specified locations in the victim's perception results, making all mainstream types of collaborative perception schemes vulnerable. For early-fusion systems which directly merge LiDAR point clouds, we propose black-box ray casting to reconstruct malicious but natural raw point clouds. We design offline adversarial object generation and run-time occlusion-aware point sampling to further optimize the distribution of modified points. For intermediate-fusion systems which merge feature maps as intermediate results of object detection models, we design a white-box adversarial attack to perturb the feature maps. For optimal efficiency, the adversarial attack initializes the perturbation vector via a black-box method and runs one-step backward propagation in each LiDAR cycle (_e.g.,_ 100 ms). More importantly, we propose zero-delay attack scheduling to make attacks realizable in the real world. To be specific, in order to attack the perception of frame \(i\), attackers prepare a fabrication plan based on the knowledge of frame \(i-1\) before the next frame comes. In this way, attackers earn one LiDAR cycle time to complete attack generation without introducing a noticeable delay in the fabricated data. We evaluate the attack effectiveness on 211 traffic scenarios in a simulated dataset Adv-OPV2V and a real-world dataset Adv-MCity (including 8 scenarios collected from a real-vehicle testbed MCity [19]). On the simulated dataset, all attacks have a success rate of more than 86% regardless of fusion methods and model configurations. In our real-world experiments, we deploy three vehicles equipped with LiDAR/GPS sensors and the latest Baidu Apollo autonomous driving software [8]. Our attacks can be launched in real-time and trigger safety hazards such as collisions and emergent hard brakes. We also provide a comprehensive analysis of how the attack effectiveness is affected by various factors including attack methods, fusion schemes, and scenarios. Our findings will guide system designers to build robust collaborative perception schemes. To mitigate the demonstrated attacks, we propose Collaborative Anomaly Detection (CAD), a system that detects data fabrication attacks by revealing geometry inconsistencies of the shared data from different vehicles. To achieve this, CAD requires each vehicle to generate and share an occupancy map, which is a 2D map labeling the 2D space into three classes, free, occupied, and unknown. On receiving occupancy maps from others, the vehicle validates the consistency of the maps, _i.e.,_ there is no region classified as occupied and free at the same time. Then the vehicle carries out the second check by merging the occupancy maps into one and checking perception results against it. For instance, free regions should not overlap with detected bounding boxes; each on-road moving occupied region should have one bounding box overlap with it. In this way, abnormal detection results caused by either fabricated data or perception faults are revealed if the attacked region is observed by at least one benign CAV. CAD detects 91.5% attacks with a false positive rate <3% on datasets Adv-OPV2V and Adv-MCity. As the first comprehensive security analysis of collaborative perception, we will open-source all the above attack-/defense practices as a benchmark tool to facilitate future research. 
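To make the occupancy-map checks concrete, the following is a schematic sketch of the two consistency tests on a shared 2D grid; the cell encoding, the assumption of pre-aligned maps, and the box-overlap threshold are illustrative simplifications of CAD rather than its actual implementation.

```python
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, 2  # illustrative cell labels

def find_map_conflicts(maps):
    """Return grid cells that different CAVs label both free and occupied.

    maps: list of equally-shaped 2D integer arrays, one occupancy map per CAV,
    assumed to be already aligned to a common world grid.
    """
    stack = np.stack(maps)                       # (num_cavs, H, W)
    any_free = (stack == FREE).any(axis=0)
    any_occupied = (stack == OCCUPIED).any(axis=0)
    # A cell reported as both free and occupied suggests fabricated data.
    return np.argwhere(any_free & any_occupied)

def boxes_in_free_space(merged_map, box_cells, threshold=0.5):
    """Flag detected boxes whose footprint mostly overlaps known-free cells."""
    flagged = []
    for box_id, cells in box_cells.items():      # cells: list of (row, col) indices
        labels = np.array([merged_map[r, c] for r, c in cells])
        if (labels == FREE).mean() > threshold:  # threshold is illustrative
            flagged.append(box_id)
    return flagged
```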
Our contributions can be summarized as three-fold: \(\bullet\) We compile the benchmark datasets Adv-OPV2V and Adv-MCity for evaluating the security of collaborative perception. Especially, Adv-MCity is the _first_ multi-vehicle collaboration dataset collected on real vehicles and real roads. \(\bullet\) We propose multiple data fabrication attacks, where one attacker, as a collaborative perception participant, can successfully spoof or remove objects at specified locations. We conduct an extensive study on the impact of such attacks. \(\bullet\) We develop CAD, a defense system of collaborative perception for detecting our proposed data fabrication attacks. CAD reveals abnormal perception results through the sharing of fine-grained occupancy maps. ## 2 Background and Related Work **Connected and autonomous vehicles (CAVs)** are transforming the transportation systems by enabling automatic and intelligent vehicle driving control. CAVs are complicated cyber-physical systems equipped with sensors such as LiDAR, camera, and radar to perceive the surroundings, and software to make appropriate driving decisions. By the end of 2022, numerous companies including Waymo, Honda, Baidu, and Tesla [10, 6, 13, 8] have developed models of CAVs. **Collaborative perception** has been proposed to enhance CAV perception [62, 47, 49, 52, 21] by sharing raw or processed sensor data among infrastructure or vehicles. Mainstream solutions focus on LiDAR sensors because of the rich 3D geometry features brought by LiDAR images. Collaborative perception has three major types according to the sharing data, as shown in Figure 1. CAVs in early-fusion sharing schemes [62, 31, 87, 33, 48] directly exchange raw sensor data, whose format is usually universal and can be naively concatenated, at a cost of data transmission bandwidth; intermediate-fusion schemes [73, 34, 78, 82] ask CAVs to transmit feature maps, the intermediate product of perception algorithms, offering a good tradeoff between network efficiency and perception accuracy; in late-fusion schemes [67, 54, 68] lightweight perception results such as object bounding boxes are shared. Collaborative perception is advancing quickly towards real-world deployment. 3GPP standardized for Cellular Vehicle-to-Everything (C-V2X) techniques in 2017 [1], indicating the maturity of roadside communication. Since then, major technology companies such as Huawei, Intel, Bosch, Infineon, and Qualcomm [9, 2, 4, 14, 2] have strived to build various C-V2X solutions. Road trials have been launched across the globe in countries like Germany, France, the United States, and Japan. Ford [16] and Baidu Apollo [81] built real-world collaborative perception datasets. **Attacks on CAV perception**. Several attacks can harm LiDAR perception systems as listed in Table 1. First, LiDARs on CAVs are vulnerable to physical attacks, such as GPS spoofing [65, 51], LiDAR spoofing [40, 44, 51], and physical realizable adversarial objects [71, 84, 91]. These attacks are against one single autonomous vehicle. 
Late-fusion collaborative perception shares object locations [37, 38, 39, 64], thus the attacker can trivially modify these locations, which is the threat model of many existing studies [26, 27, 46, 57]. Table 1: Existing attacks on LiDAR collaborative perception. Table 2: Effectiveness of defenses on our attacks. Tu _et al._[72] is the first attack specific to intermediate-fusion collaborative perception: an untargeted adversarial attack that creates as many inaccurate detection bounding boxes as possible by perturbing the shared feature maps. However, the attack is not realistic considering the constraints of real systems, as discussed in §3.3. We propose real-world realizable attacks that challenge both early-fusion and intermediate-fusion systems. **Defenses on CAV perception**. As shown in Table 2, existing defense mechanisms are not designed for our proposed attacks and thus cannot resolve them effectively. Several Vehicle-to-Everything (V2X) communication standards [15, 17, 18, 20, 43] define security practices for network protocols (_e.g.,_ access control, message integrity). They cannot block the data fabrication attacks because the attackers can modify data before wrapping it into the protocol messages where the protection is enforced. Trusted Execution Environments (TEEs) [42] can potentially safeguard perception algorithms via secure hardware, but their deployment is difficult and they are vulnerable to side-channel attacks. 
Against physical sensor attacks, various anomaly detection methods have been proposed [63, 69, 41, 22, 55]. For LiDAR systems especially, CARLO [69] detects abnormal point clouds that violate occlusion features, and LIFE [55] detects temporal and sensor-fusion inconsistencies. The above defenses rely on physical rules, but attackers in collaborative perception can simulate the physics to craft realistic but malicious data, as discussed in §5.1. For connected vehicle applications, many efforts model the benign behaviors of ego/remote vehicles and detect model outliers as anomalies [26, 27, 46, 57]. The models may involve various aspects including temporal consistency [27], physical constraints on message delivery or vehicle control [26, 46], cross-validation with local sensors [57], etc. However, existing works assume that the systems share simple GPS/OBU data, making it challenging to adapt them effectively for addressing anomalies in complicated LiDAR images or feature maps. We propose joint anomaly detection that leverages the spatial sensing of all connected vehicles, which enhances the spatial coverage of effective anomaly detection compared with the previous approaches. ## 3 Problem Definition We define the data fabrication problem in §3.1 and the threat model in §3.2. We emphasize the technical challenges for such new attacks compared with existing attacks in §3.3. ### Formulation In a scenario where multiple vehicles jointly execute collaborative perception, the attacker aims to spoof or remove road objects (_e.g.,_ vehicles, pedestrians) at designated locations in the victim's perception results. We formulate the problem of data fabrication as an optimization problem. We denote the LiDAR data at frame \(i\in\mathbb{N}\) from the attacker, the victim, and other benign vehicles by \(A_{i}\), \(V_{i}\), and \(X_{i}^{(j)}\), \(j\in\{0,1,\ldots,N\}\), respectively. LiDAR data with the same frame index will be merged on the victim side to generate perception results. Following Figure 1, we denote the pre-processing before data sharing as \(f\) and the post-processing after data sharing as \(g\). A normal collaborative perception step for the victim at frame \(i\) can be described as: \[y_{i}=g(f(V_{i}),f(A_{i}),f(X_{i}^{0}),f(X_{i}^{1}),...,f(X_{i}^{N})). \tag{1}\] The attacker can replace \(f(A_{i})\) with malicious data. For instance, the attacker can append a minor perturbation \(\delta_{i}\) to craft the malicious data as \(f(A_{i})+\delta_{i}\), which will change the original perception result from \(y_{i}\) to \(y_{i}^{\prime}\): \[y_{i}^{\prime}=g(f(V_{i}),f(A_{i})+\delta_{i},f(X_{i}^{0}),f(X_{i}^{1}),...,f(X_{i}^{N})). \tag{2}\] Given a fitness function \(I\) evaluating attack success and attack constraints \(C\) restricting the perturbation, the attacker solves: \[\max_{\delta_{i}}I(y_{i}^{\prime})\quad\text{s.t.}\ C(\delta_{i}). \tag{3}\] ### Threat Model We assume that CAVs execute collaborative perception in a Vehicle-to-Vehicle (V2V) scenario. Our results can be easily generalized to Vehicle-to-Infrastructure (V2I) settings by replacing one or more vehicles with edge computing devices. We assume the attacker can physically control at least one vehicle participating in collaborative perception. This allows the attacker to gain privileges on the vehicle's software and hardware, enabling them to manipulate the sensors, tamper with the local execution of algorithms, and send arbitrary data through the network. 
In other words, attackers can directly alter the data to share, _i.e.,_ LiDAR point clouds, feature maps, and bounding boxes in early-fusion, intermediate-fusion, and late-fusion perception schemes, respectively. We focus on early-fusion and intermediate-fusion collaboration schemes where attackers need to subtly craft complicated structured data. In terms of perception models, since the attackers locally install the perception model in order to join collaborative perception, we assume they have white-box access (_i.e.,_ model parameters). Some of our proposed attacks require no model access or only the inference API. Meanwhile, we assume the presence of benign vehicles that the attacker cannot invade. Assuming that the attacker controls all vehicles surrounding a victim vehicle on a busy road would be too impractical and financially prohibitive. We do not consider physical sensor attacks such as LiDAR spoofing [44] and GPS spoofing [74]. They are general threats to CAVs, while we focus on the new vulnerabilities brought by collaborative perception. Besides, the attacker cannot break the cryptographic protection and thus cannot compromise the secure communication channels among vehicles. ### Attack Constraints In addition, the attacks must be realizable on real collaborative perception systems. Though Tu _et al._ [72] proposed a feature-perturbing attack against intermediate-fusion systems, it violates the following attack constraints. **Sensor physics and definition ranges**. We require the attacker to obey basic rules in terms of the data format; otherwise, the anomalies are trivial to detect. The attackers' LiDAR point clouds should have a reasonable distribution of point density and the angle of the lasers should comply with the LiDAR configuration. In addition, the point clouds must present reasonable occlusion effects, in order to bypass anomaly detection methods based on the occlusion features [69]. The attackers' shared intermediate features should be within the definition ranges, avoiding absurd values. **Targeted attacks**. The attacker should be able to designate a target region for either spoofing or removal attacks, in order to support the precise creation of hazardous scenarios. Otherwise, the untargeted and uncontrollable attack impact as presented in Tu _et al._ [72] damages attack effectiveness and stealth. **Real-time temporal constraints**. Collaborative perception is an asynchronous multi-agent system where each vehicle produces LiDAR images in cycles but is not synchronized in time. Figure 2 illustrates a typical order of events in collaborative perception. To attack the victim's perception at frame \(i\) (\(y_{i}\)), the optimization of \(\delta_{i}\) has the following constraints: \(\bullet\)_Limited knowledge_. Optimization of \(\delta_{i}\) must be finished before the victim's processed LiDAR data \(V_{i}\) is generated. Therefore, attack generation cannot leverage the victim's data from the same frame. Similarly, data from other benign vehicles at frame \(i\) may not be available either. The attacker can, however, rely on the data shared in previous frames by all vehicles, provided that the data transmission delay is much smaller than the LiDAR cycle. Tu _et al._ [72] assumes the availability of all data in the frame under attack and is thus impractical. \(\bullet\)_Real-time attack without observable delay_. The optimization of \(\delta_{i}\) takes time, especially when the attack involves online adversarial machine learning.
To make sure \(\delta_{i}\) is produced and transmitted before the fusion stage of the victim, the attacker can either design fast real-time attacks or optimize the perturbation before frame \(i\) arrives. ## 4 Attack Methodology We present realistic data fabrication attacks against various types of collaborative perception. We first introduce a general framework for real-time targeted attacks in SS4.1 and elaborate on the details of ray casting attacks against early-fusion systems (SS4.2) and adversarial attacks against intermediate-fusion systems (SS4.3). Attackers can trivially send fake bounding boxes in late-fusion systems so we omit the discussion. ### Zero-delay Attack Scheduling As analyzed in SS3, the attacks must be effective to trigger safety hazards while fast enough to satisfy real-time constraints. To satisfy both requirements, we propose an attack framework as shown in Figure 3, whose key idea is to parallelize attack generation and perception processes. First of all, the attacker can identify the set of vehicles collaborated with the victim vehicle and align frame indices of their shared sensor data based on timestamps. The attack generation module is triggered on each LiDAR cycle. It first tracks the target region: (1) for object spoofing, the trajectory of the object to spoof is predefined; (2) for object removal, the attacker needs a simple object detection algorithm to localize the target object to remove. Then it optimizes the malicious perturbation that can be used to attack the victim's perception at the current frame. Note that the optimized perturbation is generated overtime and cannot be used to attack due to the real-time constraints (SS3.3). We need to transform the perturbation into one that has a similar attack impact on the next frame. In this way, the perturbation is ready to apply when the next frame arrives, introducing no additional delay to the original collaborative perception pipeline. As the attack generation occurs one frame in advance, it affords the attacker up to one LiDAR cycle time to complete the optimization. The optimization and transformation of the perturbation highly depend on the configuration of the collaborative system and will be discussed in later sections. ### Black-box Ray Casting Attack In early-fusion collaborative systems, CAVs share LiDAR point clouds. Thus, the attacker will perturb the location of LiDAR points directly but must obey the physical rules of LiDAR sensors as mentioned in SS3.3. Note that a white-box adversarial attack [30] is not applicable because (1) most perception models involve non-differential pre-processing and (2) even if the gradient can be approximated, the heavy computation can hardly achieve real-time attacks. **Insights**. First, we find that a higher point density on the object surface leads to more successful detection. Mainstream 3D object detection models learn spatial features from voxelized point groups (SS2). It is therefore natural that a higher point density strengthens the learned feature toward object classes. Second, a higher coverage on object surfaces also contributes to better detection, as the shape features of objects become more explicit. This is also one of the key benefits of collaborative perception, as multi-view LiDAR data allows for a more comprehensive perception of objects. Given the two insights, the object spoofing attack aims to spoof denser LiDAR points of objects and cover a larger surface area of the object. 
The goal of the object removal attack is to obscure the surface of the original object as thoroughly as possible. We confirm the insights in our ablation study (SS6.3.3). **Attack methods** The attacker pretends that an object is spoofed or removed and reconstructs the LiDAR point cloud via ray casting techniques. The traced rays follow the physical laws of the original lasers so the reconstructed point cloud is realistic. The spoofing attack requires no model access while the removal attack requires the model's inference API. The attack is demonstrated in Figure 4 and Algorithm 1. _Preparation of 3D object model_. The attacker first constructs a 3D model (_e.g.,_ a triangle mesh) of the object they wish to fabricate. In later attack steps, we will place the 3D model in the target region and cast malicious points on its surfaces. For object spoofing, the model can represent a real object such as a car. For object removal, we optimize a universal adversarial shape offline as the model. We initialize a cuboid triangle mesh and use a black-box genetic algorithm to optimize the perturbation on mesh vertices. As shown in Algorithm 1 (AdversarialShape), in each iteration, we launch the object removal attack on a dataset of attack cases and optimize the object model to maximize a fitness score representing the success of attacks (i.e., minimizing the confidence of detection proposals in the target region). A detailed explanation is in Appendix A.1. _Non-occlusion ray casting_. We set up a ray casting scenario where the 3D model is placed at the designated location and the rays are lasers in the attacker's LiDAR image. Though the predefined object models have fixed sizes, we will dynamically adjust size, location, and orientation of them to fit the target region during the scenario creation (Transform in Algorithm 1), making the object models universal for various attack situations. The ray casting algorithm calculates the points of intersection between the rays and the 3D model. To maximize point density on the target object, the ray casting is customized to ignore occlusion effects, ensuring that each ray is not blocked and goes through model surfaces to leave multiple intersection points. _Point sampling_. We resolve the occlusion violations by sampling one intersection point per ray. Specifically, for each ray with one or more intersection points with the 3D model, its original LiDAR point is replaced by one of the intersection points. The selection of intersection points is through customizable weighted random sampling. In our implementation, intersection points closer to benign vehicles have a higher probability of being selected. In this way, spoofed fake points tend to have a higher density close to benign points, increasing the chance of obscuring the original point distribution. Also, the randomness ensures high coverage on object surfaces. More details are presented in Appendix A.2. **Attack transformation**. To transform the attack into a future frame, we need to record the modified LiDAR points and corresponding ray angles. When the next frame is produced, the attacker removes points with the same ray angles, transforms recorded points to the new target region, and appends the transformed points. Since two frames have a minor time interval (100 ms), the transformation preserves physical laws. **Time constraint**. The attack generation can start when the attacker's LiDAR image is produced. 
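A minimal sketch of the point-sampling step just described is given below. It assumes the per-ray intersections with the placed 3D model have already been computed by the non-occlusion ray casting (the `intersections_per_ray` input stands in for that output), and it uses a single benign LiDAR position and an inverse-distance weighting as one possible choice of the customizable sampling rule mentioned above (Appendix A.2 has the real details).

```python
import numpy as np

def sample_spoofed_points(lidar_points, intersections_per_ray, benign_lidar_pos,
                          rng=np.random.default_rng(0)):
    """For each ray that hits the placed 3D model, replace its original LiDAR
    point with one intersection point, sampled with weights that favor
    intersections closer to a benign vehicle's LiDAR, so spoofed points tend
    to obscure the benign point distribution. Rays without hits are kept."""
    out = np.array(lidar_points, copy=True)
    for ray_idx, hits in intersections_per_ray.items():
        hits = np.asarray(hits)            # (k, 3) candidate intersection points
        if hits.shape[0] == 0:
            continue
        dists = np.linalg.norm(hits - np.asarray(benign_lidar_pos), axis=1)
        weights = 1.0 / (dists + 1e-6)     # closer to the benign LiDAR -> heavier
        weights /= weights.sum()
        out[ray_idx] = hits[rng.choice(hits.shape[0], p=weights)]
    return out
```

The modified ray indices and points can then be recorded for the attack transformation step, so the same rays are replaced by the transformed points on the next frame.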
Though _point sampling_ requires the locations of remote LiDARs, they can be predicted using simple linear velocity estimation. The ray casting should be done within one LiDAR cycle. ### White-box Online Adversarial Attack Intermediate-fusion systems require CAVs to exchange feature maps, the intermediate result of neural network processing. Such systems are immune to the black-box ray casting attacks (SS4.2) because the presence of benign feature maps will drop the attack success rate significantly, as demonstrated later in our experiments (SS6.3.3). Adversarial machine learning, on the other hand, is able to generate adversarial feature maps. The attack assumes that the attacker has white-box knowledge of perception models. **Insights**. We optimize a perturbation on the attacker's feature map by performing a backward pass in each LiDAR cycle and reusing the perturbation over frames as an online attack, similar to Tu et. al. [71]. We introduce two new ideas to achieve realistic real-time targeted attacks. First, we initialize the perturbation using results from black-box ray casting attacks, making the initial perturbation vector closer to the optimal choice. This step is crucial for achieving real-time attacks as it significantly reduces the number of optimization iterations required. Second, to restrict attack impact to a specific region, we mask the feature map. This is based on the fact that convolution networks preserve the relationship between feature map indices and real-world locations [49]. Another conventional approach to enforce spatial constraints is to add a regularization term to the loss function (_e.g.,_ penalize detection errors in non-attack regions). However, this requires multiple iterations to converge, making it unsuitable for real-time attacks. **Attack methods**. The attacker optimizes a perturbation on their feature map over continuous frames. For each frame, the attacker spoof/remove objects in the point cloud first as initialization, then updates the latest perturbation map through an iteration of projected gradient descent (PGD). As the target region moves, the perturbation is re-indexed accordingly in each cycle. The key steps are demonstrated in Figure 4. _Black-box initialization_. The attacker starts by modifying the raw point cloud. Unlike the ray casting attack in SS4.2, there is no restriction on this modification in terms of the physical laws. Therefore, the attacker tends to inject high-density high-coverage LiDAR points representing the 3D models mentioned in SS4.2, which can be prepared offline. _Feature map masking_. We make the assumption that each feature map index is associated with a voxel/pillar in the 3D real-world coordinate system. Given the target region, we extend the region by a fine-tuned parameter and extract corresponding feature indices. The masking operation ensures that only features with the selected indices are perturbed. If the index mapping is not explicit, it can be approximated by comparing the feature map before and after the black-box initialization and identifying the indices where the feature values have been altered. _Loss objective_. The optimization objective is to increase/decrease the score of the bounding box proposal on the labeled attack region, for spoofing/removing objects. 
We define the objective function as Equation 4, where \(Z^{\prime}\) denotes the set of bounding box proposals after the perturbation, \(z^{\prime}_{\text{g}}\) is the score associated with the proposal \(z^{\prime}\), and \(z_{t}\) represents the target region to attack. The objective function maximizes/minimizes the confidence score of proposals overlapping with the target. \[\begin{split} l_{spoof}(Z^{\prime})&=\sum_{z^{\prime}\in Z^{\prime}}\text{IoU}(z^{\prime},z_{t})\cdot\log(1-z^{\prime}_{\text{g}})\\ l_{remove}(Z^{\prime})&=-\sum_{z^{\prime}\in Z^{\prime}}\text{IoU}(z^{\prime},z_{t})\cdot\log(1-z^{\prime}_{\text{g}})\end{split} \tag{4}\] _Constraints on perturbation_. We clip the perturbation by restricting feature values to their normal range, which is measured on a set of non-attack test cases. As feature values do not explicitly deliver spatial semantics that can be used for anomaly detection, there is no need to restrict the feature perturbation to minor thresholds. **Attack transformation**. Given the centers of the target regions in two consecutive frames, one can get corresponding feature map indices \((x_{0},y_{0})\) and \((x_{1},y_{1})\), respectively. Then each index \((i,j)\) in the feature map is mapped to \((i-x_{0}+x_{1},j-y_{0}+y_{1})\). **Time constraint**. The PGD optimization needs feature maps shared from as many of the vehicles cooperating with the victim as possible. Assuming all benign vehicles continuously broadcast and process feature maps at a frequency equal to the LiDAR cycle (\(T\)) and the transmission delay is below a threshold \(t_{T}\), the optimization must be done within \(T-2t_{T}\). ## 5 Anomaly Detection We propose CAD, a Collaborative Anomaly Detection system, to mitigate the security threats presented in §4. We enumerate the design challenges in §5.1. In §5.2, we outline our system, followed by the details of key components in §5.3 and §5.4. ### Challenges As discussed in §2, existing defense mechanisms [55, 23, 69] mainly focus on finding temporal or spatial inconsistencies, but they cannot handle attackers who can generate fake data that conforms with physical laws. We propose a cross-agent consistency check where all benign vehicles exchange evidence of anomalies to reveal adversarial behaviors jointly. To ensure the effectiveness, robustness, and generality of the proposed method, we have to overcome the following challenges. _Affordable bandwidth and computation cost_. Collaborative perception systems must finish a perception cycle within a hard deadline (_e.g.,_ 100 ms [53]). Therefore, CAD should only share minimal, essential data to save bandwidth and distribute data processing on different vehicles to minimize latency. Our method only shares small-sized metadata. _Detection of stealthy attacks_. As the attacks may inject malicious data into a specific small region in 3D space, fine-grained anomaly detection is required. For instance, spoofing a ghost vehicle affects a region of approximately 10 \(m^{2}\) while the perception range is over 4,000 \(m^{2}\). CAD uses fine-grained occupancy maps to precisely reveal abnormal regions. _Robustness to benign errors_. LiDAR data captured by different vehicles have slight differences in timestamps [73, 67]. CAD leverages motion estimation and prediction to synchronize occupancy maps. Localization error is another potential source of faults. As nowadays vehicle localization achieves an accuracy of less than 0.1 m [11], CAD can tolerate minor errors with proper threshold parameters.
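Ahead of the detailed description in the next two subsections, the following hypothetical sketch shows the kind of lightweight message CAD exchanges instead of raw point clouds; the field names are illustrative only, and the contents are merely meant to match the polygon-plus-motion metadata (on the order of 10 KB per map) reported later in §6.4.3.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Region:
    label: str                                  # "occupied", "free", or "unknown"
    polygon: List[Tuple[float, float]]          # 2D vertices in the shared global frame
    motion: Optional[List[List[float]]] = None  # 4x4 motion-per-time-unit matrix,
                                                # attached only to occupied regions

@dataclass
class OccupancyMessage:
    vehicle_id: int
    timestamp: float                            # LiDAR frame time, used for synchronization
    regions: List[Region] = field(default_factory=list)
```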
### System Overview CAD is a system deployed on CAVs against data fabrication during collaborative perception. As shown in Figure 5, besides the original perception pipeline, CAVs are required to perform anomaly detection tasks in parallel. When a local LiDAR image is produced, each vehicle generates an occupancy map that labels on-road objects, free-to-drive regions, and invisible regions in the 2D space. Then the occupancy map is broadcast via a V2V wireless network. The occupancy map is represented in fine-grained polygons, balancing precision and transmission overhead. In addition, motion information of on-road objects is attached for synchronizing occupancy maps from different vehicles. After collecting occupancy maps from other vehicles, each vehicle launches two consistency checks. _Occupancy consistency check_ reveals inconsistencies of occupancy maps, _e.g.,_ one region identified as free and occupied by two different vehicles indicates that one of the participants is faulty or malicious. Occupancy maps are then merged into one, with inconsistent regions marked as unknown. _Perception-occupancy consistency check_ then ensures the results of collaborative perception are consistent with the merged occupancy map - bounding boxes should overlap with occupied regions instead of free regions; on-road occupied regions should be detected in at least one bounding box. Even though attackers can launch strong stealthy attacks and fake occupancy maps, the attack impact is always reflected by perception results and can be revealed as malicious by benign occupancy maps. ### Occupancy Map The occupancy map generation involves three steps: point segmentation, space segmentation, and motion estimation. _Point segmentation_. First, we eliminate less useful background points that are not on the road using HD maps provided by autonomous driving systems [7, 8]. Then, we apply ground fitting algorithms (_e.g.,_ RANSAC [36] to detect the ground plane and remove LiDAR points on it. By clustering the remaining points based on point density, we can identify all non-ground objects on the road, with each cluster representing a unique on-road object. The method has been proven to be effective in prior research [76, 83, 35]. _Space segmentation_. After identifying on-road objects, we generate a fine-grained representation of 2D space occupancy, which classifies the 2D space into three categories: _free_, _occupied_, and _unknown_. (1) Occupied regions are the convex hulls [24] of the object clusters. (2) Free regions represent the region surrounded by only ground points. We evenly divide the 2D space into equal sectors whose vertex is the LiDAR sensor location. The number of sectors can be adjusted for different levels of granularity. In each sector, we measure the distance from the LiDAR to the closest non-ground point and label the region within the distance as a free region. A basic implementation of free regions is described above, while we introduce an optimized implementation in Appendix A.3. (3) The remaining region is classified as unknown due to occlusion or the limited range of LiDAR sensors. Since the accuracy of segmentation and clustering drops as LiDAR points get sparser, in the implementation, we define a 2D space as unknown if its distance to the LiDAR sensor exceeds a threshold (_e.g.,_ 50 m). Unlike conventional grid-based occupancy maps [62, 45, 50], our occupancy map divides regions using polygon representation. 
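A condensed sketch of the point- and space-segmentation steps above is given below, using the RANSAC and DBSCAN implementations from Open3D and the polygon utilities from shapely that the implementation section (§6.2) mentions; the numeric parameters are illustrative rather than the tuned values, and the HD-map filtering of off-road background points is omitted.

```python
import numpy as np
import open3d as o3d
from shapely.geometry import MultiPoint

def occupied_regions(points_xyz, dist_thresh=0.2, eps=0.7, min_points=5):
    """Fit the ground plane with RANSAC, cluster the remaining points with
    DBSCAN, and return one convex-hull polygon per on-road object cluster."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(np.asarray(points_xyz))
    _, ground_idx = pcd.segment_plane(distance_threshold=dist_thresh,
                                      ransac_n=3, num_iterations=100)
    non_ground = pcd.select_by_index(ground_idx, invert=True)
    labels = np.array(non_ground.cluster_dbscan(eps=eps, min_points=min_points))
    pts = np.asarray(non_ground.points)
    hulls = []
    for k in range(int(labels.max(initial=-1)) + 1):   # label -1 marks noise points
        cluster_xy = pts[labels == k][:, :2]            # project onto the ground plane
        if len(cluster_xy) >= 3:
            hulls.append(MultiPoint([tuple(p) for p in cluster_xy]).convex_hull)
    return hulls
```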
Our approach offers two advantages over grid representation: (1) polygons can more precisely depict arbitrary shapes; (2) by adjusting the outline smoothing factor, polygon representation provides greater flexibility to strike an optimal balance between precision and size. _Motion estimation_. First, each CAV executes a multi-object tracking (MOT) process on object point clusters. Inspired by AB3DMOT [75], a baseline solution of MOT, we assign an affinity score to each object pair between two consecutive frames. This affinity score indicates the level of similarity considering factors such as distance and point density. Using the scores, MOT algorithms can match the same object across frames. Second, given two point clusters that refer to the same object but on two consecutive frames, we use point cloud registration to derive a transformation matrix between them. Formally, if two object clusters with timestamp \(t^{\prime}\) and \(t\) (\(t-t^{\prime}\approx T\) where \(T\) is LiDAR cycle time) are denoted as \(X_{t^{\prime}}\) and \(X_{t}\) respectively, the transformation matrix \(T_{t}\) satisfies \(X_{t}=T_{t}\cdot X_{t^{\prime}}\). We then standardize the matrix to _motion per time unit_ - divide translation and rotation extracted from \(T_{t}\) by the time gap \(t-t^{\prime}\) and reconstruct the matrix as \(T_{e}\). We define this operation as Scale: \(T_{e}=\textsc{Scale}(T_{t},\frac{1}{t-t^{\prime}})\). \(T_{e}\) represents the latest motion of the specific object and is attached to the corresponding occupied region in the occupancy map. Also, the maps should be transformed into a global coordination system as a consensus of all CAVs. ### Consistency Checks The processes of consistency checks are triggered simultaneously with the data fusion, involving occupancy map synchronization, occupancy consistency checks, and perception-occupancy consistency checks. _Occupancy map synchronization_. After receiving a set of occupancy maps with slightly different timestamps, each vehicle aims to synchronize all maps to the timestamp of the latest local LiDAR image. For each on-road occupied region in each occupancy map (except the local map), we first calculate its time gap to the target timestamp, denoted by \(\Delta_{t}\). We then transform the occupied region by applying the transformation \(\textsc{Scale}(T_{e},\Delta_{t})\), where \(T_{e}\) is the corresponding motion per time unit. After moving all occupied regions, we post-process the occupancy map by excluding new occupied regions from the original free regions to resolve conflicts. In this way, all occupancy maps can be directly merged as they have been synchronized spatially and temporally. Formally, we denote synchronized occupancy maps by \(M^{(i)}=(S^{(i)}_{O},S^{(i)}_{F})\) where \(i\in\{0,1,\ldots,N\}\) denotes vehicle IDs (\(t=0\) denotes the ego vehicle) and \(S_{O}/S_{F}\) denotes occupied/free regions. _Occupancy consistency check_ reveals inconsistencies among synchronized occupancy maps. A region is considered conflicted if it is identified as occupied by one vehicle and free by another. We can define conflicted regions as \[\epsilon_{occ}=\bigcup_{i,j\in 0\ldots N}S^{(i)}_{O}\cap S^{(j)}_{F}. \tag{5}\] Considering the inevitable imperfection of synchronization, in the implementation, CAD will ignore conflict regions whose area is below a threshold (_i.e.,_\(\sigma_{occ}\)). Alerts are raised indicating the uncertain risks on conflicted regions. 
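For illustration, the conflict set of Equation 5 maps directly onto shapely polygon operations. The sketch below assumes the occupancy maps have already been synchronized to a common timestamp as described above, and `sigma_occ` is the area threshold; the exact bookkeeping in the implementation may differ.

```python
from shapely.ops import unary_union

def occupancy_conflicts(maps, sigma_occ=0.5):
    """maps[i] = (occupied_i, free_i), each a list of shapely polygons from
    vehicle i (index 0 is the ego vehicle). Returns regions that one vehicle
    marks occupied while another marks free (Equation 5), keeping only
    conflicts whose area exceeds the threshold sigma_occ."""
    conflicts = []
    for i, (occ_i, _) in enumerate(maps):
        for j, (_, free_j) in enumerate(maps):
            if i == j:          # a map never conflicts with itself
                continue
            overlap = unary_union(occ_i).intersection(unary_union(free_j))
            if overlap.area >= sigma_occ:
                conflicts.append(overlap)
    return unary_union(conflicts) if conflicts else None
```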
Next, each vehicle generates one consistent occupancy map by merging the available occupancy maps and dropping conflicted regions. In particular, the occupancy map produced by the ego vehicle is trusted and retained in the merged map, unless the ego vehicle's sensors are detected as compromised by existing detection of LiDAR spoofing [55, 69]. The new occupancy map \(M^{\prime}=(S^{\prime}_{O},S^{\prime}_{F})\) is generated as: \[\begin{split} S^{\prime}_{O}&=S^{(0)}_{O}\cup(\bigcup_{i=1\ldots N}S^{(i)}_{O}-\epsilon_{occ}-S^{(0)}_{F})\\ S^{\prime}_{F}&=S^{(0)}_{F}\cup(\bigcup_{i=1\ldots N}S^{(i)}_{F}-\epsilon_{occ}-S^{(0)}_{O})\end{split} \tag{6}\] _Perception-occupancy consistency check_ aims to reveal inconsistencies between the perception results and the merged occupancy map based on two rules. First, free regions should have no overlap with predicted object bounding boxes. According to LiDAR sensor physics, objects on the road, if observable, always leave LiDAR points above the ground and should be clustered as occupied regions. This rule can counter object spoofing attacks, as attackers may spoof fake objects in free regions perceived by benign vehicles. Second, occupied regions should be within predicted bounding boxes. Similarly, point clusters on roads are potential obstacles and should be detected to avoid a collision. It serves as a countermeasure against object removal attacks where attackers make real objects undetectable. By checking the two rules, alerts are raised on conflicted regions, similarly filtered by a threshold of area (_i.e.,_\(\sigma_{spoof}\) and \(\sigma_{remove}\)). Formally, if we denote predicted bounding boxes as \(Y\), alerted regions include: \[\epsilon_{spoof}=\bigcup_{y\in Y}y\cap S^{\prime}_{F}\quad\epsilon_{remove}=\bigcup_{s^{\prime}_{O}\in S^{\prime}_{O}}s^{\prime}_{O}-Y \tag{7}\] ### Limitations CAD is a mitigation rather than an elimination of our proposed attacks. First, CAD cannot work in certain extreme scenarios. The detection can succeed only when at least one benign CAV observes the attacked region. Otherwise, the attacked region is an occluded region for all benign CAVs, and thus no conflict will appear in Equation 7. Second, CAD detects but may not resolve the anomalies. Though the system may identify the possible attackers via majority voting, it is limited in effectiveness if benign CAVs do not dominate the road. ## 6 Evaluation We introduce our dataset creation in §6.1 and the implementation details in §6.2. Then, we present a comprehensive evaluation of the proposed attacks and defenses in §6.3 and §6.4. ### Data Collection **Adv-OPV2V**. OPV2V [79] is a benchmark dataset for collaborative perception algorithms, with data collected from a combination of simulators, CARLA [5] and SUMO [12]. We generate Adv-OPV2V from OPV2V, as a benchmark for testing collaborative perception attacks and defenses. We select 300 scenarios for object spoofing and removal attacks respectively. Each scenario features 10 consecutive frames and 3 to 5 CAVs among which one attacker and one victim are designated. Each scenario also has predefined attack targets, such as a trajectory of a ghost vehicle for object spoofing or a trajectory of an existing vehicle for object removal. To ensure the real-world impact of the attacks, we limit the distance between the victim and the target to less than 30 m. **Adv-MCity**.
We create a real-world multi-vehicle collaborative perception dataset using testbed MCity [19], which is a real-world mock city for testing CAV applications. On real roads, we deploy 3 Lincoln MKZ vehicles as CAVs, which are equipped with OxTS RT3000v3 GPS, Velodyne VLP-32C LiDAR, and Cohda MK6C OBU as a C-V2X receiver. We also deploy several other vehicles as perception targets. We create 8 attack scenarios that contain potential safety hazards, with 4 for object spoofing and 4 for object removal. We collect LiDAR, GPS, and C-V2X network traces from all CAVs to allow for emulation of collaborative perception. ### Implementation **Collaborative perception models**. For Adv-OPV2V, we utilize pre-trained models provided by OPV2V, which employ naive point cloud merging in early fusion and attentive learning in intermediate fusion. In early-fusion methods, the point clouds are naively concatenated together. In intermediate-fusion methods, the fusion is defined by the models. For Adv-MCity, we augment the OPV2V training data to approximate the LiDAR images collected in the testbed MCity and fine-tune the pre-trained models. During the training of models, a uniform noise of at most 0.2 m or 0.2\({}^{\circ}\) is injected into vehicle locations or rotations, respectively, in order to better tolerate localization/synchronization errors in real scenarios, following the previous work [73, 58, 78]. **Attacks** are implemented in 4,874 lines of code (LOC) in Python. The adversarial shape generation is based on a classic genetic algorithm with a population size of 10 and 5 generations. Adversarial attacks are based on Torch. We fine-tune the learning rate to 1 and optimize for a maximum of 25 iterations. The perturbation of feature maps is restricted to a 5 m\(\times\)5 m square centered at the target location. **Anomaly detection** is implemented in 1,629 LOC in Python, which uses polygon operations from shapely and the implementations of RANSAC and DBSCAN from Open3D. The system parameters (_i.e._, \(\sigma_{occ}\), \(\sigma_{spoof}\), \(\sigma_{remove}\)) are not fixed but evaluated through the receiver operating characteristic (ROC) curve. **In-vehicle execution environment**. To demonstrate system deployment on real vehicles, we implement a collaborative perception framework based on Robot Operating System (ROS), consisting of 3,154 LOC in C++ responsible for V2V communication and basic sensor data processing. Our implementation of attacks and anomaly detection can be plugged into the framework as ROS nodes. For performance measurement, we use an in-vehicle machine with an Intel Xeon Silver 4110 CPU and an Nvidia RTX 2080 Ti GPU. Our implementation is open source at [https://github.com/zqzqz/AdvCollaborativePerception](https://github.com/zqzqz/AdvCollaborativePerception). ### Evaluation of Attacks We present our attack results in §6.3.1. We further analyze the impacting factors in attacks (§6.3.2) and present an ablation study (§6.3.3). We realize attacks in the testbed MCity, evaluate the overhead (§6.3.4) and conduct case studies (§6.3.5). #### 6.3.1 Attack Results To evaluate attack effectiveness, we launch each proposed attack on 300 attack scenarios in Adv-OPV2V against baseline perception models using PointPillars [49] as the backbone. Attack results are listed in Table 3. In each attack scenario, we identify the best predicted bounding box, i.e., the one with the largest Intersection over Union (IoU) with the target region.
A spoofing attack is considered successful if the IoU is greater than zero while a removal attack is considered successful if the IoU is zero. For late-fusion systems, object spoofing trivially reaches an almost 100% success rate while object removal is hard as long as one benign vehicle observes the object. Our proposed attacks against early/intermediate-fusion are generally successful with a success rate above 86%. In addition, we illustrate the change of IoU and confidence score on target regions in Figure 6. We observe that attacks make a significant change in the two metrics. For spoofing attacks, the early-fusion ray casting attack achieves a larger IoU, meaning more accurate spoofed bounding boxes, while the intermediate-fusion adversarial attack pushes the confidence score extremely high (> 0.8). The result indicates that it is easier for the attacker to launch sophisticated attacks against early-fusion systems, since attackers can directly manipulate the subtle spatial features, i.e., LiDAR points. The intermediate-fusion system enforces fewer constraints on the malicious perturbation, so the upper bound of the attack impact is higher. The change of Average Precision (\(\Delta\)_AP_) on non-attack regions is minor, which means all attacks focus on perturbing the target region. However, \(\Delta\)_AP_ of intermediate-fusion attacks is higher because the perturbation on feature maps inevitably propagates to a larger region through convolution layers. We also reproduced the prior attack proposed by Tu _et al._ [72]. We use the loss function and attack parameters from the paper and the same perturbation constraint as our adversarial attack. We also allow the unrealistic attack constraints as discussed in §3.2. Attack results on Adv-OPV2V are shown in Table 3 and Figure 6. The prior attack achieves its goal of injecting as many false perception bounding boxes as possible, injecting on average 56.2 FPs and 3.6 FNs in each LiDAR frame. The overwhelming FPs drop _AP_ to nearly 1%. However, when attacking a certain region, it only yields a 14%-22% success rate because the attack is fundamentally untargeted.
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{Attack setting} & \multicolumn{4}{c|}{Attack results} & \multicolumn{3}{c|}{Defense results} \\ \cline{2-8} Method-Fusion-Goal & Succ. & IoU & Score & \(\Delta\)AP & Succ. & TPR & FPR \\ \hline [72]-Int-Spoof & 21.7\% & 0.01 & 0.06 & -62.8\% & 100\% & 34.0\% & 10.3\% \\ [72]-Int-Remove & 14.0\% & 0.47 & 0.34 & -61.8\% & 100\% & 39.7\% & 7.6\% \\ RC-Early-Spoof & 86.0\% & 0.55 & 0.38 & -40.4\% & 83.8\% & 80.9\% & 2.0\% \\ RC-Early-Remove & 87.3\% & 0.07 & 0.03 & -5.8\% & 81.2\% & 38.0\% & 5.6\% \\ Adv-Int-Spoof & 90.0\% & 0.46 & 0.71 & -2.0\% & 83.4\% & 80.1\% & 2.0\% \\ Adv-Int-Remove & 99.3\% & 0.02 & 0.01 & -3.9\% & 83.6\% & 42.5\% & 2.2\% \\ Naive-Late-Spoof & 98.7\% & 0.96 & 0.99 & 0 & 80.8\% & 84.8\% & 2.7\% \\ Naive-Late-Remove & 0.3\% & 0.78 & 0.53 & 0 & - & - & - \\ \hline \end{tabular} \end{table} Table 3: Performance of attacks and defenses on Adv-OPV2V.
Figure 6: IoU/confidence on target region under the prior attack, our ray casting (RC) and adversarial (Adv) attacks.
Figure 7: Different stealth of untargeted/targeted attacks.
The major problem of the untargeted approach is the stealth of attacks. The untargeted attack generates a significant number of abnormal bounding boxes that are out of the road or heading away from the lane direction, as shown in Figure 7.
The uncontrollable attack impact can be easily recognized by either humans or automatic anomaly detection. #### 6.3.2 Impacting Factors **Visibility of the target region**. We hypothesize that the attack is more successful when the target region is clearly visible to the attacker but not to benign CAVs. Intuitively, the target is more visible if it is closer to the LiDAR or there are more LiDAR points on it. To validate the hypothesis, we plot the relationship between the attack success rate and the two metrics in Figure 8.
Figure 8: Attack success rate w.r.t. target visibility.
The result shows that the attack is more successful when the attacker is closer to the target while benign CAVs are further away, or the attacker has more LiDAR points in the target region while benign vehicles have fewer. The impact of the visibility is obvious in early-fusion systems but not in intermediate-fusion systems. The difference is reasonable because, for early-fusion schemes, more LiDAR rays interact with closer targets, so attackers can manipulate more LiDAR points without violating LiDAR sensor physics. **Benign errors**. It is worth noting that attacks should tolerate errors in real systems. To simulate the worst-case synchronization errors, we delay any LiDAR frame by 100ms with a probability of 0.5. To simulate localization errors, we incorporate uniform noise into vehicle locations (\(0-0.2\) m) and orientations (\(0-0.2^{\circ}\)), following existing works [11, 78]. Network errors can manifest as delays, corruptions, or dropped messages, all of which hinder the proper sharing of data. If the attacker's malicious data fails to reach the victim, the attack on that frame will certainly fail. Conversely, if benign vehicles' data cannot reach the attacker, less data is used for attack optimization, potentially leading to less successful attacks. To simulate such scenarios, we randomly drop 10% of data sharing during the attacks. The 10% error rate is regarded as the highest threshold of an acceptable network connection by previous studies [70, 89]. From the results in Figure 9, synchronization and localization errors have a very minor impact on the attacks. Network errors decrease the success rate by 10-20%, with a 10% reduction attributable to the fact that 10% of the attacker's messages fail to reach the victim. Even with the barely acceptable network connection, our attacks can achieve at least a 60% success rate, showing their robustness against benign errors. **Model configuration**. Our attacks generalize to various collaborative perception models. In Figure 10, the attack success rate is stable when (1) the backbone model is replaced by VoxelNet [90] in either early-fusion or intermediate-fusion methods, or (2) the fusion network of the intermediate-fusion system is changed to V2VNet [73] or CoBEVT [77]. However, FPV-RCNN [82] involves a second-stage non-differentiable fusion on bounding box proposals (similar to late fusion), making object removal hard. **Object types**. We generalize our attacks from vehicle targets to pedestrians and cyclists. As OPV2V [79] only has vehicles originally, we augment OPV2V to include pedestrians and cyclists by modifying the simulation settings and re-training the models. As shown in Figure 11, the attacks are generally effective for different object types. In particular, removing pedestrians is easier than removing other object types because they usually comprise a small number of LiDAR points and have a low detection confidence. **Number of attackers**. One attacker is strong enough to break collaborative perception.
Adding another attacker can further increase the success rate of ray casting attacks and adversarial attacks by around 5% and 2%, respectively. #### 6.3.3 Ablation Study For each attack we propose, we provide a set of variants by removing one or more components from the original design. Attack results are summarized in Figure 12 and the complete quantitative results are in Appendix C. For ray casting attacks against early-fusion systems, we design the following variants. (1) _RC_. Baseline ray casting that pretends the object to spoof/remove has emerged/disappeared. Especially for object removal, _RC_ uses the adversarial shape while _NoAS-RC_ does not. (2) _Dense-A-RC_. Based on _Naive-RC_, make the spoofed points denser by placing the origin of the rays only 5 m away from the target during ray casting. (3) _Dense-All-RC_. In addition to _Dense-A-RC_, add multiple virtual LiDARs around the target to further increase point coverage. (4) _Sampled-RC_. Based on _RC_, do non-occlusion ray casting and point sampling as mentioned in §4.2. (5) _Async-Sampled-RC_. The proposed attack. Based on _Sampled-RC_, optimization is done one frame before the attack happens. The attack results validate our assumptions and show that our design components are useful. _Point density_ and _point coverage_ lead to stronger attacks. _Dense-A-RC_'s success rate is 12%/15% higher than _RC_ for spoofing/removal. _Dense-All-RC_'s success rate is 6%/31% higher than _Dense-A-RC_ for spoofing/removal. However, _Dense-A-RC_ and _Dense-All-RC_ are not stealthy attacks as their spoofed points have abnormal density. Therefore, we propose _Sampled-RC_, whose success rate is 14%/50% higher than the naive ray casting while preserving LiDAR's physical laws. Finally, our asynchronous attack scheduling makes _Async-Sampled-RC_ deployable in real-time systems, without a significant drop in success rate. In addition, the universal adversarial shape is crucial for object removal. Naively replacing object points with ground points only achieves a 5% success rate, and the usage of adversarial shapes raises the number to 17%. For adversarial attacks against intermediate-fusion systems, we design the following variants. (1) _Adv_. Basic implementation of PGD. It does not constrain the attacker's knowledge or number of optimization steps and disables black-box initialization. Instead of using perturbation masking, we add a regularization term to achieve a targeted attack. (2) _Step1-Adv_. Based on _Adv_, do optimization for only one iteration. Parameters are set the same as in §6.2. (3) _Init-Step1-Adv_. Based on _Step1-Adv_, add black-box initialization. (4) _Async-Init-Step1-Adv_. Based on _Init-Step1-Adv_, optimization is done one frame before the attack happens. (5) _Online-Async-Init-Step1-Adv_. Our proposed attack (§4.3). Online attack optimizing one perturbation vector over consecutive frames. The results confirm the effectiveness of our key designs. _Adv_ is a standard white-box adversarial attack with the minimum constraints and maximum resources, representing the empirical upper bound of attack impact. Limiting the optimization to only one iteration per frame, though it significantly lowers the computation cost, drops the attack success rate by 63%/48% for spoofing/removal. To address the problem, we propose black-box initialization. The design is very useful, especially for object spoofing: _Init-Step1-Adv_ achieves a 53%/8% higher attack success rate than _Step1-Adv_.
Finally, _Async-Init-Step1-Adv_ integrates the zero-delay attack scheduling without dropping attack effectiveness and _Online-Async-Init-Step1-Adv_ builds an online attack pipeline which further enhances the attacks. #### 6.3.4 Overhead We measure the execution latency of our attack algorithms in the in-vehicle execution environment. For ray casting attacks, 3D object model preparation is done offline. The non-occlusion ray casting takes 54 ms on average. Our implementation of ray casting is CPU-only and can be further improved by hardware acceleration. The point sampling takes only <3 ms. Attack transformation introduces a negligible overhead of <1 ms. For adversarial attacks, the point cluster for black-box initialization is prepared offline thus the initialization simply appends pre-computed points to the LiDAR image, incurring a negligible overhead of <1 ms. The one-step PGD optimization is computationally intensive and requires GPU resources, taking 67 ms on average. The total attack generation is finished in 89 ms on average within one LiDAR cycle. The cost of attack transformation is negligible. #### 6.3.5 Real-world Case Study Attacks must be realizable. We test attack algorithms by emulating driving scenarios using dataset Adv-MCity. In this section, we focus on case studies on two scenarios, as shown in Figure 13. All scenarios are described in Appendix B. _Object spoofing during right turn._ The victim CAV is turning right at green while the attacker CAV stops on another road. The attacker's goal is to spoof one fake vehicle to stop the victim, forming a denial-of-service (DoS). First, we launch ray casting attack assuming CAVs use early-fusion collaborative perception. Since the victim is far away from the attacker (>30 m), it is hard to directly spoof an object in front of the victim. However, the attacker can leverage the traffic rule implemented in CAVs by spoofing a moving vehicle whose trajectory blocks the victim's path. In 5 seconds, the attack succeeds to spoof the vehicle in 76% of frames. Baidu Apollo indeed stops the vehicle to yield the spoofed vehicle. Second, if CAVs use an intermediate-fusion system, the adversarial attack can achieve a stronger attack by spoofing an obstacle right in front of the victim in 92% of frames. _Object removal during lane merging_. The victim CAV is starting from a parking place to merge into the main road while another vehicle is going through from behind. Normally, the victim should yield the right of way. The attacker sits on another lane, aiming to remove the moving vehicle from the view of the victim. The ray casting attack succeeds in removing the vehicle in the first 45 frames but fails in the last 5 frames because the target is further. Nevertheless, it is too late when the victim perceives the target and Baidu Apollo reports a collision. Also, using the white-box adversarial attack against intermediate-fusion perception has a similar attack impact, removing the vehicle in 96% of frames. ### Evaluation of Anomaly Detection We evaluate effectiveness and efficiency in SS6.4.1 and SS6.4.3. We then compare CAD with existing defenses in SS6.4.4. We demonstrate the real-world deployment in SS6.4.5. #### 6.4.1 Defense Results We apply CAD on attacked frames in Adv-OPV2V. Note that CAD is supposed to detect both attacks and perception faults, as long as the predicted bounding box has no overlap with ground-truth or the ground-truth bounding box is not detected. 
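To make this positive/negative labeling concrete, one simple reading of the rule is sketched below with boxes represented as shapely polygons; the exact matching procedure in the implementation may differ, and axis-aligned boxes are used only for brevity.

```python
from shapely.geometry import box

def frame_should_alert(pred_boxes, gt_boxes):
    """A frame counts as a positive case (CAD should raise an alert) if some
    ground-truth box is not covered by any prediction, or some prediction
    overlaps no ground-truth box, covering both attacks and benign faults."""
    preds = [box(*b) for b in pred_boxes]   # each b = (minx, miny, maxx, maxy)
    gts = [box(*b) for b in gt_boxes]
    missed_gt = any(all(not g.intersects(p) for p in preds) for g in gts)
    spurious_pred = any(all(not p.intersects(g) for g in gts) for p in preds)
    return missed_gt or spurious_pred
```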
We consider adaptive attacks, where the attacker fakes his/her occupancy map to avoid conflicts with other occupancy maps or detected bounding boxes. Therefore, the _occupancy consistency check_ in §5 cannot defend against adaptive attacks but serves as input validation before merging occupancy maps. In Table 3, the true positive rate (TPR) and false positive rate (FPR) are calculated on the whole LiDAR images, including the detection of both malicious attacks and benign perception faults. On the other hand, the success rate measures the detection of only malicious attacks, i.e., the ratio of positive detections on the target region to the total number of attack scenarios. We also show the ROC curves in Figure 15. CAD is generally effective against various attack methods. If selecting thresholds \(\sigma_{spoof}\) and \(\sigma_{remove}\) to maximize the AUC score, CAD achieves FPR <3% and TPR >80%/38% against spoofing/removal while detecting around 90% of anomalies caused by our attacks. From the split-down of alarms in Figure 15, the low TPR against removal threats is mainly caused by undetected benign perception faults which are out of the range of occupancy maps. The "false alarms" are mostly the cases where predicted bounding boxes are not accurate (IoU < 0.5). Though considered as normal cases by our criteria, they differ significantly from accurate object detections. We also apply CAD on the prior attack [72]. As the prior attack is untargeted and injects a few dozen fake detection results in each LiDAR image, it is easy for CAD to reveal 100% of the attacked frames, as shown in Table 3. The low TPR (around 30%) is because the occupancy maps cannot cover many of the far-away fake detection results. **Other adaptive attacks**. The attacker may exploit the _occupancy consistency check_ to create as many conflicts as possible to minimize the coverage of the merged occupancy map and decrease TPR. However, if the occupancy conflict is with the victim's local map, the attacker is directly identified because the local data is trusted. This ensures a lower bound of TPR by using only local occupancy maps (71.7%/24.0%/70.4%/28.7%/78.1% against the attacks in Table 3). Also, occupancy conflicts obviously indicate the existence of attackers and are useful messages for other defense mechanisms such as reputation systems. The attacker may also choose to launch attacks at locations out of the coverage of occupancy maps. However, our experiments on Adv-OPV2V show that benign occupancy maps cover 95.6% of the area within 30 meters and 99.9% within 10 meters around the victim. There is very little chance for the attacker to spoof/remove objects stealthily at a safety-critical distance. #### 6.4.2 Impacting Factors **Distance to LiDAR sensors**. As shown in Figure 17, over 80% of false alarms are 60 meters away from any benign vehicle. Within the range of occupancy maps (50 meters in our configuration), CAD stably makes correct detections. **Synchronization**. With injected synchronization errors as introduced in §6.3.2, CAD's synchronization provides significant robustness. As shown in Figure 15, CAD is not effective without synchronization, having TPR 35%/15% against spoofing/removal when FPR is low (<5%). With synchronization, CAD achieves TPR 60%/40% against spoofing/removal, close to the detection rate on ideally synchronized data. **Localization errors**. With the injected localization errors as stated in §6.3.2, we observe a minor decrease in accuracy (TPR -3.1%, FPR +0.2%), showing CAD's robustness. **Object types**.
As CAD uses the area of conflicted regions as the key metric, smaller object sizes result in a higher FPR; _e.g.,_ minor conflicts caused by errors in the occupancy maps may be falsely considered anomalies. By choosing the best AUC score of the ROC curve, CAD detects pedestrian spoofing, pedestrian removal, cyclist spoofing, and cyclist removal in TPR/FPR of 78.4%/14.5%, 38.2%/13.9%, 81.7%/6.5%, and 29.4%/6.2%, respectively. When compared with the detection of fake vehicles, CAD yields around 12%/4% higher FPR on pedestrians/cyclists while maintaining a stable TPR. **Number of attackers**. More attackers decrease the coverage of benign occupancy maps and cause more false negatives. #### 6.4.3 Overhead We measure the latency of the anomaly detection using the in-vehicle execution environment and recorded network traces [60]. Segmentation/clustering algorithms are relatively expensive but they can be further boosted using hardware acceleration [28]. Occupancy map transmission is as fast as 10ms. Each map contains around 300-1000 polygon vertices and lightweight metadata of object motion, in a small size of around 10 KB. Consistency checks are simple polygon operations that can be finished in 15ms. The end-to-end anomaly detection takes 92ms, which means the CAV can be aware of abnormal bounding boxes before the LiDAR cycle ends. #### 6.4.4 Comparison with Other Defense Approaches **CARLO**[69] is an anomaly detection algorithm that detects LiDAR spoofing attacks. Given the fact that detected bounding boxes should host solid objects, CARLO validates that the volume of "free space" (conical spaces between the LiDAR sensor and rendered points) in bounding boxes is under a threshold. In collaborative perception, we assume the victim CAV applies CARLO on each received LiDAR point cloud. Results are shown in Figure 19: CARLO detects our proposed ray casting attack (Sampled RC) with a TPR of 77.7% and an FPR of 3.9%. However, attackers can adjust the rule of point sampling in §4.2 to launch adaptive attacks. For instance, the attacker can restrict the number of points that can penetrate object surfaces to be <30% (Sampled RC adaptive). As a result, TPR decreases to 63.8% and FPR increases to 14.7% while the success rate only drops by 5.6%. If ray penetration is forbidden completely (Naive RC), CARLO is close to random guessing while the success rate of our attack is still above 70%. In contrast, CAD achieves a higher TPR and, more importantly, is independent of attack methods. **LIFE**[55] is a hybrid anomaly detection system against sensor attacks. First, it checks the temporal consistency of depth camera images based on machine learning methods. As discussed in §5.1, attackers have the capability to continuously launch attacks, so the check is fundamentally not useful. Besides, an object matching algorithm checks the consistency between objects detected in the camera and the LiDAR. In early-fusion systems, CAVs can launch object matching on remote LiDAR and local camera images. To reproduce LIFE, we use the same LiDAR segmentation as CAD and train an EfficientPS [59] model for camera image segmentation. We draw the ROC curve of object matching in Figure 20. LIFE's object matching achieves around 80% TPR and 26% FPR against early-fusion ray casting attacks. LIFE suffers from a higher FPR because multiple machine learning processes introduce more errors: inaccurate detection from either the camera or the LiDAR, which is common for far-away objects, may trigger a false alarm.
Compared with LIFE, CAD has a higher detection rate with much lower computation/bandwidth consumption, thanks to the collaboration among CAVs. **MDS**[23] is an anomaly detection framework that assumes CAVs share bounding boxes. Besides checks on message format and temporal consistency, which are not relevant to our attacks, each CAV evaluates the consistency between the local occupancy map and the final perception results, and also merges anomaly detection results from multiple CAVs by majority voting. However, the attackers can launch adaptive attacks that send falsified data only to the specific victim instead of to all other CAVs, so the majority voting is actually not helpful. Compared with CAD, the spatial check is restricted to the local occupancy map (without the sharing of occupancy maps), so its TPR is lower by 9-15%, as shown in Figure 21. CAD does not conflict with the above defenses. Users can deploy multiple defenses to strengthen sensor data integrity. #### 6.4.5 Real-world Case Study We demonstrate CAD on the same attack scenarios discussed in §6.3.5, shown in Figure 13. In the scenario of the right turn, though the spoofed object is in the blind spot of the victim (red), another benign vehicle (blue) observes that region and identifies the anomaly when it is 15 meters away from the victim. In the scenario of lane merging, the victim vehicle observes an object point cluster on its left that is not detected by the perception system, triggering a warning of object removal 2.1 seconds before a potential collision. In other scenarios in Appendix B, our anomaly detection can detect attacks at least 1.5 seconds before a collision or hard brake happens. The anomaly detection can be more robust when there are more benign vehicles on busy roads. Figure 16: Split-down of alarms of anomaly detection. Figure 17: #Alarms w.r.t. distance to benign LiDARs. Figure 18: Latency of anomaly detection. ## 7 Conclusion In this work, we pioneer the study of the threats posed by data fabrication on collaborative perception systems. We unleash novel attacks that successfully spoof or remove on-road objects in various types of collaborative perception schemes and demonstrate the attack impact in real traffic scenarios. To mitigate the threats, we introduce a cross-vehicle validation solution powered by fine-grained occupancy maps, which detects anomalies seconds before potential road hazards occur. Our attacks and defenses together serve as a benchmark to spur future research on collaborative perception security. ## Acknowledgments We would like to thank our anonymous shepherd and reviewers for their valuable comments and feedback. This work was supported in part by NSF under CNS-1930041, CMMI-2038215, CNS-1932464, CNS-1929771, and CNS-2145493, USDOT CARMEN University Transportation Center (UTC), and the National AI Institute for Edge Computing Leveraging Next Generation Wireless Networks, Grant # 2112562.
2309.03858
Kähler--Einstein metrics on quasi-projective manifolds
Let $X$ be a compact K\"ahler manifold and $D$ be a simple normal crossing divisor on $X$ such that $K_X+D$ is big and nef. We first prove that the singular K\"ahler--Einstein metric constructed by Berman--Guenancia is almost-complete on $X \backslash D$ in the sense of Tian--Yau. In our second main result, we establish the weak convergence of conic K\"ahler--Einstein metrics of negative curvature to the above-mentioned metric when $K_X+D$ is merely big, answering partly a recent question posed by Biquard--Guenancia. Potentials of low energy play an important role in our approach.
Quang-Tuan Dang, Duc-Viet Vu
2023-09-07T17:18:19Z
http://arxiv.org/abs/2309.03858v1
# Kahler-Einstein metrics on quasi-projective manifolds ###### Abstract Let \(X\) be a compact Kahler manifold and \(D\) be a simple normal crossing divisor on \(X\) such that \(K_{X}+D\) is big and nef. We first prove that the singular Kahler-Einstein metric constructed by Berman-Guenancia is almost-complete on \(X\backslash D\) in the sense of Tian-Yau. In our second main result, we establish the weak convergence of conic Kahler-Einstein metrics of negative curvature to the above-mentioned metric when \(K_{X}+D\) is merely big, answering partly a recent question posed by Biquard-Guenancia. Potentials of low energy play an important role in our approach. _Keywords:_ Monge-Ampere equations, conic Kahler-Einstein metrics, almost-complete metrics, domination principle, analytic singularities. _Mathematics Subject Classification 2020:_ 32U15, 32Q15. ## 1 Introduction Let \(M\) be a \(n\)-dimensional (non-compact) Kahler manifold. Let \(\eta\) be a closed positive \((1,1)\)-current on \(M\). Following [7, 6], we say that \(\eta\) has a well-defined Ricci curvature if the non-pluripolar self-product \(\langle\eta^{n}\rangle\) of \(\eta\) is well-defined on \(M\) (see [9] for this notion) and for every local holomorphic volume form \(\Omega\) on \(M\), one can write \(\langle\eta^{n}\rangle=e^{-2f}\Omega\wedge\overline{\Omega}\) for some function \(f\in L^{1}_{\rm loc}\). In this case, we define \(\operatorname{Ric}\eta:=\frac{i}{\pi}\partial\bar{\partial}f\). The current \(\eta\) is said to be _singular Kahler-Einstein metric_ on \(M\) if \(\operatorname{Ric}\eta=\lambda\eta\) for some constant \(\lambda\in\mathbb{R}\). We are interested in studying (singular) Kahler-Einstein metrics in \(M\). We will focus on the most common case where \(M\) is the complement of a divisor in a compact Kahler manifold. Let us enter details. Let \((X,\omega)\) be a compact Kahler manifold of dimension \(n\). Let \(D\) be a normal simple crossing divisor on \(X\). By abuse of notation, we also denote by \(D\) the support of \(D\). About forty years ago, Cheng-Yau [12] established some necessary conditions for the existence of complete Kahler-Einstein metrics on non-compact manifolds (see also [34] for refinements). Based on this work, Kobayashi [30] proved that there exists a complete Kahler-Einstein metric of negative curvature (with Poincare singularities near \(D\)) on \(X\backslash D\) if \(K_{X}+D\) is ample. The uniqueness of such a metric follows from [43]. More generally, Tian-Yau [37] proved the existence and uniqueness of an almost complete Kahler-Einstein metric \(\tilde{\omega}_{D}\) on \(X\backslash D\) provided that \(K_{X}+D\) is big and nef, and \(K_{X}+D\) is ample modulo \(D\) (cf. Definition 4.2 below for almost complete metrics). It turns out that this metric is also complete by [45]. We note that by the condition that \(K_{X}+D\) is ample modulo \(D\), the non-Kahler locus of the Chern class of \(K_{X}+D\) must lie in the support of \(D\). We refer to [37] for more information and applications of this metric and to [1, 3, 2, 8, 19, 26, 28, 41, 42, 23, 16] for related results. In another approach, Berman-Guenancia [7] proved that there exists a unique closed positive current \(\omega_{D}\) of full Monge-Ampere mass (in the sense of [9]) in \(c_{1}(K_{X}+D)\) such that \[\operatorname{Ric}\omega_{D}=-\omega_{D}+[D] \tag{1.1}\] provided that \(K_{X}+D\) is big, where \([D]\) denotes the current of integration along \(D\). 
If \(K_{X}+D\) is additionally nef, then \(\omega_{D}\) is also smooth on the complement of \(D\) in the ample locus of \(K_{X}+D\) (see [7] or Lemma 4.4 below). In the case where \(K_{X}+D\) is ample, using the local description of \(\tilde{\omega}_{D}\) and the Monge-Ampere equation satisfied by it (see [30, Page 407]), one sees that \(\tilde{\omega}_{D}\) can be extended trivially through \(D\) as a closed positive \((1,1)\)-current of full Monge-Ampere mass in \(c_{1}(K_{X}+D)\) and satisfies the same equation as \(\omega_{D}\). Hence \(\omega_{D}=\tilde{\omega}_{D}\). In other words, we obtain a global interpretation of the complete metric \(\tilde{\omega}_{D}\) if \(K_{X}+D\) is ample (see also [26] for more information). Here is our first main result giving an analogue in the case where \(K_{X}+D\) is big and nef. **Theorem 1.1**.: _Let \(D\) be a simple normal crossing divisor such that \(K_{X}+D\) is big and nef. Let \(E\) denote the non-ample locus of \(K_{X}+D\). Then there exists a unique almost-complete singular Kahler-Einstein metric \(\omega_{D}\) on \(X\backslash D\), and this metric satisfies the following properties:_ _(i) \(\omega_{D}\) is a smooth Kahler metric on \(X\backslash(D\cup E)\),_ _(ii) \(\omega_{D}\) can be extended to a closed positive \((1,1)\)-current on \(X\) which is of full Monge-Ampere mass in the class \(c_{1}(K_{X}+D)\) and there holds_ \[\operatorname{Ric}\omega_{D}=-\omega_{D}+[D]\] _as currents on \(X\)._ We note that since \(\omega_{D}\) is of full Monge-Ampere mass, one has \[\int_{X\backslash(D\cup E)}\omega_{D}^{n}=\int_{X}\big{(}c_{1}(K_{X}+D)\big{)}^{n},\] which is to say that the volume of the singular metric \(\omega_{D}\) on \(X\backslash D\) is finite and equals exactly the volume of the line bundle \(K_{X}+D\). Theorem 1.1 shows that the almost-complete metric on \(X\backslash D\) is indeed equal to the above metric \(\omega_{D}\) constructed in [7]. It generalizes the previous result by Tian-Yau when \(K_{X}+D\) is big, nef and \(K_{X}+D\) is ample modulo \(D\) (because in this case one has \(E\subset D\)). We now come to the next part of the introduction in which we discuss a very recent question posed by Biquard-Guenancia [8] about the degeneration of conic Kahler-Einstein metrics. Our result gives a direct construction for the above metric \(\omega_{D}\) as the weak limit of conic Kahler-Einstein metrics. Let \(D\) be a simple normal crossing divisor on \(X\) such that \(K_{X}+D\) is big. By [9, 31, 44], there exists a unique closed positive current \(\omega_{\epsilon}\) of full Monge-Ampere mass in \(c_{1}(K_{X}+(1-\epsilon)D)\) so that \[\operatorname{Ric}\omega_{\epsilon}=-\omega_{\epsilon}+(1-\epsilon)[D], \tag{1.2}\] for each constant \(\epsilon\in(0,1)\) small enough. The equation is understood in the sense of currents. If \(K_{X}+D\) is additionally ample, then the metric \(\omega_{\epsilon}\) is, in fact, conic near \(D\) by [27] refining [10, 11, 29] (see also [25, 33, 36]). For this reason, we say that \(\omega_{\epsilon}\) is a Kahler-Einstein metric with a conic singularity of angle \(2\pi\epsilon\) along \(D\) even if \(K_{X}+D\) is merely big. It is thus natural to study the relation between \(\omega_{\epsilon}\) and \(\omega_{D}\) as \(\epsilon\to 0^{+}\). Biquard-Guenancia [8] asked whether the metric \(\omega_{\epsilon}\) converges to \(\omega_{D}\) as \(\epsilon\to 0\) in an appropriate sense. 
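For orientation, it may help to keep in mind the flat local models near a point of \(D=\{z_{1}=0\}\); these are only model computations, not statements about \(\omega_{\epsilon}\) or \(\omega_{D}\) themselves. The model cone metric of angle \(2\pi\epsilon\) and the model Poincare-type metric are \[\beta_{\epsilon}:=\frac{i\,dz_{1}\wedge d\bar{z}_{1}}{|z_{1}|^{2(1-\epsilon)}}+\sum_{j=2}^{n}i\,dz_{j}\wedge d\bar{z}_{j},\qquad\beta_{P}:=\frac{i\,dz_{1}\wedge d\bar{z}_{1}}{|z_{1}|^{2}\log^{2}|z_{1}|^{2}}+\sum_{j=2}^{n}i\,dz_{j}\wedge d\bar{z}_{j}.\] Both have finite volume near \(\{z_{1}=0\}\), since \(\int_{0}^{1/2}r^{2\epsilon-1}\,dr<\infty\) and \(\int_{0}^{1/2}\frac{dr}{r\log^{2}r}<\infty\), whereas the naive limit \(i\,dz_{1}\wedge d\bar{z}_{1}/|z_{1}|^{2}\) has infinite volume near \(\{z_{1}=0\}\); this is consistent with the finite total volume \(\int_{X}\big{(}c_{1}(K_{X}+D)\big{)}^{n}\) of \(\omega_{D}\) noted after Theorem 1.1. 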
Under some additional assumptions on the positivity of \(K_{X}+D\), the local \(\mathcal{C}^{\infty}\) convergence was established in [8, 26, 28]. Here is our second main result proving the weak convergence of \(\omega_{\epsilon}\to\omega_{D}\) in the setting where \(K_{X}+D\) is merely big. **Theorem 1.2**.: _Let \(X\) be a compact Kahler manifold and \(D\) be a simple normal crossing divisor such that \(K_{X}+D\) is big. Let \(\omega_{\epsilon}\) be the Kahler-Einstein metric solving (1.2) for small \(\epsilon>0\). Then, we have the weak convergence \(\omega_{\epsilon}\to\omega_{D}\) as \(\epsilon\to 0^{+}\) in the sense of currents._ We note that Theorem 1.2 remains true in the slightly more general situation where \(D=\sum_{j=1}^{m}a_{j}D_{j}\) is a divisor whose support is simple normal crossing such that \(a_{j}\leq 1\) for every \(1\leq j\leq m\). This can be proved essentially along the same lines as in our proof of Theorem 1.2. To simplify the presentation, we only consider a simple normal crossing divisor \(D\) as above. In our proof, we actually do not need the existence of solutions of (1.1) from [7]: we directly show that the sequence of potentials of \((\omega_{\epsilon})_{\epsilon}\) is convergent in \(L^{1}\) as \(\epsilon\to 0^{+}\) (we indeed prove the much stronger property that the sequence of potentials is decreasing in capacity; see Definition 2.3), and the limit current solves the equation (1.1). As far as we know, it is still open whether the metric \(\omega_{\epsilon}\) is, in general, a genuine metric on the complement of some proper analytic subset in \(X\), although this is known when the cohomology class \(K_{X}+(1-\epsilon)D\) is big and nef ([9]). Thus, we could not for the moment address the question of local smooth convergence of \(\omega_{\epsilon}\to\omega_{D}\) as \(\epsilon\to 0^{+}\). We would like to point out, however, that the weak convergence of a sequence of metrics as currents is usually the first step in proving the \(\mathcal{C}^{\infty}\) convergence locally outside a proper analytic subset in \(X\). In [8], under the assumption that \(c_{1}(K_{X}+(1-\epsilon)D)\) is semi-positive, the authors were able to use the domination principle to show that a suitably normalized sequence of potentials \((u_{\epsilon})_{\epsilon}\) is essentially decreasing to \(u_{D}\), hence obtaining the desired \(L^{1}\) convergence. In another context where \(K_{X}+D\) is ample, the \(L^{1}\) convergence of potentials of \(\omega_{\epsilon}\) was proved in [26], using a variational approach. Neither of these assumptions is available in our setting. We now comment on our proofs of the main results. The proof of Theorem 1.1 is based on a uniform lower bound for solutions of certain classes of complex Monge-Ampere equations whose proof uses an idea from [18]. The almost-completeness of \(\omega_{D}\) is deduced from more or less standard uniform \(\mathcal{C}^{\infty}\)-estimates for Monge-Ampere equations. For the uniqueness of \(\omega_{D}\), we need a slightly more general version of Yau's Schwarz lemma for singular Kahler-Einstein metrics; see Theorem 4.1 below for details. The proof of Theorem 1.2 is much more involved because there are no uniform \(\mathcal{C}^{\infty}\)-estimates available (recall that \(K_{X}+D\) is only big). We will indeed prove a more general result (Theorem 5.1 below) about the stability of complex Monge-Ampere equations. 
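Since Theorem 1.2 is stated in terms of convergence of currents, let us recall, for the reader's convenience, what this means concretely (this is completely standard): \[\omega_{\epsilon}\to\omega_{D}\ \text{weakly}\quad\Longleftrightarrow\quad\int_{X}\omega_{\epsilon}\wedge\chi\longrightarrow\int_{X}\omega_{D}\wedge\chi\quad\text{for every smooth }(n-1,n-1)\text{-form }\chi\text{ on }X.\] In our setting this follows from the \(L^{1}\) convergence of the potentials \(u_{\epsilon}\), together with the \(\mathcal{C}^{0}\) convergence of the background forms, and the \(L^{1}\) convergence is what is actually established in Section 5. 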
In contrast to some well-known non-collapsing situations where the \(L^{1}\) convergence of potentials of metrics is relatively easy to establish, the difficulty in our present setting (i.e., in Theorem 5.1) lies in the fact that the complex Monge-Ampere equation corresponding to (1.2) has a right-hand side measure of unbounded mass on \(X\) if \(\epsilon=0\). For this reason, the expected \(L^{1}\) convergence was obtained in previous works only under some additional assumption. The strategy of our proof of Theorem 5.1 is to show that in the absence of any additional positivity (except the minimal assumption that \(K_{X}+D\) is big), one can still use a sort of domination principle to obtain a quasi-decreasing property of the potentials of the metrics under consideration, and hence get the desired convergence. The version of the domination principle we need is the quantitative one obtained recently in [22] (see Theorem 2.2 in Section 2). It is robust enough to allow one to compare quasi-plurisubharmonic functions in relatively relaxed situations. **Acknowledgment.** The authors would like to thank Henri Guenancia for fruitful discussions. Duc-Viet Vu is partially supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)-Projektnummer 500055552 and by the ANR-DFG grant QuaSiDy, grant no ANR-21-CE40-0016. Part of this work was done when Quang-Tuan Dang was visiting the Department of Mathematics and Computer Science at the University of Cologne. **Conventions.** The notations \(\lesssim\), \(\gtrsim\) stand for inequalities up to a multiplicative uniform positive constant. We use \(C\) for a positive constant, whose value may change from line to line. For a divisor \(D\), in some contexts we simply write \(D\) instead of its support \(\mathrm{Supp}(D)\). Denote \(d=\partial+\bar{\partial}\) and \(d^{c}=\frac{i}{2\pi}(\bar{\partial}-\partial)\) so that \(dd^{c}=\frac{i}{\pi}\partial\bar{\partial}\). ## 2 Quantitative domination principle In this section, we recall the quantitative domination principle obtained in [22]. Let \(X\) be an \(n\)-dimensional compact Kahler manifold equipped with a Kahler metric \(\omega\). A function \(u:X\to\mathbb{R}\cup\{-\infty\}\) is quasi-plurisubharmonic (qpsh) if it can be locally written as the sum of a plurisubharmonic function and a smooth function. Let \(\theta\) be a smooth closed real \((1,1)\)-form. We say that \(u\) is \(\theta\)-plurisubharmonic (\(\theta\)-psh) if it is qpsh and \(\theta_{u}:=dd^{c}u+\theta\geq 0\) in the sense of currents. We let \(\mathrm{PSH}(X,\theta)\) denote the set of \(\theta\)-psh functions on \(X\). Recall that the cohomology class \(\{\theta\}\) is _big_ if there exists \(\rho\in\mathrm{PSH}(X,\theta)\) such that \(dd^{c}\rho+\theta\geq\delta\omega\) for some small constant \(\delta>0\). A \(\theta\)-psh function \(u\) is said to have _minimal singularities_ if it is less singular than any other \(\theta\)-psh function. Define \[V_{\theta}:=\sup\{\varphi\in\mathrm{PSH}(X,\theta):\varphi\leq 0\}.\] One can see that \(V_{\theta}\) is a \(\theta\)-psh function with minimal singularities. Let \(u_{1},\ldots,u_{p}\), for \(1\leq p\leq n\), be \(\theta\)-psh functions and put \(\theta_{u_{j}}:=dd^{c}u_{j}+\theta\). We recall how to define the non-pluripolar product \(\theta_{u_{1}}\wedge\cdots\wedge\theta_{u_{p}}\). We write locally \(\theta_{u_{j}}=dd^{c}v_{j}\), where \(v_{j}\) is psh. 
By [5, 9], one knows that the sequence of positive currents \[\mathbf{1}_{\cap_{j=1}^{p}\{v_{j}>-k\}}dd^{c}\max\{v_{1},-k\}\wedge\cdots\wedge dd^{c}\max\{v_{p},-k\}\] is increasing in \(k\in\mathbb{N}\) and converges to a closed positive current which is independent of the choice of the local potentials \(v_{j}\). Thus we obtain a well-defined global closed positive current on \(X\) which is called the _non-pluripolar product_ \(\theta_{u_{1}}\wedge\cdots\wedge\theta_{u_{p}}\) of \(\theta_{u_{1}},\ldots,\theta_{u_{p}}\). If \(p=n\), the resulting positive \((n,n)\)-current is a Borel measure putting no mass on pluripolar sets. For any \(u\in\mathrm{PSH}(X,\theta)\), the non-pluripolar complex Monge-Ampere measure of \(u\) is \[\theta_{u}^{n}:=(dd^{c}u+\theta)^{n}.\] Given a potential \(\phi\in\mathrm{PSH}(X,\theta)\), we let \(\mathrm{PSH}(X,\theta,\phi)\) denote the set of \(\theta\)-psh functions \(u\) such that \(u\leq\phi\). We also denote by \(\mathcal{E}(X,\theta,\phi)\) the set of \(u\in\mathrm{PSH}(X,\theta,\phi)\) of full Monge-Ampere mass with respect to \(\phi\), i.e., \(\int_{X}\theta_{u}^{n}=\int_{X}\theta_{\phi}^{n}\). If \(\phi=V_{\theta}\), we simply write \(\mathcal{E}(X,\theta)\). The following is a version of the domination principle. **Proposition 2.1** (Non-quantitative domination principle).: _Let \(u\in\mathcal{E}(X,\theta)\) and \(v\in\mathrm{PSH}(X,\theta)\). If \(e^{-v}\theta_{v}^{n}\geq e^{-u}\theta_{u}^{n}\) then \(u\geq v\)._ Proof.: For every \(a>0\), we set \(v_{a}=\max(u,v-a)\). By assumptions, we have \(\theta_{u}^{n}\leq e^{-a}\theta_{v}^{n}\) on the set \(\{u<v-a\}\). The comparison principle yields \[\int_{\{u<v_{a}\}}\theta_{u}^{n}\leq\int_{\{u<v_{a}\}}e^{-a}\theta_{v_{a}}^{n}\leq\int_{\{u<v_{a}\}}e^{-a}\theta_{u}^{n}.\] Since \(e^{-a}<1\) and \(\int_{\{u<v_{a}\}}\theta_{u}^{n}<\infty\), this forces \(\int_{\{u<v_{a}\}}\theta_{u}^{n}=0\); in other words, \(u\geq v_{a}\) almost everywhere with respect to \(\theta_{u}^{n}\). By the domination principle [15] we obtain \(u\geq v_{a}\), hence \(u\geq v-a\) on \(X\). Since the latter holds for every \(a>0\), we get \(u\geq v\) as desired. We now recall a particular case of the quantitative domination principle established in [22]. Let \(\mathcal{W}^{-}\) be the set of convex, non-decreasing functions \(\chi:\mathbb{R}_{\leq 0}\to\mathbb{R}_{\leq 0}\) such that \(\chi(0)=0\) and \(\chi(-\infty)=-\infty\). Let \(\theta\) be a closed smooth \((1,1)\)-form in a big cohomology class. Let \(V_{\theta}\) be the \(\theta\)-psh function with minimal singularities defined above. Let \(\varrho:=\int_{X}\theta_{V_{\theta}}^{n}>0\), which is the volume of the cohomology class of \(\theta\). For \(\chi\in\mathcal{W}^{-}\) and \(u\in\mathcal{E}(X,\theta)\), let \[E_{\chi,\theta}^{0}(u):=-\varrho^{-1}\int_{X}\chi(u-V_{\theta})\theta_{u}^{n}\] which is called _the (normalized) \(\chi\)-energy_ of \(u\). For every Borel set \(E\) in \(X\), recall that the capacity of \(E\) is given by \[\mathsf{cap}(E)=\mathsf{cap}_{\omega}(E):=\sup_{\{w\in\mathrm{PSH}(X,\omega): 0\leq w\leq 1\}}\int_{E}\omega_{w}^{n}.\] **Theorem 2.2** ([22, Theorem 3.9]).: _(Quantitative domination principle) Let \(A\geq 1\) be a constant and let \(\theta\leq A\omega\) be a closed smooth real \((1,1)\)-form in a big cohomology class, and \(\varrho:=\int_{X}\theta_{V_{\theta}}^{n}>0\). 
Let \(B\geq 1\) be a constant, \(\tilde{\chi}\in\mathcal{W}^{-}\) and \(u_{1},u_{2}\in\mathcal{E}(X,\theta)\) such that \(\tilde{\chi}(-1)=-1\) and_ \[E^{0}_{\tilde{\chi},\theta}(u_{1})+E^{0}_{\tilde{\chi},\theta}(u_{2})\leq B.\] _Assume that there exists a constant \(0\leq c<1\) and a Radon measure \(\mu\) on \(X\) satisfying_ \[\theta_{u_{1}}^{n}\leq c\theta_{u_{2}}^{n}+\varrho\mu\] _on \(\{u_{1}<u_{2}\}\) and \(c_{\mu}:=\int_{\{u_{1}<u_{2}\}}d\mu\leq 1\). Then there exists a constant \(C>0\) depending only on \(n,X\) and \(\omega\) such that_ \[\text{cap}_{\omega}\{u_{1}<u_{2}-\epsilon\}\leq\frac{C\operatorname{vol}(X)(A+B)^{2}}{\epsilon(1-c)h^{\circ n}(1/c_{\mu})},\] _for every \(0<\epsilon<1\), where \(h(s)=(-\tilde{\chi}(-s))^{1/2}\) for every \(0\leq s\leq\infty\)._ _In particular, if \(c_{\mu}=0\) then \(\text{cap}_{\omega}\{u_{1}<u_{2}-\epsilon\}=0\) for every \(\epsilon>0\), and then \(u_{1}\geq u_{2}\) on \(X\)._ The standard domination principle corresponds to the case where \(c=0\) and \(\mu=0\). We underline that it is crucial for us in applications later that we consider the situation where \(\mu\neq 0\). We continue with some more auxiliary results about the continuity of Monge-Ampere operators. We now work in the local context. Let \(\Omega\) be an open subset in \(\mathbb{C}^{n}\) and \(d\lambda\) denote the Lebesgue measure on \(\mathbb{C}^{n}\). **Definition 2.3**.: Let \((w_{j})_{j}\) be a sequence of Borel functions on \(\Omega\). We say that \((w_{j})_{j}\) is decreasing in capacity if for every open subset \(U\) in \(\Omega\), every compact \(K\Subset U\), and for all constants \(\delta>0\) and \(\epsilon>0\), there exists an index \(j_{\epsilon}\) such that \[\text{cap}\big{(}\{w_{j}-w_{j^{\prime}}\leq-\delta\}\cap K,U\big{)}\leq\epsilon\] if \(j^{\prime}\geq j\geq j_{\epsilon}\). We remark that the capacity in the definition is in the sense of Bedford-Taylor [4]. **Proposition 2.4**.: _Let \((w_{j})_{j}\) be a sequence of uniformly bounded psh functions such that \((w_{j})_{j}\) is decreasing in capacity, and \(w_{j}\) converges to some bounded psh function \(w\) in \(L^{1}_{loc}\) as \(j\to\infty\). Then we have \((dd^{c}w_{j})^{k}\to(dd^{c}w)^{k}\) as \(j\to\infty\) for every \(1\leq k\leq n\)._ Proof.: We argue as in the proof of the standard continuity of Monge-Ampere operators for decreasing sequences; see e.g. [17]. Following the same lines as in the aforementioned reference, we prove the desired limit by induction and assume now that \((dd^{c}w_{j})^{k-1}\) converges weakly to \((dd^{c}w)^{k-1}\) as \(j\to\infty\). To check the desired assertion for \(k\), it suffices to show that \(w_{j}(dd^{c}w_{j})^{k-1}\) converges weakly to \(w(dd^{c}w)^{k-1}\). Observe that we already have that any limit current of the sequence \(w_{j}(dd^{c}w_{j})^{k-1}\) is bounded from above by \(w(dd^{c}w)^{k-1}\) by the induction hypothesis and Hartogs' lemma. We can assume without loss of generality that \(-2\leq w_{j}\leq-1\) and \(-2\leq w\leq-1\). Denote by \(w_{j}^{\epsilon}\) the standard regularization of \(w_{j}\) by using convolution; cf. [17]. The problem is local. We can thus assume that \(\Omega\) is the unit ball in \(\mathbb{C}^{n}\). Let \(\psi(z):=|z|^{2}-1\). We can assume that all \(w_{j},w_{j}^{\epsilon}\) and \(w\) coincide with \(A\psi\) outside a compact \(K\) in \(\Omega\), where \(A\) is a big enough constant. We set \(\beta:=dd^{c}\psi>0\), which is the standard Kahler form in \(\mathbb{C}^{n}\). Let \(\delta>0\), \(\epsilon>0\) be constants. 
Choose an index \(j_{\epsilon,\delta}\) so that for every \(j\geq j_{\epsilon,\delta}\), \[\text{cap}(\{w_{j}-w\leq-\delta\}\cap K,\Omega)\leq\epsilon.\] Thus there exists a constant \(C>0\) independent of \(\delta,\epsilon\) such that for every \(j\geq j_{\epsilon,\delta}\) we have \[\int_{\Omega}w(dd^{c}w)^{k-1}\wedge\beta^{n-k+1} \leq\int_{\Omega}(w_{j}+\delta)(dd^{c}w)^{k-1}\wedge\beta^{n-k+1}+C\text{cap}(\{w_{j}-w\leq-\delta\}\cap K,\Omega)\] \[\leq\int_{\Omega}(w_{j}^{\epsilon}+\delta)(dd^{c}w)^{k-1}\wedge\beta^{n-k+1}+C\epsilon\] \[\leq\int_{\Omega}wdd^{c}w_{j}^{\epsilon}\wedge(dd^{c}w)^{k-2}\wedge\beta^{n-k+1}+C(\epsilon+\delta)\] \[\leq\int_{\Omega}(w_{j}^{\epsilon}+\delta)dd^{c}w_{j}^{\epsilon}\wedge(dd^{c}w)^{k-2}\wedge\beta^{n-k+1}+C(\epsilon+\delta).\] Note that in the above estimates, we have used integration by parts and the fact that \(w\) and \(w_{j}^{\epsilon}\) both vanish on \(\partial\Omega\). Repeating this argument, we obtain \[\int_{\Omega}w(dd^{c}w)^{k-1}\wedge\beta^{n-k+1}\leq\int_{\Omega}w_{j}^{\epsilon}(dd^{c}w_{j}^{\epsilon})^{k-1}\wedge\beta^{n-k+1}+C(\epsilon+\delta),\] for \(j\geq j_{\epsilon,\delta}\) and for some constant \(C>0\) big enough but independent of \(\epsilon\) and \(j\). Letting \(j\to\infty\), then \(\epsilon\to 0\), and then \(\delta\to 0\) we obtain \[\int_{\Omega}w(dd^{c}w)^{k-1}\wedge\beta^{n-k+1}\leq\liminf_{j\to\infty}\int_{\Omega}w_{j}(dd^{c}w_{j})^{k-1}\wedge\beta^{n-k+1}.\] This completes the proof. ## 3 Uniform lower bounds Let \(D\) be a simple normal crossing divisor in \(X\). Let \(h\) be a smooth Hermitian metric on \(\mathcal{O}_{X}(D)\) and \(s\) be a section of \(\mathcal{O}_{X}(D)\) defining \(D\). Let \(\alpha\) be a big cohomology class and \(\theta\) be a smooth representative in \(\alpha\). Let \((f_{j})_{j\geq 1}\) be an increasing sequence of continuous nonnegative functions converging pointwise to \(f\in\mathcal{C}^{0}(X)\) as \(j\to\infty\) such that \(f_{1}\not\equiv 0\). Let \((\theta_{j})_{j\in\mathbb{N}}\) be a sequence of smooth closed \((1,1)\)-forms in big cohomology classes converging to \(\theta\) in the \(\mathcal{C}^{0}\) topology as \(j\to\infty\). Let \(u_{j}\in\mathcal{E}(X,\theta_{j})\) be a solution of the equation \[(dd^{c}u_{j}+\theta_{j})^{n}=e^{u_{j}}|s|_{h}^{-2}f_{j}\omega^{n} \tag{3.1}\] for every \(j\geq 0\). We have the following standard observation. **Lemma 3.1**.: _There exists a constant \(C>0\) such that \(u_{j}\leq C\) for every \(j\)._ Proof.: Let \(c_{j}:=\int_{X}f_{j}\omega^{n}\) which converges to \(c:=\int_{X}f\omega^{n}>0\) as \(j\to\infty\). By compactness, we have \[\int_{X}u_{j}f_{j}\omega^{n}\geq c_{j}\sup_{X}u_{j}+\int_{X}(u_{j}-\sup_{X}u_{j})f_{j}\omega^{n}\geq c_{j}\sup_{X}u_{j}+\int_{X}(u_{j}-\sup_{X}u_{j})f\omega^{n}\geq c_{j}\sup_{X}u_{j}-C_{1},\] for some constant \(C_{1}\) independent of \(j\). Using Jensen's inequality and the fact that \(|s|_{h}\leq 1\), we have that \[\int_{X}u_{j}f_{j}\omega^{n}\leq c_{j}\log\Big{(}\frac{1}{c_{j}}\int_{X}e^{u_{j}}f_{j}\omega^{n}\Big{)}\lesssim\log\int_{X}(dd^{c}u_{j}+\theta_{j})^{n}=\log\int_{X}(dd^{c}V_{\theta_{j}}+\theta_{j})^{n}\leq C\] for a uniform constant \(C>0\) (independent of \(j\)). This provides a uniform upper bound for \(u_{j}\). Our main result in this section is Proposition 3.4 providing a uniform lower bound for \(u_{j}\). We use an idea from [18]. Let \(\gamma\in(0,1/2]\) be a small fixed constant. 
For a closed smooth form \(\eta\), we denote by \(\{\eta\}\) the cohomology class of \(\eta\), and if \(E\) is a divisor, we denote by \(\{E\}\) the cohomology class of \([E]\). Let \(\mathcal{I}_{D}\) be the ideal sheaf defining \(D\). From now on, we fix a Kahler current \(T=dd^{c}\rho+\theta\) in \(\alpha\) with analytic singularities associated to a coherent analytic sheaf \(\mathcal{I}_{T}\) on \(X\) and a constant \(\delta>0\) such that \[T\geq\delta\omega. \tag{3.2}\] **Lemma 3.2**.: _There exists \(\pi=\pi_{\gamma}:\widehat{X}\to X\) a composition of blowups with smooth centers and a smooth Kahler form \(\widehat{\omega}\) and an effective divisor \(E\) on \(\widehat{X}\) so that the following conditions are satisfied:_ _(i) the total transform \(\pi^{*}(\mathcal{I}_{D}\cdot\mathcal{I}_{T})\) of \(\mathcal{I}_{D}\cdot\mathcal{I}_{T}\) is generated by a divisor \(\tilde{D}\), and \(\operatorname{Supp}\tilde{D}\cup\operatorname{Supp}E\) is of simple normal crossings,_ _(ii)_ \[\pi^{*}\{\theta\}=\{\widehat{\omega}\}+\{E\},\] _and_ \[\int_{\widehat{X}}\widehat{\omega}^{n}\geq\operatorname{vol}(\{\theta\})-\gamma. \tag{3.3}\] Proof.: By [9, Proposition 1.19], there exists \(\pi^{\prime}:\widehat{X}^{\prime}\to X\) a composition of blowups with smooth centers so that \[\pi^{\prime*}\{\theta\}=\{\widehat{\omega}^{\prime}\}+\{E^{\prime}\},\] where \(\widehat{\omega}^{\prime}\) is a Kahler form on \(\widehat{X}^{\prime}\) and \(E^{\prime}\) is an effective divisor, and \[\int_{\widehat{X}^{\prime}}\widehat{\omega}^{\prime n}\geq\operatorname{vol}(\{\theta\})-\gamma/2. \tag{3.4}\] Let \(E^{\prime}\) be the exceptional divisor of \(\pi^{\prime}\) and \(\mathcal{I}_{E^{\prime}}\) be the ideal sheaf generated by \(E^{\prime}\). By performing some more blowups to principalize \(\mathcal{I}_{E^{\prime}}\cdot\pi^{\prime*}(\mathcal{I}_{D}\cdot\mathcal{I}_{T})\), we obtain a composition of blowups \(\pi^{\prime\prime}:\widehat{X}\to\widehat{X}^{\prime}\) so that if \(\pi:=\pi^{\prime}\circ\pi^{\prime\prime}\), then \(\pi^{*}(\mathcal{I}_{D}\cdot\mathcal{I}_{T})\) is given by some divisor \(\widehat{D}\) with simple normal crossings support and the support of this divisor also has simple normal crossings with the exceptional divisors. Repeating arguments from [9, Proposition 1.19] shows that \(\pi\) satisfies the required properties. We denote by \(\widehat{D}\) the support of the divisor generating \(\pi^{*}\mathcal{I}_{D}\). Hence, \(\widehat{D}\) is of simple normal crossings. Let \(E\) be the exceptional divisor of \(\pi\) defined in Lemma 3.2. Let \[\mu_{j}:=f_{j}|s|_{h}^{-2}\omega^{n},\qquad\mu:=f|s|_{h}^{-2}\omega^{n}.\] Since \(u_{j}\in\mathcal{E}(X,\theta_{j})\) is the solution of the equation \[(dd^{c}u_{j}+\theta_{j})^{n}=e^{u_{j}}\mu_{j},\] we get \[(\pi^{*}\theta_{j}+dd^{c}(u_{j}\circ\pi))^{n}=e^{u_{j}\circ\pi}\pi^{*}\mu_{j}.\] Let \(\widehat{s}\) be a section in \(\mathcal{O}_{\widehat{X}}(\widehat{D})\) defining \(\widehat{D}\), and fix a smooth Hermitian metric \(\widehat{h}\) on that line bundle such that \(|\widehat{s}|_{\widehat{h}}\) is very small (to be made precise later). Define \(\widehat{\mu}_{j}:=\pi^{*}\mu_{j}\), and \(\widehat{\mu}:=\pi^{*}\mu\) and \[\widehat{f}_{j}:=\pi^{*}\mu_{j}/(|\widehat{s}|_{\widehat{h}}^{-2}\widehat{\omega}^{n}),\quad\widehat{f}:=\pi^{*}\mu/(|\widehat{s}|_{\widehat{h}}^{-2}\widehat{\omega}^{n}).\] **Lemma 3.3**.: _We have that \(\widehat{f}_{j},\widehat{f}\) are continuous functions and \(\widehat{f}_{j}\) increases pointwise to \(\widehat{f}\)._ Proof.: It suffices to work locally. 
Let \((\widehat{z}_{1},\dots,\widehat{z}_{n})\) be a local coordinate system around a point \(\widehat{a}\) in \(\widehat{X}\) and \((z_{1},\dots,z_{n})\) be a local coordinate system around \(\pi(\widehat{a})\) such that (i) \(s(z)=z_{1}\cdots z_{k}\) near \(\pi(\widehat{a})\), (ii) \(\widehat{D}\) is given near \(\widehat{a}\) by the equation \(\widehat{z}_{1}\cdots\widehat{z}_{m}=0\), (iii) \(z_{j}\circ\pi=\widehat{z}_{1}^{r_{1j}}\cdots\widehat{z}_{m}^{r_{mj}}\) for \(1\leq j\leq k\), and some positive integers \(r_{1j},\dots,r_{mj}\). It follows that \[\pi^{*}(dz_{j}\wedge d\bar{z}_{j})=\sum_{1\leq p,q\leq m}a_{pq}\widehat{z}_{p}^{-1}\overline{\widehat{z}_{q}^{-1}}|z_{j}\circ\pi|^{2}d\widehat{z}_{p}\wedge d\overline{\widehat{z}_{q}},\] for some constants \(a_{pq}\) and \(1\leq j\leq k\). Hence \[\pi^{*}\omega^{n}=a\prod_{j=1}^{k}|z_{j}\circ\pi|^{2}\prod_{j=1}^{m}|\widehat{z}_{j}|^{-2}\widehat{\omega}^{n}=a^{\prime}|s\circ\pi|_{\widehat{h}}^{2}|\widehat{s}|_{\widehat{h}}^{-2}\widehat{\omega}^{n},\] where \(a\) and \(a^{\prime}\) are smooth functions. We deduce that \(\widehat{f}_{j},\widehat{f}\) are continuous functions, and \(\widehat{f}_{j}\) increases pointwise to \(\widehat{f}\) as desired. Since \(\pi^{*}\theta-\widehat{\omega}\) is cohomologous to \([E]\), we get \[dd^{c}\varphi_{E}+\pi^{*}\theta-\widehat{\omega}=[E]\] for some negative \((\pi^{*}\theta-\widehat{\omega})\)-psh function \(\varphi_{E}\). Set \[\widehat{\phi}:=-(n+2)\log(-\log|\widehat{s}|_{\widehat{h}})+\varphi_{E}.\] We compute \[dd^{c}\widehat{\phi}=-\frac{(n+2)\Theta_{\widehat{h}}(\widehat{D})}{(-\log|\widehat{s}|_{\widehat{h}})}+(n+2)\frac{d\log|\widehat{s}|_{\widehat{h}}\wedge d^{c}\log|\widehat{s}|_{\widehat{h}}}{(-\log|\widehat{s}|_{\widehat{h}})^{2}}+[E]+\widehat{\omega}-\pi^{*}\theta. \tag{3.5}\] Consequently we have \[dd^{c}\widehat{\phi}+\pi^{*}\theta\geq 3\widehat{\omega}/4 \tag{3.6}\] because \(|\widehat{s}|_{\widehat{h}}\) is very small. Moreover, there holds \[e^{\widehat{\phi}}\widehat{\mu}\leq\frac{\widehat{f}\widehat{\omega}^{n}}{|\widehat{s}|^{2}(-\log|\widehat{s}|)^{n+2}}\lesssim g\widehat{\omega}^{n}\] using that \(|\widehat{s}|_{\widehat{h}}\leq 1\). By [18, Lemma 4.6] the density \(g\) satisfies \[\int_{\widehat{X}}g|\log g|^{n+1/2}f\widehat{\omega}^{n}<\infty.\] By [32] or [18, Theorem 1.5], we can find \(\varphi\) a bounded \(\widehat{\omega}/2\)-psh function satisfying \[(dd^{c}\varphi+\widehat{\omega}/2)^{n}=e^{\varphi+\widehat{\phi}}\widehat{\mu}.\] We note that \(\varphi\) is bounded globally. Define \[\widehat{\psi}:=\varphi+\widehat{\phi}\] which is a \(\pi^{*}\theta_{j}\)-psh function for \(j\geq j_{\gamma}\) large enough since it follows from (3.6) that \[dd^{c}\widehat{\psi}+\pi^{*}\theta\geq dd^{c}\varphi+3\widehat{\omega}/4\geq\widehat{\omega}/4,\] and \(\pi^{*}\theta_{j}-\pi^{*}\theta\leq\widehat{\omega}/4\) for \(j\) big enough (depending on \(\gamma\)). **Proposition 3.4**.: _We have_ \[u_{j}\circ\pi\geq\widehat{\psi}, \tag{3.7}\] _for \(j\) big enough. 
Moreover there exists a smooth Hermitian metric \(\widehat{h}\) (depending on \(\gamma\)) on \(\mathcal{O}_{\widehat{X}}(\widehat{D})\) such that_ \[\int_{\widehat{X}}(dd^{c}\widehat{\psi}+\pi^{*}\theta_{j})^{n}\geq\mathrm{vol}(\{\theta_{j}\})-2\gamma\] _for \(j\) big enough (depending on \(\gamma\))._ Proof.: Observe that \[(dd^{c}\widehat{\psi}+\pi^{*}\theta_{j})^{n}\geq(dd^{c}\varphi+\widehat{\omega}/2)^{n}=e^{\varphi+\widehat{\phi}}\widehat{\mu}.\] This, combined with Lemma 3.3 and the domination principle (note that \(\widehat{\psi}\) might not be of full mass, but \(u_{j}\) is) yields the first desired assertion. Indeed, from the above inequality, we have \[e^{-\widehat{\psi}}(dd^{c}\widehat{\psi}+\pi^{*}\theta_{j})^{n}\geq e^{-u_{j}\circ\pi}(dd^{c}(u_{j}\circ\pi)+\pi^{*}\theta_{j})^{n}.\] The domination principle (cf. Proposition 2.1) yields \(u_{j}\circ\pi\geq\widehat{\psi}\) on \(\widehat{X}\). We now check the second desired assertion. Put \(v_{1}:=-(n+2)\log(-\log|\widehat{s}|_{\widehat{h}}^{2})\), and \(v_{2}:=-\sqrt{-\log|\widehat{s}|_{\widehat{h}}}\). We choose \(\widehat{h}\) so that \(|\widehat{s}|_{\widehat{h}}^{2}\) is so small that \(v_{2}\) is a \(\gamma\widehat{\omega}\)-psh function, and \(v_{1}\geq-(-v_{2})^{1/2}\). It follows that \(-(-v_{2})^{1/2}\in\mathcal{E}(\widehat{X},\gamma\widehat{\omega})\) (by [13, Example 2.7]), hence, \(v_{1}\in\mathcal{E}(\widehat{X},\gamma\widehat{\omega})\) by monotonicity of non-pluripolar products. Direct computations show \[dd^{c}\widehat{\psi}+\pi^{*}\theta=dd^{c}v_{1}+dd^{c}\varphi_{E}+dd^{c}\varphi+\pi^{*}\theta\] which is \[\geq(dd^{c}v_{1}+\gamma\widehat{\omega})+(dd^{c}\varphi+\widehat{\omega}/2)+(1-\gamma-1/2)\widehat{\omega}.\] Let \(R\) be the right-hand side of the last inequality. Since \(v_{1}\in\mathcal{E}(\widehat{X},\gamma\widehat{\omega})\) and \(\varphi\in\mathcal{E}(\widehat{X},\widehat{\omega}/2)\), one infers that \[\int_{\widehat{X}}R^{n}=\int_{\widehat{X}}\widehat{\omega}^{n}\geq\operatorname{vol}(\{\theta\})-\gamma.\] Consequently \[\int_{\widehat{X}}(dd^{c}\widehat{\psi}+\pi^{*}\theta)^{n}\geq\operatorname{vol}(\{\theta\})-\gamma.\] Since \(\theta_{j}\) is close to \(\theta\) in the sup norm, we deduce that \[\int_{\widehat{X}}(dd^{c}\widehat{\psi}+\pi^{*}\theta_{j})^{n}\geq\operatorname{vol}(\{\theta_{j}\})-2\gamma\] for \(j\) big enough. This finishes the proof. Let \(\psi_{\gamma}:=\pi_{*}\widehat{\psi}\) which is a \(\theta_{j}\)-psh function for \(j\) large enough. **Corollary 3.5**.: _Let \(u\) be an \(L^{1}\) limit of \((u_{j})_{j}\) as \(j\to\infty\). Then \(u\in\mathcal{E}(X,\theta)\) and_ \[(dd^{c}u+\theta)^{n}\geq e^{u}f|s|_{h}^{-2}\omega^{n}.\] Proof.: Without loss of generality, we can assume that \(u_{j}\to u\) in \(L^{1}\). By Proposition 3.4, we get \(u\geq\psi_{\gamma}\) and \(\int_{X}\theta_{\psi_{\gamma}}^{n}\geq\operatorname{vol}(\{\theta\})-C\gamma\) for some constant \(C>0\) independent of \(\gamma\). This, combined with the monotonicity of non-pluripolar products, gives \[\int_{X}\theta_{u}^{n}\geq\int_{X}\theta_{\psi_{\gamma}}^{n}\geq\operatorname{vol}(\{\theta\})-C\gamma\] for every constant \(\gamma>0\). Letting \(\gamma\to 0\) gives \(u\in\mathcal{E}(X,\theta)\). Let \(\epsilon>0\) be a constant and \(\theta_{\epsilon}^{\prime}:=\theta+\epsilon\omega\). Let \(j_{\epsilon}\in\mathbb{N}\) be such that \(\theta_{j}\leq\theta_{\epsilon}^{\prime}\) for every \(j\geq j_{\epsilon}\). Let \(u_{j}^{\prime}:=(\sup_{k\geq j}u_{k})^{*}\) which is \(\theta_{\epsilon}^{\prime}\)-psh for \(j\geq j_{\epsilon}\). Note that \(u_{j}^{\prime}\) decreases to \(u\). 
By extracting a subsequence if necessary, we can assume that \[(dd^{c}u_{j}^{\prime}+\theta_{\epsilon}^{\prime})^{n}\to\nu\] weakly as \(j\to\infty\). Using the fact that \(u\in\mathcal{E}(X,\theta)\) and \(u_{j}^{\prime}\) decreases to \(u\), we obtain that \[\|\nu-(dd^{c}u+\theta)^{n}\|\lesssim\mathrm{vol}(\{\theta_{\epsilon}^{\prime}\})-\mathrm{vol}(\{\theta\})\lesssim\epsilon, \tag{3.8}\] where \(\|\nu\|\) denotes the mass norm of the (signed) measure \(\nu\). On the other hand, for every nonnegative continuous function \(g\) with compact support in \(X\backslash W\) (\(W\) is the union of \(D\) and the non-Kahler locus of \(\{\theta\}\)), we have \[g(dd^{c}u_{j}^{\prime}+\theta_{\epsilon}^{\prime})^{n}\geq ge^{\inf_{k\geq j}u_{k}}f_{j}|s|_{h}^{-2}\omega^{n}\] which converges to \(ge^{u}f|s|_{h}^{-2}\omega^{n}\) as \(j\to\infty\) by the Lebesgue dominated convergence theorem (note that \(\inf_{k\geq j}u_{k}\) converges pointwise almost everywhere to \(u\)). Hence letting \(j\to\infty\) gives \[g\nu\geq ge^{u}f|s|_{h}^{-2}\omega^{n}.\] Combining this with (3.8) implies \[g(dd^{c}u+\theta)^{n}\geq ge^{u}f|s|_{h}^{-2}\omega^{n}-C\|g\|_{L^{\infty}}\epsilon\] for every constant \(\epsilon>0\). Letting \(\epsilon\to 0\), we obtain the desired inequality. This finishes the proof. Let \(g\) be a function on \(X\) and \(\eta\) be a smooth closed \((1,1)\)-form in a pseudoeffective class. If there is a function \(v\in\mathrm{PSH}(X,\eta)\) with \(v\leq g\), then we define \[P_{\eta}(g):=\big{(}\sup\{v\in\mathrm{PSH}(X,\eta):v\leq g\}\big{)}^{*}\] which is a well-defined \(\eta\)-psh function. Let \(\epsilon>0\) be a small constant. Let \(\theta_{\epsilon}^{\prime}:=\theta+\epsilon\omega\). Let \(j_{\epsilon}\in\mathbb{N}\) be such that \(\theta_{j}\leq\theta_{\epsilon}^{\prime}\) for every \(j\geq j_{\epsilon}\). Hence \(u_{j}\) is \(\theta_{\epsilon}^{\prime}\)-psh for \(j\geq j_{\epsilon}\). **Corollary 3.6**.: _Let_ \[u_{j,\epsilon}^{\prime\prime}:=\lim_{k\to\infty}P_{\theta_{\epsilon}^{\prime}}(\min\{u_{j},\ldots,u_{k}\}),\] _for \(j\geq j_{\epsilon}\). Then \(u_{j,\epsilon}^{\prime\prime}\) is a well-defined \(\theta_{\epsilon}^{\prime}\)-psh function and increases to some \(\theta_{\epsilon}^{\prime}\)-psh function \(u_{\epsilon}^{\prime\prime}\) as \(j\to\infty\) satisfying:_ _(i) \(u_{\epsilon}^{\prime\prime}\leq u\), where \(u\) is an \(L^{1}\)-limit of the sequence \((u_{j})_{j}\) as \(j\to\infty\),_ _(ii) \(u_{\epsilon}^{\prime\prime}\) decreases to some \(\theta\)-psh function \(u^{\prime\prime}\in\mathcal{E}(X,\theta)\) as \(\epsilon\) decreases to \(0\)._ Proof.: Since \(u_{j}\geq\psi_{\gamma}\), we see that \(u_{j,\epsilon}^{\prime\prime}\) is well-defined because \(\psi_{\gamma}\) is a candidate in the envelope defining \(P_{\theta_{\epsilon}^{\prime}}(\min\{u_{j},\ldots,u_{k}\})\) for \(k\geq j\) (and \((u_{j})_{j}\) is bounded from above uniformly in \(j\), see Lemma 3.1). The fact that \(u_{j,\epsilon}^{\prime\prime}\) is increasing (for \(\epsilon\) fixed) is clear. By the definition of the envelope \(P_{\theta_{\epsilon}^{\prime}}\), one sees that \(u_{\epsilon}^{\prime\prime}\) is decreasing as \(\epsilon\searrow 0\). Since \(u_{\epsilon}^{\prime\prime}\geq\psi_{\gamma}\), we see that \(u^{\prime\prime}:=\lim_{\epsilon\to 0}u_{\epsilon}^{\prime\prime}\geq\psi_{\gamma}\) for every \(\gamma\), hence, \(u^{\prime\prime}\in\mathcal{E}(X,\theta)\). ## 4 Almost complete Kahler-Einstein metrics In this section, we prove Theorem 1.1. We begin by recalling the following definition for almost complete metrics. 
In what follows, we interchangeably use the terms Kahler forms and Kahler metrics. We start with a slightly more general version of Yau's Schwarz lemma for volume forms ([43] and also [38]). **Theorem 4.1**.: _Let \((M,\eta_{1})\) be a complete Kahler manifold with Ricci curvature bounded from below and the scalar curvature bounded from below by a constant \(-\frac{nK_{1}}{2\pi}\). Let \(N\) be a complex manifold with \(\dim N=\dim M\) and \(\eta_{2}\) be a closed positive \((1,1)\)-current on \(N\) having a well-defined Ricci curvature. Let \(f:M\to N\) be a non-degenerate holomorphic map. Assume that the following conditions are satisfied:_ _(i) \(2\pi\operatorname{Ric}\eta_{2}\leq-K_{2}\eta_{2}\) as currents for some constant \(K_{2}>0\),_ _(ii) There exists a closed subset \(E\) in \(N\) such that \(f^{-1}(E)\) is of zero Lebesgue measure in \(M\), and \(\eta_{2}\) is a smooth Kahler form on \(N\backslash E\), and \(\eta_{2}^{n}\) extends to a smooth form on \(N\)._ _Then we have \(K_{1}\geq 0\), and_ \[f^{*}\eta_{2}^{n}\leq\left(\frac{K_{1}}{K_{2}}\right)^{n}\eta_{1}^{n}.\] We note that in (i) there is the factor \(2\pi\) because our \(\operatorname{Ric}\eta_{2}\) (see Introduction) is in fact equal to \(\frac{1}{2\pi}\) times the usual Ricci curvature of the metric \(\eta_{2}\). Proof.: The proof is almost identical to that of [38, Thm. 1.2] (which goes back to [43]). Set \[u:=\frac{f^{*}\eta_{2}^{n}}{\eta_{1}^{n}}.\] Since \(\eta_{2}^{n}\) is a smooth form on \(N\), we see that \(u\) is a well-defined nonnegative smooth function on \(M\). Let \(\Delta\) denote the Laplacian with respect to \(\eta_{1}\). At any point \(x\in M\backslash f^{-1}(E)\), we compute (recall \(dd^{c}:=\frac{i}{\pi}\partial\bar{\partial}\)) \[\Delta u =u\Delta\log u+\frac{|\nabla u|_{\eta_{1}}^{2}}{u}\] \[=\frac{2n\pi u\,dd^{c}\log u\wedge\eta_{1}^{n-1}}{\eta_{1}^{n}}+\frac{|\nabla u|_{\eta_{1}}^{2}}{u}\] \[=2\big{(}-\operatorname{tr}_{\eta_{1}}f^{*}(2\pi\operatorname{Ric}\eta_{2})+\operatorname{tr}_{\eta_{1}}(2\pi\operatorname{Ric}\eta_{1})\big{)}u+\frac{|\nabla u|_{\eta_{1}}^{2}}{u}\] \[\geq 2K_{2}\operatorname{tr}_{\eta_{1}}f^{*}\eta_{2}\cdot u-2nK_{1}\cdot u\] \[\geq 2nK_{2}u^{1+\frac{1}{n}}-2nK_{1}u,\] using the arithmetic-geometric mean inequality. Since both sides of the last inequality are continuous on \(M\), we infer \[\Delta u\geq 2nK_{2}u^{1+\frac{1}{n}}-2nK_{1}u\] on \(M\). Now it suffices to repeat the arguments in [43, Page 200-201] to obtain the desired estimate. Recall that a closed positive \((1,1)\)-current \(\eta\) on a complex manifold \(M\) is said to be _a singular Kahler-Einstein metric of negative Ricci curvature_ on \(M\) if \(\eta\) has a well-defined Ricci curvature and \(\operatorname{Ric}\eta=-\eta\). **Definition 4.2**.: ([37] and also [12]) Let \(M\) be a Kahler manifold and let \(\eta\) be a singular Kahler-Einstein metric of negative Ricci curvature on \(M\). We say that \(\eta\) is _almost complete_ if there exists a proper analytic subset \(E\subset M\), and a sequence of complete smooth Kahler metrics \(\omega_{j}\) on \(M\) and a sequence of positive numbers \(t_{j}\) converging to \(1\) such that the following conditions are fulfilled: (i) \(\operatorname{Ric}\omega_{j}\geq-t_{j}\omega_{j}\) for every \(j\in\mathbb{N}\), (ii) \(\eta\) is the limit of \(\omega_{j}\) in the local \(\mathcal{C}^{\infty}\) topology on \(M\backslash E\) as \(j\to\infty,\) (iii) \(\eta\) is a smooth Kahler form on \(M\backslash E\). 
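For a simple illustration of Definition 4.2: if \(\eta\) is itself a complete smooth Kahler metric on \(M\) with \(\operatorname{Ric}\eta=-\eta\), then \(\eta\) is almost complete; one may take \(E=\emptyset\), \(\omega_{j}=\eta\) and \(t_{j}=1\) for every \(j\), and conditions (i)-(iii) hold trivially. The point of the definition is to allow metrics which are genuine Kahler forms only off a proper analytic subset \(E\), as in Theorem 1.1. 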
We stress that an almost-complete metric should be regarded as a metric on \(M\) rather than as a smooth metric on \(M\backslash E\). The key is the following property of almost-complete metrics. **Lemma 4.3** ([37]).: _Let \(M\) be a Kahler manifold. Then there exists at most one almost complete Kahler-Einstein metric \(\eta\) on \(M\) with \(\operatorname{Ric}\eta=-\eta\)._ Proof.: We reproduce the proof from [37] for the readers' convenience. Suppose that \(\eta_{1},\eta_{2}\) are two almost complete Kahler metrics on \(M\) with \(\operatorname{Ric}\eta_{k}=-\eta_{k}\) for \(k=1,2\). By definition, there are sequences of complete Kahler metrics \(\omega_{jk}\) and positive real numbers \(t_{jk}\) converging to \(1\) such that \[\operatorname{Ric}\omega_{jk}\geq-t_{jk}\omega_{jk}\] and \(\omega_{jk}\) converges to \(\eta_{k}\) in the local \(\mathcal{C}^{\infty}\) topology outside a proper analytic subset. By Theorem 4.1 applied to the identity map of \(M\), we get \[\eta_{1}^{n}\leq t_{j2}^{n}\omega_{j2}^{n},\quad\eta_{2}^{n}\leq t_{j1}^{n}\omega_{j1}^{n}\] for every \(j\in\mathbb{N}\). Letting \(j\to\infty\) and using the pointwise convergence of \(\omega_{jk}\) to \(\eta_{k}\) give \(\eta_{1}^{n}=\eta_{2}^{n}\). Thus \[\eta_{1}=-\operatorname{Ric}\eta_{1}=dd^{c}\log\eta_{1}^{n}=dd^{c}\log\eta_{2}^{n}=-\operatorname{Ric}\eta_{2}=\eta_{2}.\] This finishes the proof. **Lemma 4.4**.: _Let \(D\) be a simple normal crossing divisor such that \(K_{X}+D\) is both big and nef. Let \(\epsilon\in[0,1)\) and \(f\) be a smooth function. Let \(\theta\) be a smooth representative of the big cohomology class \(c_{1}(K_{X}+D)\). Let \(v_{\epsilon}\in\mathcal{E}(X,\theta+\epsilon\omega)\) be the unique solution of the equation_ \[(dd^{c}v_{\epsilon}+\theta+\epsilon\omega)^{n}=e^{v_{\epsilon}+f}|s|_{h}^{-2}\omega^{n}. \tag{4.1}\] _Let \(E\) be the non-Kahler locus of \(K_{X}+D\). Then the \(\mathcal{C}^{k}\)-norm of \(v_{\epsilon}\) on compact subsets in \(X\backslash(D\cup E)\) is bounded uniformly in \(\epsilon\) for every \(k\)._ Proof.: The domination principle (cf. Proposition 2.1) ensures that \(v_{\epsilon}\) is decreasing as \(\epsilon\searrow 0\). It follows that \(v_{\epsilon}\leq C\) for some uniform constant \(C>0\). Theorem 5.1 shows that \(v_{\epsilon}\) converges weakly toward some \(v\) as \(\epsilon\to 0^{+}\). Moreover, \(v\) belongs to \(\mathcal{E}(X,\theta)\) and satisfies \[(dd^{c}v+\theta)^{n}=e^{v+f}|s|_{h}^{-2}\omega^{n}.\] We set \(X_{0}:=X\backslash(D\cup E)\). We are going to prove that the family \((v_{\epsilon})\) is pre-compact in \(\mathcal{C}^{\infty}_{\mathrm{loc}}(X_{0})\), where \(E\) is the non-ample locus of \(K_{X}+D\). This amounts to establishing \(\mathcal{C}^{k}_{\mathrm{loc}}(X_{0})\) estimates for all \(k\in\mathbb{N}\) thanks to the Arzela-Ascoli theorem. According to the Evans-Krylov theory and Schauder interior estimates (the so-called bootstrapping arguments for elliptic PDEs), obtaining local \(L^{\infty}\) and Laplacian estimates on \(X_{0}\) suffices. Since \(\theta\) represents a big cohomology class, we can find a \(\theta\)-psh function \(\rho\) with analytic singularities such that \(\rho\to-\infty\) near \(E\) and \(dd^{c}\rho+\theta\geq 2\delta\omega\) for some \(\delta>0\). We fix \(c>0\) so small that \(cdd^{c}\log|s|_{h}^{2}+\delta\omega\geq 0\). It follows from [14, Thm. 
3.1, Step 1] that \[v\geq c\log|s|_{h}^{2}+\rho-C\] for \(C>0\) depending on \(X\), \(\omega\), \(n\), \(c\), \(\delta\) and an upper bound for \(\int_{X}e^{-2P_{\omega}(c^{-1}(v-V_{\theta}))}\omega^{n}\) where \(P_{\omega}(h)\) denotes the largest \(\omega\)-psh function lying below \(h\). Therefore, for every \(\epsilon\in(0,1]\), \[v_{\epsilon}\geq c\log|s|_{h}^{2}+\rho-C. \tag{4.2}\] We are now in a position to establish the local Laplacian estimate. We recall Siu-Yau's inequality (cf. [35]): let \(\tau\) and \(\tau^{\prime}\) be two Kahler forms on a complex manifold and let \(f\) be defined by \(\tau^{\prime n}=e^{f}\tau^{n}\). If the holomorphic bisectional curvature of \(\tau\) is bounded from below by \(-B\) for some constant \(B>0\), then \[\Delta_{\tau^{\prime}}\log\mathrm{tr}_{\tau}(\tau^{\prime})\geq\frac{\Delta_{\tau}f}{\mathrm{tr}_{\tau}(\tau^{\prime})}-B\mathrm{tr}_{\tau^{\prime}}(\tau).\] We are going to apply this with \(\tau=\omega\) and \(\tau^{\prime}=\omega_{\epsilon}^{\prime}:=dd^{c}v_{\epsilon}+\theta+\epsilon\omega\). We observe that the holomorphic bisectional curvature of \(\omega\) is obviously bounded on \(X\) by a constant \(B>0\), hence we obtain the following inequality \[\Delta_{\omega_{\epsilon}^{\prime}}\log\mathrm{tr}_{\omega}(\omega_{\epsilon}^{\prime})\geq\frac{\Delta_{\omega}(v_{\epsilon}+F)}{\mathrm{tr}_{\omega}(\omega_{\epsilon}^{\prime})}-B\mathrm{tr}_{\omega_{\epsilon}^{\prime}}(\omega). \tag{4.3}\] We have \(|\Delta_{\omega}F_{\epsilon}|\leq nA\) for some uniform constant \(A>0\). Clearly, we have \[\Delta_{\omega}v_{\epsilon}=\mathrm{tr}_{\omega}(dd^{c}v_{\epsilon}+C_{0}\omega)-nC_{0}\geq-nC_{0},\] for some uniform \(C_{0}>0\). Combining this with the elementary inequality \(\mathrm{tr}_{\omega}(\omega_{\epsilon}^{\prime})\mathrm{tr}_{\omega_{\epsilon}^{\prime}}(\omega)\geq n\), we obtain \[\frac{\Delta_{\omega}(v_{\epsilon}+F)}{\mathrm{tr}_{\omega}(\omega_{\epsilon}^{\prime})}\geq-(A+C_{0})\mathrm{tr}_{\omega_{\epsilon}^{\prime}}(\omega). \tag{4.4}\] Plugging (4.4) into (4.3), we thus obtain \[\Delta_{\omega_{\epsilon}^{\prime}}\log\mathrm{tr}_{\omega}(\omega_{\epsilon}^{\prime})\geq-C_{1}\mathrm{tr}_{\omega_{\epsilon}^{\prime}}(\omega)\] for \(C_{1}=B+A+C_{0}\). We set \(w_{\epsilon}:=v_{\epsilon}-c\log|s|_{h}^{2}-\rho\). By (4.2), we have that \(w_{\epsilon}\) is bounded from below. We set \[\omega_{\epsilon}:=dd^{c}c\log|s|_{h}^{2}+dd^{c}\rho+\theta+\epsilon\omega.\] Since we have chosen \(c>0\) so small that \(dd^{c}c\log|s|_{h}^{2}+\delta\omega\geq 0\), we get \(\omega_{\epsilon}\geq\delta\omega\) for every \(\epsilon\). We observe that \(\Delta_{\omega_{\epsilon}^{\prime}}w_{\epsilon}=n-\operatorname{tr}_{\omega_{\epsilon}^{\prime}}\omega_{\epsilon}\leq n-\delta\operatorname{tr}_{\omega_{\epsilon}^{\prime}}\omega\). Therefore, we obtain \[\Delta_{\omega_{\epsilon}^{\prime}}(\log\operatorname{tr}_{\omega}(\omega_{\epsilon}^{\prime})-(C_{1}\delta^{-1}+1)w_{\epsilon})\geq\operatorname{tr}_{\omega_{\epsilon}^{\prime}}(\omega)-n(C_{1}\delta^{-1}+1). \tag{4.5}\] We are now in a position to apply the maximum principle. Indeed, if we put \(C=C_{1}\delta^{-1}\), then, since \(w_{\epsilon}\) tends to \(+\infty\) near \(D\cup E\), the function \(H:=\log\operatorname{tr}_{\omega}(\omega_{\epsilon}^{\prime})-(C+1)w_{\epsilon}\) attains its maximum at some point \(x_{0}\in X_{0}\) (depending on \(\epsilon\)). 
At this point, the inequality (4.5) combined with the maximum principle yields \(\operatorname{tr}_{\omega_{\epsilon}^{\prime}}(\omega)(x_{0})\leq n(C+1)\). Using the elementary inequality \(\operatorname{tr}_{\tau}(\tau^{\prime})\leq n\left(\frac{\tau^{\prime n}}{\tau^{n}}\right)(\operatorname{tr}_{\tau^{\prime}}(\tau))^{n-1}\) for any two Kahler forms \(\tau\), \(\tau^{\prime}\), one gets \[\log\operatorname{tr}_{\omega}(\omega_{\epsilon}^{\prime}) \leq\log\operatorname{tr}_{\omega}(\omega_{\epsilon}^{\prime})(x_{0})+(C+1)(w_{\epsilon}-w_{\epsilon}(x_{0}))\] \[\leq(n-1)\log n(C+1)+\log n+w_{\epsilon}(x_{0})+F_{\epsilon}(x_{0})+(C+1)(w_{\epsilon}-w_{\epsilon}(x_{0}))\] \[\leq C_{3}+(C+1)w_{\epsilon}\] since \(w_{\epsilon}\) is uniformly bounded from below by (4.2). This implies that \(H\) is uniformly bounded from above, hence \[\operatorname{tr}_{\omega}(\omega_{\epsilon}^{\prime})\leq C_{3}e^{-(C+1)\rho}\frac{1}{|s|_{h}^{2c(C+1)}}\] using that \(v_{\epsilon}\) is uniformly bounded from above. Thus, we end up with a uniform (in \(\epsilon\)) positive constant \(C>0\) such that \[|\Delta_{\omega}v_{\epsilon}|\leq\frac{Ce^{-C\rho}}{|s|_{h}^{2C}}.\] In particular, for any compact \(K\subset\subset X_{0}\), one gets a uniform bound for \(\|\Delta_{\omega}v_{\epsilon}\|_{L^{\infty}(K)}\). We can apply a complex version of the Evans-Krylov estimate, due to Trudinger [39] (cf. [35, Chap. 2] in this context), to obtain a local \(\mathcal{C}^{\alpha}\) estimate on the metric \(\omega_{\epsilon}^{\prime}\) for \(0<\alpha<1\), i.e., \(\|v_{\epsilon}\|_{\mathcal{C}^{2,\alpha}(K)}\leq C_{K}\) for a uniform constant \(C_{K}>0\) (we also refer to [40] for a new proof). From this, we eventually differentiate the equation using Schauder estimates to obtain uniform estimates \[\sup_{\epsilon\in(0,1]}\|v_{\epsilon}\|_{\mathcal{C}^{j,\alpha}(K)}\leq C_{j,\alpha}(K)<+\infty,\] for each \(0<\alpha<1\), \(j\in\mathbb{N}\), which guarantee that \((v_{\epsilon})\) is relatively compact in \(\mathcal{C}^{\infty}_{\mathrm{loc}}(X_{0})\). The lemma is thus proved. End of the proof of Theorem 1.1.: Let \(s\) be a section of \(\mathcal{O}_{X}(D)\) defining \(D\), and \(h\) a smooth metric on \(\mathcal{O}_{X}(D)\) such that \(|s|_{h}\leq 1\). Let \(\eta\) be the Chern form of \(h\). Let \(\{\omega\}\) denote the cohomology class of \(\omega\). Let \(\theta\) be a smooth closed form in \(c_{1}(K_{X}+D)\) and \(\theta_{\epsilon}:=\theta+\epsilon\omega\) for \(\epsilon\in[0,1]\) (hence \(\theta_{0}=\theta\)). Thus \((\theta_{\epsilon})_{\epsilon}\) is a family of smooth closed forms such that \(\theta_{\epsilon}\in c_{1}(K_{X}+D)+\epsilon\{\omega\}\) and \(\theta_{\epsilon}\) converges to \(\theta\) in the \(\mathcal{C}^{0}\)-topology as \(\epsilon\to 0\). For \(0\leq\epsilon\leq 1\), let \(u_{\epsilon}\in\mathcal{E}(X,\theta_{\epsilon})\) be the solution of \[(dd^{c}u_{\epsilon}+\theta_{\epsilon})^{n}=e^{2u_{\epsilon}+2F}|s|_{h}^{-2}\omega^{n},\] where \(F\) is a smooth function so that \[\eta-\operatorname{Ric}(\omega)-\theta=-dd^{c}F.\] We see that \(\omega_{\epsilon}:=dd^{c}u_{\epsilon}+\theta+\epsilon\omega\) satisfies \[\operatorname{Ric}\omega_{\epsilon}=-\omega_{\epsilon}+\epsilon\omega+[D]\] and \(\omega_{\epsilon}\) is of full Monge-Ampere mass in \(c_{1}(K_{X}+D)+\epsilon\{\omega\}\) for \(0\leq\epsilon\leq 1\). Observe that the setting in Section 3 applies to \((u_{\epsilon})_{\epsilon}\). 
Hence, by Corollary 3.5, we see that if \(u\) is the \(L^{1}\) limit of a subsequence \((u_{\epsilon_{j}})_{j}\) as \(j\to\infty\), then \(u\in\mathcal{E}(X,\theta_{0})\). Lemma 4.4 yields that \(u_{\epsilon_{j}}\) also converges to \(u\) in the local \(\mathcal{C}^{\infty}\) topology in \(X\backslash(D\cup E)\). Consequently, we get \[(dd^{c}u+\theta_{0})^{n}=e^{2u+2F}|s|_{h}^{-2}\omega^{n}\] on \(X\backslash(D\cup E)\). It follows that \(u=u_{0}\) by uniqueness (see [9] or [20]). Consequently \(u_{\epsilon}\) converges to \(u_{0}\) as \(\epsilon\to 0\) in \(L^{1}\) and \(u_{\epsilon}\) converges to \(u_{0}\) in the local \(\mathcal{C}^{\infty}\)-topology in \(X\backslash(D\cup E)\). Since \(\omega_{\epsilon}\) is a complete smooth Kahler metric (by [30]) on \(X\backslash D\) and \(\operatorname{Ric}\omega_{\epsilon}\geq-\omega_{\epsilon}\), we obtain that \(\omega_{0}\) is almost-complete, and the uniqueness of \(\omega_{0}\) follows from Lemma 4.3. The sought metric \(\omega_{D}\) is exactly \(\omega_{0}\) in the above discussion. This finishes the proof. ## 5 Stability of complex Monge-Ampere equations The goal of this section is to prove the following result. **Theorem 5.1**.: _Let \(D\) be a simple normal crossing divisor in \(X\). Let \(s\) be a section of \(\mathcal{O}_{X}(D)\) defining \(D\) and \(h\) be a smooth Hermitian metric on \(\mathcal{O}_{X}(D)\) such that \(|s|_{h}<1\). Let \(\alpha\) be a big cohomology class and \(\theta\) be a smooth representative in \(\alpha\). Let \((f_{j})_{j}\) be an increasing sequence of continuous nonnegative functions such that \(f_{1}\not\equiv 0\), and \(f_{j}\) converges pointwise to \(f\in\mathcal{C}^{0}(X)\) as \(j\to\infty\). Let \((\theta_{j})_{j\in\mathbb{N}}\) be a sequence of smooth closed \((1,1)\)-forms in big cohomology classes converging to \(\theta\) in the \(\mathcal{C}^{0}\) topology as \(j\to\infty\). Let \(u_{j}\in\mathcal{E}(X,\theta_{j})\) be a solution of the equation_ \[(dd^{c}u_{j}+\theta_{j})^{n}=e^{u_{j}}|s|_{h}^{-2}f_{j}\omega^{n} \tag{5.1}\] _for every \(j\geq 0\). Then \((u_{j})_{j\in\mathbb{N}}\) is a decreasing sequence in capacity and \(u_{j}\) converges in \(L^{1}\) to a \(\theta\)-psh function \(u\in\mathcal{E}(X,\theta)\) satisfying_ \[(dd^{c}u+\theta)^{n}=e^{u}|s|_{h}^{-2}f\omega^{n}. \tag{5.2}\] _In particular, if (5.1) admits a solution in \(\mathcal{E}(X,\theta_{j})\) for every \(j\), then (5.2) also possesses a (unique) solution in \(\mathcal{E}(X,\theta)\)._ To see the relevance of this result to Theorem 1.2, we recall the Monge-Ampere equation satisfied by \(\omega_{\epsilon}\). Let \(\theta\) be a smooth closed form representing \(c_{1}(K_{X}+D)\). Thus \(\omega_{D}=dd^{c}u+\theta\) where \(u\) is an unknown \(\theta\)-psh function. Let \(s\) be a section in \(\mathcal{O}_{X}(D)\) defining \(D\). Let \(h\) be a smooth Hermitian metric on the latter line bundle. Let \(\eta\) be the Chern form of \(h\). We rescale \(h\) so that \(|s|_{h}^{2}\leq 1\). Observe that \[[D]=dd^{c}\log|s|_{h}+\eta,\] and by the \(dd^{c}\)-lemma one gets \[\eta-\operatorname{Ric}(\omega)-\theta=-dd^{c}F,\] for some smooth function \(F\) on \(X\). 
On the other hand, \[2\operatorname{Ric}(\omega_{D})=-dd^{c}\log(\omega_{D}^{n}/\omega^{n})+2\operatorname{Ric}(\omega).\] Hence, since \(\operatorname{Ric}\omega_{D}=-\omega_{D}+[D]\), one obtains \[\frac{1}{2}dd^{c}(-\log\omega_{D}^{n}/\omega^{n})+\operatorname{Ric}\omega=-\theta-dd^{c}u+[D]=-dd^{c}u+dd^{c}\log|s|_{h}-dd^{c}F+\operatorname{Ric}\omega.\] Equivalently, we obtain \[\omega_{D}^{n}=e^{2u+2F+c}|s|_{h}^{-2}\omega^{n}, \tag{5.3}\] for some constant \(c>0\). Considering \(u+c/2\) in place of \(u\) gives \[(dd^{c}u+\theta)^{n}=e^{2u+2F}|s|_{h}^{-2}\omega^{n}. \tag{5.4}\] By [7, Thm. 4.2], this equation admits a unique solution \(u\in\mathcal{E}(X,\theta)\). Now consider a constant \(\epsilon>0\) small enough so that the class \(K_{X}+(1-\epsilon)D\) is still big. Observe that \(\theta_{\epsilon}:=\theta-\epsilon\eta\) is a smooth representative of the class \(c_{1}(K_{X}+(1-\epsilon)D)\). It was shown in [9] that there exists a unique Kahler-Einstein metric \(\omega_{\epsilon}\) of full Monge-Ampere mass such that \[\operatorname{Ric}(\omega_{\epsilon})=-\omega_{\epsilon}+(1-\epsilon)[D], \tag{5.5}\] or equivalently \(\omega_{\epsilon}=dd^{c}u_{\epsilon}+\theta_{\epsilon}\), where \(u_{\epsilon}\in\mathcal{E}(X,\theta_{\epsilon})\) is the unique solution of the equation \[(dd^{c}u_{\epsilon}+\theta_{\epsilon})^{n}=e^{2u_{\epsilon}+2F}|s|_{h}^{-2(1-\epsilon)}\omega^{n}=e^{2u_{\epsilon}+2F}|s|_{h}^{-2}(|s|_{h}^{2\epsilon}\omega^{n}). \tag{5.6}\] Note that although \(u_{\epsilon}-V_{\theta_{\epsilon}}\) is bounded for every \(\epsilon>0\), the function \(u-V_{\theta}\) could be unbounded in general; the structure of \(\omega_{D}\) near \(D\) is mostly like that of a Poincare metric or a conic one; cf. [27]. Hence one sees that Theorem 5.1 is more general than Theorem 1.2. We now proceed with the proof of Theorem 5.1. **Remark 5.2**.: _In the setting of Theorem 1.2, by applying Theorem 5.1 to the equation (5.6), we see that (5.4) admits a solution in \(\mathcal{E}(X,\theta)\). This gives an alternative proof of [7, Thm. 4.2] without using the variational method._ **Remark 5.3**.: _Let \((u_{\epsilon})_{\epsilon}\) be the sequence of potentials of the twisted Kahler-Einstein metrics \(\omega_{\epsilon}\) in the proof of Theorem 1.1. We already know that \(u_{\epsilon}\) converges to \(u\) (the potential of \(\omega_{D}\)) locally in the \(\mathcal{C}^{\infty}\)-topology in \(X\backslash(D\cup E)\). Direct application of Theorem 5.1 to the equations defining \(\omega_{\epsilon}\) implies that \(u_{\epsilon}\) decreases in capacity to \(u\). This global property is much stronger than the \(L^{1}\)-convergence of \(u_{\epsilon}\) to \(u\)._ ### Discussions Our goal here is to give an informal discussion about the difficulties in the proof of Theorem 5.1 (hence of Theorem 1.2) and explain our strategy to overcome these issues. Set \(\mu_{j}:=f_{j}|s|_{h}^{-2}\omega^{n}\). The difficulty in proving the convergence in Theorem 5.1 is that, unlike some standard situations, the sequence of measures \((\mu_{j})_{j}\) is not convergent in the mass norm (or even in the weak sense). This is due to the fact that \(f|s|_{h}^{-2}\omega^{n}\) could be of infinite mass on \(X\). To be more precise, a usual way to prove the \(L^{1}\) convergence of the potentials \((u_{j})_{j}\) is as follows (see, e.g., [9]). Let \(\delta>0\) be a fixed positive constant. 
Without loss of generality, we can assume that \(u_{j}\) converges to some \(u^{\prime}\) in \(L^{1}\) (ignore for the moment that \(u_{j}\) might a priori converge to \(-\infty\)). Hence \(u_{j}\) is \((\theta+\delta\omega)\)-psh for every \(j\) big enough. We consider \(u^{\prime}_{j}:=(\sup_{j^{\prime}\geq j}u_{j^{\prime}})^{*}\) which decreases to \(u^{\prime}\). We have \[(dd^{c}u^{\prime}_{j}+\theta+\delta\omega)^{n}\geq e^{\inf_{j^{\prime}\geq j}u_{j^{\prime}}}\mu_{j}.\] It follows that for every (continuous) function \(g\geq 0\), there holds \[\liminf_{j\to\infty}\int_{X}g(dd^{c}u^{\prime}_{j}+\theta+\delta\omega)^{n}\geq\liminf_{j\to\infty}\int_{X}ge^{\inf_{j^{\prime}\geq j}u_{j^{\prime}}}\mu_{j}.\] If one had that \(\mu_{j}\) converges to \(\mu:=|s|_{h}^{-2}f\omega^{n}\) in the mass norm, then it would follow from the last estimate that \[\liminf_{j\to\infty}\int_{X}g(dd^{c}u^{\prime}_{j}+\theta+\delta\omega)^{n}\geq\int_{X}ge^{u^{\prime}}\mu,\] and hence, by letting \(\delta\to 0\), \[(dd^{c}u^{\prime}+\theta)^{n}=e^{u^{\prime}}\mu,\] and from this one would conclude that \(u^{\prime}=u\). Nevertheless, as already mentioned, it is not true that \(\mu_{j}\to\mu\) in the mass norm because \(\mu(X)\) might be infinite. Our strategy is to use the quantitative domination principle from [22] because the standard domination principle is not sufficient for our purpose. Let us explain why. Since \(|s|_{h}^{2}\leq 1\), we get \[(dd^{c}u_{j_{1}}+\theta_{j_{1}})^{n}\leq e^{u_{j_{1}}-u_{j_{2}}}(dd^{c}u_{j_{2}}+\theta_{j_{2}})^{n} \tag{5.7}\] if \(j_{1}\leq j_{2}\). So, it is tempting to make use of domination principles. In order to do so, one needs to consider \(u_{j_{1}}\) and \(u_{j_{2}}\) in the same cohomology class. Fix now a constant \(\epsilon>0\). Put \(\theta^{\prime}_{\epsilon}:=\theta+\epsilon\omega\). Let \(j_{\epsilon}\in\mathbb{N}\) be such that \(\theta-\epsilon\omega\leq\theta_{j}\leq\theta^{\prime}_{\epsilon}\) for \(j\geq j_{\epsilon}\). Consider \(j_{\epsilon}<j_{1}\leq j_{2}\). Hence \(u_{j_{s}}\) is a \(\theta^{\prime}_{\epsilon}\)-psh function for \(s=1,2\). **Lemma 5.4**.: _We have_ \[-C\epsilon\leq\int_{X}(dd^{c}V_{\theta_{j}}+\theta^{\prime}_{\epsilon})^{n}-\int_{X}(dd^{c}V_{\theta^{\prime}_{\epsilon}}+\theta^{\prime}_{\epsilon})^{n}\leq 0,\] _for some constant \(C>0\) independent of \(j\)._ Proof.: Since \(V_{\theta_{j}}\leq V_{\theta^{\prime}_{\epsilon}}\), we get \[\int_{X}(dd^{c}V_{\theta_{j}}+\theta^{\prime}_{\epsilon})^{n}-\int_{X}(dd^{c}V_{\theta^{\prime}_{\epsilon}}+\theta^{\prime}_{\epsilon})^{n}\leq 0\] by monotonicity of non-pluripolar products. The other desired inequality follows from the fact that \(\operatorname{vol}(\{\theta_{j}\})\geq\operatorname{vol}(\{\theta-\epsilon\omega\})\geq\operatorname{vol}(\{\theta+\epsilon\omega\})+O(\epsilon)\). By Lemma 3.1, we can assume that \(u_{j}\leq 0\) for every \(j\in\mathbb{N}\). There are two issues when we work with \(\theta^{\prime}_{\epsilon}\). The first problem is that \(u_{j_{s}}\) is no longer in \(\mathcal{E}(X,\theta^{\prime}_{\epsilon})\) for \(s=1,2\), but belongs to \(\mathcal{E}(X,\theta^{\prime}_{\epsilon},V_{\theta_{j_{s}}})\), and \(V_{\theta_{j_{s}}}\leq V_{\theta^{\prime}_{\epsilon}}\) in general. The second problem is that (5.7) is no longer true if \(\theta_{j_{s}}\) is replaced by \(\theta^{\prime}_{\epsilon}\). 
For a constant \(M>0\), direct computations show that on \(\{u_{j_{1}}\leq u_{j_{2}}-M\}\) there holds \[(dd^{c}u_{j_{1}}+\theta^{\prime}_{\epsilon})^{n}\leq e^{u_{j_{1}}-u_{j_{2}}}(dd^{c}u_{j_{2}}+\theta_{j_{2}})^{n}+O(\epsilon)\leq e^{-M}(dd^{c}u_{j_{2}}+\theta^{\prime}_{\epsilon})^{n}+O(e^{-M}\epsilon)+O(\epsilon),\] where for a constant \(A>0\) we denote by \(O(A)\) a measure of mass bounded by \(A\). This estimate suggests that the quantitative domination principle can be applied. To this end, one must however have that \(u_{j_{s}}\in\mathcal{E}(X,\theta^{\prime}_{\epsilon})\), which is not true in general. For that reason, we consider a big constant \(k>0\) and put \[u_{j,k}:=\max\{u_{j},V_{\theta^{\prime}_{\epsilon}}-k\}\in\mathcal{E}(X,\theta^{\prime}_{\epsilon}).\] **Lemma 5.5**.: _There is a constant \(A>0\) independent of \(\epsilon,k\) such that for \(j_{1},j_{2}\) big enough, we have_ \[\mathbf{1}_{\{u_{j_{1},k}\leq u_{j_{2},k}-M\}}(dd^{c}u_{j_{1},k}+\theta^{\prime}_{\epsilon})^{n}\leq\mathbf{1}_{\{u_{j_{1},k}\leq u_{j_{2},k}-M\}}e^{-M}(dd^{c}u_{j_{2},k}+\theta^{\prime}_{\epsilon})^{n}+R_{j_{1}},\] _where \(R_{j_{1}}\) is a positive measure whose mass is given by_ \[c(j_{1},k,\epsilon):=\|R_{j_{1}}\|=A\epsilon+\int_{\{u_{j_{1}}\leq V_{\theta^{\prime}_{\epsilon}}-k\}}(dd^{c}u_{j_{1},k}+\theta^{\prime}_{\epsilon})^{n}.\] Proof.: Observe \[\{u_{j_{1},k}\leq u_{j_{2},k}-M\}\subset K:=\{u_{j_{1}}\leq u_{j_{2}}-M\}\cap\{u_{j_{2}}>V_{\theta^{\prime}_{\epsilon}}-k\}.\] Thus \[\mathbf{1}_{K}(dd^{c}u_{j_{1},k}+\theta^{\prime}_{\epsilon})^{n}\leq\mathbf{1}_{K\cap\{u_{j_{1}}>V_{\theta^{\prime}_{\epsilon}}-k\}}(dd^{c}u_{j_{1}}+\theta^{\prime}_{\epsilon})^{n}+\mathbf{1}_{K\cap\{u_{j_{1}}\leq V_{\theta^{\prime}_{\epsilon}}-k\}}(dd^{c}u_{j_{1},k}+\theta^{\prime}_{\epsilon})^{n}\] \[\leq\mathbf{1}_{K\cap\{u_{j_{1}}>V_{\theta^{\prime}_{\epsilon}}-k\}}e^{-M}(dd^{c}u_{j_{2},k}+\theta^{\prime}_{\epsilon})^{n}+O(\epsilon)+\mathbf{1}_{K\cap\{u_{j_{1}}\leq V_{\theta^{\prime}_{\epsilon}}-k\}}(dd^{c}u_{j_{1},k}+\theta^{\prime}_{\epsilon})^{n}.\] This finishes the proof. If we want to apply the quantitative domination principle to \(u_{j,k}\), we need to check that \(c(j_{1},k,\epsilon)\) is "small"; precisely, \[\limsup_{k\rightarrow\infty}\limsup_{\epsilon\to 0}\limsup_{j_{1}\rightarrow\infty}c(j_{1},k,\epsilon)=0.\] Verifying this limit is one of the main difficulties in our proof. We will go into the details in the next sections. ### Low energy estimate We start with a fact about energy. The following monotonicity of energy is a direct consequence of [21, Lemma 3.2] (see also [24]). **Lemma 5.6**.: _Let \(\eta\) be a smooth closed \((1,1)\)-form in a big cohomology class. Let \(u,v\in\mathcal{E}(X,\eta)\) be such that \(u\leq v\). Let \(\chi:\mathbb{R}_{\leq 0}\to\mathbb{R}_{\leq 0}\) be a convex increasing function such that \(\chi(0)=0\). Then we have_ \[-\int_{X}\chi(v-V_{\eta})(dd^{c}v+\eta)^{k}\wedge\omega^{n-k}\leq-2^{k}\int_{X}\chi(u-V_{\eta})(dd^{c}u+\eta)^{k}\wedge\omega^{n-k}.\] Let \(\epsilon>0\) be a small constant. Let \(\theta^{\prime}_{\epsilon}:=\theta+\epsilon\omega\). Let \(j_{\epsilon}\in\mathbb{N}\) be such that \(\theta_{j}\leq\theta^{\prime}_{\epsilon}\) for every \(j\geq j_{\epsilon}\). Hence \(u_{j}\) is \(\theta^{\prime}_{\epsilon}\)-psh for \(j\geq j_{\epsilon}\). Note that \(u_{j}\in\mathcal{E}(X,\theta_{j})\). 
Let \[u^{\prime\prime}_{j,\epsilon}:=\lim_{k\to\infty}P_{\theta^{\prime}_{\epsilon}}(\min\{u_{j},\ldots,u_{k}\}),\] for \(j\geq j_{\epsilon}\). By Corollary 3.6, \(u^{\prime\prime}_{j,\epsilon}\) is a well-defined \(\theta^{\prime}_{\epsilon}\)-psh function which increases to some \(\theta^{\prime}_{\epsilon}\)-psh function \(u^{\prime\prime}_{\epsilon}\) as \(j\to\infty\), and \(u^{\prime\prime}_{\epsilon}\) decreases to some \(\theta\)-psh function \(u^{\prime\prime}\in\mathcal{E}(X,\theta)\) as \(\epsilon\) decreases to \(0\). Let \(\tau:\mathbb{R}_{\leq 0}\to\mathbb{R}_{\leq 0}\) be an increasing convex function such that \(\tau(0)=0\), \(\tau(-\infty)=-\infty\), and \[-\int_{X}\tau(u^{\prime\prime}-V_{\theta})(dd^{c}u^{\prime\prime}+\theta)^{n}<\infty. \tag{5.8}\] Observe that \(u_{j}\geq u^{\prime\prime}_{j,\epsilon}\) for every \(j\geq j_{\epsilon}\). Let \[u_{j,k}:=\max\{u_{j},V_{\theta^{\prime}_{\epsilon}}-k\},\quad u^{\prime\prime}_{j,\epsilon,k}:=\max\{u^{\prime\prime}_{j,\epsilon},V_{\theta^{\prime}_{\epsilon}}-k\}\] for \(k>0\). We define \(u^{\prime\prime}_{\epsilon,k}\) and \(u^{\prime\prime}_{k}\) similarly for \(u^{\prime\prime}_{\epsilon}\) and \(u^{\prime\prime}\), respectively. **Lemma 5.7**.: _We have_ \[\limsup_{j\to\infty}\int_{X}-\tau(u_{j,k}-V_{\theta^{\prime}_{\epsilon}})(dd^{c}u_{j,k}+\theta^{\prime}_{\epsilon})^{n}\leq 4^{n}\int_{X}-\tau(u^{\prime\prime}_{k}-V_{\theta^{\prime}_{\epsilon}})(dd^{c}u^{\prime\prime}_{k}+\theta^{\prime}_{\epsilon})^{n},\] _for every \(k\geq 0\)._ Proof.: We have \(u_{j,k}\geq u^{\prime\prime}_{j,\epsilon,k}\). Hence using Lemma 5.6 gives \[\int_{X}-\tau(u_{j,k}-V_{\theta^{\prime}_{\epsilon}})(dd^{c}u_{j,k}+\theta^{\prime}_{\epsilon})^{n}\leq 2^{n}\int_{X}-\tau(u^{\prime\prime}_{j,\epsilon,k}-V_{\theta^{\prime}_{\epsilon}})(dd^{c}u^{\prime\prime}_{j,\epsilon,k}+\theta^{\prime}_{\epsilon})^{n}.\] Letting \(j\to\infty\) and using the fact that \(u^{\prime\prime}_{j,\epsilon,k}\) increases to \(u^{\prime\prime}_{\epsilon,k}\) (and hence the corresponding continuity of the energy), one gets \[\limsup_{j\to\infty}\int_{X}-\tau(u_{j,k}-V_{\theta^{\prime}_{\epsilon}})(dd^{c}u_{j,k}+\theta^{\prime}_{\epsilon})^{n}\leq 2^{n}\limsup_{j\to\infty}\int_{X}-\tau(u^{\prime\prime}_{j,\epsilon,k}-V_{\theta^{\prime}_{\epsilon}})(dd^{c}u^{\prime\prime}_{j,\epsilon,k}+\theta^{\prime}_{\epsilon})^{n},\] which is equal to \(2^{n}I_{\epsilon,k}\), where \[I_{\epsilon,k}:=\int_{X}-\tau(u^{\prime\prime}_{\epsilon,k}-V_{\theta^{\prime}_{\epsilon}})(dd^{c}u^{\prime\prime}_{\epsilon,k}+\theta^{\prime}_{\epsilon})^{n}.\] Now observe that \(u^{\prime\prime}_{\epsilon,k}\) decreases to \(u^{\prime\prime}_{k}\). It follows that \[I_{\epsilon,k}\leq 2^{n}\int_{X}-\tau(u^{\prime\prime}_{k}-V_{\theta^{\prime}_{\epsilon}})(dd^{c}u^{\prime\prime}_{k}+\theta^{\prime}_{\epsilon})^{n}.\] The proof is thus complete. **Lemma 5.8**.: _There exists a constant \(C>0\) such that we have_ \[\limsup_{\epsilon\to 0}\int_{X}-\tau(u^{\prime\prime}_{k}-V_{\theta^{\prime}_{\epsilon}})(dd^{c}u^{\prime\prime}_{k}+\theta^{\prime}_{\epsilon})^{n}\leq C,\] _for every \(k\geq 0\)._ Proof.: Let \(v^{\prime\prime}_{k}:=\max\{u^{\prime\prime},V_{\theta}-k\}\geq u^{\prime\prime}\). We note that \(u^{\prime\prime}_{k}\) depends on \(\epsilon\) but \(v^{\prime\prime}_{k}\) does not. 
Furthermore, we have \(u^{\prime\prime}_{k}=v^{\prime\prime}_{k}\) on \(\{u^{\prime\prime}>V_{\theta^{\prime}_{\epsilon}}-k\}\) (because \(V_{\theta}\leq V_{\theta^{\prime}_{\epsilon}}\)). Consequently, one can decompose \[\int_{X}-\tau(u^{\prime\prime}_{k}-V_{\theta^{\prime}_{\epsilon}})(dd^{c}u^{\prime\prime}_{k}+\theta^{\prime}_{\epsilon})^{n}=\int_{\{u^{\prime\prime}>V_{\theta^{\prime}_{\epsilon}}-k\}}-\tau(u^{\prime\prime}_{k}-V_{\theta^{\prime}_{\epsilon}})(dd^{c}u^{\prime\prime}_{k}+\theta^{\prime}_{\epsilon})^{n}+\int_{\{u^{\prime\prime}\leq V_{\theta^{\prime}_{\epsilon}}-k\}}-\tau(u^{\prime\prime}_{k}-V_{\theta^{\prime}_{\epsilon}})(dd^{c}u^{\prime\prime}_{k}+\theta^{\prime}_{\epsilon})^{n}\] \[=\int_{\{u^{\prime\prime}>V_{\theta^{\prime}_{\epsilon}}-k\}}-\tau(v^{\prime\prime}_{k}-V_{\theta^{\prime}_{\epsilon}})(dd^{c}v^{\prime\prime}_{k}+\theta^{\prime}_{\epsilon})^{n}-\tau(-k)\int_{\{u^{\prime\prime}\leq V_{\theta^{\prime}_{\epsilon}}-k\}}(dd^{c}u^{\prime\prime}_{k}+\theta^{\prime}_{\epsilon})^{n}.\] Denote by \(I_{1},I_{2}\) the first and second terms on the right-hand side of the last equality. Recall that \(T=dd^{c}\rho+\theta\) is a Kahler current with analytic singularities in the class of \(\theta\), where \(\rho\leq 0\). Since \(dd^{c}\rho+\theta\geq\delta\omega\) for some \(\delta>0\), we see that \(\epsilon^{1/2}\rho+(1-\epsilon^{1/2})V_{\theta^{\prime}_{\epsilon}}\) is a negative \(\theta\)-psh function for \(\epsilon\) small enough. Since the latter is at most \(V_{\theta}\), we infer \[V_{\theta^{\prime}_{\epsilon}}\leq(1-\epsilon^{1/2})^{-1}V_{\theta}-\epsilon^{1/2}(1-\epsilon^{1/2})^{-1}\rho\leq V_{\theta}-2\epsilon^{1/2}\rho,\] for \(\epsilon>0\) small enough. Since \(\tau\) is convex and \(\tau(0)=0\), one sees that \[\tau(b+c)\geq\tau(b)+\tau(c)\] for every \(b,c\in\mathbb{R}_{\leq 0}\) (indeed, writing \(b=t(b+c)\) and \(c=(1-t)(b+c)\) with \(t\in[0,1]\), convexity together with \(\tau(0)=0\) gives \(\tau(b)\leq t\tau(b+c)\) and \(\tau(c)\leq(1-t)\tau(b+c)\)). Applying the last inequality to \(b:=v^{\prime\prime}_{k}-V_{\theta}\) and \(c:=V_{\theta}-V_{\theta^{\prime}_{\epsilon}}\), one obtains \[-\tau(v^{\prime\prime}_{k}-V_{\theta^{\prime}_{\epsilon}})\leq-\tau(v^{\prime\prime}_{k}-V_{\theta})-\tau(V_{\theta}-V_{\theta^{\prime}_{\epsilon}})\leq-\tau(v^{\prime\prime}_{k}-V_{\theta})-\tau(2\epsilon^{1/2}\rho).\] It follows that \[I_{1}\leq\int_{X}-\tau(v^{\prime\prime}_{k}-V_{\theta})(dd^{c}v^{\prime\prime}_{k}+\theta^{\prime}_{\epsilon})^{n}-\int_{X}\tau(2\epsilon^{1/2}\rho)(dd^{c}u^{\prime\prime}_{k}+\theta^{\prime}_{\epsilon})^{n}.\] Letting \(\epsilon\to 0\) gives \[\limsup_{\epsilon\to 0}I_{1}\leq\int_{X}-\tau(v^{\prime\prime}_{k}-V_{\theta})(dd^{c}v^{\prime\prime}_{k}+\theta)^{n}\leq 2^{n}\int_{X}-\tau(u^{\prime\prime}-V_{\theta})(dd^{c}u^{\prime\prime}+\theta)^{n} \tag{5.9}\] because \(\tau(0)=0\) and \((dd^{c}u^{\prime\prime}_{k}+\theta)^{n}\) is a Monge-Ampere measure of bounded potentials. We treat \(I_{2}\). 
Direct computations show \[I_{2} =-\tau(-k)\int_{X}(dd^{c}u^{\prime\prime}_{k}+\theta^{\prime}_{\epsilon})^{n}+\tau(-k)\int_{\{u^{\prime\prime}>V_{\theta^{\prime}_{\epsilon}}-k\}}(dd^{c}u^{\prime\prime}_{k}+\theta^{\prime}_{\epsilon})^{n} \tag{5.10}\] \[=-\tau(-k)\int_{X}(dd^{c}V_{\theta^{\prime}_{\epsilon}}+\theta^{\prime}_{\epsilon})^{n}+\tau(-k)\int_{\{u^{\prime\prime}>V_{\theta^{\prime}_{\epsilon}}-k\}}(dd^{c}u^{\prime\prime}+\theta^{\prime}_{\epsilon})^{n}\] \[\leq-\tau(-k)\bigg{(}\operatorname{vol}(\{\theta^{\prime}_{\epsilon}\})-\operatorname{vol}(\{\theta\})+\operatorname{vol}(\{\theta\})-\int_{\{u^{\prime\prime}>V_{\theta^{\prime}_{\epsilon}}-k\}}(dd^{c}u^{\prime\prime}+\theta)^{n}\bigg{)}\] \[=-\tau(-k)\bigg{(}\operatorname{vol}(\{\theta^{\prime}_{\epsilon}\})-\operatorname{vol}(\{\theta\})+\int_{\{u^{\prime\prime}\leq V_{\theta^{\prime}_{\epsilon}}-k\}}(dd^{c}u^{\prime\prime}+\theta)^{n}\bigg{)}\] \[\leq-\tau(-k)\bigg{(}\operatorname{vol}(\{\theta^{\prime}_{\epsilon}\})-\operatorname{vol}(\{\theta\})+|\tau(-k)|^{-1}\int_{\{u^{\prime\prime}\leq V_{\theta^{\prime}_{\epsilon}}-k\}}-\tau(u^{\prime\prime}-V_{\theta^{\prime}_{\epsilon}})(dd^{c}u^{\prime\prime}+\theta)^{n}\bigg{)}\] whose limsup as \(\epsilon\to 0\) is bounded by a constant independent of \(k\). Combining (5.10) and (5.9) gives the desired assertion. Combining Lemmas 5.8 and 5.7, we obtain the following crucial estimate. **Proposition 5.9**.: _There exists a constant \(C>0\) independent of \(k\) such that we have_ \[\limsup_{\epsilon\to 0}\limsup_{j\to\infty}\int_{X}-\tau(u_{j,k}-V_{\theta^{\prime}_{\epsilon}})(dd^{c}u_{j,k}+\theta^{\prime}_{\epsilon})^{n}\leq C.\] **Corollary 5.10**.: _Let \(c(j_{1},k,\epsilon)\) be the term defined in Lemma 5.5. Then we have_ \[\limsup_{k\to\infty}\limsup_{\epsilon\to 0}\limsup_{j_{1}\to\infty}c(j_{1},k,\epsilon)=0.\] Proof.: Observe \[\limsup_{\epsilon\to 0}\limsup_{j_{1}\to\infty}c(j_{1},k,\epsilon) =\limsup_{\epsilon\to 0}\limsup_{j_{1}\to\infty}\int_{\{u_{j_{1},k}\leq V_{\theta^{\prime}_{\epsilon}}-k\}}(dd^{c}u_{j_{1},k}+\theta^{\prime}_{\epsilon})^{n}\] \[\leq|\tau(-k)|^{-1}\limsup_{\epsilon\to 0}\limsup_{j_{1}\to\infty}\int_{\{u_{j_{1}}\leq V_{\theta^{\prime}_{\epsilon}}-k\}}-\tau(u_{j_{1},k}-V_{\theta^{\prime}_{\epsilon}})(dd^{c}u_{j_{1},k}+\theta^{\prime}_{\epsilon})^{n}\] \[\leq|\tau(-k)|^{-1}C\] by Proposition 5.9. Letting \(k\to\infty\) gives the desired equality. We now finish the proof of Theorem 5.1. End of the proof of Theorem 5.1.: We apply the quantitative domination principle to \(u_{j,k}\) in the class of \(\{\theta^{\prime}_{\epsilon}\}\) and \(\tilde{\chi}:=\tau\) (we can certainly choose \(\tau\) so that \(\tau(-1)=-1\)). Let \[B(j_{1},j_{2},k,\epsilon):=-\int_{X}\tau(u_{j_{1},k}-V_{\theta^{\prime}_{\epsilon}})(dd^{c}u_{j_{1},k}+\theta^{\prime}_{\epsilon})^{n}-\int_{X}\tau(u_{j_{2},k}-V_{\theta^{\prime}_{\epsilon}})(dd^{c}u_{j_{2},k}+\theta^{\prime}_{\epsilon})^{n}.\] By Proposition 5.9, one has \[\limsup_{\epsilon\to 0}\limsup_{j_{1},j_{2}\to\infty}B(j_{1},j_{2},k,\epsilon)\leq C, \tag{5.11}\] where \(C>0\) is a constant independent of \(k\). Let \(0<\kappa\leq 1/4\) be a constant. 
By Theorem 2.2 applied to \(u_{j_{1},k},u_{j_{2},k}-\kappa\) (noticing also Lemma 5.5), there exists a constant \(C_{1}>0\) independent of \(j_{1},j_{2},k,\epsilon,\kappa\) such that \[\text{cap}_{\omega}\big{(}u_{j_{1},k}-u_{j_{2},k}\leq-2\kappa\big{)}\leq C_{1}\kappa^{-2}\frac{\big{(}B(j_{1},j_{2},k,\epsilon)\big{)}^{2}+1}{h^{\circ n}\big{(}1/c(j_{1},k,\epsilon)\big{)}},\] with \(h(t):=(-\tau(-t))^{1/2}\), for \(j_{1},j_{2}>j_{\epsilon}\) big enough. Since \[\text{cap}_{\omega}(u_{j}<V_{\theta^{\prime}_{\epsilon}}-k)\leq\text{cap}_{\omega}(u_{j}<-k)\lesssim k^{-1},\] one infers \[\text{cap}_{\omega}\big{(}u_{j_{1}}-u_{j_{2}}\leq-2\kappa\big{)}\lesssim\kappa^{-2}\frac{\big{(}B(j_{1},j_{2},k,\epsilon)\big{)}^{2}+1}{h^{\circ n}\big{(}1/c(j_{1},k,\epsilon)\big{)}}+k^{-1}.\] Hence \[\limsup_{j_{1},j_{2}\to\infty}\text{cap}_{\omega}\big{(}u_{j_{1}}-u_{j_{2}}\leq-\kappa\big{)}\lesssim\kappa^{-2}\limsup_{j_{1},j_{2}\to\infty}\frac{\big{(}B(j_{1},j_{2},k,\epsilon)\big{)}^{2}+1}{h^{\circ n}\big{(}1/c(j_{1},k,\epsilon)\big{)}}+k^{-1}\] for every \(\epsilon\in(0,1]\) and \(k\in\mathbb{N}\). Letting \(\epsilon\to 0\) and using (5.11), we get \[\limsup_{j_{1},j_{2}\to\infty}\text{cap}_{\omega}\big{(}u_{j_{1}}-u_{j_{2}}\leq-\kappa\big{)}\lesssim\kappa^{-2}\frac{1}{h^{\circ n}\big{(}1/c(k)\big{)}}+k^{-1},\] for every \(k\), where \[c(k):=\limsup_{\epsilon\to 0}\limsup_{j_{1}\to\infty}c(j_{1},k,\epsilon).\] By Corollary 5.10, we get \(\limsup_{k\to\infty}c(k)=0\). It follows that \[\limsup_{j_{1},j_{2}\to\infty}\text{cap}_{\omega}\big{(}u_{j_{1}}-u_{j_{2}}\leq-\kappa\big{)}\lesssim\limsup_{k\to\infty}\kappa^{-2}\frac{1}{h^{\circ n}\big{(}1/c(k)\big{)}}+k^{-1}=0,\] for every constant \(\kappa>0\). In other words, the sequence \((u_{j})_{j}\) is decreasing in capacity. Let \(\psi_{\gamma}\) be the function defined right before Corollary 3.5. Since \(u_{j}\geq\psi_{\gamma}\), which is locally bounded outside the union \(W\) of \(D\) and the non-Kahler locus of \(\{\theta\}\), we see that \((u_{j})_{j}\) admits a subsequence converging in \(L^{1}\). Hence, let \(u\) be an \(L^{1}\)-limit of \((u_{j})_{j}\). Corollary 3.5 tells us that \(u\in\mathcal{E}(X,\theta)\). Since \(u_{j}\geq\psi_{\gamma}\), we also get \(u\geq\psi_{\gamma}\) and hence \(u\) is locally bounded outside \(W\). Proposition 2.4 now implies that \[(dd^{c}u_{j}+\theta_{j})^{n}\to(dd^{c}u+\theta)^{n}\] weakly on \(X\backslash W\) as \(j\to\infty\). It follows that \[(dd^{c}u+\theta)^{n}=e^{u}|s|_{h}^{-2}f\omega^{n}\] on \(X\backslash W\). The equality indeed holds on all of \(X\) because the non-pluripolar products have no mass on \(W\), which is a pluripolar set. Hence we get \((dd^{c}u+\theta)^{n}=e^{u}|s|_{h}^{-2}f\omega^{n}\) on \(X\) and \(u\in\mathcal{E}(X,\theta)\). There is at most one such \(u\) by the domination principle. This yields that \(u_{j}\to u\) in \(L^{1}\) and \(u\) satisfies the required Monge-Ampere equation. This ends the proof of Theorem 5.1.
2309.04851
Leaf: Modularity for Temporary Sharing in Separation Logic (Extended Version)
In concurrent verification, separation logic provides a strong story for handling both resources that are owned exclusively and resources that are shared persistently (i.e., forever). However, the situation is more complicated for temporarily shared state, where state might be shared and then later reclaimed as exclusive. We believe that a framework for temporarily-shared state should meet two key goals not adequately met by existing techniques. One, it should allow and encourage users to verify new sharing strategies. Two, it should provide an abstraction where users manipulate shared state in a way agnostic to the means with which it is shared. We present Leaf, a library in the Iris separation logic which accomplishes both of these goals by introducing a novel operator, which we call guarding, that allows one proposition to represent a shared version of another. We demonstrate that Leaf meets these two goals through a modular case study: we verify a reader-writer lock that supports shared state, and a hash table built on top of it that uses shared state.
Travis Hance, Jon Howell, Oded Padon, Bryan Parno
2023-09-09T17:46:58Z
http://arxiv.org/abs/2309.04851v1
# Leaf: Modularity for Temporary Sharing in Separation Logic (Extended Version) ###### Abstract. In concurrent verification, separation logic provides a strong story for handling both resources that are owned exclusively and resources that are shared persistently (i.e., forever). However, the situation is more complicated for temporarily shared state, where state might be shared and then later reclaimed as exclusive. We believe that a framework for temporarily-shared state should meet two key goals not adequately met by existing techniques. One, it should allow and encourage users to verify new sharing strategies. Two, it should provide an abstraction where users manipulate shared state in a way agnostic to the means with which it is shared. We present Leaf, a library in the Iris separation logic which accomplishes both of these goals by introducing a novel operator, which we call _guarding_, that allows one proposition to represent a shared version of another. We demonstrate that Leaf meets these two goals through a modular case study: we verify a reader-writer lock that supports shared state, and a hash table built on top of it that uses shared state. ## 1. Introduction Multi-threaded concurrent programs are difficult to get right. One challenging pattern in such programs is _read-sharing_, i.e., allowing multiple threads to simultaneously read mutable shared state as long as no other thread is actively writing. This common optimization reduces thread contention and is often considered critical for scaling concurrent performance. While the general idea is commonly deployed, the concrete instantiations vary wildly. For example, even the implementation of a conceptually simple reader-writer lock grows quite complicated as various kinds of scaling issues are considered (Calciu et al., 2013; Dice and Kogan, 2019; Guerraoui et al., 2019; Hsieh and Weih, 1992; Kashyap et al., 2017; Liu et al., 2014; Shirako et al., 2012). As a concrete instance, one possible optimization uses multiple reference counters, each on its own cache line, to reduce thread contention for readers (Calciu et al., 2013). And yet the challenge does not stop at reader-writer locks; for instance, the node-replication algorithm of Calciu et al. (2017) might allow simultaneous read-access to particular entries of a ring buffer, and this protocol does not resemble a lock in the slightest. Given all of this complexity, we would naturally like to verify, in a modular fashion, both the read-sharing implementations and the programs that use them. One effective tool for reasoning about concurrent programs is _concurrent separation logic (CSL)_(O'Hearn, 2007; Reynolds, 2002) which lets us reason naturally about exclusive ownership of memory and, more generally, arbitrary resources. This is a great fit for (mutually exclusive) locks: it is easy to write a specification of a lock that allows the client to obtain ownership of a resource (e.g., permission to access some part of memory, or a more complicated invariant) which is returned upon releasing the lock, allowing the resource to be transferred between threads. But how do we handle simultaneous read-sharing? We want some way of reasoning about this _simultaneous, shared_ access--e.g., we want to talk about memory access where we can only read but not write, or invariants which must be preserved until the shared access is released. CSL's exclusive ownership does not work here, since the desired type of ownership is not exclusive. 
Some more recent CSLs also have a concept called _persistent knowledge_(Bizjak and Birkedal, 2018), which allows indefinite sharing, but this on its own does not suffice either: whatever sharing mechanism we use needs a way to _reclaim_ exclusive access once all shared access has been revoked. Two of the earliest proposed methods to represent shared state while allowing reclamation are _fractional permissions_[11] and _counting permissions_[12]. These representations have the benefit of being easy to construct and prove sound, but they do not form complete proof strategies for complex sharing protocols like the above, which are likely to have considerably more state that evolves in complex ways and cannot be expressed through fractional permissions or reference counters alone. Furthermore, the desire for modularity means there is pressure for specifications to converge on one particular representation so that they can interoperate; this limits flexibility since different representations might be more suitable for different applications. How can we support a wide gamut of sharing techniques--so the user can choose the right tool for the job, even building their own representation if necessary--while also achieving modularity? Our approach is to take the core idea of fractional and counting permissions, generalize it, and provide a uniform way to reason about it. That core idea, we argue, is that both involve propositions which are able to "stand in" for some kind of shared resource, but which are suitable for manipulation within the substructural separation logic. This enables them to handle the temporality of the sharing. This motivates us to extract the essence of this "standing-in" relationship and brings us to our contribution, the Leaf logic. Leaf introduces a novel operator \(\mapsto\), which we call _guarding_, that allows one proposition to represent a shared version of another. Leaf handles both sides of this abstraction. First, we show how the user can _deduce_ nontrivial \(\mapsto\) relationships by constructing arbitrary sharing protocols. Such protocols include ones based on the above-mentioned patterns, as well as custom protocols that are tailored to particular implementations. Our approach is inspired by _custom ghost state_[13, 10, 14, 15, 16, 17, 18], a class of flexible separation logic techniques. We call the protocols of our new formulation _storage protocols_. Second, we show how to _make use of \(\mapsto\)_ relationships, through general rules that are agnostic to the underlying sharing mechanism, thus enabling modular specifications. This approach is symbiotic between ghost state and read-sharing: by applying custom-ghost-state techniques, Leaf supplies a general form for read-sharing mechanisms; meanwhile, some ghost-state constructions become simpler because they can rely on Leaf's read-sharing, without needing to include their own bespoke sharing mechanisms. Storage protocols can support complex algorithms found in real-world concurrent software systems. As we discuss in §6, Leaf's storage protocols have already been used in the IronSync framework [15], which is enabled by Leaf's systematic approach to read-shared custom ghost state. IronSync targets production-scale, high-performance concurrent systems, e.g., a multi-threaded page cache (reaching 3M ops / second) or the node-replication algorithm mentioned above (3M ops / second across 192 threads). 
These results confirm that high-performance applications contain sophisticated, domain-specific read-sharing patterns; demonstrate that Leaf's storage protocols can handle them; and provide evidence that system developers find Leaf's perspective on temporarily read-shared resources useful. In this paper, though, we will mostly focus on the technical formalism of this perspective, so we use a smaller, self-contained example that is simple enough to explain in full, but still complex enough to show the utility of Leaf. Specifically, we make the following contributions: * We present Leaf, which has built-in deduction rules for temporarily shared state and a mechanism for user-defined sharing protocols based on ghost state, which we call _storage protocols_. * We show how storage protocols capture existing patterns, e.g., fractional and counting permissions. * We illustrate Leaf's modular specifications through a case study of a reader-writer lock and a hash table, demonstrating several different facets of sharing: * The hash table itself is shared between threads. * The hash table is composed of many reader-writer locks, which inherit that sharedness. * The memory cells in the reader-writer lock further inherit that sharedness, and they are accessed atomically by multiple threads. * The reader-writer locks allow temporary, shared read-only access to the hash table's memory slots. Furthermore, this is done via a clean, modular specification of the reader-writer lock. * We prove the soundness of Leaf and provide it as a library in the Iris framework (Jung et al., 2018) in Coq.1 We mechanize our case studies in Coq. These proofs are available as open source and in our supplementary materials. Footnote 1: [https://github.com/secure-foundations/leaf](https://github.com/secure-foundations/leaf) ## 2. Overview Leaf is constructed as a library in the _Iris separation logic_(Jung et al., 2018). We generally use Iris notation, where applicable, and all the standard Iris proof rules apply. We assume familiarity with separation logic basics (the connectives \(*\), \(\neg*\), and so on), but we will review key Iris features as they come up. ### Leaf Introduction: Resource Guarding The primary question we need to unravel is how to talk generally about "a shared \(P\)" for any proposition \(P\). Here, the proposition \(P\) might be something simple, like the permission to access a certain memory location \(\ell\) and read a specific value \(v\), denoted \(\ell\hookrightarrow v\), or it might be a more complex invariant. In order to make shared state reclaimable, we connect the "shared \(P\)" to some exclusively owned (not persistent) proposition. We do this via a relationship \(G\nrightarrow P\), pronounced \(G\)_guards_\(P\). \(G\nrightarrow P\) is itself a proposition; informally, it means that \(G\) can be used as a "shared \(P\)." Hence, if some program proof needs to operate over a "shared \(P\)," it can instead take \(G\) as an exclusively owned precondition, and use the relationship \(G\nrightarrow P\) when it needs to use \(P\). Later, \(G\) might be consumed (disallowing further shared access to \(P\)), and eventually the exclusive ownership of \(P\) might be reclaimed. In general, then, when we want to write a proof that operates over some read-only \(P\) in a way that abstracts over the way \(P\) is shared, we can write the proof to take ownership of some arbitrary Iris proposition \(G\) : _iprop_ where \(G\nrightarrow P\). 
To codify this pattern, we use a shorthand, \([X]\)\(\{P\}\)\(e\)\(\{Q\}\), to mean \(\forall G\) : _iprop_. \(\{P*G*(G\nrightarrow X)\}\)\(e\)\(\{Q*G\}\). This can be read as "if command \(e\) executed, with \(P\) owned at the beginning, and with \(X\) shared, then \(Q\) is owned at the end." We indicate shared resources in purple, though this is only a visual aid, and it has no syntactic meaning. For example, a program logic might allow writing to a memory location given exclusive ownership of \(\ell\hookrightarrow v\), but allow reading from it given _shared_ ownership of \(\ell\hookrightarrow v\). Leaf specifies this as: \[\begin{array}{ll}\textsc{\textsc{\small\small\small\small\small\small\small Heap -Write}}&\textsc{\textsc{\small\small\small\small\small\small Heap-Read- Shared}}\\ \{\ell\hookrightarrow v\}\ \ell\gets v^{\prime}\ \{\ell\hookrightarrow v^{\prime}\}&[\ell \hookrightarrow v]\ \{\}\!\ell\ \{r.v=r\}\end{array}\] Here, \(\ell\gets v^{\prime}\) is the command to write to the reference \(\ell\), while \(!\ell\) reads it. A bound variable in a postcondition, e.g. \(r\) here, represents the command's return value, so \(\textsc{\textsc{\small\small\small\small Heap-Read-Shared}}\) says, if we have a shared \(\ell\hookrightarrow v\) and read from \(\ell\), then we obtain a value equal to \(v\). ### Example: A Reader-Writer Lock Specification The reader-writer lock spec in Figure 1 illustrates several facets of our guarding system. The API of this lock has six functions: rwlock_new and rwlock_free are the constructor and destructor, respectively; lock_exc and unlock_exc are intended to allow exclusive, write access to some underlying resource; lock_shared and unlock_shared are intended to allow shared, read-only access. Exactly what this "resource" is may be determined by the client. Holding the spec together is the proposition \(\mathsf{IsRwLock}(rw,\gamma,F)\), which roughly says that the value \(rw\) is a reader-writer lock with a unique identifier \(\gamma\). \(F\) is used to specify the resource being protected--we will return to this in a moment. Note that when a new reader-writer lock is constructed (via rwlock_new) the client obtains exclusive ownership over \(\mathsf{IsRwLock}(rw,\gamma,F)\); on the other hand, the operations that are meant to run concurrently all take \(\mathsf{IsRwLock}(rw,\gamma,F)\) as _shared_. The destructor, rwlock_free, again requires non-shared ownership, as naturally it should not be able to run concurrently with other operations. Now, the client needs to specify what sort of resource they want to protect. For example, the client might want to protect access to some location in memory, say \(\ell\), so they would use the lock to protect resources of the form \(\ell\hookrightarrow v\). To allow the client to choose the kind of resource they want to protect, our specification lets the client, upon construction of the lock, provide a _proposition family_\(F:X\to\mathit{iProp}\) parameterized over some set \(X\). In the above example, we might have \(F=\lambda x:\mathit{Value}\). \(\ell\hookrightarrow x\) for some fixed \(\ell\) determined at the time of the rwlock_new() call. In the specification, observe that we then use \(F(x)\), for some \(x\), to represent the resource when it is obtained from the lock by lock_exc. Upon calling unlock_exc, the client then has to return some \(F(x^{\prime})\), where \(x^{\prime}\) might be different than \(x\). 
This makes sense, because lock_exc is supposed to be a write-lock, so the client should be able to manipulate the given resource at will, provided it restores the lock's invariants. Acquiring the shared lock is more interesting, since we have to acquire some \(F(x)\) resource in a _shared_ way. This is where the \(\nRightarrow\) operator comes in: rather than receiving \(F(x)\) directly, the client obtains a special resource \(\mathsf{Sh}(\gamma,x)\) (for some \(x\)), for which we have \(\mathsf{Sh}(\gamma,x)\nRightarrow F(x)\). Thus, the client has shared access to \(F(x)\) as long as it has the \(\mathsf{Sh}\), which must be relinquished upon release of the lock. We view \(\mathsf{Sh}\) as a separation logic analogue of a _lock guard_, an object in some locking APIs [The cppreference Team, 2011; The Rust Team, 2014] which exists for the duration of a held lock. Indeed, this inspires the _guarding_ name. Notice the choice of parameter set \(X\) and proposition family \(F\) determines exactly what it means to be "read-only," because it is the value \(x:X\) which is fixed until the client releases the shared lock. For example, we might set \(X=\mathbb{Z}\) and \(F=\lambda x:\mathbb{Z}\). \(\ell\hookrightarrow x\). Then the client can take a shared lock and obtain shared \(\ell\hookrightarrow x\) for some fixed integer \(x\), which cannot change until the lock is released. On the other hand, they might set, say, \(X=\mathbb{Z}_{2}\) and \(F=\lambda x:\mathbb{Z}_{2}\). \(\exists n\). \((\ell\hookrightarrow n)*(n=x\mod 2)\). In this case, upon taking the shared lock, they receive \(\ell\hookrightarrow v\), but now only the _parity_ of \(v\) is fixed. In fact, in this situation, the user would be able to update \(v\) to another value of the same parity (provided they do so in an atomic operation). Figure 1. Example specification for a reader-writer lock using Leaf notation. In §4, we show how to prove this specification for a particular implementation. The RwLock spec raises two questions: How can the client do interesting things when they have \(\operatorname{Sh}(\gamma,x)\), i.e., a "shared \(F(x)\)", rather than exclusive ownership of \(F(x)\)? Secondly, how can we verify a realistic lock implementation against this spec, which requires the deduction of a nontrivial guard relationship \(\operatorname{Sh}(\gamma,x)\nmapsto F(x)\)? Let us tackle these in turn. ### Utilizing Shared State How does the user actually benefit from shared state, i.e., state (like \(F(x)\)) under a guard operator, as in \((\operatorname{Sh}(\gamma,x)\nmapsto F(x))\) from the previous example? In general, if \(G\nmapsto P\), Leaf aims to let \(G\) be usable in any operation that could have used \(P\), provided that \(P\) is not modified. Such an operation might be given by the Iris operator called the _view shift_, as in \(P*A\nRightarrow P*B\). In general, the view shift (\(\nRightarrow\)) effectively says we can give up the resources on the left side to obtain the resources on the right. In the example, though, with \(P\) on both the left and right sides, \(P\) is _not_ consumed, although it _is_ needed to perform the operation. In this case, we could use \(G\) in place of \(P\); i.e., we would have \(G*A\nRightarrow G*B\).
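The lock-guard reading of \(\mathsf{Sh}(\gamma,x)\) from the reader-writer lock specification above has a direct operational counterpart in mainstream APIs such as Rust's `std::sync::RwLock`, one of the lock-guard APIs cited earlier. The sketch below is purely illustrative of that analogy; it is not part of Leaf, whose development and case studies are mechanized in Iris/Coq.

```rust
use std::sync::RwLock;

fn main() {
    // The protected resource plays the role of F(x), e.g. F(x) = (ℓ ↪ x).
    let cell = RwLock::new(7);

    {
        // `guard` is the operational analogue of Sh(γ, x): while it is alive,
        // the value x is fixed and may be read but not replaced.
        let guard = cell.read().unwrap();
        assert_eq!(*guard, 7);
        // Dropping `guard` at the end of this scope corresponds to giving
        // Sh(γ, x) back in unlock_shared.
    }

    // With no read guards outstanding, exclusive (write) access is available
    // again, corresponding to lock_exc / unlock_exc.
    *cell.write().unwrap() = 8;
}
```

Nothing in the reading code depends on how the lock implements sharing internally, which mirrors the way the specification only exposes the guard relationship between \(\mathsf{Sh}(\gamma,x)\) and \(F(x)\).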
Frequently, in order to perform such updates, we need to first compose multiple pieces of shared state together; for example, suppose we employ a fine-grained locking scheme, where a thread might hold multiple pieces of state from different locks in shared mode, which _all together_ are needed to perform a certain update or deduction. Ordinarily, we would compose the corresponding propositions with separating conjunction (\(*\)), but here, the pieces, being shared, might come from the same source and not actually be separated. To get around this, we use overlapping conjunction (\(\wedge\)) rather than \(*\) when dealing with shared state. It turns out that constructing sound deduction rules to use \(\wedge\) is subtle; the rule we give in SS3.2 requires a specific technical condition. We will show how all this works together through our fine-grained hash table example (SS5). ### Deducing Guard Relationships Towards verifying an implementation of a reader-writer lock, the most salient technical question is how we can construct nontrivial propositions like \(\operatorname{Sh}(\gamma,x)\) and prove guard relationships on them. To tackle this question, it helps to first look at simpler examples of nontrivial guard relationships. As such, let us take a look at _fractional permissions_(Boyland, 2003) and _counting permissions_(Bornat et al., 2005), two of the oldest known methods used to account for reclaimable read-shared permissions for memory (and other resources). Figure 2. Fractional and counting permissions expressed by the \(\nmapsto\) operator. The \(\nmapsto\) means we can perform an _update_ to exchange exclusive ownership of one side for the other, while \(\nplus\) means both sides are equivalent. \(\mathcal{F}\)_rac_ and _Count_ are arbitrary _namespaces_ (§3.4). In Leaf, these laws are derived from storage protocols (§3.4). #### 2.4.1. Fractional and Counting Examples In the fractional paradigm, the points-to proposition is labeled with a rational number \(q:Q\). These propositions combine additively: \((\ell\xrightarrow{\text{frac}}q\ v)*(\ell\xrightarrow{\text{frac}}q^{\prime}\ v) \dashv\dashv(\ell\xrightarrow{\text{frac}}q+q^{\prime}\ v)\), where the \(\dashv\) is bidirectional entailment, i.e., the two sides are equivalent. Write permission is given by \(\ell\xrightarrow{\text{frac}}v\) and read permission is given by any \(\ell\xrightarrow{\text{frac}}q\ v\) where \(q>0\). The idea is that the \(\ell\xrightarrow{\text{frac}}_{1}v\) can be split into multiple fractional pieces, which can be handed out and used in a read-only fashion, and then put back together to obtain write access, allowing the user to change \(v\). Intuitively, the reason this works is that one cannot reclaim write access without gathering _all_ the read-only pieces, since all of them are needed to sum back to \(1\). Thus anyone holding onto a read-only piece cannot have the value changed out from under them by another thread. Counting permissions, on the other hand, does not allow arbitrary splitting, but instead uses a centralized counter, \(\ell\xrightleftharpoons[]{\text{root}}_{n}v\ (n:\mathbb{N})\) to keep track of the number of extant read-only permissions, denoted \(\ell\xrightleftharpoons[]{\text{root}}v\). The user can increment the counter to obtain another read-only permission, or perform the inverse: \((\ell\xrightleftharpoons[]{\text{root}}_{n}v)\dashv\dashv(\ell \xrightarrow{\text{count}}_{n+1}v)*(\ell\xrightleftharpoons[]{\text{root}}v)\). 
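Before examining these laws in detail, it may help to recall the operational intuition behind the counting pattern in Figure 2: a reference count tracks the outstanding read-only handles, and write access is reclaimable only once the count returns to zero. The following sketch is a purely illustrative analogue using Rust's `Arc` (it is not part of Leaf; `Arc::get_mut` grants mutable access only when no other handles exist).

```rust
use std::sync::Arc;

fn main() {
    let mut shared = Arc::new(5);

    // Handing out a read-only handle bumps the counter, like going from n to n+1 readers.
    let reader = Arc::clone(&shared);
    assert_eq!(*reader, 5);
    assert!(Arc::get_mut(&mut shared).is_none()); // a reader exists: no write access

    // Returning the handle decrements the counter again.
    drop(reader);

    // With zero outstanding readers, exclusive write access is reclaimable,
    // like the rule that a zero count grants write permission.
    if let Some(value) = Arc::get_mut(&mut shared) {
        *value = 6;
    }
    assert_eq!(*shared, 6);
}
```

The logical laws below capture exactly this discipline, but at the level of separation logic propositions rather than runtime counters.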
Meanwhile, \(\ell\xrightleftharpoons[]{\text{root}}_{0}v\) gives write permission; i.e., we can write as long as there are zero read permissions in existence. In Leaf, we can express both these patterns using \(\nRightarrow\), as shown in Figure 2. The idea of the "write permission" is expressed by saying that \(\ell\xrightleftharpoons[]{\text{frac}}_{1}v\) can be exchanged for \(\ell\xrightleftharpoons[]{\text{root}}v\), and vice versa; the "read permission" is expressed by the guards relationship: \((\ell\xrightleftharpoons[]{\text{frac}}q\ v)\nRightarrow_{\text{frac}}(\ell \xrightleftharpoons[]{\text{root}}v)\). (We explain the \(\mathcal{F}rac\) label later.) The same approach works for the counting permissions. Note that with this setup, we do not need to prove the heap read and write rules for the fractional and counting permissions individually. Rather, we simply apply the more general Heap-Writte and Heap-Read-Shared rules from earlier along with any guard relationship, such as one from Figure 2. #### 2.4.2. Nontrivial Guarding with Storage Protocols Now, we return to our question from earlier: how can we, in general, soundly construct propositions like \((\ell\xrightleftharpoons[]{\text{frac}}q\ v)\), \((\ell\xrightleftharpoons[]{\text{root}}v)\), or \(\text{Sh}(\gamma,x)\)? What are the primitive deduction rules for \(\nRightarrow\), and in particular, what are the rules that allow us to prove nontrivial \(\nRightarrow\) relations on those propositions? All of these can be constructed by via a Leaf formalism called a _storage protocol_, so named because they allow the user to "store" propositions (e.g., by \((\ell\xrightleftharpoons[]{\text{root}}v)\nRightarrow_{\mathcal{F}rac}( \ell\xrightleftharpoons[]{\text{frac}}0)\)) and then access them in a shared manner. The core idea is based off of _custom ghost state_, a concept wherein a user may define their own resources and derive update relations \((\nRightarrow)\). Storage protocols extend the concept to also allow the derivation of guard relations \((\nRightarrow)\). The propositions constructed by the protocol are able to guard arbitrary propositions that have no intrinsic notion of being shareable. For example, in the fractional example, \(\ell\xrightleftharpoons[]{\text{root}}v\) has no notion of being shareable; rather, _given_\(\ell\xrightleftharpoons[]{\text{root}}v\), without knowing anything about its definition, Leaf allows us to construct the \(\ell\xrightleftharpoons[]{\text{frac}}q\ v\) proposition with a particular guard relationship to \(\ell\xrightleftharpoons[]{\text{root}}v\). This is an instance of the same feature that lets us have \(\text{Sh}(\gamma,x)\nRightarrow F(x)\) parameterized by an arbitrary proposition family \(F\). ### Outline Throughout the paper, we explore these examples in more detail. We first formally introduce \(\nRightarrow\) and its elementary deduction rules, and sketch how we can derive rules like Heap-Read-Shared within Leaf. We then introduce our new formulation of custom ghost state, allowing the deduction of nontrivial \(\nRightarrow\) propositions, such as those in Figure 2. We show how to verify a reader-writer lock, proving the specification (Figure 1) holds for a particular implementation. To illustrate Leaf's modular specifications, we then build another application on top of the reader-writer lock. Finally, we discuss our construction of Leaf within Iris and the definition of \(\nRightarrow\), proving our laws sound. ## 3. 
The Leaf Logic We begin our presentation by reviewing the concept of custom ghost state. The formulation we present first is (largely) standard, yet still significant within Leaf. Then we will dive into Leaf's \(\nrightarrow\) operator and our new extension of custom ghost state. ### Custom Ghost State (Background) _Custom ghost state_ in Iris is a mechanism through which the user can soundly construct their own resource with custom update rules. We present a simplified version of it here, based primarily on the "Iris 1.0" formulation (Jung et al., 2015). The construction is parameterized by a _partial commutative monoid_ (PCM) whose elements form the basis of the resource. Formally, a PCM is a set \(M\) (the _carrier_ set) with a composition operator \(\cdot:M\times M\to M\) which is associative and commutative, and with a unit element \(\epsilon\). We let \(a\preceq b\triangleq(\exists c.\ a\cdot c=b)\). The partiality is represented by a _validity predicate_\(\mathcal{V}:M\rightarrow\mathsf{Bool}\), where \(\mathcal{V}(\epsilon)\) and \(\forall a,b.\ a\preceq b\land\mathcal{V}(b)\Rightarrow\mathcal{V}(a)\). We also define a derived relation \(\nrightarrow\) called the _frame-preserving update_, \[a\nrightarrow b\triangleq\forall c.\ \mathcal{V}(a\cdot c)\Rightarrow \mathcal{V}(b\cdot c).\] Essentially, \(a\) can transition to \(b\) if for any valid way of "completing" state, the state would remain valid after the transition. For any such \(M\), Iris shows the rules in Figure 3 are sound for a proposition written \(\{\stackrel{{\bullet}}{{a}}\}^{\mathsf{Y}}_{i}\) for \(a:M\). The \(\gamma:\)_Name_ is a _ghost name_ (sometimes called _ghost location_) from an arbitrary, infinite set of available names. These rules show, for instance, that the compositional structure \((\cdot)\) of the monoid determines the compositional structure within the logic, i.e., \(\{\stackrel{{\bullet}}{{a}}\}^{\mathsf{Y}}_{i}\) is equivalent to \(\{\stackrel{{\bullet}}{{a}}\}^{\mathsf{Y}}_{i}\ast\{\stackrel{{ \bullet}}{{b}}\}^{\mathsf{Y}}_{i}\). Likewise, an update \(a\nrightarrow\)\(b\) means that we can exchange \(\{\stackrel{{\bullet}}{{a}}\}^{\mathsf{Y}}_{i}\) for \(\{\stackrel{{\bullet}}{{b}}\}^{\mathsf{Y}}_{i}\) as resources within the logic: \(\{\stackrel{{\bullet}}{{a}}\}^{\mathsf{Y}}_{i}\equiv\stackrel{{ \bullet}}{{b}}\stackrel{{\bullet}}{{b}}\stackrel{{ \bullet}}{{b}}\stackrel{{\bullet}}{{b}}\) as given by PCM-Update. The operator \(\nrightarrow\) is called _view shift_, and it essentially means we can give up the resource on the left-hand to obtain the resource on the right. VS-Hoare says we can perform such updates at any program point during a proof. Note that the view shifts can also be annotated with a _mask_, denoted \(\nrightarrow_{\mathcal{E}}\); we discuss this further in the next section. Example 3.1 ().: An archetypal PCM is the _exclusive monoid_, \(\textsc{Excl}(X)\), for a given set \(X\). The elements of \(\textsc{Excl}(X)\) are made out of the following symbols: \[\epsilon\mid\mathsf{ex}(x)\mid\dot{\varepsilon}\qquad\text{with }\forall x,y.\ \mathsf{ex}(x)\cdot\mathsf{ex}(y)=\dot{\varepsilon}\text{ and }\forall a,a\cdot \epsilon=a\text{ and }a\cdot\dot{\varepsilon}=\dot{\varepsilon}\] Here, \(\epsilon\) is the unit element, representing ownership of nothing, the value \(\mathsf{ex}(x)\) represents exclusive ownership of a state \(x\), and \(\dot{\varepsilon}\) represents the impossible "conflict" state of multiple ownership claims. 
The elements \(\epsilon\) and \(\mathsf{ex}(x)\) are all considered "valid," while \(\dot{\varepsilon}\) is "invalid," i.e., \(\mathcal{V}(\dot{\varepsilon})=\mathsf{False}\). One can show that for any \(x,y:X\), \(\mathsf{ex}(x)\nrightarrow\mathsf{ex}(y)\), which implies the view shift \(\{\stackrel{{\bullet}}{{\mathsf{ex}(x)}}\}^{\gamma}\nRightarrow\{\stackrel{{\bullet}}{{\mathsf{ex}(y)}}\}^{\gamma}\) by PCM-Update. That is, given ownership of the state, one can freely update it. _Using the overlapping conjunction._ We make a point to include a rule for overlapping conjunction, since in dealing with shared state we often have the potential for overlap. PCM-And lets us deduce, from an overlapping conjunction \(\{\stackrel{{\bullet}}{{x}}\}^{\gamma}\wedge\{\stackrel{{\bullet}}{{y}}\}^{\gamma}\) of two ownership assertions over the same ghost name, ownership of a single element compatible with both \(x\) and \(y\). ### Guarding One instance of guarding is a proposition \(G\nmapsto I\) in which \(I\) is an _invariant_, and this means that having \(G\) allows shared access to \(I\). Guards, like view shifts, are annotated with a mask \(\mathcal{E}\), as we discuss below. The basic rules for \(\nmapsto_{\mathcal{E}}\) are given in Figure 4. For example, Guard-Refl says that a \(P\) represents a shared \(P\), while Guard-Trans says that if \(Q\) is a shared \(R\), then a shared \(Q\) is also a shared \(R\). Guard-Pers and Unguard-Pers show how persistent propositions can move into or out from under a guard. One must be careful not to use a guard reentrantly, accessing the same shared resource twice over; this is similar to invariant reentrancy, so it should be no surprise that we solve the problem the same way, that is, via mask sets.
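To make the ghost-state ingredients of §3.1 a bit more tangible before we continue, the following sketch spells out Example 3.1's exclusive monoid as executable data. Rust is used purely for illustration here; the actual Leaf development is a Coq library, and the `Excl`, `compose`, and `valid` names below are ours.

```rust
// The carrier of Excl(X): ε, ex(x), and the invalid "conflict" element.
#[derive(Clone)]
enum Excl<X> {
    Unit,
    Ex(X),
    Conflict,
}

// Composition: ε is the unit; combining two ownership claims (or anything
// involving the conflict element) yields the conflict element.
fn compose<X>(a: Excl<X>, b: Excl<X>) -> Excl<X> {
    match (a, b) {
        (Excl::Unit, c) | (c, Excl::Unit) => c,
        _ => Excl::Conflict,
    }
}

// Validity: everything except the conflict element is valid.
fn valid<X>(a: &Excl<X>) -> bool {
    !matches!(a, Excl::Conflict)
}

fn main() {
    // Frame-preserving update ex(1) ⇝ ex(2), checked against a few frames:
    // whenever ex(1) · c is valid (forcing c = ε), ex(2) · c is valid too.
    for frame in [Excl::Unit, Excl::Ex(7), Excl::Conflict] {
        if valid(&compose(Excl::Ex(1), frame.clone())) {
            assert!(valid(&compose(Excl::Ex(2), frame)));
        }
    }
}
```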
_Guards and implications._ One might expect a rule where we use \(G\nmapsto I\) and \(I\vdash J\) to conclude \(G\nmapsto J\). This _does_ work when we can write \(I=J*J^{\prime}\) for some \(J^{\prime}\) (Guard-Split), but it does not hold in general: consider, for example, a judgment such as \((\ell\leftrightarrow 1)\vdash(\exists x.\ \ell\leftrightarrow x)\). It would be unsound if a user sharing \(\ell\leftrightarrow 1\) could "downgrade" it to the right-hand side; that user could then update \(\ell\) to a different value and invalidate the proposition the other users were relying on. Interestingly, it turns out that there are some propositions \(J\) such that any judgment \(I\vdash J\)_can_ always be "split" into \(I=J*J^{\prime}\). Specifically, this happens whenever \(J\) is of the form \(\{\stackrel{{\bullet}}{{a}}\}^{\gamma}\) or a conjunction thereof. We call these _point propositions_ and indicate them by \(\mathsf{point}(J)\) (PointProp-Own, PointProp-Sep). For such \(J\), we can indeed conclude \(G\nmapsto J\) (Guard-Implies). _Overlapping Conjunction._ How can we compose shared state? We certainly cannot have a rule like \((G\nmapsto_{\mathcal{E}}A)*(H\nmapsto_{\mathcal{E}}B)\vdash((G*H)\nmapsto_{\mathcal{E}}(A*B))\). After all, \(A\) and \(B\) might be shared from the same source and thus not be properly separated. Somewhat surprisingly, this rule is not even sound if we require the masks to be disjoint. (See Appendix B.4 for a concrete counterexample.) Instead of using \(*\), we use \(\wedge\). One might instead conjecture a \(\wedge\)-based rule like \((G\nmapsto_{\mathcal{E}}A)*(G\nmapsto_{\mathcal{E}}B)\vdash(G\nmapsto_{\mathcal{E}}A\wedge B)\); this rule still is not sound on its own (again, see Appendix B.4 for a concrete counterexample), but fortunately, it becomes sound as long as we add another point proposition condition (Guard-And). This rule is especially useful in combination with PCM-And, which can be used to deduce the premise of Guard-And. ### Using \(\nmapsto\) in a program logic _Deriving heap rules._ Iris is not a separation logic for a single programming language; rather, it is a general separation logic _framework_ which can be used to instantiate a program logic for any user-provided programming language. In other words, rules like the following, which might be considered "primitive" rules within a program logic, can actually be derived soundly within Iris. \[\begin{array}{ll}\textsc{Heap-Ref}&\textsc{Heap-Free}&\textsc{Heap-Write}&\textsc{Heap-Read}\\ \{\}\ \mathsf{ref}(v)\ \{\ell.\ \ell\leftrightarrow v\}&\{\ell\leftrightarrow v\}\ \mathsf{free}(\ell)\ \{\}&\{\ell\leftrightarrow v\}\ \ell\leftarrow v^{\prime}\ \{\ell\leftrightarrow v^{\prime}\}&\{\ell\leftrightarrow v\}\ !\ell\ \{r.\ \ell\leftrightarrow v*v=r\}\end{array}\] Let us overview this process, and then explain how it works with Leaf's \(\nmapsto\) in the picture. To instantiate a program logic, the user provides their programming language and its operational semantics. Here, we consider heap semantics operating over a state given by \(\sigma:Loc\stackrel{{\mathrm{fin}}}{{\longrightarrow}}\)_Value_, with allocation (ref), deallocation (free), assignment (\(\leftarrow\)) and reading (!). 
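Concretely, the kind of heap semantics being instantiated here can be pictured with a small executable model of the state \(\sigma\). This Rust sketch is purely illustrative and is not part of the paper's Coq development; the `Heap` type and its method names are ours.

```rust
use std::collections::HashMap;

type Loc = u64;
type Value = i64;

// σ : Loc -fin-> Value, the state the operational semantics acts on.
struct Heap {
    map: HashMap<Loc, Value>,
    next: Loc,
}

impl Heap {
    fn new() -> Self {
        Heap { map: HashMap::new(), next: 0 }
    }

    // ref(v): allocate a fresh location holding v and return it.
    fn alloc(&mut self, v: Value) -> Loc {
        let l = self.next;
        self.next += 1;
        self.map.insert(l, v);
        l
    }

    // free(ℓ): deallocate; panics if ℓ is not allocated.
    fn free(&mut self, l: Loc) {
        self.map.remove(&l).expect("free of unallocated location");
    }

    // ℓ <- v: assignment.
    fn write(&mut self, l: Loc, v: Value) {
        *self.map.get_mut(&l).expect("write to unallocated location") = v;
    }

    // !ℓ: read.
    fn read(&self, l: Loc) -> Value {
        *self.map.get(&l).expect("read of unallocated location")
    }
}

fn main() {
    let mut sigma = Heap::new();
    let l = sigma.alloc(1);
    sigma.write(l, 2);
    assert_eq!(sigma.read(l), 2);
    sigma.free(l);
}
```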
Next, the user gives meaning to the heap state \(\sigma\) within the separation logic by defining an interpretation of the heap state as a proposition, \(\mathcal{H}(\sigma):\mathit{iProp}\), along with propositions to be manipulated by the user within the program logic (here, \(\ell\leftrightarrow v\)). Finally, they prove the primitive heap rules via corresponding updates or entailments. For example, the following suffice to show the above four Hoare rules. \[\mathcal{H}(\sigma) \Rightarrow\exists\ell.\ \mathcal{H}(\sigma[\ell:=v])*(\ell \leftrightarrow v^{\prime})\] (AllocUpd) \[\mathcal{H}(\sigma)*(\ell\leftrightarrow v) \Rightarrow\mathcal{H}(\sigma\backslash\{\ell\})\] (FreeUpd) \[\mathcal{H}(\sigma)*(\ell\leftrightarrow v) \vdash(\sigma(\ell)=v)\] (ReadEq) \[\mathcal{H}(\sigma)*(\ell\leftrightarrow v) \Rightarrow\mathcal{H}(\sigma[\ell:=v^{\prime}])*(\ell \leftrightarrow v^{\prime})\] (WriteUpd) Thus, it suffices for the user to construct \(\mathcal{H}(\sigma)\) and \(\ell\leftrightarrow v\) so that the above hold; this can be done via a custom PCM construction, using PCM-Valid to prove ReadEq, PCM-Update to prove WriteUpd and FreeUpd, and PCM-Alloc to prove AllocUpd. Now, the new rule we want to construct is, for any \(\ell,v,\mathcal{E},\mathcal{E}_{1}\), \[\begin{array}{l}\textsc{Heap-Read-Shared}\\ \ [\ell\leftrightarrow v]_{\mathcal{E}}\ \mathsf{\{}}\mathsf{\{}}\mathsf{\}\mathsf{\} \mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\} \mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\} \mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\} \mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\} \mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\} \mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\} \mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\} \mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\} \mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\} \mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\} \mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\} \mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\} \mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\} \mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\} \mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\} \mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\} \mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\} \mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\} \mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\}\mathsf{\} 
Expanding the notation, this is equivalent to, for any \(G:\mathit{iProp}\),
\[\{G\ast(G\nrightarrow_{\mathcal{E}}(\ell\hookrightarrow v))\}\ !\ell\ \{v^{\prime}.\ G\ast(v=v^{\prime})\}_{\mathcal{E}\cup\mathcal{E}_{1}}\]
This follows from,
\[(G\nrightarrow_{\mathcal{E}}(\ell\hookrightarrow v))\ \vdash\ \mathcal{H}(\sigma)\ast G\nrightarrow_{\mathcal{E}\cup\mathcal{E}_{1}}\mathcal{H}(\sigma)\ast G\ast(\sigma(\ell)=v)\]
and this in turn follows from Unguard-Pers and ReadEq. Notably, we do not need to re-do the construction of \(\mathcal{H}(\sigma)\) or \(\ell\hookrightarrow v\) to support the derivation of Heap-Read-Shared. Along with the new deduction rules for \(\nrightarrow\), the old construction "just works."

_Atomic Invariants._ Propositions shared via guarding can serve as _atomic invariants_; i.e., we can obtain _exclusive_ ownership of a shared proposition for the duration of an atomic operation, as long as we restore the invariant at the end of the operation.
\[\begin{array}{c}
\textsc{Guard-Atomic-Inv}\\[2pt]
\dfrac{[\cdots]\ \{P\ast X\}\ e\ \{Q\ast X\}_{\mathcal{E}_{1}}\qquad\mathcal{E}\cap\mathcal{E}_{1}=\emptyset\qquad e\text{ is atomic}}{[X]_{\mathcal{E}}\ [\cdots]\ \{P\}\ e\ \{Q\}_{\mathcal{E}\cup\mathcal{E}_{1}}}
\end{array}\]
_Non-Atomic Memory._ The heap semantics in the preceding example use sequentially consistent, atomic heap operations. But what about other memory ordering models? We can also apply Leaf to heap semantics that model _non-atomic memory access_, i.e., memory accesses for which data races are entirely disallowed, alongside atomic operations. Non-atomic memory has been modeled before in Iris, e.g., by RustBelt (Jung et al., 2017), which models each non-atomic operation as two execution steps in order to detect overlapping operations. We can apply Leaf to this situation, and prove Heap-Read-Shared for non-atomic reads; however, the proof is slightly more challenging than it is for the purely-atomic heap semantics, primarily because the heap semantics have to model non-atomic reads as effectful operations. To get around this, we need to be slightly clever in our definition of \(\mathcal{H}(\sigma)\); see Appendix G for a sketch.
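To illustrate the two-step modeling of non-atomic accesses, here is a small sketch of our own (it is not RustBelt's actual semantics, nor the construction in the appendix): each non-atomic access is split into a begin step and an end step, and an access that begins while a conflicting one is still open is flagged as a race.

```rust
use std::collections::HashMap;

// Each non-atomic access is modeled as two steps (begin/end) so that an
// overlapping access -- a data race -- is observable as a fault.
#[derive(Default, Clone, Copy)]
struct Window { readers: u32, writers: u32 }

#[derive(Default)]
struct TwoStepHeap {
    vals: HashMap<u64, i64>,
    open: HashMap<u64, Window>,
}

impl TwoStepHeap {
    fn begin_read(&mut self, l: u64) -> Result<i64, &'static str> {
        let v = self.vals.get(&l).copied().ok_or("unallocated")?;
        let w = self.open.entry(l).or_default();
        if w.writers > 0 { return Err("race: read overlaps a write"); }
        w.readers += 1;
        Ok(v)
    }
    fn end_read(&mut self, l: u64) { self.open.get_mut(&l).unwrap().readers -= 1; }

    fn begin_write(&mut self, l: u64, v: i64) -> Result<(), &'static str> {
        let w = self.open.entry(l).or_default();
        if w.writers > 0 || w.readers > 0 { return Err("race: write overlaps an access"); }
        w.writers += 1;
        self.vals.insert(l, v);
        Ok(())
    }
    fn end_write(&mut self, l: u64) { self.open.get_mut(&l).unwrap().writers -= 1; }
}

fn main() {
    let mut h = TwoStepHeap::default();
    h.vals.insert(0, 1);
    // Two overlapping reads are fine ...
    assert!(h.begin_read(0).is_ok());
    assert!(h.begin_read(0).is_ok());
    h.end_read(0); h.end_read(0);
    // ... but a write that overlaps a read is a race.
    assert!(h.begin_read(0).is_ok());
    assert!(h.begin_write(0, 2).is_err());
    h.end_read(0);
    let _ = h.end_write; // end_write is part of the model but unused in this demo
}
```

In a model like this, a read is only race-free if no write window overlaps it, which is exactly the situation in which Heap-Read-Shared should be derivable from a shared \(\ell\hookrightarrow v\).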
### Storage Protocols

Leaf's _storage protocol_ is a formulation of custom ghost state whose unique feature is its laws allowing deductions of nontrivial \(\nrightarrow\) propositions. Storage protocols are similar to the ghost state presented earlier, which embeds elements of a monoid as separation logic propositions. A storage protocol is specified by a _protocol monoid_ \(P\) (with a basic validity predicate \(\mathcal{V}\)) and a _storage monoid_ \(S\), together with a predicate \(\mathcal{C}\) over \(P\) identifying the well-formed complete states, a map \(\mathcal{S}:P\to S\) giving the storage associated with each complete state, and a user-chosen family of propositions \(F:S\to\mathit{iProp}\) (respecting monoid composition) describing what is stored. The proposition \(\operatorname{sto}(\gamma,F)\) says that a protocol instance named \(\gamma\) stores propositions according to \(F\), while \(\langle p\rangle^{\gamma}\) asserts ownership of the protocol element \(p\). Figure 5 summarizes the rules. _Exchanges_ between protocol elements and storage elements give rise to ghost updates (SP-Update), operating on \(\langle p\rangle^{\gamma}\) and \(F(s)\). Specifically, a deposit of \(s\) allows an update that gives up ownership of \(F(s)\), while a withdraw of \(s\) allows an update that obtains ownership of \(F(s)\). Now, with the ability to "deposit" elements into the protocol and "withdraw" them, we can add the ability to have shared access to those stored elements: the derived relation \(p\nrightarrow s\) gives rise to \(\langle p\rangle^{\gamma}\nrightarrow F(s)\) by SP-Guard. Let us unpack the definition of \(\nrightarrow\) to understand intuitively why this should work: \(p\nrightarrow s\) is defined as
\(\forall q.\ \mathcal{C}(p\cdot q)\Rightarrow s\preceq\mathcal{S}(p\cdot q)\); this essentially says that \(p\) is a "witness" that the value \(s\) is stored; i.e., if we have ownership of \(p\), then any "completion" of the state \(p\cdot q\), which is valid according to the validity function \(\mathcal{C}\), must be storing something \(\geq s\).

Figure 5. Storage protocols and derived relations.

_Initialization of a protocol._ When initializing a protocol (SP-Alloc) we get to specify \(F\), and we also get to specify the initial element \(s:S\) while giving up \(F(s)\), the initial proposition to be stored. Inheriting yet another trick from Iris, we can also specify a _namespace_ \(\mathcal{N}\), a subset of _Name_, that the resulting protocol name has to be in. Using fixed namespaces (such as \(\mathit{Frac}\) or \(\mathit{Count}\) from Figure 2) is often more convenient than managing individual names \(\gamma\) that cannot be known _a priori_.

Example 3.2 (Fractional protocol for a single proposition): To start simple, let us suppose we have a single proposition, \(Q\), that we would like to manage. Set the protocol monoid \(P\triangleq\mathbb{Q}_{\geq 0}\) and the storage monoid \(S\triangleq\mathbb{N}\), with composition as addition in both cases and a unit of \(0\). Set \(\mathcal{C}\) to be true exactly on the integers, and for integers \(n\), set \(\mathcal{S}(n)\triangleq n\). Let \(\mathcal{V}\) always be true. Now, we have the exchange \((1,0)\rightsquigarrow(0,1)\) (also written as a withdraw, \(1\rightsquigarrow(0,1)\)) and the reverse, \((0,1)\rightsquigarrow(1,0)\) (also written as a deposit, \((0,1)\rightsquigarrow 1\)). Finally, for any \(q>0\), we have \(q\nrightarrow 1\). This is the key property that says the fraction \(q\) can act as a read-only element, and it follows from the following argument: if \(q^{\prime}\geq q\) and \(\mathcal{C}(q^{\prime})\) holds, then \(q^{\prime}\) is an integer and \(\mathcal{S}(q^{\prime})=q^{\prime}\geq 1\). Finally, set the proposition family \(F(n)\triangleq Q\ast\cdots\ast Q\), i.e., \(Q\) conjoined \(n\) times. Now we can say that,
\[\begin{array}{ll}
\mathsf{True}\Rightarrow\exists\gamma.\ \operatorname{sto}(\gamma,F)&\text{(via SP-Alloc)}\\
\operatorname{sto}(\gamma,F)\vdash\triangleright Q\Rightarrow_{\gamma}\langle 1\rangle^{\gamma}&\text{(via SP-Deposit)}\\
\operatorname{sto}(\gamma,F)\vdash\langle 1\rangle^{\gamma}\Rightarrow_{\gamma}\triangleright Q&\text{(via SP-Withdraw)}\\
\operatorname{sto}(\gamma,F)\vdash\langle q\rangle^{\gamma}\nrightarrow_{\gamma}\triangleright Q&\text{(via SP-Guard)}
\end{array}\]

Example 3.3 (Fractional memory permissions): Assume points-to propositions \(\ell\hookrightarrow v\) are given. We wish to construct \(\ell\stackrel{\mathsf{frac}}{\hookrightarrow}_{q}v\) and the laws as given in Figure 2. We take \(P=(\mathit{Loc}\times\mathit{Value})\stackrel{\mathsf{fin}}{\longrightarrow}\mathbb{Q}_{\geq 0}\) and \(S=(\mathit{Loc}\times\mathit{Value})\stackrel{\mathsf{fin}}{\longrightarrow}\mathbb{N}\), defining \(\cdot,\mathcal{V},\mathcal{C},\mathcal{S}\) elementwise using the definitions of the previous example. Define \(F\) such that \(F([(\ell,v)\mapsto 1])=\ell\hookrightarrow v\).
Instantiate the protocol to obtain a name \(\gamma\in\mathit{Frac}\), and then set \(\ell\stackrel{\mathsf{frac}}{\hookrightarrow}_{q}v\triangleq\langle[(\ell,v)\mapsto q]\rangle^{\gamma}\). From here we can derive the appropriate withdraw, deposit, and guard. The counting protocol (Figure 2) can be done similarly. (See Appendix C for details.)

Example 3.4 (Forever Protocol): The most basic sharing pattern is to make something freely shareable forever (analogous to Iris invariants \(\boxed{Q}\)). We can express this succinctly by guarding \(Q\) with \(\mathsf{True}\): \(Q\Rightarrow(\mathsf{True}\nrightarrow_{\mathit{Forever}}\triangleright Q)\). To derive this, we use a storage protocol: let the protocol monoid \(P\) be the trivial monoid \(\{\epsilon\}\) and the storage monoid \(S\) be \(\textsc{Excl}(1)\), with \(\mathcal{S}(\epsilon)\triangleq\operatorname{ex}(1)\). This protocol has no interesting updates; it can only be initialized. Let \(F(\operatorname{ex}(1))\triangleq Q\). By SP-Alloc we have \(Q\Rightarrow\exists\gamma.\ \operatorname{sto}(\gamma,F)\ast\langle\epsilon\rangle^{\gamma}\ast(\gamma\in\mathit{Forever})\). Now we can chain \(\mathsf{True}\nrightarrow\langle\epsilon\rangle^{\gamma}\) (by Guard-Pers) and \(\langle\epsilon\rangle^{\gamma}\nrightarrow_{\mathit{Forever}}(\triangleright Q)\) (by SP-Guard).

### Handling the later modality \(\triangleright\)

Note that some rules in Figure 4(b) use the _later modality_, \(\triangleright\), a feature of step-indexed logics like Iris. In Leaf, \(\triangleright\) allows us to dynamically specify the proposition families \(F\) during protocol initialization (SP-Alloc); without \(\triangleright\), this would be unsound. If we gave up that ability and instead specified all families _a priori_, we could remove \(\triangleright\) from the rest of the rules. (This is analogous to Iris requiring \(\triangleright\) for dynamically allocated invariants.) Leaf also provides rules (Later-Guard, Later-Pers-Guard) to eliminate \(\triangleright\) from within guards; for example, when \(P\) is timeless, a guard of \(\triangleright P\) yields a guard of \(P\). Timelessness (Krebbers et al., 2017) is a technical condition that effectively says a proposition is "independent of the step-index," which makes it easier to account for \(\triangleright\) modalities.
Timelessness holds for both the PCM-based ghost state and the protocol ownership propositions \(\langle p\rangle^{\gamma}\) used in this paper.
## 4. Rwlock Example: Verifying a custom protocol for sharing state

Our two case studies are arranged to show the two "halves" of Leaf: the first one (this section) demonstrates the verification of a sharing protocol that lets the client acquire shared state, while our second one (§5) shows how a client can make use of the shared state.

Figure 6. Implementation of a reader-writer lock. We assume all heap operations are atomic, including CAS (compare-and-swap) and FetchAdd.

Figure 7. Example execution of two threads using a shared reader-writer lock. This is an "ideal" execution, without contention or retries. First, we see Thread 1 acquire the exclusive lock. This gives them exclusive control over the resource, which they can therefore modify (here, changing it from \(y\) to \(x\)) before releasing the lock. We then see both threads acquire a shared lock, where they simultaneously have read access to the \(x\) resource. On the left, we annotate each step with the ghost resource update from Figure 8 it corresponds to.

Specifically, in this section, we verify a reader-writer lock, one which is slightly more complicated than what is captured directly by a standard permission logic, a situation which storage protocols were designed for. The implementation of our reader-writer lock is shown in Figure 6, with an example execution trace in Figure 7. The implementation's main complication here is the fact that acquiring a lock is a two-step process: a thread might increment the reference counter, but then fail to acquire the lock in the second step. Hence, the physical value of the reference counter may not match the number of extant read-references. Initially, this design might seem strange--why not just put all the data in a single atomic field to simplify the design? However, the use of distinct fields is an essential element of more complex lock designs, such as the multi-counter design mentioned in the introduction, where each counter goes on a different cache line. In fact, simply having an intermediate state at all captures most of the complexity of the multi-counter design, so we use the single-counter implementation here--see Appendix E for details on the multi-counter case.

_Proof Overview._ We tackle the proof in two stages: first, we devise some useful ghost resources; then, we use those resources in the program logic to prove the implementation meets the specification (Figure 1). The key is to find the right resources and the relationships between them. The RwLock specification (Figure 1) already indicates three propositions that we need to construct in one way or another: \(\mathsf{IsRwLock}(rw,\gamma,F)\), \(\mathsf{Exc}(\gamma)\), and \(\mathsf{Sh}(\gamma,x)\). We also have \(\mathsf{Sh}(\gamma,x)\nrightarrow_{\gamma}F(x)\) as a desired property. This gives us a reason to use a storage protocol: it allows us to construct new resources and derive \(\nrightarrow\) relationships.
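Before constructing those resources, it may help to see the implementation shape concretely. The following Rust sketch is our own rendering for illustration only; it is not the verified artifact of Figure 6, but it follows the same shape: lock_exc CASes the exc flag and then waits for the reference count to drain, while lock_shared increments the count first and then checks the flag, undoing the increment if a writer is present (the retry path here is one plausible choice).

```rust
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering::SeqCst};

// exc: whether a writer holds (or is in the middle of acquiring) the lock.
// rc:  readers holding (or in the middle of acquiring) the lock.
pub struct RwLock {
    exc: AtomicBool,
    rc: AtomicUsize,
}

impl RwLock {
    pub fn new() -> Self {
        RwLock { exc: AtomicBool::new(false), rc: AtomicUsize::new(0) }
    }

    // Two steps: (1) CAS exc from false to true (the "ExcPending" state),
    // (2) wait until rc == 0 (the "Exc" state).
    pub fn lock_exc(&self) {
        while self.exc.compare_exchange(false, true, SeqCst, SeqCst).is_err() {}
        while self.rc.load(SeqCst) != 0 {}
    }

    pub fn unlock_exc(&self) {
        self.exc.store(false, SeqCst);
    }

    // Two steps: (1) increment rc (the "ShPending" state),
    // (2) check exc; on success this is the "Sh" state, otherwise back out and retry.
    pub fn lock_shared(&self) {
        loop {
            self.rc.fetch_add(1, SeqCst);
            if !self.exc.load(SeqCst) {
                return;
            }
            self.rc.fetch_sub(1, SeqCst); // abandoned attempt: undo the increment
        }
    }

    pub fn unlock_shared(&self) {
        self.rc.fetch_sub(1, SeqCst);
    }
}

fn main() {
    let lock = RwLock::new();
    lock.lock_shared();
    lock.lock_shared();
    lock.unlock_shared();
    lock.unlock_shared();
    lock.lock_exc();
    lock.unlock_exc();
}
```

The intermediate points in this sketch, after the CAS but before observing the count, and after the increment but before observing the flag, are exactly the "pending" states that the ghost resources below must represent.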
Since the storage protocol and its resulting resources are more elaborate than in the previous examples, we will take the time here to explain exactly how to come up with the protocol. First, we naturally need a component to represent the lock's internal state, which we call \(\mathsf{Fields}(\gamma,\mathit{exc},\mathit{rc},x)\), containing both the \(\mathit{exc}\) and \(\mathit{rc}\) fields, and also the stored value, \(x\). We can tie the first two fields to the physical, in-memory values with a proposition like the following:
\[\mathsf{IsRwLock}(rw,\gamma,F)\triangleq\exists\mathit{exc},\mathit{rc},x.\ \mathsf{Fields}(\gamma,\mathit{exc},\mathit{rc},x)\ast(\mathit{rw.exc}\hookrightarrow\mathit{exc})\ast(\mathit{rw.rc}\hookrightarrow\mathit{rc})\ast\ldots\]
Next, we use resources to represent the intermediate states that occur during lock acquisition. For example, write-lock acquisition has a moment where we have set \(\mathit{exc}\) but not yet observed \(\mathit{rc}\); likewise, read-lock acquisition has a temporary state where we have incremented \(\mathit{rc}\) but not yet observed \(\mathit{exc}\). We use \(\mathsf{ExcPending}(\gamma)\) and \(\mathsf{ShPending}(\gamma)\) to represent these states. So for example, to prove lock_exc, which has the intended specification,
\[[\mathsf{IsRwLock}(rw,\gamma,F)]\ \{\}\ \mathsf{lock\_exc}(rw)\ \{\mathsf{Exc}(\gamma)\ast\exists x.\ \triangleright F(x)\}\]
its proof outline should look something like,
\[\begin{array}{l}
[\exists\mathit{exc},\mathit{rc},x.\ \mathsf{Fields}(\gamma,\mathit{exc},\mathit{rc},x)\ast(\mathit{rw.exc}\hookrightarrow\mathit{exc})\ast(\mathit{rw.rc}\hookrightarrow\mathit{rc})]\\
\{\}\\
\textbf{do}\ \ \mathsf{let}\ \mathit{success}=\mathsf{CAS}(\mathit{rw.exc},\mathsf{False},\mathsf{True})\ \ \textbf{until}\ \mathit{success}\\
\{\mathsf{ExcPending}(\gamma)\}\\
\textbf{do}\ \ \mathsf{let}\ r=\,!\mathit{rw.rc}\ \ \textbf{until}\ r=0\\
\{\exists x.\ \mathsf{Exc}(\gamma)\ast\triangleright F(x)\}
\end{array}\]
With shared access to \(\mathit{rw.exc}\hookrightarrow\mathit{exc}\) and \(\mathit{rw.rc}\hookrightarrow\mathit{rc}\), we can use Guard-Atomic-Inv to perform the requisite atomic CAS and atomic load. However, because all these resources are shared, we cannot hold onto them for the duration spanning both operations at once. Therefore, when performing the CAS, the triple \((\mathit{exc},\mathit{rc},x)\) used might be different than the triple used for the later load instruction. This is why we cannot track the intermediate "pending" state as part of the Fields resource, and need to use a separate ExcPending resource for the thread. With this sketch in place, we can observe some of the operations we need: for the CAS operation, we update \(\mathit{exc}\) from False to True and should somehow obtain \(\mathsf{ExcPending}(\gamma)\) in the process. So we need:
\[\mathsf{Fields}(\gamma,\mathsf{False},\mathit{rc},x)\Rightarrow\mathsf{Fields}(\gamma,\mathsf{True},\mathit{rc},x)\ast\mathsf{ExcPending}(\gamma)\]
For the second step, reading the \(\mathit{rc}\) value, we find we need:
\[\mathsf{Fields}(\gamma,\mathit{exc},0,x)\ast\mathsf{ExcPending}(\gamma)\Rightarrow\mathsf{Fields}(\gamma,\mathit{exc},0,x)\ast\mathsf{Exc}(\gamma)\ast F(x)\]
This update requires us to observe that \(\mathit{rc}=0\), though it does not change \(\mathit{rc}\) or any of the other fields.
It does, however, move us from the pending-exclusive state to the actual exclusive-lock state, while also acquiring exclusive ownership of the protected resource, as was our goal. Figure 8 shows all of the operations that we need, including those we could determine from a similar analysis of the lock_shared implementation. This also includes an additional proposition, \(\mathsf{RwFamily}(\gamma,F)\), that ties \(\gamma\) to the proposition family \(F\). Now, we just need to use a storage protocol to construct the resources of Figure 8 and prove the desired updates. Then we can complete the Hoare proofs based on the above plan.

Figure 8: A custom resource derived via the storage protocol mechanism, designed for our particular implementation.

_Step 1: Constructing the ghost resources via a storage protocol._ The first step in building a storage protocol is to determine the storage monoid and the protocol monoid. In our example, the storage monoid \(S\) can be \(\mathsf{Excl}(X)\); i.e., there is either one thing stored, or there is not. Our primary effort, then, is the protocol monoid \(P\). We define \(P\) to have a component for each class of proposition it needs to support. First up is the Fields proposition, and we know there should always be one such; therefore, we can represent it with \(\mathsf{Excl}\). Next, there should always be at most one of \(\mathsf{ExcPending}\) or \(\mathsf{Exc}\), so we can use \(\mathsf{Excl}\) for these as well. Meanwhile, there might be any number of ShPending propositions at a given time, so we can use \(\mathbb{N}\) for these. Finally, for the Sh propositions, we can have any number, but they need to agree on the value of \(x\). For this, we use a monoid \(\operatorname{AgN}(X)\), which tracks a single value and a count. It is given by,
\[\epsilon\ \mid\ \operatorname{agn}(x,n)\ \mid\ \dot{\epsilon}\qquad\text{where }x:X\text{ and }n:\mathbb{N},\ n\geq 1\]
\[\text{with }\operatorname{agn}(x,n)\cdot\operatorname{agn}(x,m)=\operatorname{agn}(x,n+m)\text{ and, for }x\neq y,\ \operatorname{agn}(x,n)\cdot\operatorname{agn}(y,m)=\dot{\epsilon}\]
All in all, we can now declare our protocol monoid and name its important elements:
\[P\triangleq\operatorname{Excl}(\operatorname{Bool}\times\mathbb{Z}\times X)\times\operatorname{Excl}(1)\times\operatorname{Excl}(1)\times\mathbb{N}\times\operatorname{AgN}(X)\]
\[\begin{array}{lcccccc}
\operatorname{fields}(\mathit{exc},\mathit{rc},x)&\triangleq&(\operatorname{ex}((\mathit{exc},\mathit{rc},x)),&\epsilon,&\epsilon,&0,&\epsilon)\\
\operatorname{excPending}&\triangleq&(\epsilon,&\operatorname{ex},&\epsilon,&0,&\epsilon)\\
\operatorname{exc}&\triangleq&(\epsilon,&\epsilon,&\operatorname{ex},&0,&\epsilon)\\
\operatorname{shPending}&\triangleq&(\epsilon,&\epsilon,&\epsilon,&1,&\epsilon)\\
\operatorname{sh}(x)&\triangleq&(\epsilon,&\epsilon,&\epsilon,&0,&\operatorname{agn}(x,1))
\end{array}\]
Now, we need to define \(\mathcal{S}\) and \(\mathcal{C}\).
First, \(\mathcal{S}\) determines the element stored; this is given by \(x\) in the fields state, unless the lock is currently exclusively taken, in which case the storage is empty:
\[\mathcal{S}((\operatorname{ex}((\mathit{exc},\mathit{rc},x)),\_,\epsilon,\_,\_))\triangleq\operatorname{ex}(x)\qquad\mathcal{S}((\operatorname{ex}((\mathit{exc},\mathit{rc},x)),\_,\operatorname{ex},\_,\_))\triangleq\epsilon\]
Next, we define \(\mathcal{C}\) to be False if any entry is \(\dot{\epsilon}\) or the first entry is \(\epsilon\); otherwise,
\[\begin{array}{rl}
\mathcal{C}((\operatorname{ex}((\mathit{exc},\mathit{rc},x)),\mathit{ep},e,\mathit{sp},s))\triangleq&(\mathit{rc}=\mathit{sp}+\mathit{count}(s))\wedge(\neg\mathit{exc}\Rightarrow\mathit{ep}=\epsilon\wedge e=\epsilon)\\
\wedge&(\mathit{exc}\Rightarrow(\mathit{ep}=\operatorname{ex}\vee e=\operatorname{ex})\wedge\neg(\mathit{ep}=\operatorname{ex}\wedge e=\operatorname{ex}))\wedge(e=\operatorname{ex}\Rightarrow s=\epsilon)\wedge(\forall y,n.\ s=\operatorname{agn}(y,n)\Rightarrow x=y)
\end{array}\]
(where \(\mathit{count}(\operatorname{agn}(x,n))=n\) and \(\mathit{count}(\epsilon)=0\)). These predicates can be stated in plain English: the reference count \(\mathit{rc}\) is the total number of threads with the shared lock or in the process of acquiring it; the \(\mathit{exc}\) field indicates whether any thread has the exclusive lock or is in the process of acquiring it; an exclusive lock cannot be taken at the same time as a shared lock; and the value taken by a shared lock must match the Fields's \(x\) value. Now, with the storage protocol established, we can embed these elements as propositions: we let \(\mathsf{Fields}(\gamma,\mathit{exc},\mathit{rc},x)\triangleq\langle\operatorname{fields}(\mathit{exc},\mathit{rc},x)\rangle^{\gamma}\) and so on. We also let \(\mathsf{RwFamily}(\gamma,F)\triangleq\operatorname{sto}(\gamma,F^{\prime})\), where \(F^{\prime}(\operatorname{ex}(x))\triangleq F(x)\) and \(F^{\prime}(\epsilon)\triangleq\mathsf{True}\).
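Because the monoid-level definitions are first-order, they are easy to sanity-check computationally. The following Rust model is ours (with `u32` standing in for the abstract type \(X\)); it transcribes the composition on \(P\) and the validity predicate \(\mathcal{C}\) above, and asserts a few consequences that match the plain-English reading. It is a sanity check, not a substitute for the actual proof obligations.

```rust
// A finite model of the protocol monoid P, with u32 standing in for X.
#[derive(Clone, Copy, PartialEq)]
enum Excl<T> { Unit, Ex(T), Invalid }

#[derive(Clone, Copy, PartialEq)]
enum AgN { Unit, Agn(u32, u32), Invalid } // agn(x, n), with n >= 1

fn excl_op<T>(a: Excl<T>, b: Excl<T>) -> Excl<T> {
    match (a, b) {
        (Excl::Unit, x) | (x, Excl::Unit) => x,
        _ => Excl::Invalid, // two exclusive pieces (or anything invalid) compose to invalid
    }
}

fn agn_op(a: AgN, b: AgN) -> AgN {
    match (a, b) {
        (AgN::Unit, x) | (x, AgN::Unit) => x,
        (AgN::Agn(x, n), AgN::Agn(y, m)) if x == y => AgN::Agn(x, n + m),
        _ => AgN::Invalid,
    }
}

// P = Excl(Bool x Z x X) x Excl(1) x Excl(1) x N x AgN(X)
type P = (Excl<(bool, u32, u32)>, Excl<()>, Excl<()>, u32, AgN);

fn op(a: P, b: P) -> P {
    (excl_op(a.0, b.0), excl_op(a.1, b.1), excl_op(a.2, b.2), a.3 + b.3, agn_op(a.4, b.4))
}

fn count(s: AgN) -> u32 { if let AgN::Agn(_, n) = s { n } else { 0 } }

// The validity predicate C from the text.
fn valid(p: P) -> bool {
    let (f, ep, e, sp, s) = p;
    let (exc, rc, x) = match f { Excl::Ex(t) => t, _ => return false };
    if ep == Excl::Invalid || e == Excl::Invalid || s == AgN::Invalid { return false; }
    rc == sp + count(s)
        && (exc || (ep == Excl::Unit && e == Excl::Unit))
        && (!exc || ((ep == Excl::Ex(()) || e == Excl::Ex(())) && !(ep == Excl::Ex(()) && e == Excl::Ex(()))))
        && (e != Excl::Ex(()) || s == AgN::Unit)
        && (if let AgN::Agn(y, _) = s { x == y } else { true })
}

fn fields(exc: bool, rc: u32, x: u32) -> P { (Excl::Ex((exc, rc, x)), Excl::Unit, Excl::Unit, 0, AgN::Unit) }
fn sh(x: u32) -> P { (Excl::Unit, Excl::Unit, Excl::Unit, 0, AgN::Agn(x, 1)) }
fn exc_pending() -> P { (Excl::Unit, Excl::Ex(()), Excl::Unit, 0, AgN::Unit) }
fn exc() -> P { (Excl::Unit, Excl::Unit, Excl::Ex(()), 0, AgN::Unit) }

fn main() {
    // A reader holding sh(x) forces rc >= 1 and agreement on x:
    assert!(valid(op(fields(false, 1, 7), sh(7))));
    assert!(!valid(op(fields(false, 1, 7), sh(8)))); // disagreement on the value
    assert!(!valid(op(fields(false, 0, 7), sh(7)))); // rc does not count the reader
    // excPending and exc cannot coexist:
    assert!(!valid(op(op(fields(true, 0, 7), exc_pending()), exc())));
}
```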
Now, in order to show our desired reader-writer lock rules (Figure 8), it suffices to show the following updates (by SP-Update):
\[\begin{array}{c}
\operatorname{fields}(\mathsf{False},\mathit{rc},x)\rightsquigarrow\operatorname{fields}(\mathsf{True},\mathit{rc},x)\cdot\operatorname{excPending}\\
\operatorname{fields}(\mathit{exc},\mathit{rc},x)\rightsquigarrow\operatorname{fields}(\mathit{exc},\mathit{rc}+1,x)\cdot\operatorname{shPending}\\
\operatorname{fields}(\mathsf{False},\mathit{rc},x)\cdot\operatorname{shPending}\rightsquigarrow\operatorname{fields}(\mathsf{False},\mathit{rc},x)\cdot\operatorname{sh}(x)\\
\operatorname{fields}(\mathit{exc},\mathit{rc},x)\cdot\operatorname{sh}(y)\rightsquigarrow\operatorname{fields}(\mathit{exc},\mathit{rc}-1,x)\\
\operatorname{fields}(\mathit{exc},\mathit{rc},x)\cdot\operatorname{shPending}\rightsquigarrow\operatorname{fields}(\mathit{exc},\mathit{rc}-1,x)
\end{array}\]
As well as a withdraw (by SP-Withdraw) and a deposit (by SP-Deposit):
\[\begin{array}{c}
\operatorname{fields}(\mathit{exc},0,x)\cdot\operatorname{excPending}\rightsquigarrow(\operatorname{fields}(\mathit{exc},0,x)\cdot\operatorname{exc},\ \operatorname{ex}(x))\\
(\operatorname{fields}(\mathit{exc},\mathit{rc},y)\cdot\operatorname{exc},\ \operatorname{ex}(x))\rightsquigarrow\operatorname{fields}(\mathsf{False},\mathit{rc},x)
\end{array}\]
And finally, a guard (by SP-Guard):
\[\operatorname{sh}(x)\nrightarrow\operatorname{ex}(x)\]
Finally, we can prove all of these just by expanding the definitions and using the logical invariants encoded in \(\mathcal{C}\). Let us summarize exactly what the storage protocol gave us in this particular proof strategy. We wanted to construct some set of ghost resources with certain relationships, mostly updates, representing specific implementation details of the lock, with a single \(\nrightarrow\) proposition that enables sharing. The storage protocol shows how to reduce those desired relationships to proof obligations (exchanges, withdraws, deposits, and guards) about monoids that can be expressed in first-order logic. These obligations all encode properties that should map cleanly to an intuitive property of the system, e.g., the withdraw obligation above intuitively means "from the intermediate pending state, if \(rc=0\), then the stored resource can be withdrawn," while \(\operatorname{sh}(x)\nrightarrow\operatorname{ex}(x)\) intuitively means "any reader agrees with the source-of-truth on what the shared value is." These properties all rely on our definition of \(\mathcal{C}\), the predicate that encodes which states of the system are well-formed.

_Step 2: Verifying the implementation._ To verify the implementation (Figure 6) against the spec (Figure 1), we first need to nail down a definition for \(\mathsf{IsRwLock}(rw,\gamma,F)\). Since \(\gamma\) is meant to be the unique identifier for the reader-writer lock, we can have it be the same as the ghost name \(\gamma\) from the RwLock logic, and likewise \(F\), the family of propositions protected in the lock, be the same as \(F\), the family of propositions protected by the RwLock protocol. The propositions representing the reader-writer lock should, as a whole, include the proposition \(\mathsf{RwFamily}(\gamma,F)\), the permission to access the \(rw.exc\) and \(rw.rc\) memory cells, and the Fields proposition which has the ghost data to match the contents of the memory cells.
\[\mathsf{IsRwLock}(rw,\gamma,F)\triangleq\mathsf{RwFamily}(\gamma,F)\ast\exists\mathit{exc},\mathit{rc},x.\ \mathsf{Fields}(\gamma,\mathit{exc},\mathit{rc},x)\ast(\mathit{rw.exc}\hookrightarrow\mathit{exc})\ast(\mathit{rw.rc}\hookrightarrow\mathit{rc})\]
The proof for rwlock_new is then straightforward: the implementation allocates the \(rw.exc\) and \(rw.rc\) memory, and we can instantiate the RwLock protocol, a purely ghost step, via Rw-Init. Likewise, the proof for rwlock_free is straightforward, since its precondition requires that the caller has exclusive access to IsRwLock, so we can destructure it and use the exclusive \(\hookrightarrow\) propositions in order to call free. The proofs for the other methods, which must operate over a _shared_ IsRwLock, are more interesting. Recall the proof outline for lock_exc:
\[\begin{array}{ll}
[\mathsf{RwFamily}(\gamma,F)\ast\exists\mathit{exc},\mathit{rc},x.\ \mathsf{Fields}(\gamma,\mathit{exc},\mathit{rc},x)\ast(\mathit{rw.exc}\hookrightarrow\mathit{exc})\ast(\mathit{rw.rc}\hookrightarrow\mathit{rc})]&\\
\{\}&\\
\textbf{do}\ \ \mathsf{let}\ \mathit{success}=\mathsf{CAS}(\mathit{rw.exc},\mathsf{False},\mathsf{True})\ \ \textbf{until}\ \mathit{success}&\text{(Rw-Exc-Begin)}\\
\{\mathsf{ExcPending}(\gamma)\}&\\
\textbf{do}\ \ \mathsf{let}\ r=\,!\mathit{rw.rc}\ \ \textbf{until}\ r=0&\text{(Rw-Exc-Acquire)}\\
\{\exists x.\ \mathsf{Exc}(\gamma)\ast F(x)\}&
\end{array}\]
The gist is that, in the first half, we apply Rw-Exc-Begin (in the case that the CAS succeeds) to obtain \(\mathsf{ExcPending}(\gamma)\), and in the second half, we apply Rw-Exc-Acquire to complete the acquisition and obtain the desired state \(\mathsf{Exc}(\gamma)\ast F(x)\). Let us walk through the first half in detail. Since CAS is atomic, we can apply Guard-Atomic-Inv to "open" the shared proposition for the duration of the atomic operation. Thus, we need to show,
\[\begin{array}{l}
\{\mathsf{RwFamily}(\gamma,F)\ast\exists\mathit{exc},\mathit{rc},x.\ \mathsf{Fields}(\gamma,\mathit{exc},\mathit{rc},x)\ast(\mathit{rw.exc}\hookrightarrow\mathit{exc})\ast(\mathit{rw.rc}\hookrightarrow\mathit{rc})\}\\
\qquad\mathsf{CAS}(\mathit{rw.exc},\mathsf{False},\mathsf{True})\\
\{\mathit{success}.\ ((\mathit{success}=\mathsf{True}\ast\mathsf{ExcPending}(\gamma))\vee(\mathit{success}=\mathsf{False}))\ \ast\\
\qquad\mathsf{RwFamily}(\gamma,F)\ast\exists\mathit{exc},\mathit{rc},x.\ \mathsf{Fields}(\gamma,\mathit{exc},\mathit{rc},x)\ast(\mathit{rw.exc}\hookrightarrow\mathit{exc})\ast(\mathit{rw.rc}\hookrightarrow\mathit{rc})\}
\end{array}\]
If the CAS succeeds, we have \(\mathit{exc}=\mathsf{False}\), so we apply Rw-Exc-Begin. This ensures we have \(\mathsf{ExcPending}(\gamma)\) in the \(\mathit{success}=\mathsf{True}\) case. Otherwise, we do nothing, and the program loops. The second half of lock_exc, where we atomically read \(rw.rc\), is the same, using Rw-Exc-Acquire.
We can use a similar outline for lock_shared: the FetchAdd that increments \(rw.rc\) leaves us holding \(\mathsf{ShPending}(\gamma)\), and the subsequent atomic read of \(rw.exc\), in the case where it returns \(\mathsf{False}\), trades \(\mathsf{ShPending}(\gamma)\) for \(\mathsf{Sh}(\gamma,x)\); the update that decrements \(rc\) while giving up a \(\mathsf{ShPending}\) covers an abandoned attempt. In each case the atomic step is performed under Guard-Atomic-Inv, exactly as in lock_exc.

## 5. Hash Table Example

Our second case study uses the shared state produced by the reader-writer lock: a concurrent hash table whose array of slots is protected by one reader-writer lock per slot (\(\mathit{ht.locks}[i]\) guarding \(\mathit{ht.slots}[i]\)), with queries taking shared locks and updates taking exclusive locks. Both operations probe linearly starting from the hashed index:
\[\text{query}(\mathit{ht},k)\triangleq\text{query\_iter}(\mathit{ht},k,H(k))\qquad\text{update}(\mathit{ht},k,v)\triangleq\text{update\_iter}(\mathit{ht},k,v,H(k))\]
where query_iter and update_iter examine slot \(i\), stop when they find the key or an empty slot, and otherwise recurse on slot \(i+1\). The Hash Table Resource provides a proposition \(m(\gamma,k,v)\), asserting that the logical map takes key \(k\) to \(v\), and propositions \(\mathit{slot}(\gamma,j,\cdot)\) describing the contents of individual slots, related by entailments such as:
\[m(\gamma,k,v)\ast\mathit{slot}(\gamma,j,\mathsf{Some}((k,v_{j})))\vdash v=\mathsf{Some}(v_{j})\qquad\textsc{(QueryFound)}\]
A similar entailment, QueryNotFound, covers the case where the probe reaches an empty slot.
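Before turning to the proof, here is an unverified Rust sketch of this probing scheme. The fixed-capacity table, the `std::sync::RwLock` per slot, and the panic on a full probe region are our simplifications for illustration; also, this sketch releases each slot's lock before moving to the next, whereas the proof below keeps track of what was learned about every previously probed slot.

```rust
use std::sync::RwLock;

// One reader-writer lock per slot; queries take shared locks, updates exclusive ones.
struct HashTable {
    slots: Vec<RwLock<Option<(u64, u64)>>>, // key/value pairs, or None if empty
}

impl HashTable {
    fn new(capacity: usize) -> Self {
        HashTable { slots: (0..capacity).map(|_| RwLock::new(None)).collect() }
    }

    fn hash(&self, k: u64) -> usize {
        (k as usize) % self.slots.len()
    }

    // Linear probing from H(k); each step takes only the shared lock for slot i.
    fn query(&self, k: u64) -> Option<u64> {
        let mut i = self.hash(k);
        loop {
            if i >= self.slots.len() { return None; } // probe ran off the end
            let slot = self.slots[i].read().unwrap();
            let cur = *slot;
            match cur {
                None => return None,                   // empty slot: k is absent
                Some((k2, v)) if k2 == k => return Some(v),
                Some(_) => { i += 1; }                 // other key: keep probing
            }
        }
    }

    // Updates take the exclusive lock for each probed slot.
    fn update(&self, k: u64, v: u64) {
        let mut i = self.hash(k);
        loop {
            if i >= self.slots.len() { panic!("probe region is full"); }
            let mut slot = self.slots[i].write().unwrap();
            let cur = *slot;
            match cur {
                None => { *slot = Some((k, v)); return; }
                Some((k2, _)) if k2 == k => { *slot = Some((k, v)); return; }
                Some(_) => { i += 1; }
            }
        }
    }
}

fn main() {
    let ht = HashTable::new(16);
    ht.update(3, 30);
    ht.update(19, 190); // hashes to the same slot as 3; probes to the next slot
    assert_eq!(ht.query(3), Some(30));
    assert_eq!(ht.query(19), Some(190));
    assert_eq!(ht.query(4), None);
}
```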
The specification of query_iter takes as a precondition the ghost state for all the slots previously accessed in the probe, in addition to the preconditions of query. Figure 9 shows a proof outline. Stepping through, we first call lock_shared, using our shared \(\mathsf{IsRwLock}(\mathit{ht.locks}[i],\gamma_{i},F_{i})\). From this call, we obtain \(\mathsf{Sh}(\gamma_{i},s)\), for some \(s\) which is fixed for the duration we hold the lock. By Rw-Shared-Guard and the definition of \(F_{i}\), we have \(\mathsf{Sh}(\gamma_{i},s)\nrightarrow_{\gamma_{i}}(\mathit{ht.slots}[i]\hookrightarrow s)\ast\mathit{slot}(i,s)\) (eliminating the \(\triangleright\) by Later-Guard). In the outline, we represent this shared state with our \([\dots]\) notation. Now, by Heap-Read-Shared we can perform the \(!\mathit{ht.slots}[i]\) operation to load the value of \(s\) and case on it.
If the slot is empty (None) then we apply QueryNotFound to get our answer; if the slot is full and the key matches, then we apply QueryFound. As discussed above, we have all we need to apply these deductions even when the state on the left-hand side is shared. The most interesting case is the recursive one: here, we append the newly obtained \(\mathit{slot}(i,s)\) to obtain \(\mathop{\scalebox{1.2}{$\ast$}}_{H(k)\leq j\leq i}\ \mathit{slot}(\gamma,j,\mathsf{Some}(k_{j},v_{j}))\), meeting the precondition for the recursive call.

Figure 9. Proof outline of query_iter.

_The Client of the Hash Table._ There are many options available to the hash table's client. We presume that the client wishes to share the hash table between threads, and she has the freedom to do this as she wishes. For instance, she might share it between a fixed number (\(N\)) of threads, using a fractional paradigm to give out a fraction \(1/N\) to each (Example 3.2). Alternatively, she could allocate it permanently and share it forever (Example 3.4). Or she could put the hash table inside yet another reader-writer lock, with multiple threads able to concurrently access the hash table by taking a shared lock. This last possibility could be a step to augment our design with resizing: a client could take the lock exclusively to "stop the world" and rebuild the hash table.

_Shared State with the Hash Table._ One might wonder if the client could further apply Leaf and use the hash table to store propositions and manage shared access to them. For example, we might want to say \(m(\gamma_{\text{\sc ht}},k,v)\nrightarrow F(k,v)\) for some proposition family \(F\); then any client with shared access to the key \(k\) would also get shared access to the resource \(F(k,v)\). Indeed, we can modify our Hash Table Resource to allow this. Specifically, we could reconstruct the resource via a storage protocol so we can prove \(\nrightarrow\) propositions. Effectively, the existing hash table monoid construction would become the protocol monoid for this new storage protocol.

## 6. More advanced storage protocols

The lock example in our paper is intentionally kept somewhat simple for the sake of exposition. However, subsequent work has already used Leaf's storage protocols to verify far more sophisticated read-sharing mechanisms. Specifically, IronSync (Hance et al., 2023) is a verification framework that combines storage protocols with a handful of other techniques (notably, using a substructural type system to manipulate ghost resources, including shared ghost resources, rather than using CSL directly). Their framework embeds Leaf's monoidal storage protocol definitions as axioms for manipulating their ghost resources. They describe their experience using storage protocols to verify:

* A multi-counter reader-writer lock with additional, domain-specific features. This is a component of an effort to verify a multi-threaded page cache; the lock is used to protect a 4 KiB cache page, and the domain-specific features relate to reading and writing the cache page from disk. The lock not only allows read-sharing of memory resources for the 4 KiB pages, but also of ghost resources related to their contents.
* A concurrent ring buffer with multiple producers and multiple consumers, where entries are alternately writeable and read-shared, as producer threads enqueue messages to be read (possibly simultaneously) by a number of consumer threads.
This is a component of a state replication algorithm (Calciu et al., 2017), targeting non-uniform memory access (NUMA) architecture. Once again, this not only allows read-sharing of memory resources, but also ghost resources related to the operation log. The second example, in particular, demonstrates that read-sharing protocols extend beyond reader-writer locks. Furthermore, both examples (similar to our hash table) demonstrate the use of read-shared custom ghost resources.

## 7. Soundness

Here, we sketch our construction of the Leaf logic within the Iris separation logic; for full details, consult the Coq development. To get the most out of this section, it helps for the reader to be already familiar with Iris; Jung et al. (2018) provide all necessary background. As context, we review the components of Iris. First, there is the _Iris base logic_, a step-indexed logic of resources in the abstract, with no primitive notion of a program or Hoare logic. The Iris base logic is proved sound via a semantic model called _the UPred model_. Then, atop the base logic, Iris can do a variety of useful things, e.g., instantiate a program logic given some operational semantics. To build Leaf, we add a few minor deduction rules to the base logic, proved sound via the _UPred_ model. Our additions to the base logic are given in blue. We then define \(\nrightarrow\), \(\langle p\rangle^{\gamma}\), and \(\operatorname{sto}(\gamma,F)\), and prove all of Leaf's deduction rules within the Iris logic. The rest of the Iris framework, such as the machinery to instantiate a program logic and prove adequacy theorems, is unchanged.

_Ghost state._ In Iris, ghost state is constructed from a mathematical object called a _CMRA_. A PCM is a special case of a discrete CMRA, and the PCM ghost state used in this paper is just the usual Iris ghost state. We also add the PCM-And rule to the base logic, which follows straightforwardly from the definition of \(\wedge\) over the _UPred_ model and holds for any discrete CMRA.

_Invariants._ Our definitions build on Iris's invariants, so we review those here.
Iris defines (within the base logic) a persistent proposition \(\boxed{P}^{\,\iota}\) as knowledge that an invariant \(P\) is allocated at name \(\iota\). Iris then proves rules letting the user allocate, open, and close invariants: allocation gives up \(\triangleright P\) and yields \(\boxed{P}^{\,\iota}\) for some fresh \(\iota\) drawn from a chosen infinite set of names \(\mathcal{N}\); opening an invariant whose name is in the current mask temporarily yields \(\triangleright P\) across a mask-changing update, and closing it gives \(\triangleright P\) back and restores the mask.

_Storage protocols._ To define \(\operatorname{sto}(\gamma,F)\) and \(\langle p\rangle^{\gamma}\), we construct ghost state via the authoritative-fragmentary construction, \(\mathsf{Auth}(\mathsf{Prot}(P))\). Let:
\[\operatorname{sto}(\gamma,F)\triangleq\mathsf{RespectsComposition}(F)\ast\boxed{\exists(x:P).\ \bullet\,\mathsf{prot}(x)^{\gamma}\ast\mathcal{C}(x)\ast F(\mathcal{S}(x))}^{\,\gamma}\ast\circ\,\mathsf{prot}(\epsilon)^{\gamma}\qquad\qquad\langle p\rangle^{\gamma}\triangleq\circ\,\mathsf{prot}(p)^{\gamma}\]
That is, an invariant holds the authoritative protocol state \(x\), the fact that \(x\) is valid according to \(\mathcal{C}\), and the stored proposition \(F(\mathcal{S}(x))\), while \(\langle p\rangle^{\gamma}\) is fragmentary ownership of \(p\). The storage protocol rules of Figure 5 then follow from the invariant rules above together with standard authoritative-fragmentary reasoning; the Coq development has the details.

## 8. Related Work

Several separation logic frameworks provide facilities for building custom ghost resources, among them Fine-grained Concurrent Separation Logic (FCSL) [14] with its concurroids (also based on PCMs), and Iris with its CMRAs (see below). To our knowledge, none of these frameworks have yet been used to provide a modular representation of temporarily-shared custom resources that supports the composition of resources shared via the representation.

_Iris's ghost state formalism._ Leaf creates a custom ghost state mechanism, storage protocols, based on PCM ghost state. Iris has generalized PCM ghost state in a different direction, creating an algebraic object called a _CMRA_ [17], which has two new features over PCMs. The first is a built-in notion of _persistent_ state; in Leaf, we de-emphasize persistent state because of our focus on temporarily-shared state. However, it would be straightforward to incorporate persistence into the protocol monoid formalism. The second is a _step-indexed_ notion of equality, which makes CMRAs suitable for _higher-order_ ghost state and many of the foundational elements of Iris. In Leaf, we wanted our storage protocols to be easily represented in first-order logic with a discrete notion of equality. Factoring our map into two steps, \(\mathcal{S}:P\to S\) and \(F:S\to\mathit{iProp}\), is what allows us to define a storage protocol without any step-indexing.
As such, one can understand storage protocols as a particular ghost state abstraction built on CMRA machinery. Verified hash tablesHash tables have been verified before [13, 14, 15], including a concurrent one done in Iris that uses mutual exclusion locks [13]. Our concurrent hash table has some crucial differences that make it interesting: **(i)** we use reader-writer locks, and thus shared ownership for queries, and **(ii)** ours requires a single operation (update or query) to take more than one lock. Case study comparisonIt is worth comparing explicitly to how our reader-writer lock and hash table case study might be done if we used more traditional techniques. One of the most common such techniques in use to day is--as we have referenced several times in this paper--the technique of fractional permissions. If we were to build the hash table case study using fractional permissions (in Iris, or in any other framework supporting monoid ghost state and invariants), it might look something like the following: * First, the resource being protected by the lock would need to have a built-in notion of being fractional. The reader-writer lock spec could be parameterized over a fractional proposition family, \(F(x,q)\). The \(\mathsf{lsRwLock}(rw,y,F)\) proposition would need to be fractionalized as well, which could be done using a technique called _cancellable invariants_ (invariants with associated fractional tokens, which allow the inner resources to be reclaimed). Ultimately, the lock's Hoare triples would look something like (to select a few): \[\forall rw,\gamma,F,q_{0}\cdot\{\mathsf{lsRwLock}(rw,\gamma,F,q_{0}) \}\mathsf{lock\_shared}(rw)\ \{\mathsf{lsRwLock}(rw,y,F,q_{0})*\exists x,q \cdot\mathsf{Sh}(\gamma,x,q)*F(x,q)\}\] \[\forall rw,\gamma,F,x,q_{0},q\cdot\{\mathsf{lsRwLock}(rw,y,F,q_{0 })*\mathsf{Sh}(\gamma,x,q)*F(x,q)\}\mathsf{unlock\_shared}(rw)\ \{\mathsf{lsRwLock}(rw,\gamma,F,q_{0})\}\] \[\forall rw,\gamma,F,\{\mathsf{lsRwLock}(rw,\gamma,F,1)\}\mathsf{ rwlock\_free}(rw)\ \{\}\] Furthermore, the lock would need to guarantee that, \[\mathsf{lsRwLock}(rw,\gamma,F,q_{0})*\mathsf{lsRwLock}(rw,\gamma,F,q_{1}) \dashv\mathsf{lsRwLock}(rw,\gamma,F,q_{0}+q_{1})\] while the client would have to promise that \(F(x,q_{0})*F(x,q_{1})\dashv F(x,q_{0}+q_{1})\). Also observe that \(\mathsf{Sh}\) to track the fractional amounts that are "lent out," so we can make sure the same amount is returned later. * To verify the reader-writer lock, we would internally define an invariant that maintains some possibly-fractional amount of the resource, so that it has something to "lend out" whenever a client takes a read lock. Further, we would need to create ghost resources to define \(\mathsf{Sh}(\gamma,x,q)\), \(\mathsf{Exc}(\gamma)\), intermediate states, and so on. These resources would need to keep track of all the fractional amounts that are "lent out," and make sure they sum to the correct amount. We would still need to reason about the intermediate states of the locking operations, but now the relationships are slightly harder to specify, because they interact with all of the fractional accounting. * The client of the lock (the hash table) would need to make sure the resources it uses have a built-in fractional notion so they can interoperate with the lock. 
Thus the points-to operations would need a built-in notion of fractions \((\ell\stackrel{{\text{frac}}}{{\longleftrightarrow}}q\ v)\) while the hash table's "slot resources" \(slot(\gamma,i,s)\) would be replaced by fractional resources \(slot(\gamma,i,s,q)\). Now, all the updates and deductions would be expressed with fractions, and proving the \(\nrightarrow\) relations would involve reasoning about a composition operator \(\cdot\) that adds rational numbers. For example, one has to reason like, "Suppose I have a \(q\) fraction of slot \(j\), and a unit amount of slot \(j+1\), and I replace slot \(j+1\) with a unit amount of a new slot value..." While this is all certainly possible, our perspective is that it involves a large number of "bureaucratic" details that do not directly relate to the programmer's primary intuition of why the program is correct. By contrast, when doing this in the Leaf style, as we have seen: * The RwLock specification--that is, the "interface" between the two components--becomes cleaner. Neither lsRwLock, Sh, nor \(F\) need an additional rational number parameter (Figure 1). Instead, the relationships between these components and the sharing that takes place are all made clear through the \(\nrightarrow\) and \([\ldots]\ \{\ldots\}\ e\ \{\ldots\}\) notation. * The RwLock is easier to verify because the storage protocol formulation helps us reduce the problem to a series of proof obligations regarding the evolution of the system. * The Hash Table is easier to verify in Leaf because we can reason in a manner similar to how we would do it in an exclusive ownership setting, without the encoding having to "bake in" sharing-related details. For example, we would reason something like, "Suppose I have slot \(j\) and slot \(j+1\), and I replace slot \(j+1\)..." and then rely on Leaf to apply this in the presence of shared slots. This sort of simplification would apply to any application that uses fine-grained reader-writer locks in a similar manner. ## 9. Conclusion We have introduced Leaf, a concurrent separation logic with an approach to temporarily shared ownership based on our novel guarding operator, \(\nrightarrow\). We showed that Leaf can help the user implement and verify sharing strategies, that it allows modular specifications involving shared state that abstract away the sharing mechanism being used, and that Leaf's composition capabilities allow it to handle fine-grained concurrency. ## 10. Acknowledgments We thank Tej Chajed and the anonymous reviewers for helpful feedback. Work at CMU was supported, in part, by an Amazon Research Award (Fall 2022 CFP), a gift from VMware, the Future Enterprise Security initiative at Carnegie Mellon CyLab (FutureEnterprise@CyLab), and the NSF/VMware Partnership on Software Defined Infrastructure as a Foundation for Clean-Slate Computing Security (SDI-CSCS) program under Award No. CNS-1700521.
2309.15784
Gaussian Process-Enhanced, External and Internal Convertible (EIC) Form-Based Control of Underactuated Balance Robots
External and internal convertible (EIC) form-based motion control (i.e., EIC-based control) is one of the effective approaches for underactuated balance robots. Through sequential controller design, trajectory tracking of the actuated subsystem and balance of the unactuated subsystem can be achieved simultaneously. However, under certain conditions, there exists uncontrolled robot motion under the EIC-based control. We first identify these conditions and then propose an enhanced EIC-based control with a Gaussian process data-driven robot dynamics model. Under the new enhanced EIC-based control, the stability and performance of the closed-loop system are guaranteed. We demonstrate the GP-enhanced EIC-based control experimentally using two examples of underactuated balance robots.
Feng Han, Jingang Yi
2023-09-27T16:58:16Z
http://arxiv.org/abs/2309.15784v1
# Gaussian Process-Enhanced, External and Internal Convertible (EIC) Form-Based Control of Underactuated Balance Robots ###### Abstract External and internal convertible (EIC) form-based motion control (i.e., EIC-based control) is one of the effective approaches for underactuated balance robots. Through sequential controller design, trajectory tracking of the actuated subsystem and balance of the unactuated subsystem can be achieved simultaneously. However, under certain conditions, there exists uncontrolled robot motion under the EIC-based control. We first identify these conditions and then propose an enhanced EIC-based control with a Gaussian process data-driven robot dynamics model. Under the new enhanced EIC-based control, the stability and performance of the closed-loop system are guaranteed. We demonstrate the GP-enhanced EIC-based control experimentally using two examples of underactuated balance robots. ## I Introduction An underactuated balance robot possesses fewer control inputs than the number of degrees of freedom (DOFs) [1, 2]. Control design of underactuated balance robots needs to take care of both the trajectory tracking of the actuated subsystem and the balance control of the unactuated subsystem [3, 4, 5]. Balancing the unstable coordinates of underactuated robots brings additional challenges for robot control. Many methods have been proposed to cope with robot modeling [1, 4, 5, 6, 7], control design, and applications [8, 9]. The external and internal convertible (EIC) form-based control (i.e., EIC-based control) has been demonstrated as one of the effective approaches to achieve simultaneous trajectory tracking and balance [10]. Other balance control algorithms include the orbital stabilization control [11, 12, 13, 14] and energy shaping-based control [15, 16]. One limitation of these methods is that the achieved balance-enforced trajectory is not unique [2, 17]. Although the EIC-based control can achieve stability and balance [5, 10], certain system conditions should be satisfied. Furthermore, an accurate robot dynamics model is required, and the design is not robust under model uncertainties. Machine learning-based methods provide an efficient tool for robot modeling and control. In particular, Gaussian process (GP) regression is an effective learning approach that generates an analytical structure and bounded prediction errors [18, 19, 20, 6]. Development of GP-based performance-guaranteed control for underactuated balance robots has been reported [4, 18, 21]. In [4], the control input is partitioned into two parts. A GP-based inverse dynamics controller for the unactuated subsystem to achieve balance and a model predictive control (MPC) design are used to simultaneously track the given reference trajectory and obtain the balance equilibrium manifold (BEM). The GP prediction uncertainties are incorporated into the control design to enhance the control robustness. The work in [5] followed the cascaded control design in the EIC-based framework and the controller was adaptive to the prediction uncertainties. The training data was also selected to reduce the computational complexity. In this paper, we take advantage of the structured GP modeling in [5] and present a method to resolve the limitation of the original EIC-based control. We first show that under the EIC-based control, there exist uncontrolled motions that can render the entire system unstable. The uncontrolled motion is due to the fact that the EIC-based control is updated from a low- to a high-dimensional space. 
The conditions for stable GP-based model learning and control are identified and presented. With a properly selected nominal model, the uncontrolled motion is eliminated with the GP-based data-driven robot dynamics. Finally, we propose a partial EIC (PEIC) form-based control by constructing a virtual inertia matrix to re-shape the dynamics coupling. The proposed GP-based control is shown to achieve guaranteed stability and performance. Experimental validation and demonstration are presented by using two examples of underactuated balance robots. The major contributions of this work are twofold. First, compared with [5, 10], the uncontrolled motion of the EIC-based control is identified and illustrated. To overcome the EIC-based design limitations, the conditions for nominal GP model selection are presented. The proposed controller is new and also achieves superior performance and stability. Second, unlike the work in [4] with the complex MPC with high computational cost, the proposed GP models directly capture the robot dynamics and the control design preserves the EIC structure property. The demonstrated experiments are also new compared with the previous work. ## II EIC-Based Robot Control and Problem Statement ### _Robot Dynamics and EIC-Based Control_ We consider a general underactuated balance robot with \((n+m)\) DOFs, \(n,m\in\mathbb{N}\), and the generalized coordinates are denoted as \(\mathbf{q}=[q_{1}\cdots q_{n+m}]^{T}\). The dynamics model can be expressed in a standard form \[\mathcal{S}:\mathbf{D}(\mathbf{q})\ddot{\mathbf{q}}+\mathbf{C}(\mathbf{q},\dot{\mathbf{q}})\dot{\mathbf{q}}+\mathbf{G}(\mathbf{q})=\mathbf{B}\mathbf{u}, \tag{1}\] where \(\mathbf{D}(\mathbf{q})\), \(\mathbf{C}(\mathbf{q},\dot{\mathbf{q}})\) and \(\mathbf{G}(\mathbf{q})\) are the inertia matrix, the Coriolis matrix and the gravity vector, respectively. \(\mathbf{B}\) denotes the input matrix and \(\mathbf{u}\in\mathbb{R}^{n}\) is the control input. The generalized coordinates are partitioned into two parts as \(\mathbf{q}=[\mathbf{q}_{a}^{T}\ \mathbf{q}_{u}^{T}]^{T}\), with actuated and unactuated coordinates \(\mathbf{q}_{a}\in\mathbb{R}^{n}\) and \(\mathbf{q}_{u}\in\mathbb{R}^{m}\), respectively. We focus on the case \(n>m\) and, without loss of generality, we assume that \(\mathbf{B}=[\mathbf{I}_{n}\ \mathbf{0}]^{T}\) is constant, where \(\mathbf{I}_{n}\) is the identity matrix of dimension \(n\). The robot dynamics (1) is rewritten as \[\mathcal{S}_{a}:\mathbf{D}_{aa}\ddot{\mathbf{q}}_{a}+\mathbf{D}_{au}\ddot{\mathbf{q}}_{u}+\mathbf{H}_{a}=\mathbf{u}, \tag{2a}\] \[\mathcal{S}_{u}:\mathbf{D}_{ua}\ddot{\mathbf{q}}_{a}+\mathbf{D}_{uu}\ddot{\mathbf{q}}_{u}+\mathbf{H}_{u}=\mathbf{0} \tag{2b}\] for the actuated and unactuated subsystems, respectively. Subscripts "\(aa\) (\(uu\))" and "\(ua\) and \(au\)" indicate the variables related to the actuated (unactuated) coordinates and the coupling effects, respectively. For representation convenience, we introduce \(\mathbf{H}=\mathbf{C}\dot{\mathbf{q}}+\mathbf{G}\), \(\mathbf{H}_{a}=\mathbf{C}_{a}\dot{\mathbf{q}}+\mathbf{G}_{a}\), and \(\mathbf{H}_{u}=\mathbf{C}_{u}\dot{\mathbf{q}}+\mathbf{G}_{u}\). The dependence of matrices \(\mathbf{D}\), \(\mathbf{C}\), and \(\mathbf{G}\) on \(\mathbf{q}\) and \(\dot{\mathbf{q}}\) is dropped. Subsystems \(\mathcal{S}_{a}\) and \(\mathcal{S}_{u}\) are referred to as external and internal subsystems, respectively [10]. 
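For a computational view of the block partition in (2a)-(2b), a minimal sketch is given below. This is an illustrative snippet, not code from the paper: the matrices are random placeholders standing in for a real robot model, and the function names are ours.

```python
import numpy as np

# Split the full matrices D(q) and H(q, qdot) = C(q, qdot) qdot + G(q) into the
# actuated (a) and unactuated (u) blocks used in (2a)-(2b), for q = [q_a; q_u].

def partition_dynamics(D, H, n, m):
    """Return D_aa, D_au, D_ua, D_uu, H_a, H_u."""
    assert D.shape == (n + m, n + m) and H.shape == (n + m,)
    D_aa, D_au = D[:n, :n], D[:n, n:]
    D_ua, D_uu = D[n:, :n], D[n:, n:]
    return D_aa, D_au, D_ua, D_uu, H[:n], H[n:]

if __name__ == "__main__":
    n, m = 2, 1                      # two actuated DOFs, one unactuated DOF
    rng = np.random.default_rng(1)
    A = rng.standard_normal((3, 3))
    D = A @ A.T + 3.0 * np.eye(3)    # symmetric positive definite inertia matrix
    H = rng.standard_normal(3)
    D_aa, D_au, D_ua, D_uu, H_a, H_u = partition_dynamics(D, H, n, m)

    # Equation (2b) gives the unactuated acceleration induced by a commanded
    # actuated acceleration qdd_a, exposing the coupling through D_ua.
    qdd_a = np.array([0.5, -0.2])
    qdd_u = -np.linalg.solve(D_uu, D_ua @ qdd_a + H_u)
    print("unactuated acceleration:", qdd_u)
```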
Given the desired trajectory \(\mathbf{q}_{a}^{d}\) for \(\mathcal{S}_{a}\), the control input is first designed to follow \(\mathbf{q}_{a}^{d}\) by temporarily neglecting \(\mathcal{S}_{u}\) as \[\mathbf{u}^{\rm ext}=\mathbf{D}_{aa}\mathbf{v}^{\rm ext}+\mathbf{D}_{au}\ddot{\mathbf{q}}_{u}+\mathbf{H}_{a}, \tag{3}\] where the error is \(\mathbf{e}_{a}=\mathbf{q}_{a}-\mathbf{q}_{a}^{d}\) and \(\mathbf{v}^{\rm ext}\in\mathbb{R}^{n}\) is the auxiliary input such that \(\mathbf{e}_{a}\) converges to zero. To account for the coupling relationship between \(\mathcal{S}_{a}\) and \(\mathcal{S}_{u}\), the unactuated coordinate \(\mathbf{q}_{u}\) is balanced onto the BEM. The BEM is the instantaneous equilibrium in terms of \(\mathbf{q}_{u}\) under control \(\mathbf{v}^{\rm ext}\), \[\mathcal{E}=\left\{\mathbf{q}_{u}^{e}:\mathbf{\Gamma}\left(\mathbf{q}_{u};\mathbf{v}^{\rm ext}\right)=\mathbf{0},\ \dot{\mathbf{q}}_{u}=\ddot{\mathbf{q}}_{u}=\mathbf{0}\right\}, \tag{4}\] where \(\mathbf{\Gamma}(\mathbf{q}_{u};\mathbf{v}^{\rm ext})=\mathbf{D}_{uu}\ddot{\mathbf{q}}_{u}+\mathbf{D}_{ua}\mathbf{v}^{\rm ext}+\mathbf{H}_{u}\). \(\mathbf{q}_{u}^{e}\) is obtained by inverting \(\mathbf{\Gamma}_{0}=\mathbf{\Gamma}(\mathbf{q}_{u};\mathbf{v}^{\rm ext})|_{\dot{\mathbf{q}}_{u}=\ddot{\mathbf{q}}_{u}=\mathbf{0}}=\mathbf{0}\). To stabilize \(\mathbf{q}_{u}\) onto \(\mathcal{E}\), we update the \(\mathbf{q}_{a}\) motion to incorporate balance control as \[\mathbf{v}^{\rm int}=-\mathbf{D}_{ua}^{+}(\mathbf{H}_{u}+\mathbf{D}_{uu}\mathbf{v}_{u}^{\rm int}), \tag{5}\] where \(\mathbf{D}_{ua}^{+}=\mathbf{D}_{ua}^{T}(\mathbf{D}_{ua}\mathbf{D}_{ua}^{T})^{-1}\) denotes the generalized inverse of \(\mathbf{D}_{ua}\). \(\mathbf{v}_{u}^{\rm int}\) is the auxiliary control that drives the error \(\mathbf{e}_{u}=\mathbf{q}_{u}-\mathbf{q}_{u}^{e}\) toward zero. The final control is obtained by replacing \(\mathbf{v}^{\rm ext}\) in (3) with \(\mathbf{v}^{\rm int}\), that is, \[\mathbf{u}^{\rm int}=\mathbf{D}_{aa}\mathbf{v}^{\rm int}+\mathbf{D}_{au}\ddot{\mathbf{q}}_{u}+\mathbf{H}_{a}. \tag{6}\] The above sequential EIC-based control design achieves tracking for \(\mathcal{S}_{a}\) and balance for \(\mathcal{S}_{u}\) simultaneously. It has been shown in [10] that, under the assumption that the robot model errors are affine in the tracking errors, the control \(\mathbf{u}^{\rm int}\) guarantees that both \(\mathbf{e}_{a}\) and \(\mathbf{e}_{u}\) converge to a neighborhood of the origin exponentially. ### _Limitations of EIC-based Control_ In this subsection, we show the limitations of the EIC-based control design discussed in the previous section. The limitation comes from (5), which uses a mapping from a low-dimensional (\(m\)) to a high-dimensional (\(n\)) space (i.e., \(m<n\)). For the robot control (6), it has been shown that there exist a finite time \(T>0\) and a small number \(\epsilon>0\) such that \(\|\mathbf{q}_{u}(t)-\mathbf{q}_{u}^{e}(t)\|<\epsilon\) for \(t>T\) [10]. Given the negligible error, we obtain \(\mathbf{D}_{au}(\mathbf{q}_{a},\mathbf{q}_{u})\approx\mathbf{D}_{au}(\mathbf{q}_{a},\mathbf{q}_{u}^{e})\). We apply singular value decomposition (SVD) to \(\mathbf{D}_{ua}\) and \(\mathbf{D}_{ua}^{+}\), \[\mathbf{D}_{ua}=\mathbf{U}\mathbf{\Lambda}\mathbf{V}^{T},\quad\mathbf{D}_{ua}^{+}=\mathbf{V}\mathbf{\Lambda}^{+}\mathbf{U}^{T}, \tag{7}\] where \(\mathbf{U}\in\mathbb{R}^{m\times m}\) and \(\mathbf{V}\in\mathbb{R}^{n\times n}\) are orthogonal matrices. 
\(\mathbf{\Lambda}=[\mathbf{\Lambda}_{m}\ \mathbf{0}]\in\mathbb{R}^{m\times n}\), \(\mathbf{\Lambda}^{+}=[\mathbf{\Lambda}_{m}^{-1}\ \mathbf{0}]^{T}\in\mathbb{R}^{n\times m}\), and \(\mathbf{\Lambda}_{m}=\mathrm{diag}(\sigma_{1},...,\sigma_{m})\) with singular values \(0<\sigma_{1}\leq\sigma_{2}\leq\cdots\leq\sigma_{m}\). Since \(\mathbf{V}\) is an orthogonal matrix, its column vectors serve as a complete basis of \(\mathbb{R}^{n}\). Rewriting \(\mathbf{q}_{a}\) and \(\mathbf{v}^{\rm ext}\) in \(\mathrm{span}(\mathbf{V})\), we have the transformations \[\mathbf{p}_{a}=\mathbf{V}^{T}\mathbf{q}_{a},\quad\mathbf{\nu}^{\rm ext}=\mathbf{V}^{T}\mathbf{v}^{\rm ext}, \tag{8}\] where \(\mathbf{\nu}^{\rm ext}=[(\mathbf{\nu}_{m}^{\rm ext})^{T}\ (\mathbf{\nu}_{n}^{\rm ext})^{T}]^{T}\). Note that \([\mathbf{p}_{a}^{T}\ \mathbf{q}_{u}^{T}]^{T}\) still serves as a complete set of generalized coordinates for \(\mathcal{S}\). The robot dynamics \(\mathcal{S}_{u}\) under control \(\mathbf{u}^{\rm ext}\) is \[\ddot{\mathbf{q}}_{u}=-\mathbf{D}_{uu}^{-1}(\mathbf{D}_{ua}\mathbf{v}^{\rm ext}+\mathbf{H}_{u}).\] Plugging (7) and (8) into the above equation yields \[\ddot{\mathbf{q}}_{u}=-\mathbf{D}_{uu}^{-1}(\mathbf{U}\mathbf{\Lambda}_{m}\mathbf{\nu}_{m}^{\rm ext}+\mathbf{H}_{u}). \tag{9}\] For \(\mathcal{E}\), \(\mathbf{q}_{u}^{e}\) is obtained by solving \(\mathbf{\Gamma}_{0}(\mathbf{q}_{u};\mathbf{v}^{\rm ext})=\mathbf{0}\). With the above discussion, we substitute \(\mathbf{D}_{ua}(\mathbf{q}_{u}^{e})\) with \(\mathbf{D}_{ua}(\mathbf{q}_{u})\) in \(\mathbf{\Gamma}_{0}\) and therefore, using (7), \(\mathbf{\Gamma}_{0}=\mathbf{0}\) is rewritten as \[\mathbf{\Lambda}_{m}\mathbf{\nu}_{m}^{\rm ext}+\mathbf{U}^{T}\mathbf{H}_{u}\Big{|}_{\mathbf{q}_{u}=\mathbf{q}_{u}^{e},\ddot{\mathbf{q}}_{u}=\mathbf{0}}=\mathbf{0}, \tag{10}\] which can also be obtained from the right-hand side of (9). The BEM \(\mathcal{E}\) only depends on \(\mathbf{\nu}_{m}^{\rm ext}\), which lies in the subspace \(\mathrm{span}\{\mathbf{V}_{1},...,\mathbf{V}_{m}\}\) of \(\mathbf{V}\). The control effect \(\mathbf{\nu}_{n}^{\rm ext}\) in the subspace \(\mathrm{ker}(\mathbf{D}_{ua})\) is disposable when obtaining the BEM. The control \(\mathbf{v}^{\rm int}\) in (5) is augmented by the matrix \(\mathbf{D}_{ua}^{+}\) using \(\mathbf{v}_{u}^{\rm int}\in\mathbb{R}^{m}\); that is, it is updated from the low (\(m\))-dimensional space to the high (\(n\))-dimensional space, and the components of the actuated motion in \(\ker(\mathbf{D}_{ua})=\mathrm{span}\{\mathbf{V}_{m+1},...,\mathbf{V}_{n}\}\) are left unregulated, which leads to the uncontrolled motion. ## III GP-Based Robot Dynamics Model In this section, we build a GP-based dynamics model. The enhanced EIC-based control design in the next section will be built on a selected nominal model. ### _GP-Based Robot Model_ Obtaining an accurate analytical model is challenging for many robotic systems and we consider capturing the robotic system dynamics using a GP-based data-driven method. We consider a multivariate continuously smooth function \(y=f(\mathbf{x})+w\), where \(w\) is zero-mean Gaussian noise. The Gaussian process can be viewed as a distribution over functions. Denote the training data sampled from \(y=f(\mathbf{x})+w\) as \(\mathbb{D}=\{\mathbf{X},\mathbf{Y}\}=\left\{\mathbf{x}_{i},y_{i}\right\}_{i=1}^{N}\), where \(\mathbf{X}=\{\mathbf{x}_{i}\}_{i=1}^{N}\), \(\mathbf{Y}=\{y_{i}\}_{i=1}^{N}\), \(\mathbf{x}_{i}\in\mathbb{R}^{n_{x}}\), and \(N\in\mathbb{N}\) is the number of data points. The GP model is trained by maximizing the posterior probability \(p(\mathbf{Y};\mathbf{X},\mathbf{\alpha})\) over the hyperparameters \(\mathbf{\alpha}\). 
That is, \(\mathbf{\alpha}\) is obtained by minimizing the negative log-likelihood (up to an additive constant), \[\min_{\mathbf{\alpha}}-\log p(\mathbf{Y};\mathbf{X},\mathbf{\alpha})=\min_{\mathbf{\alpha}}\frac{1}{2}\mathbf{Y}^{T}\mathbf{K}^{-1}\mathbf{Y}+\frac{1}{2}\log\det(\mathbf{K}),\] where \(\mathbf{K}=(K_{ij})\), \(K_{ij}=k(\mathbf{x}_{i},\mathbf{x}_{j})=\sigma_{f}^{2}\exp(-\frac{1}{2}(\mathbf{x}_{i}-\mathbf{x}_{j})^{T}\mathbf{W}(\mathbf{x}_{i}-\mathbf{x}_{j}))+\vartheta^{2}\delta_{ij}\), \(\mathbf{W}=\operatorname{diag}\{W_{1},\cdots,W_{n_{x}}\}>0\), \(\delta_{ij}=1\) for \(i=j\) and \(\delta_{ij}=0\) otherwise, and \(\mathbf{\alpha}=\{\mathbf{W},\sigma_{f},\vartheta^{2}\}\) are the hyperparameters. Given a new \(\mathbf{x}^{*}\), the GP model predicts the corresponding \(y\) and the joint distribution is \[\begin{bmatrix}\mathbf{Y}\\ y\end{bmatrix}\sim\mathcal{N}\left(\mathbf{0},\begin{bmatrix}\mathbf{K}&\mathbf{k}^{T}\\ \mathbf{k}&k^{*}\end{bmatrix}\right), \tag{13}\] where \(\mathbf{k}=\mathbf{k}(\mathbf{x}^{*},\mathbf{X})\) and \(k^{*}=k(\mathbf{x}^{*},\mathbf{x}^{*})\). The mean value and variance for input \(\mathbf{x}^{*}\) are \[\mu_{i}(\mathbf{x}^{*})=\mathbf{k}\mathbf{K}^{-1}\mathbf{Y},\quad\Sigma_{i}(\mathbf{x}^{*})=k^{*}-\mathbf{k}\mathbf{K}^{-1}\mathbf{k}^{T}. \tag{14}\] For a vector function, we build one GP model for each channel. To apply the GP data-driven model to the robot dynamics \(\mathcal{S}\), we first build a nominal model \[\mathcal{S}^{n}:\ \bar{\mathbf{D}}\ddot{\mathbf{q}}+\bar{\mathbf{H}}=\mathbf{u}, \tag{15}\] where \(\bar{\mathbf{D}}\) and \(\bar{\mathbf{H}}\) are the nominal inertia matrix and the nominal nonlinear term, respectively. In general, the nominal dynamics equation does not hold for the data sampled from physical robot systems. The GP models are built to capture the difference between \(\mathcal{S}^{n}\) and \(\mathcal{S}\). The dynamics model difference is \[\mathbf{H}^{e}=\mathbf{D}\ddot{\mathbf{q}}+\mathbf{H}-\bar{\mathbf{D}}\ddot{\mathbf{q}}-\bar{\mathbf{H}}=\mathbf{u}-\bar{\mathbf{D}}\ddot{\mathbf{q}}-\bar{\mathbf{H}}.\] We build the GP models to capture \(\mathbf{H}^{e}=[(\mathbf{H}^{e}_{a})^{T}\ (\mathbf{H}^{e}_{u})^{T}]^{T}\). Two GP models are built to predict \(\mathbf{H}^{e}_{a}\) and \(\mathbf{H}^{e}_{u}\). The training data \(\mathbb{D}=\{\mathbf{X},\mathbf{Y}\}\) are sampled from \(\mathcal{S}\) as \(\mathbf{X}=\{\mathbf{x}_{i}\}_{i=1}^{N}\), \(\mathbf{Y}=\{\mathbf{H}^{e}_{i}\}_{i=1}^{N}\), where \(\mathbf{x}=\{\mathbf{q},\ \dot{\mathbf{q}},\ \ddot{\mathbf{q}}\}\). The GP predicted mean and variance are denoted as \((\mathbf{\mu}_{i}(\mathbf{x}),\mathbf{\Sigma}_{i}(\mathbf{x}))\) for \(\mathbf{H}^{e}_{i}\), \(i=a,u\). The GP-based robot dynamics model \(\mathcal{S}^{gp}\) is then given as \[\mathcal{S}^{gp}_{a}:\bar{\mathbf{D}}_{aa}\ddot{\mathbf{q}}_{a}+\bar{\mathbf{D}}_{au}\ddot{\mathbf{q}}_{u}+\mathbf{H}^{gp}_{a}=\mathbf{u}, \tag{16a}\] \[\mathcal{S}^{gp}_{u}:\bar{\mathbf{D}}_{ua}\ddot{\mathbf{q}}_{a}+\bar{\mathbf{D}}_{uu}\ddot{\mathbf{q}}_{u}+\mathbf{H}^{gp}_{u}=\mathbf{0}, \tag{16b}\] where \(\mathbf{H}^{gp}_{i}=\bar{\mathbf{H}}_{i}+\mathbf{\mu}_{i}(\mathbf{x})\), \(i=a,u\). The GP-based model prediction error is \[\mathbf{\Delta}=\begin{bmatrix}\mathbf{\Delta}_{a}\\ \mathbf{\Delta}_{u}\end{bmatrix}=\begin{bmatrix}\mathbf{\mu}_{a}(\mathbf{x})-\mathbf{H}^{e}_{a}\\ \mathbf{\mu}_{u}(\mathbf{x})-\mathbf{H}^{e}_{u}\end{bmatrix}. \tag{17}\] To quantify the GP-based model prediction, we use Theorem 6 in [22] and obtain the following property for \(\mathbf{\Delta}\). 
**Lemma 1**: _Given the training dataset \(\mathbb{D}\), if the kernel function \(k(\mathbf{x}_{i},\mathbf{x}_{j})\) is chosen such that \(\mathbf{H}^{e}_{a}\) for \(\mathcal{S}_{a}\) has a finite reproducing kernel Hilbert space norm \(\left\|\mathbf{H}^{e}_{a}\right\|_{k}<\infty\), then for given \(0<\eta_{a}<1\),_ \[\Pr\left\{\left\|\mathbf{\Delta}_{a}\right\|\leq\left\|\mathbf{\kappa}^{T}_{a}\mathbf{\Sigma}^{\frac{1}{2}}_{a}(\mathbf{x})\right\|\right\}\geq\eta_{a}, \tag{18}\] _where \(\Pr\{\cdot\}\) denotes the probability of an event, \(\mathbf{\kappa}_{a}\in\mathbb{R}^{n}\) and its \(i\)-th entry is \(\kappa_{ai}=\sqrt{2\|\mathbf{H}^{e}_{a,i}\|_{k}^{2}+300\varsigma_{i}\ln^{3}\frac{N+1}{1-\eta_{a}^{\frac{N}{n}}}}\), and \(\varsigma_{i}=\max_{\mathbf{x},\mathbf{x}^{\prime}\in\mathbf{X}}\frac{1}{2}\ln|1+\vartheta_{i}^{-2}k_{i}\left(\mathbf{x},\mathbf{x}^{\prime}\right)|\). A similar conclusion holds for \(\mathbf{\Delta}_{u}\) with probability \(0<\eta_{u}<1\)._ ### _Nominal Model Selection_ With the constructed GP models, the next goal is to build an enhanced EIC-based control to achieve stability and performance by eliminating the limitations that were discussed in the previous section. To achieve such a goal, we first require bounded matrices \(\bar{\mathbf{D}}\) and \(\bar{\mathbf{H}}\). Inverting the inertia matrix \(\bar{\mathbf{D}}\) is required for feedback linearization and thus, \(\bar{\mathbf{D}}\) is selected to be invertible. Second, the uncontrolled motion exists in the kernel of matrix \(\bar{\mathbf{D}}_{ua}\). If \(\ker(\bar{\mathbf{D}}_{ua})\) is constant, the uncontrolled motion appears in a fixed subspace of the configuration space. Therefore, it is required that the kernel of \(\bar{\mathbf{D}}_{ua}\) is non-constant. As mentioned previously, the uncontrolled motion happens because the controller is updated from a low- to a high-dimensional space. If the unactuated coordinates depend on \(m\) (out of \(n\)) control inputs, we only need to update this \(m\)-input set. From the above reasoning, we obtain the following conditions for the nominal model. * \(\mathcal{C}_{1}\): \(\bar{\mathbf{D}}=\bar{\mathbf{D}}^{T}\succ 0\), i.e., positive definite, \(\left\|\bar{\mathbf{D}}\right\|\leq d\), \(\left\|\bar{\mathbf{H}}\right\|\leq h\), where constants \(0<d,h<\infty\); * \(\mathcal{C}_{2}\): \(\operatorname{rank}(\bar{\mathbf{D}}_{aa})=n\), \(\operatorname{rank}(\bar{\mathbf{D}}_{uu})=\operatorname{rank}(\bar{\mathbf{D}}_{ua})=m\); * \(\mathcal{C}_{3}\): non-constant kernel of \(\bar{\mathbf{D}}_{ua}\); * \(\mathcal{C}_{4}\): the motion of the unactuated coordinates depends on only \(m\) control inputs. We will illustrate how to select nominal models that satisfy the above conditions in Section V. ## IV GP-Enhanced EIC-Based Control In this section, we first present the partial EIC (PEIC) control that takes advantage of the GP predictive model and explicitly eliminates the uncontrolled motion. Stability and performance analysis is then discussed. ### _PEIC-Based Control Design_ With the GP predictive models \(\mathcal{S}^{gp}\), we incorporate the predictive variance of \(\mathcal{S}^{gp}_{a}\) into the auxiliary control \(\mathbf{v}^{\text{ext}}\) as \[\hat{\mathbf{v}}^{\text{ext}}=\ddot{\mathbf{q}}^{d}_{a}-\mathbf{k}_{p1}(\mathbf{\Sigma}_{a})\mathbf{e}_{a}-\mathbf{k}_{d1}(\mathbf{\Sigma}_{a})\dot{\mathbf{e}}_{a}, \tag{19}\] 
Given the GP-based dynamics, the BEM is estimated by solving the optimization problem \[\hat{\mathbf{q}}_{u}^{e}=\arg\min_{\mathbf{q}_{u}}\|\mathbf{\Gamma}_{0}(\mathbf{q}_{u};\hat{ \mathbf{v}}^{\rm ext})\|. \tag{20}\] The solution is denoted as \(\hat{\mathbf{q}}_{u}^{e}\). The updated control design is \[\hat{\mathbf{v}}_{u}^{\rm int}=\ddot{\hat{\mathbf{q}}}_{u}^{e}-\mathbf{k}_{p2}(\mathbf{\Sigma} _{u})\hat{\mathbf{e}}_{u}-\mathbf{k}_{d2}(\mathbf{\Sigma}_{u})\dot{\hat{\mathbf{e}}}_{u}, \tag{21}\] where \(\hat{\mathbf{e}}_{u}=\mathbf{q}_{u}-\hat{\mathbf{q}}_{u}^{e}\) is the internal system tracking error relative to the estimated BEM. \(\mathbf{k}_{p2}(\mathbf{\Sigma}_{u}),\mathbf{k}_{d2}(\mathbf{\Sigma}_{u})\succ 0\) are also designed and tuned by the estimated GP variance \(\mathbf{\Sigma}_{u}\). Let \(\Delta\mathbf{q}_{u}^{e}=\mathbf{q}_{u}^{e}-\hat{\mathbf{q}}_{u}^{e}\) denote the BEM estimation error and the actual BEM is \(\mathbf{q}_{u}^{e}=\hat{\mathbf{q}}_{u}^{e}+\Delta\mathbf{q}_{u}^{e}\). The control design based on actual BEM is \(\mathbf{v}_{u}^{\rm int}=\tilde{\mathbf{q}}_{u}^{e}-\mathbf{k}_{p2}(\mathbf{\Sigma}_{u})\mathbf{e }_{u}-\mathbf{k}_{d2}(\mathbf{\Sigma}_{u})\mathbf{e}_{u}\) and therefore, we have \[\mathbf{v}_{u}^{\rm int}=\hat{\mathbf{v}}_{u}^{\rm int}-\Delta\mathbf{v}_{u}^{\rm int},\] where \(\Delta\mathbf{v}_{u}^{\rm int}=\Delta\ddot{\mathbf{q}}_{u}^{e}+\mathbf{k}_{p2}\Delta\mathbf{q }_{u}^{e}+\mathbf{k}_{d2}\Delta\dot{\mathbf{q}}_{u}^{e}\). Compared to (4), the BEM estimation error comes from GP modeling error and optimization accuracy. It is reasonable to assume that \(\Delta\mathbf{q}_{u}^{e}\) is bounded. Because of bounded Gaussian kernel function, the GP prediction variances are also bounded, i.e., \[\|\mathbf{\Sigma}_{a}(\mathbf{x})\|\leq(\sigma_{a}^{\rm max})^{2},\|\mathbf{\Sigma}_{u}(\bm {x})\|\leq(\sigma_{u}^{\rm max})^{2}, \tag{22}\] where \(\sigma_{a}^{\rm max}=\max_{i}(\sigma_{f_{ai}}^{2}+\vartheta_{ai}^{2})^{1/2}\), \(\sigma_{u}^{\rm max}=\max_{i}(\sigma_{f_{ai}}^{2}+\vartheta_{ai}^{2})^{1/2}\), \(\sigma_{f}\) and \(\vartheta\) are the hyperparameters in each channel. Furthermore, we require the control gains to satisfy the following bounds \[k_{i1}\leq\lambda(\mathbf{k}_{i1})\leq k_{i3},\quad k_{i2}\leq\lambda(\mathbf{k}_{i2} )\leq k_{i4},\;i=p,d\] for constants \(k_{pj},k_{dj}>0\), \(j=1,\cdots,4\), where \(\lambda(\cdot)\) denotes the eigenvalue operator. The control design \(\mathbf{v}^{\rm int}\) in (5) revises the preliminary control \(\mathbf{v}^{\rm ext}\). Under the updated control, \(\mathbf{q}_{a}\) serves as a control input to drive \(\mathbf{q}_{u}\) to \(\mathbf{q}_{u}^{e}\). For PEIC-based control, we instead consider a partial coupling constraint between \(\mathbf{q}_{a}\) and \(\mathbf{q}_{u}\) and assign \(m\) control inputs (equivalently the actuated coordinates) for unactuated subsystem control. To achieve such a goal, we partition the actuated coordinates as \(\mathbf{q}_{a}=[\mathbf{q}_{aa}^{T}\ \mathbf{q}_{au}^{T}]^{T}\), \(\mathbf{q}_{au}\in\mathbb{R}^{m}\), \(\mathbf{q}_{aa}\in\mathbb{R}^{n-m}\), and \(\mathbf{u}=[\mathbf{u}_{a}^{T}\ \mathbf{u}_{a}^{T}]^{T}\). 
The \(\mathcal{S}^{gp}\) dynamics in (16) is rewritten as \[\begin{bmatrix}\bar{\mathbf{D}}_{a}^{a}&\bar{\mathbf{D}}_{a}^{au}&\bar{\mathbf{D}}_{au}^{a} \\ \bar{\mathbf{D}}_{aa}^{ua}&\bar{\mathbf{D}}_{aa}^{u}&\bar{\mathbf{D}}_{au}^{u}\\ \bar{\mathbf{D}}_{ua}^{u}&\bar{\mathbf{D}}_{ua}^{u}&\bar{\mathbf{D}}_{bu}^{u}\\ \end{bmatrix}\begin{bmatrix}\tilde{\mathbf{q}}_{aa}\\ \tilde{\mathbf{q}}_{au}\\ \tilde{\mathbf{q}}_{u}\\ \tilde{\mathbf{q}}_{u}\\ \end{bmatrix}+\begin{bmatrix}\mathbf{H}_{aa}^{gp}\\ \mathbf{H}_{aa}^{gp}\\ \mathbf{H}_{ab}^{gp}\\ \end{bmatrix}=\begin{bmatrix}\mathbf{u}_{a}\\ \mathbf{u}_{u}\\ \mathbf{0}\\ \end{bmatrix}, \tag{23}\] where all block matrices are in proper dimension. We rewrite (23) into three groups as \[\mathcal{S}_{aa}^{gp}:\bar{\mathbf{D}}_{aa}^{a}\tilde{\mathbf{q}}_{aa}+\mathbf{H}_{an}^{a }=\mathbf{u}_{a}, \tag{24a}\] \[\mathcal{S}_{au}^{gp}:\bar{\mathbf{D}}_{aa}^{u}\tilde{\mathbf{q}}_{au}+ \bar{\mathbf{D}}_{au}^{u}\tilde{\mathbf{q}}_{u}+\mathbf{H}_{an}^{u}=\mathbf{u}_{u},\] (24b) \[\mathcal{S}_{au}^{gp}:\bar{\mathbf{D}}_{ua}^{u}\tilde{\mathbf{q}}_{au}+ \bar{\mathbf{D}}_{uu}^{u}\tilde{\mathbf{q}}_{u}+\mathbf{H}_{un}=\mathbf{0}, \tag{24c}\] where \(\mathbf{H}_{an}^{aa}=\bar{\mathbf{D}}_{aa}^{au}\tilde{\mathbf{q}}_{au}+\bar{\mathbf{D}}_{a}^{a }\tilde{\mathbf{q}}_{u}+\mathbf{H}_{aa}^{gp}\), \(\mathbf{H}_{un}^{u}=\bar{\mathbf{D}}_{aa}^{ua}\tilde{\mathbf{q}}_{aa}+\bar{\mathbf{D}}_{au}^{u} \tilde{\mathbf{q}}_{u}+\mathbf{H}_{an}^{gp}\), and \(\mathbf{H}_{un}=\bar{\mathbf{D}}_{ua}^{a}\tilde{\mathbf{q}}_{aa}+\mathbf{H}_{u}^{gp}\). Apparently, \(\mathcal{S}_{au}^{gp}\) is virtually independent of \(\mathcal{S}_{aa}^{gp}\), since there is "no dynamics coupling". The dynamics coupling virtually exists only between \(\mathcal{S}_{a}^{gp}\) and \(\mathcal{S}_{au}^{gp}\). Let control \(\hat{\mathbf{v}}^{\rm ext}\) in (19) be partitioned into \(\hat{\mathbf{v}}_{a}^{\rm ext},\hat{\mathbf{v}}_{u}^{\rm ext}\) corresponding to \(\mathbf{q}_{aa}\) and \(\mathbf{q}_{au}\), respectively. \(\hat{\mathbf{v}}_{a}^{\rm ext}\) is directly applied to \(\mathcal{S}^{gp}\) and \(\hat{\mathbf{v}}_{u}^{\rm ext}\) is updated for balance control purpose. As aforementioned, the necessary conditions to eliminate the uncontrolled motion in \(\mathcal{S}_{a}\) is that \(\mathbf{q}_{u}\) only depends on \(m\) inputs. The task of driving \(\mathbf{q}_{u}\) to \(\mathbf{q}_{u}^{e}\) is assigned to \(\mathbf{q}_{au}\) coordinates only. With this observation, the PEIC-based control is given as \(\hat{\mathbf{u}}^{\rm int}=[\hat{\mathbf{q}}_{a}^{T}\ \hat{\mathbf{u}}_{u}^{T}]^{T}\) with \[\hat{\mathbf{u}}_{a}=\bar{\mathbf{D}}_{aa}^{a}\hat{\mathbf{v}}_{a}^{\rm ext}+\mathbf{H}_{an}^{a },\hat{\mathbf{u}}_{u}=\bar{\mathbf{D}}_{aa}^{u}\hat{\mathbf{v}}^{\rm int}+\bar{\mathbf{D}}_{ au}^{u}\tilde{\mathbf{q}}_{u}+\mathbf{H}_{an}^{u}, \tag{25}\] where \(\hat{\mathbf{v}}^{\rm int}=-\left(\bar{\mathbf{D}}_{ua}^{u}\right)^{-1}\left(\mathbf{H}_{un} +\bar{\mathbf{D}}_{uu}\hat{\mathbf{v}}_{u}^{\rm int}\right)\). The auxiliary controls are \(\hat{\mathbf{v}}_{a}^{\rm ext}\) and \(\hat{\mathbf{v}}_{u}^{\rm int}\). Clearly, the unactuated subsystem only depends on \(\mathbf{u}_{u}\) under the PEIC design. Fig. 1 illustrates the overall flowchart of the PEIC-based control design for underactuated balance robots. ### _Stability and Performance Analysis_ To investigate the closed-loop dynamics, we take the GP prediction error and BEM estimation error into consideration. 
The GP prediction error in (17) is extended to \(\mathbf{\Delta}_{aa}\), \(\mathbf{\Delta}_{au}\) and \(\mathbf{\Delta} with \(\mathbf{O}_{\text{tot}}=[\mathbf{O}_{a}^{T}\,\mathbf{O}_{t}^{T}]^{T}\), \(\mathbf{O}_{a}=[\mathbf{O}_{aa}^{T}\,\mathbf{O}_{a}^{T}]^{T}\), \(\mathbf{O}_{aa}=-(\bar{\mathbf{D}}_{aa}^{a})^{-1}\mathbf{\Delta}_{aa}\), \(\mathbf{O}_{u}=-\bar{\mathbf{D}}_{uu}^{-1}(\mathbf{\Delta}_{u}-\bar{\mathbf{D}}_{ua}^{u}(\bar{ \mathbf{D}}_{ua}^{u})^{-1}\mathbf{\Delta}_{au})-\mathbf{\Delta}_{v}^{\text{int}}\), \(\mathbf{k}_{p}=\mathrm{diag}(\mathbf{k}_{p1},\mathbf{k}_{p2})\), and \(\mathbf{k}_{d}=\mathrm{diag}(\mathbf{k}_{d1},\mathbf{k}_{d2})\). Because of bounded \(\bar{\mathbf{D}}\), there exists constants \(0<d_{a1},d_{a2},d_{u1},d_{a2}<\infty\) such that \(d_{a1}\leq\left\|\bar{\mathbf{D}}_{aa}\right\|\leq d_{a2}\) and \(d_{u1}\leq\left\|\bar{\mathbf{D}}_{uu}\right\|\leq d_{u2}\). The perturbation terms are further expressed and bounded as \[\left\|\mathbf{O}_{a}\right\| =\left\|-\left[\begin{matrix}\mathbf{0}\\ (\bar{\mathbf{D}}_{ua}^{u})^{-1}\bar{\mathbf{D}}_{uu}\hat{\mathbf{v}}_{u}^{\text{int}} \end{matrix}\right]-(\bar{\mathbf{D}}_{aa}^{a})^{-1}\mathbf{\Delta}_{a}+\left[\begin{matrix} \mathbf{0}\\ \mathbf{O}_{\text{hot}}\end{matrix}\right]\right\|\] \[\leq\tfrac{d_{u2}}{\sigma_{1}}\left\|\hat{\mathbf{v}}_{u}^{\text{int} }\right\|+\tfrac{1}{d_{a1}}\left\|\mathbf{\Delta}_{a}\right\|+\left\|\mathbf{O}_{\text {hot}}\right\|\] and \[\left\|\mathbf{O}_{u}\right\| =\left\|-\bar{\mathbf{D}}_{uu}^{-1}(\mathbf{\Delta}_{u}-\bar{\mathbf{D}}_{ua }^{u}(\bar{\mathbf{D}}_{aa}^{u})^{-1}\mathbf{\Delta}_{au})-\Delta\mathbf{v}_{u}^{\text{ int}}\right\|\] \[\leq\tfrac{1}{d_{u1}}\left\|\mathbf{\Delta}_{u}\right\|+\tfrac{\sigma _{m}}{d_{u1}d_{a1}}\left\|\mathbf{\Delta}_{a}\right\|+\left\|\Delta\mathbf{v}_{u}^{ \text{int}}\right\|.\] The perturbation \(\mathbf{O}_{\text{hot}}\) is due to approximation and \(\Delta\mathbf{v}_{u}^{\text{int}}\) is the control difference due to the BEM calculation by the GP prediction, and we assume they are affine functions with total error \(\mathbf{e}\), that is, \[\left\|\mathbf{O}_{\text{hot}}\right\|\leq c_{1}\left\|\mathbf{e}\right\|+c_{2},\quad \left\|\Delta\mathbf{v}_{u}^{\text{int}}\right\|\leq c_{3}\left\|\mathbf{e}\right\|+c_ {4}\] with \(0<c_{i}<\infty\), \(i=1,\cdots,4\). From (22), we have \(\left\|\mathbf{\kappa}_{a}^{T}\mathbf{\Sigma}_{a}^{\frac{1}{2}}\right\|\leq\sigma_{a}^ {\text{max}}\left\|\mathbf{\kappa}_{a}\right\|\) and \(\left\|\mathbf{\kappa}_{u}^{T}\mathbf{\Sigma}_{a}^{\frac{1}{2}}\right\|\leq\sigma_{u}^ {\text{max}}\left\|\mathbf{\kappa}_{u}\right\|\). Thus, for \(0<\eta<1\), we can show that \[\Pr\left\{\left\|\mathbf{O}\right\|\leq d_{1}+d_{2}\left\|\mathbf{e}\right\|+l_{u} \left\|\mathbf{\kappa}_{u}\right\|+l_{a}\left\|\mathbf{\kappa}_{a}\right\|\right\}\geq\eta,\] with \(\eta=\eta_{a}\eta_{u}\), \(d_{1}=c_{2}+\left(1+\tfrac{d_{u2}}{\sigma_{1}}\right)c_{4}\), \(d_{2}=c_{1}+\tfrac{d_{u2}}{\sigma_{1}}c_{3}\), \(l_{a}=\tfrac{\sigma_{u}^{\text{max}}}{d_{1}d_{a1}}\), \(l_{u}=\tfrac{\sigma_{u}^{\text{max}}}{d_{u1}}\). With the above results, we have the following results about the stability and performance of the PEIC-based control and the proof is neglected due to page limit. 
**Lemma 2**: _For robot dynamics (2), using the GP-based model (16) and under the PEIC-based control design (19), (21) and (25), the system error \(\mathbf{e}\) exponentially converges to a small ball near the origin._ ## V Experimental Results We used two inverted pendulum platforms to conduct experiments to validate and demonstrate the robot control design. Fig. 2(a) shows a 2-DOF rotary inverted pendulum and Fig. 2(b) for a 3-DOF robotic leg that has an inverted link as the controlled balance task. The rotary inverted pendulum (2 DOFs, \(n=m=1\)) was made by Quanser Inc. and we used this example to illustrate the EIC-based control. The base joint (\(\theta_{1}\)) is actuated by a DC motor and the inverted pendulum joint (\(\theta_{2}\)) is unactuated. The physical model in (2) is given in [23]. The control input is motor voltage. Since the condition \(\mathcal{C}_{4}\) is satisfied automatically, there is no uncontrolled motion if the EIC-based control is applied. Either a constant nominal model or a time-varying nominal model should work. We take the nominal model \[\mathbf{\mathcal{S}}^{n1}:\ \bar{\mathbf{D}}_{1}=\frac{1}{100}\begin{bmatrix}5&-2 \,\mathrm{c}_{2}\\ -2\,\mathrm{c}_{2}&2\end{bmatrix},\ \bar{\mathbf{H}}_{1}=\begin{bmatrix}0\\ -\,\mathrm{s}_{2}\end{bmatrix},\] \[\mathbf{\mathcal{S}}^{n2}:\ \bar{\mathbf{D}}_{2}=\frac{1}{100}\begin{bmatrix}2&1\\ 1&2\end{bmatrix},\ \bar{\mathbf{H}}_{2}=\mathbf{0},\] where \(\mathrm{c}_{i}=\cos\theta_{i}\), \(\mathrm{s}_{i}=\sin\theta_{i}\) for angle \(\theta_{i}\), \(i=1,2\). The control gains \(k_{p1}=10+50\Sigma_{a}\), \(k_{d1}=3+10\Sigma_{a}\), \(k_{p2}=1000+500\Sigma_{u}\), and \(k_{d2}=100+200\Sigma_{u}\) were chosen. The reference trajectory was \(\theta_{1}=0.5\sin t+0.3\sin 1.5t\) rad. The control was implemented at \(400\) Hz in Matlab/Simulink with Quanser's hardware-in-the-loop real-time system. For comparison purpose, we also implemented a physical model-based EIC controller in experiments. Fig. 3 shows the experimental results. With either \(\mathcal{S}^{n1}\) or \(\mathcal{S}^{n2}\), the base link closely follows the reference trajectory and a similar trend is found for the pendulum motion (see Fig. 3(b)). However, the tracking error was reduced and the pendulum closely followed the small vibrations for the case with \(S^{n1}\). With \(\mathcal{S}^{n2}\), the tracking errors became large when the base link changed rotation direction; see Fig. 3(c) at \(t=10,17,22\) s. Since the condition \(\mathcal{C}_{4}\) is automatically satisfied, both the time-varying nominal model and constant nominal model worked for modeling learning and EIC-based control design. Table I lists the statistics of the tracking errors (mean and one standard deviation), including the learning-based control and physical model-based control. For both subsystems, the errors with the learning-based approach are smaller. In particular, with a time-varying nominal model, the tracking error (mean value) for \(e_{1}\) and \(e_{2}\) reduced \(75\%\) and \(65\%\) respectively in comparison with the physical model-based one. We next use a 3-DOF robotic leg (\(n=2,m=1\)) to demonstrate the proposed control design. The control implementation was at \(200\) Hz through ROS (Robot Operating System) at a Linux real-time system machine. 
The nominal model is \[\bar{\mathbf{D}}=\begin{bmatrix}0.15&0.025\,\mathrm{c}_{2}&0.025\,\mathrm{c}_{3}\\ 0.025\,\mathrm{c}_{2}&0.15&0.05\,\mathrm{c}_{23}\\ 0.025\,\mathrm{c}_{3}&0.05\,\mathrm{c}_{23}&0.1\end{bmatrix},\ \bar{\mathbf{H}}=\begin{bmatrix}0\\ 0.2\,\mathrm{c}_{2}\\ 0.1\,\mathrm{s}_{3}\end{bmatrix},\] Fig. 2: (a) A Furuta pendulum testbed. The base link joint \(\theta_{1}\) is actuated and the pendulum link joint \(\theta_{2}\) is unactuated. (b) A three-link robotic leg with two base links \(\theta_{1}\) and \(\theta_{2}\) are actuated and the top link \(\theta_{3}\) is unactuated. where \(\mathrm{c}_{ij}=\cos(\theta_{i}-\theta_{j})\). We apply an open-loop control (combination of sine wave torque) to excite the system and obtain the training data. The control gains were \(k_{p1}=15.0\mathbf{I}_{2}+20\mathbf{\Sigma}_{a},k_{d1}=3\mathbf{I}_{2}+10\mathbf{\Sigma}_{a},k_ {p2}=25+20\Sigma_{u},k_{d2}=5.5+10\mathbf{\Sigma}_{u}\). The reference trajectory was \(\theta_{1}^{d}=0.5\sin t\), \(\theta_{2}^{d}=0.4\sin 3t\) rad. We chose \(q_{aa}=\theta_{1}\) and \(q_{au}=\theta_{2}\). Fig. 4 shows the experimental results. Under the proposed control, the system followed the given reference trajectory closely and the third link was balanced around BEM as shown in Fig. 4(a). In Fig. 4(b), the tracking error of joint \(\theta_{1}\) is between \(-0.05\) rad to \(0.05\) rad, while the tracking error of joint \(\theta_{2}\) is between \(-0.1\) rad to \(0.1\) rad. Fig. 4(c) shows the results under the regular EIC-based control and it is clear that the system became unstable. The motion of the actuated coordinate in the new coordinate \(\mathbf{p}_{a}\) is shown in Figs. 4(d) and 4(e) and \(p_{a2}\) represents the uncontrolled motion variable. Though \(p_{a1}\) followed the reference, the \(p_{a2}\) showed a large error due to the lack of control. Fig. 4(f) shows the estimated error bound and it is clear that the tracking error entered and remained inside the bounded area. The above results confirmed that under the proposed control, the uncontrolled motion is eliminated and the simultaneously tracking and balance control property of EIC-based control is preserved. ## VI Conclusion This paper proposed a learning-based controller for underactuated balance robots. The proposed control is an extension of the external and internal convertible form control (EIC-based control). The EIC-based control aims to achieve tracking and balance simultaneously. However, we showed that there exists uncontrolled motion, which can cause the system unstable. We identified the conditions under which the uncontrolled motion happened and also proposed the GP-enhanced EIC-based control. The proposed new robot control preserved the structured design of the EIC-based control and achieved tracking and balance tasks. We tested the the new control design on two experimental platforms and confirmed that stability and balance can be guaranteed. Fig. 4: Experiment results with the underactuated robotic leg. (a) Motion profiles and (b) tracking errors under the PEIC-based control. (c) Motion profiles under the EIC-based control. (d) Motion profiles in the new coordinate \(\mathbf{p}_{a}\) under the PEIC-based control. (e) Motion profile \(\mathbf{p}_{a}\) under the EIC-based control. (f) Error trajectory in the \(\|\mathbf{e}_{a}\|\)-\(\|\mathbf{e}_{q}\|\) plane. Fig. 3: Experiment results with rotary inverted pendulum (a) Arm rotation angles. (b) Pendulum rotation angles. (c) Tracking control errors.
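To make the learning components concrete, the following sketch implements the squared-exponential kernel and the predictive mean and variance of (13)-(14) for a single residual channel, together with variance-dependent gain scheduling in the spirit of the gains used in Section V. It is a minimal illustration with assumed toy data and parameter values, not the authors' implementation; hyperparameter training, the BEM optimization (20), and the PEIC control (25) are omitted.

```python
import numpy as np

# Minimal GP regression for one output channel of the model residual H^e,
# following the squared-exponential kernel and predictive equations (13)-(14).
# All numbers below (lengthscales, gains) are illustrative, not the paper's.

def sq_exp_kernel(X1, X2, W_diag, sigma_f):
    # k(x, x') = sigma_f^2 * exp(-0.5 * (x - x')^T W (x - x')), with W diagonal
    d = X1[:, None, :] - X2[None, :, :]
    return sigma_f**2 * np.exp(-0.5 * np.einsum('ijk,k,ijk->ij', d, W_diag, d))

class ResidualGP:
    def __init__(self, X, Y, W_diag, sigma_f, noise_std):
        self.X, self.W, self.sf = X, W_diag, sigma_f
        self.K = sq_exp_kernel(X, X, W_diag, sigma_f) + noise_std**2 * np.eye(len(X))
        self.alpha = np.linalg.solve(self.K, Y)      # K^{-1} Y, reused for every query

    def predict(self, x_star):
        k = sq_exp_kernel(x_star[None, :], self.X, self.W, self.sf)[0]  # k(x*, X)
        mean = k @ self.alpha                               # predictive mean, cf. (14)
        var = self.sf**2 - k @ np.linalg.solve(self.K, k)   # latent predictive variance
        return mean, max(var, 0.0)

def scheduled_gains(var, kp0=10.0, kd0=3.0, kp_var=50.0, kd_var=10.0):
    # Variance-dependent PD gains in the spirit of Section V, e.g. k_p = 10 + 50*Sigma.
    return kp0 + kp_var * var, kd0 + kd_var * var

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy training set: x = (q, qdot, qddot) flattened, y = one channel of H^e.
    X = rng.uniform(-1.0, 1.0, size=(200, 6))
    Y = np.sin(X[:, 0]) * X[:, 3] + 0.01 * rng.standard_normal(200)
    gp = ResidualGP(X, Y, W_diag=np.full(6, 4.0), sigma_f=1.0, noise_std=0.05)
    mu, var = gp.predict(np.zeros(6))
    kp, kd = scheduled_gains(var)
    print(f"residual mean {mu:.3f}, variance {var:.4f}, gains kp={kp:.2f}, kd={kd:.2f}")
```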
2307.00143
Centauri: Practical Rowhammer Fingerprinting
Fingerprinters leverage the heterogeneity in hardware and software configurations to extract a device fingerprint. Fingerprinting countermeasures attempt to normalize these attributes such that they present a uniform fingerprint across different devices or present different fingerprints for the same device each time. We present Centauri, a Rowhammer fingerprinting approach that can build unique and stable fingerprints even across devices with homogeneous or normalized/obfuscated hardware and software configurations. To this end, Centauri leverages the process variation in the underlying manufacturing process that gives rise to unique distributions of Rowhammer-induced bit flips across different DRAM modules. Centauri's design and implementation is able to overcome memory allocation constraints without requiring root privileges. Our evaluation on a test bed of about one hundred DRAM modules shows that Centauri achieves 99.91% fingerprinting accuracy. Centauri's fingerprints are also stable, with daily experiments over a period of 10 days revealing no loss in fingerprinting accuracy. We show that Centauri is efficient, taking as little as 9.92 seconds to extract a fingerprint. Centauri is the first practical Rowhammer fingerprinting approach that is able to extract unique and stable fingerprints efficiently and at-scale.
Hari Venugopalan, Kaustav Goswami, Zainul Abi Din, Jason Lowe-Power, Samuel T. King, Zubair Shafiq
2023-06-30T21:27:54Z
http://arxiv.org/abs/2307.00143v1
# Centauri: Practical Rowhammer Fingerprinting ###### Abstract Fingerprinters leverage the heterogeneity in hardware and software configurations to extract a device fingerprint. Fingerprinting countermeasures attempt to normalize these attributes such that they present a uniform fingerprint across different devices or present different fingerprints for the same device each time. We present Centauri, a Rowhammer fingerprinting approach that can build a unique and stable fingerprints even across devices with homogeneous or normalized/obfuscated hardware and software configurations. To this end, Centauri leverages the process variation in the underlying manufacturing process that gives rise to unique distributions of Rowhammer-induced bit flips across different DRAM modules. Centauri's design and implementation is able to overcome memory allocation constrains without requiring root privileges. Our evaluation on a test bed of about one hundred DRAM modules shows that Centauri achieves 99.91% fingerprinting accuracy. Centauri's fingerprints are also stable with daily experiments over a period of 10 days revealing no loss in fingerprinting accuracy. We show that Centauri is efficient, taking as little as 9.92 seconds to extract a fingerprint. Centauri is the first practical Rowhammer fingerprinting approach that is able to extract unique and stable fingerprints efficiently and at-scale. + Footnote †: publicationid: November 8, 2021 ## I Introduction Stateless tracking is becoming more prevalent [20, 5] in response to recent countermeasures against stateful tracking using browser cookies [39] and device identifiers (e.g., IDFA on iOS and AAID on Android) [3, 16]. Stateless tracking involves probing for distinguishing features of a device to construct a _fingerprint_ without needing to store any client-side state [28]. For a fingerprint to be useful for tracking, it needs to possess two attributes: uniqueness and stability. First, a fingerprint should have sufficiently high entropy to uniquely identify a device within a given population of devices [12]. Second, it should remain sufficiently stable over an extended duration, so it can be linked to other fingerprints from the same device for re-identification [54]. Fingerprinting typically leverages the heterogeneity in hardware and software configurations to extract fingerprints. For example, FingerprintJS [14], a widely deployed fingerprinting library, aggregates a variety of software and hardware attributes such as browser version, screen resolution, and the number of processors to construct device fingerprints. To attain high entropy, such fingerprints are dependent on devices having diverse configurations that are sufficiently distinguishable. A common countermeasure to reduce entropy is to standardize or normalize the attributes that capture device configuration to present the same values across different devices [38, 6]. Fingerprinters also have to contend with how the captured attributes change over time with the goal of producing a consistent fingerprint. These changes can either arise from natural evolution of device configuration (e.g., software updates) or from fingerprinting countermeasures that intentionally randomize values of attributes on the same device [7, 40]. To attain high stability, fingerprinters attempt to predict fingerprint changes [54] or employ attribute value stemming to improve stability while minimally impacting uniqueness [45]. 
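As a rough illustration of the attribute-based fingerprinting and the normalization countermeasures described above, consider the sketch below. It is not FingerprintJS's actual algorithm; the attribute names, values, and normalization rule are invented for the example.

```python
import hashlib

# Attribute-based fingerprinting: aggregate device attributes in a canonical
# order and hash them. Normalization countermeasures make different devices
# report the same standardized values, collapsing such fingerprints.

def fingerprint(attributes: dict) -> str:
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

def normalize(attributes: dict) -> dict:
    # Countermeasure: report standardized values regardless of the real config.
    return {"browser": attributes["browser"].split()[0],
            "resolution": "1920x1080", "cores": 4}

device_a = {"browser": "Firefox 118", "resolution": "2560x1440", "cores": 8}
device_b = {"browser": "Firefox 118", "resolution": "1920x1080", "cores": 4}

# Without the countermeasure the configurations differ, so the hashes differ.
print(fingerprint(device_a) == fingerprint(device_b))                      # False
# With normalization both devices report identical attributes: entropy is lost.
print(fingerprint(normalize(device_a)) == fingerprint(normalize(device_b)))  # True
```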
In this work, we investigate a stronger threat model where a fingerprint aims to extract unique and stable fingerprints Fig. 1: Visualization of the Rowhammer bit flip distribution with a brighter spot representing a higher bit flip probability. The **top row** shows the distributions on two different but identical DRAM modules. The **bottom row** shows the distribution on the same DRAM modules at a later point in time. for devices with identical hardware and software configurations over extended periods of time. To this end, we aim to capture fundamental differences in the physical properties of the device's hardware as unique fingerprints. Our key insight is that a fingerprint may be able to extract fingerprints from inherent differences that arise as a result of _process variation_ in the hardware (CMOS) manufacturing process. As users seldom modify their device hardware, these fingerprints remain stable, as long as they account for differences resulting from process variation in the same hardware. While prior research has explored variations in internal clocks [50], GPUs [27] and CPUs [8], we are the first to successfully leverage memory (DRAM) for fingerprinting. We leverage Rowhammer [26] to extract fingerprints by capturing the side-effects of process variation in memory modules. At a high level, "hammering" a memory row (i.e., repeated read or write operations in a short time interval) results in bit flips in adjacent memory rows. In this paper, we show that the pattern of bit flips due to Rowhammer can be leveraged to build a fingerprint. We also show that the pattern of Rowhammer bit flips is sufficiently unique and stable to build a reliable fingerprint for the population of computing devices (billions of devices). To build intuition, Figure 1 visualizes the distribution of bit flips produced by executing Rowhammer at the same locations on two identical DRAM modules1 at two different points in time. The results look promising--the distribution of bit flips is _reasonably similar_ on the same DRAM modules at different points in time while being _noticeably different_ across the pair of DRAM modules. Footnote 1: also called dual in-line memory modules or DIMMs In this paper, we present Centauri, a _practical_ Rowhammer-based fingerprinting approach that exploits bit flip distributions to extract highly _unique_ and _stable_ fingerprints even among homogeneous devices with identical software and hardware configurations over an extended period of time. Centauri overcomes three main challenges that make it practical for fingerprinting. _First_, as alluded to in Figure 1, the bit flips triggered by Rowhammer are non-deterministic (i.e., hammering the same location does not flip the same set of bits). Thus, a fingerprinter has to account for this non-determinism to extract stable fingerprints. We identify certain practical scenarios that exacerbate this non-determinism where comparing set similarity to match fingerprints falls short (SSVI-F, VI-G). With Centauri, we hammer the same locations multiple times to extract a probability distribution of bit flips as fingerprints. We then compare the divergence of these distributions that leads to better re-identification of devices even where there is a drastic difference in the set of bits that flipped. _Second_, fingerprints are constrained by the abstractions provided by the operating system to allocate memory. These abstractions provide limited access to contiguous physical memory and hide information about their allocation on the DRAM. 
Without root privileges, these constraints prevent fingerprints from trivially tracking the location of bit flips to fingerprint devices. We use the insight from our measurement study (SSIV-B)that the distribution of bit flips in contiguous 2 MB chunks of memory is unique and persistent to overcome this challenge. Armed with the insight, we sample enough 2 MB chunks to guarantee access to the same chunk for fingerprinting. _Third_, memory modules implement mitigations against Rowhammer, such as Target Row Refresh (TRR) [31]. While prior research has demonstrated ways to craft hammering patterns [15, 22] to bypass TRR, they provide limited insights towards operationalizing them to trigger bit flips at scale. Centauri systematically identifies effective patterns for at-scale fingerprinting using Rowhammer. We evaluate Centauri on a set of 98 DIMMs across 6 sets of identical DRAM modules across 2 major DRAM manufacturers. Centauri demonstrates high entropy with a highest fingerprint accuracy of 99.91% corresponding to a precision of 100%, and recall of 97.06%. Centauri also demonstrates high stability with daily experiments to extract fingerprints from the same devices over a period of ten days without any degradation in fingerprint accuracy. Our experiments show that Centauri only suffers a minor loss in accuracy of 0.9% in presence of external factors that are not under the control of fingerprinters but affect the distribution of bit flips (such as the CPU frequency). We also investigate the trade-off between the accuracy of Centauri's fingerprints against the efficiency of Centauri's approach in terms of the time taken to extract fingerprints. Centauri is able to extract a fingerprint in as little as 9.92 seconds, reducing the overhead by more than 95.01% while degrading accuracy by just 0.64%. Our key contributions include: * Practically extracting highly unique and stable fingerprints using Rowhammer: We practically demonstrate Centauri on the largest scale of DRAM modules in current literature. * Handling non-deterministic bit flips: We handle non-deterministic bit flips by hammering the same memory chunks multiple times and using the divergence between probability distributions of bit flips to re-identify devices. * Overcoming memory allocation constraints: We overcome memory allocation constraints by devising a novel sampling strategy that guarantees access to the same chunk of memory for fingerprinting. * Operationalizing bypass techniques for Rowhammer mitigations: We bypass Rowhammer mitigations by identifying effective hammering patterns that can trigger bit flips at-scale. ## II Background Fig. 2: This figure shows a single rank of a DRAM DIMM. Each rank contains multiple logical structures called banks that are interspersed across multiple physical structures called chips. Each bank is an array of cells in the form of rows and columns. ### _DRAM basics_ All DRAM technologies follow the same basic architecture [33, 34, 36, 23, 19, 37, 32, 35]. We concentrate on DIMM-based DRAM packages in this paper for ease of testing, but the findings apply to other packaging techniques as well. Each physical DRAM Dual Inline Memory Module (DIMM) is installed on a DRAM channel on the motherboard. Channels enable issuing concurrent requests to multiple DIMMs. Figure 2 represents one side of a DIMM, also called a rank. Each individual DIMM contains multiple chips, which are uniformly divided into logical structures called banks on one or both of their ranks. 
A bank is a two-dimensional array of cells organized into rows and columns. Each cell contains a capacitor and an access transistor, with the capacitor's charged state representing a single bit. The number of cells in a column is given by the width (x8, x16 etc) of the DIMM. Each row in a DDR4 DIMM contains 65,536 capacitors. The number of ranks, banks, rows etc describe a DIMM's geometry. The memory controller issues commands to the DRAM to perform memory operations at the granularity of a _row_. The ACT command activates a row by loading it into the row-buffer before reading or writing to it. The PRE command deactivates a row and prepares the row-buffer to load another row by restoring previously held values. The memory controller also periodically issues REFI commands that refresh the charge held by the capacitors since the charge naturally drains over time. Every DRAM capacitor typically gets refreshed at least once every 64 milliseconds. ### _Rowhammer_ Modern DIMMs are susceptible to memory corruption as a result of electrical interference among cells. Rowhammer [4, 26] corrupts the data stored in some capacitors leading to bit flips in memory. Specifically, Rowhammer triggers bit flips at a particular address by repeatedly accessing neighboring addresses. This leads to the repeated activation and deactivation of rows containing the accessed addresses. The resulting electro-magnetic interference between the accessed rows (referred to as aggressors or aggressor rows) and their neighboring rows (referred to as victim rows) accelerates the rate of charge dissipation of the the capacitors in the victim rows. Once these capacitors have lost a sufficient amount of charge, refreshing the DRAM cannot restore their value, resulting in memory corruption. Modern DDR4 DIMMs implement Target Row Refresh or TRR to mitigate Rowhammer [31]. While different implementations of TRR exist, all of them essentially track memory accesses to identify aggressor rows and issue additional refreshes to the associated victim rows [15, 18]. #### Iii-C1 Rowhammer for fingerprinting The rate at which a capacitor loses its charge depends on its physical properties [43, 21]. These properties are not uniform on all chips due to process variation induced during manufacturing [52]. As a result, the capacitors that lose their charge (i.e., the bits that flip) should also depend on the particular chip being subjected to Rowhammer. Thus, when running Rowhammer with identical parameters (under comparable environmental conditions) on different chips, differences in the bit flip behavior can be attributed to differences in the physical properties of the DIMMs. In this paper, we show how to exploit the distribution of bit flips across DIMMs for fingerprinting. ### _Related work_ Rowhammer PUF [51] demonstrates that different memory regions within a PandaBoard [13] exhibit unique and consistent sets of bits that flip as a result of Rowhammer. However, their work does not compare the uniqueness of bit flips across devices. Furthermore, ensuring fingerprint stability is easier in a weaker threat model that allows fingerprinters to pick aggressor rows such that they always trigger bit flips on the same victim row. Thus, when comparing sets of bit flips from different points in time to re-identify a given device, fingerprints are able to assume that the bits that flipped were on the same row. This assumption side-steps the need to account for differences in the bit flips across multiple rows on the same device. 
In contrast, a fingerprinter under a realistic threat model would be constrained by the abstractions provided by the OS for memory allocation. These abstractions hide information about the physical memory allocation, thereby making it difficult to ensure that bit flips were only triggered from a particular row when re-identifying a device. Rowhammer PUF also presented results on DDR2 memory which did not incorporate any mitigations against Rowhammer.

| | **Rowhammer PUF** [51] | **Drammer, Cross-VM Rowhammer FFS** [53, 46, 55] | **Rowhammer.js** [17] | **SMASH** [10] | **Blacksmith, TRRespass** [22, 15] | **Centauri [This paper]** |
|---|---|---|---|---|---|---|
| **Studies bit flip uniqueness** | | | | | | |
| **Studies bit flip stability** | | | | | | |
| **Overcomes OS abstractions to reproduce bit flips** | | | | | | |
| **Overcomes Rowhammer mitigations (TRR)** | | | | | | |
| **Scale** | 1 | \(\leq\)60 | 10 | 5 | \(\sim\)40 | **98** |

TABLE I: Comparing Centauri against other Rowhammer research in terms of full, partial or no consideration of each topic. Centauri is the first approach to demonstrate the extraction of unique and stable fingerprints on the largest scale using Rowhammer while overcoming practical limitations enforced by the operating system and by Rowhammer mitigations (TRR).

While Drammer [53] does not aim to explore the fingerprinting capabilities of Rowhammer, it proposes a technique to overcome the operating system's abstractions to force a victim to allocate memory in a region that is susceptible to Rowhammer. The proposed technique, Phys Feng Shui, requires the overall memory layout to remain unchanged (no allocation/deallocation of memory by other processes). Fingerprinters could seek to trigger bit flips on a device at arbitrary points in time, across which they cannot expect the memory layout to remain constant. Thus, Phys Feng Shui cannot be adopted to overcome restrictions enforced by the operating system to execute Rowhammer for fingerprinting. Research from Razavi et al. [46] and Xiao et al. [55] leverages memory deduplication and MMU paravirtualization respectively to overcome memory restrictions and trigger bit flips on memory allocated to a victim VM on cloud machines. However, these capabilities are not enabled or available on most end-user devices. TRRespass [15] and Blacksmith [22] introduce fuzzers that discover many-sided and non-uniform hammering patterns respectively to overcome TRR and trigger bit flips. While both papers present results on reproducing bit flips, they do not study differences in the distribution of bit flips across DIMMs since they do not explore the fingerprinting capabilities of Rowhammer. Fingerprinters also get limited insights on employing these patterns to trigger bit flips on a large number of DIMMs since both TRRespass and Blacksmith do not provide guidance on which discovered pattern to employ beyond those that produce bit flips. In their threat models, both TRRespass and Blacksmith also do not have to overcome the limited memory abstractions provided by the OS. Rowhammer.js [17] and SMASH [10] show how to trigger bit flips when confined to running JavaScript on the browser. However, they do not explore differences in bit flip distributions or their reproducibility as they don't intend to use Rowhammer for fingerprinting.
With Centauri, we first make the crucial observation that the bit flips in each contiguous 2 MB chunk of memory (the largest contiguous chunk that can be reliably allocated without requiring administrative privileges) are highly unique and persistent (SSIV-B). Armed with this observation, we present a novel sampling strategy as part of Centauri's design (SSV) to overcome the restrictions imposed by the operating system's memory abstractions. To operationalize hammering patterns at scale, Centauri increases the CPU frequency to maximum to speed up the discovery of patterns and proposes to prioritize those patterns that can trigger a larger number of bit flips and can generalize to multiple DIMMs. To account for the inherent non-determinism in bit flips, we reset and hammer the same 2 MB chunk multiple times to extract a probability distribution of bit flips. We compare the similarity of these probability distributions to fingerprint DIMMs. We summarize how Centauri differs from prior research in Table 1. In summary, Centauri is the first technique to demonstrate the extraction of unique and stable fingerprints on the largest scale using Rowhammer while overcoming practical limitations enforced by the operating system and by Rowhammer mitigations such as TRR. ## III Centauri overview In this paper, we present Centauri, a Rowhammer-based fingerprinting approach that extracts unique and stable fingerprints even among devices that have identical hardware and software configurations. In this section, we first define our threat model in terms of our assumptions and goals and then provide an overview of Centauri's design. ### _Threat model_ In our threat model, the fingerprint runs code on a user's device. Users either run an app or visit a website that includes the fingerprinting SDK/library. In this paper, we focus on the former and discuss extension to the latter in SSII-A. We assume that the fingerprinter has a wide array of devices and DRAM modules (DIMMs) with different configurations in their possession. Within their own set of devices, fingerprints have no restrictions in terms of what they can execute. We also assume that the fingerprinter has access to powerful servers where they can store and match fingerprints. In this paper, we take the role of the fingerprinter and our goal is to extract unique and stable fingerprints from devices even among those that have identical configurations. ### _Centauri architecture_ At a high level, Centauri triggers bit flips on multiple contiguous 2 MB chunks of memory on a user's device and uses the distribution of the triggered bit flips as a fingerprint. Centauri then uses a similarity metric to compare fingerprints extracted from different sessions at different points in time to recognize if these sessions were executed on the same device. Centauri's operation consists of three phases, namely, a _templating phase_, a _hammering phase_, and a _matching phase_. \(\bullet\) In the templating phase, fingerprinters conduct experiments on their own devices to discover ways to overcome Rowhammer mitigations and trigger bit flips. \(\bullet\) In the hammering phase, fingerprinters execute code on users' devices. The fingerprinter's code uses the knowledge gained from the templating phase to trigger bit flips on their devices. They then create a probability distribution out of the triggered bit flips which serves as a fingerprint for the user's device. 
\(\bullet\) In the matching phase, fingerprinters compare the fingerprint extracted from a user's device against other reference fingerprints to identify the user. They also use the extracted fingerprints to create new or update existing references. ## IV Measuring the entropy and persistence of Rowhammer bit flips In this section, we first calculate a theoretical upper bound on the entropy that can be obtained from the bits that flip across multiple contiguous chunks of 2 MB of memory. While calculating this theoretical upper bound, we assume that the process variation during the manufacturing of DIMMs is such that the bits that flip within each row (on the same DIMM and across DIMMs) are independent. In this analysis, we also assume that bit flips are deterministic, i.e., hammering the same memory regions always results in the same set of bit flips. Then, we relax these assumptions and validate our analysis by measuring the actual entropy on a set of 3,611 such chunks across 36 DIMMs. We focus on 2 MB chunks since this is the largest contiguous chunk of memory that we can reliably obtain from the OS without requiring root privileges (using Transparent Huge Pages or THP [24]). From our measurement, we observe that bit flips are unique across chunks and despite exhibiting non-deterministic behavior, are persistent within each chunk. We use takeaways from our measurement study to design our fingerprinting technique. ### _Theoretical entropy analysis_ If we consider that there are approximately 1 trillion DIMMs on the planet, with each DIMM having approximately 100,000 rows, we will have a total of \(10^{18}\) possible rows. We will need approximately \(\log_{2}{(10^{18})}=59.79\approx\) 60 bits of entropy to represent all these rows. Since rows within DRAM DIMMs are a finer granularity than the number of DIMMs (and correspondingly the number of devices), obtaining an entropy of 60 bits would be sufficient to represent all devices on the planet. As discussed in SSII, every row of a DRAM DIMM contains 65,536 capacitors. If process variation results in Rowhammer triggering exactly one bit flip per row (i.e., only one capacitor losing its charge), we can use the index of the flipped bit within each row to at most identify 65,536 rows (equivalent of 16 bits of entropy). Thus, if exactly one bit flips per row, using the index of the flipped bit within the row does not have enough entropy to represent all possible rows across all DIMMs. The solid orange line in Figure 3 shows the amount of entropy available to represent all rows with varying number of bit flips observed per row and the dashed orange line showing the required entropy. From the figure, we see that if 5 bits flip per row, we can represent all possible rows since we get 73 bits of entropy (\(\log_{2}{\binom{65,536}{5}}\)). The analysis presented so far limits us to only observe bit flips within a single row. However, with a contiguous chunk of 2 MB of memory, we can access multiple rows from each bank. For example, in case of dual rank DIMMs having a width of 8 bits (going forward, we refer to this configuration as 2Rx8), a contiguous 2 MB chunk of memory corresponds to an aligned set of 8 consecutive rows (or 524,288 capacitors) within each bank. In this case, we will need an entropy of 57 bits to represent all \(10^{17}\) chunks across all DIMMs. We see that if 4 bits flip among these rows, we get 71 bits of entropy (\(\log_{2}{\binom{524,288}{4}}\)) to represent them. 
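These estimates follow directly from binomial coefficients. The short script below is a minimal sketch (not part of the paper's artifacts) that recomputes the per-row and per-8-row figures quoted above.

```python
# Sketch: reproduce the entropy estimates quoted above.
# The entropy of "which k of n capacitors flipped" is log2(C(n, k)).
from math import comb, log2

def flip_entropy_bits(n_capacitors: int, n_flips: int) -> float:
    """Bits of entropy available when exactly n_flips of n_capacitors flip."""
    return log2(comb(n_capacitors, n_flips))

# Entropy required to label ~10**18 rows (a trillion DIMMs x ~100,000 rows each).
print(log2(10**18))                  # ~59.79, i.e. ~60 bits required

# One DDR4 row has 65,536 capacitors; 5 flips per row already exceed the requirement.
print(flip_entropy_bits(65_536, 1))  # 16.0 bits (index of a single flipped bit)
print(flip_entropy_bits(65_536, 5))  # ~73 bits

# An aligned set of 8 consecutive rows in one bank has 524,288 capacitors;
# 4 flips cover the ~57 bits needed to label 10**17 such chunks.
print(log2(10**17))                  # ~56.5 bits required
print(flip_entropy_bits(524_288, 4)) # ~71 bits
```

The same helper extends to larger memory regions by plugging in the corresponding capacitor counts.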
As contiguous 2 MB chunks are interleaved across banks, we can access these consecutive rows across all banks. If we consider all 16,777,216 capacitors spread across all banks in a 2 MB chunk, we see that 3 bit flips in each chunk are sufficient to obtain an entropy of 54 bits (\(\log_{2}{\binom{16,777,216}{3}}\)). This entropy is sufficient to represent all \(10^{16}\) such chunks across a trillion DIMMs. The grey and blue solid lines in Figure 3 show the variation in entropy with varying number of bit flips produced per aligned set of 8 consecutive rows within a bank and per aligned set of 8 consecutive rows across all banks, respectively. Dashed lines of the same colors show the required entropy in both cases.

Fig. 3: Plots showing the variation in the number of bits of entropy that can be obtained to represent different regions of memory across a trillion DIMMs with varying number of bit flips.

In summary, our analysis indicates that the distribution of bit flips triggered by Rowhammer in individual 2 MB contiguous chunks of memory is potentially unique provided they produce at least 5 bit flips. We reiterate that the theoretical analysis assumed that all chunks produce bit flips, the distribution of bit flips is independent and that bit flips do not exhibit any non-deterministic behavior. We now perform experiments to measure the actual entropy across such chunks across multiple DIMMs.

### _Empirical entropy analysis_

Existing Rowhammer research has primarily focused on developing techniques to trigger bit flips [26, 17, 53, 15, 10, 22] in memory. To the best of our knowledge, prior work lacks any analysis of the distribution of bit flips, particularly in terms of their entropy. In this section, we present the first such study on DDR4 DIMMs. Concretely, we first validate our theoretical analysis by measuring the entropy of the distribution of bit flips within a given bank across multiple 2 MB chunks of memory across DIMMs. As mentioned in §I, we find that bit flips are not deterministic and, as a result, merely measuring the entropy of the distribution of bit flips is insufficient to extract a reliable fingerprint. Thus, we also measure the persistence of the distribution of bit flips across repeated measurements of the same chunks across DIMMs.

#### IV-B1 Test Bed

Our test bed for this measurement consists of 36 identical 2Rx8 DIMMs. We fuzz one of these DIMMs to discover a non-uniform hammering pattern that can trigger bit flips. We observe that the discovered pattern is able to produce bit flips on all 36 DIMMs.

#### IV-B2 Methodology

We conduct our experiments in a controlled setting with root privileges that allow us to allocate 1 GB of contiguous memory. This setting gives us control over where we perform our hammering since we observe that the allocation of huge pages in physical memory rarely changes in Linux (verified using pagemap [25]). We confine our hammering to randomly chosen contiguous 2 MB chunks within the allocated huge page that lie within an arbitrarily chosen bank. While hammering, we modify the hammering pattern determined by the fuzzer to only trigger bit flips within the randomly chosen 2 MB chunks.

#### IV-B3 Results

Across all 36 DIMMs, we hammered a total of 3,611 chunks. 99.77% of these chunks (3,603 chunks) produced at least one bit flip. Among these chunks, the number of bit flips ranged from 1 bit flip to 1,799 bit flips at an average of 711 bit flips per chunk.
Across the chunks that produced bit flips, we record an entropy of 12 bits for the triggered bit flips. We calculated entropy in terms of the number of chunks that had the same set of bit flips as a given chunk. Crucially, we highlight that in our test bed, 12 bits of entropy corresponds to the highest possible normalized entropy of 1.0 [2], which demonstrates that each chunk has a unique set of bit flips. We summarize these findings in Table II. We use the fact that the bit flips in every chunk in our experiment are unique to estimate the expected entropy on all possible chunks. Extrapolating our results based on the average of 711 bit flips per chunk yields over 7,700 bits of entropy, which is significantly higher than the 60 bits needed to represent such chunks on a trillion DIMMs. To use the bit flips produced by Rowhammer as a fingerprint, ensuring that they have high entropy on different chunks is not sufficient unless they are also persistent within the same chunk. In our study, we notice that reinitializing regions with the same data and hammering them again does not guarantee that the same bit will flip. In other words, we observe that the bits that flip within a given chunk are not deterministic. For example, Figure 4 shows the sets of bits that flipped when hammering the same chunk on a particular DIMM twice (while restoring the data written to the chunk before hammering again). Set A shows the addresses that flipped during the first attempt, which is completely disjoint from set B, the set of addresses that flipped during the second attempt. Thus, we cannot re-identify a particular chunk by merely employing a set similarity metric like the Jaccard index (as proposed by existing research [51, 29]).

**Takeaway 2:** Bit flips exhibit _non-deterministic behavior_, i.e., hammering the same set of aggressor rows multiple times does not result in the same set of bit flips.

When we restored the data and attempted to hammer the same chunk 6 more times, we observed 2 attempts with no bit flips and 4 attempts where some addresses in set A flipped again. We visually represent the list of bits that flipped across all 8 attempts in Figure 5. This figure indicates that some bits (such as those in set A) have a higher probability of flipping and other bits (such as those in set B) have a lower probability of flipping. Leveraging this observation, we compute a probability distribution for the bit flips in each chunk and match the similarity of distributions to measure persistence. To extract a probability distribution from a given chunk, we hammer it multiple times and use the count of flips at different indices across all hammering attempts. Then, we use Jensen-Shannon (JS) divergence [41] to compute the similarity of distributions. In Figure 6, we compare the distributions for 3 randomly chosen chunks from 3 different DIMMs across 3 different hammering attempts. We see similarities in the probability distributions computed from the same chunk as compared to distributions across chunks.

**Takeaway 3:** Different bits have different probabilities of flipping. The distribution of these probabilities is _unique_ across contiguous 2 MB chunks of memory and _persistent_ within each 2 MB chunk.

Fig. 4: Set A shows the addresses of bits that flipped when hammering a particular chunk of a DIMM. Set B shows the addresses of bits that flipped when restoring data to the chunk and hammering it again. The two sets are disjoint, demonstrating the non-deterministic behavior of bit flips. Fig.
5: Set U shows the set of all addresses that flipped across 8 attempts to restore the data and hammering the same chunk from Figure 4. The boldfaced addresses are addresses that flipped more than once across the 8 attempts. ## V Centauri We provide a detailed account of Centauri's three phases: ### _Templating phase_ In the templating phase, we (the fingerprinters) seek to discover hammering patterns that can overcome Rowhammer mitigations to trigger bit flips on our own devices, so that we can employ the patterns to trigger bit flips on users' devices in the hammering phase. Concretely, in our experiments on DDR4 DIMMs, we run Blacksmith's [22] fuzzer to discover non-uniform hammering patterns that can evade Target Row Refresh (TRR). To account for all TRR implementations that could exist in the wild, our goal is to discover a wide array of patterns that can overcome all of them. Once the fuzzer discovers patterns that can trigger bit flips, we evaluate them on our own DIMMs to decide which patterns to employ on users' devices in the hammering phase. We prioritize those patterns that can trigger more bit flips as well as those that can trigger bit flips on more DIMMs. Patterns that trigger a large number of bit flips help account for differences in bit flip behavior when extracting fingerprints. These differences either arise as a result of the inherent non-deterministic behavior of bit flips (discussed in SSIV-B) or as a result of external factors that fingerprinters cannot control on users' devices. For example, we observe fewer bit flips with the same pattern on the same DIMM when running at lower CPU frequency. Since we cannot control the CPU frequency of a user's device, we pick patterns that can account for changes to CPU frequency to ensure robustness of our fingerprints. A pattern that produces very few bit flips in the controlled setting of our own devices may not produce any bit flips on a user's device in presence of factors outside our control. In contrast, a pattern that produces a large number of bit flips in our controlled setting may still produce enough bit flips to be able to fingerprint a user's device. We evaluate Centauri's robustness while extracting fingerprints in context of varying CPU frequencies in SSVI-F. We prioritize patterns that generalize to trigger bit flips on more DIMMs in our possession since they are also likely to generalize to more DIMMs in the wild. ### _Hammering phase_ In this phase, our goal is to trigger bit flips in a user's device and extract the distribution of bit flips as a fingerprint. Since we do not have knowledge of the type of DIMMs (or their corresponding TRR implementations) on the user's device, we rely on the patterns discovered in the templating phase to trigger bit flips. The hammering patterns discovered by the fuzzer are defined by a set of aggressors, a phase, an amplitude and a frequency [22]. The phase, amplitude and frequency are such that some aggressors engage with TRR (we refer to these as secondary aggressors) and the others trigger bit flips (we refer to these as primary aggressors). Within the execution of each pattern, we observe that the aggressors accessed at certain points in time always served as primary aggressors regardless of the choice of which addresses were used as primary and secondary aggressors. We also observe that the position of the primary aggressors within the pattern is fixed across all DIMMs where the pattern is able to trigger bit flips. 
Concretely, for a pattern \(a_{1},a_{2},\ldots a_{i},a_{i+1},\ldots a_{n}\), addresses placed at indices \(i\) and \(i+1\) served as primary aggressors on all DIMMs where the pattern triggered bit flips. To reliably produce bit flips in the hammering phase, we pick addresses such that the primary aggressors form a double-sided aggressor pair, and the secondary aggressors are other addresses within the same bank as the primary aggressors.

Fig. 6: Visualization of the relative persistence of bit flips within given 2 MB chunks of memory across multiple DIMMs.

Fig. 7: Visualization of a hammering sweep performed within one bank of a contiguous 2 MB chunk. The central rectangular blocks in the visualization represent rows within the chunk. We map primary aggressors in the discovered patterns to rows within the chunk and secondary aggressors to random rows within the same bank. The hammering sweep involves sequentially hammering all pairs of double-sided aggressors as primary aggressors within the bank of the chunk and scanning the other rows for bit flips.

In this section, we discuss the hammering phase in the context of a single DIMM present on the user's device. We discuss ways to extend the hammering phase to devices having multiple DIMMs across multiple channels in §VII-B. To execute the discovered patterns, we allocate transparent huge pages on a user's device to obtain contiguous chunks of 2 MB of memory without requiring root privileges. We can access all addresses within the huge page by modifying the lower 21 bits of the starting address of the chunk. This allows us to pick double-sided aggressor pairs since such chunks typically provide access to contiguous rows across multiple banks of a DIMM. For example, in the case of a user's device having one 1Rx8 DIMM, the chunk gives us access to 16 contiguous rows within each of the 16 banks on the DIMM. To trigger bit flips, we first choose a particular bank within the chunk. We can do this since most bits in the address that determine the bank are contained within the lower 21 bits in most CPU architectures [44, 10]. Then, we map the primary aggressors in the discovered patterns to double-sided aggressors within the chosen bank of the chunk and secondary aggressors to random addresses within the same bank (possibly outside the chunk). While we cannot determine the exact row of a given address from the lower 21 bits, we do know the relative position of rows with respect to the row corresponding to the start address of the chunk. This allows us to choose different pairs of rows as primary aggressors within the chunk. We sequentially consider all pairs of double-sided aggressors within the bank of the 2 MB chunk as primary aggressors and execute the discovered pattern. Upon executing the pattern with each pair of primary aggressors, we record the addresses that had bit flips within the allocated chunk and the corresponding change in the data written at those addresses. We then restore the original data written to the chunk, shift our primary aggressors to the next set of double-sided aggressors within the chunk and repeat the same procedure. We refer to this operation of hammering all possible double-sided aggressors as primary aggressors within the allocated chunk as a hammering sweep. Figure 7 visualizes the _hammering sweep_. To account for non-determinism in bit flips, we repeat the hammering sweep multiple times on the chunk.
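The control flow of this sweep can be summarized as in the sketch below. This is not the paper's implementation: real hammering needs uncached (cache-flushed) accesses and DRAM address mapping that plain Python cannot express, so `hammer_pattern` is only a structural placeholder, while `scan_for_flips` does what the text describes (compare the chunk against the fill pattern).

```python
# Minimal control-flow sketch of the hammering sweep described above.
import mmap
from collections import Counter

CHUNK = 2 * 1024 * 1024            # one 2 MB transparent-huge-page-backed chunk
FILL = 0xAA                        # data pattern written before each hammering round

def allocate_chunk() -> mmap.mmap:
    buf = mmap.mmap(-1, CHUNK)
    buf.madvise(mmap.MADV_HUGEPAGE)  # Linux-only hint; assumes THP is available
    return buf

def hammer_pattern(buf, primary_rows, secondary_rows):
    """Placeholder for executing a non-uniform hammering pattern (hypothetical helper)."""
    # A real implementation issues many uncached accesses to the aggressor rows
    # with the phase, amplitude and frequency found in the templating phase.
    pass

def scan_for_flips(buf):
    """Return byte offsets whose content no longer matches the fill pattern."""
    data = buf[:]
    return [i for i, b in enumerate(data) if b != FILL]

def hammering_sweep(buf, rows_in_bank, flip_counts: Counter):
    """Try every double-sided aggressor pair within one bank of the chunk."""
    for r in range(1, rows_in_bank - 1):
        buf[:] = bytes([FILL]) * CHUNK                          # (re)initialize the chunk
        hammer_pattern(buf, primary_rows=(r - 1, r + 1), secondary_rows=None)
        for off in scan_for_flips(buf):
            flip_counts[off] += 1                               # accumulate across sweeps
```

Repeating `hammering_sweep` several times on the same chunk turns `flip_counts` into the empirical flip-probability distribution that serves as that chunk's fingerprint.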
### _Matching phase_ With the observation from SSIV-B that the distribution of bit flips within a bank of contiguous 2 MB chunks is highly unique and stable, we compare the similarity of these distributions to fingerprint them. From the information recorded in the hammering phase, we identify the relative positions and counts of the capacitors that flipped within the contiguous 2 MB chunk (indexed from 0 to 1,048,576 in case of 1Rx8 DIMMs). We then use these counts to create an empirical probability distribution for each capacitor to flip within the chunk. We then compare the similarity of this distribution against previously extracted distributions using JS divergence to identify the chunk. However, we cannot guarantee access to the same 2 MB regions on a user's device, since memory allocation is handled by the OS. Thus, if we obtain two different 2 MB chunks of memory on a user's device during two different sessions and compared their distributions, we would incorrectly conclude that different devices were used during these sessions. One way to overcome this challenge would be to hammer multiple 2 MB chunks during each session. For example, suppose the user's device has 1 GB of memory which corresponds to 512 different 2 MB chunks of memory. Taking inspiration from the birthday paradox [1], if we were to hammer 64 chunks each in two different sessions with the user's device, then the probability that at least one chunk would overlap between them is over \(99.9\%\). We show how we derived this number from first principles in Appendix A. One drawback to this approach is that hammering a large number of chunks would prolong the duration of the hammering phase, thereby making it less efficient. However, we can overcome this by building up references across sessions, which would result in hammering fewer chunks in the long term. For example, suppose we have reference distributions to 64 different chunks from one session. In a subsequent session with the same device, we hammer 64 chunks such that only one chunk happens to overlap with the reference. We can now combine the distributions of the 63 non-matching chunks to our reference to have an updated reference from 127 different chunks from that device. When running a subsequent session on the same device, we have a higher probability that an allocated chunk would match our reference, since the reference size has increased. Upon reaching the limiting case where we have references for all possible chunks, merely hammering one chunk in the hammering phase would be sufficient, thereby resulting in higher efficiency. ## VI Evaluation This section seeks to answer the following questions about Centauri and its ability to extract fingerprints. * How unique are the fingerprints extracted by Centauri? * How stable are the fingerprints extracted by Centauri? * How long does Centauri take to extract fingerprints? * Can Centauri extract robust fingerprints in presence of external factors? * How does Centauri compare against Rowhammer PUF? ### _Test bed_ Since the likelihood of an unintended bit flip that results in a crash is non-trivial, we decide to evaluate Centauri on a controlled test bed. Our test bed consists of 35 single rank DIMMs having a width of 8 bits (1Rx8), 11 single rank DIMMs having a width of 16 bits (1Rx8) and 36 dual rank DIMMs having a width of 8 bits (2Rx8) from one major DRAM manufacturer. 
To evaluate Centauri's fingerprints across manufacturers, our test bed also contains 10 1Rx8 DIMMs, 2 1Rx16 DIMMs and 4 2Rx8 DIMMs from another major DRAM manufacturer. Overall, our test bed contains 98 DIMMs that include 6 identical sets of DIMMs across two major DRAM manufacturers. We installed these DIMMs among 12 Intel Core i7 desktops for our experiments. Since we have more DIMMs than desktops in our possession, we hammer DIMMs in batches. While repeating experiments on a particular DIMM, we make sure that we seat it on the same slot on the same desktop.

### _Experimental methodology_

#### VI-B1 Templating phase

We use a set of 11 hammering patterns, which together are able to trigger bit flips on all 98 DIMMs in our test bed. We note that patterns generalize across DIMMs that were made by the same manufacturer. In other words, patterns that produce bit flips on DIMMs of a particular manufacturer do not produce bit flips on DIMMs from other manufacturers. Thus, the pattern that triggers bit flips also helps identify DIMMs by revealing their manufacturer.

#### VI-B2 Hammering phase

For each DIMM in our test set, we perform a hammering sweep (visualized in Figure 7) on a particular bank of multiple 2 MB chunks of memory with the appropriate pattern and record the resulting distribution of bit flips in that chunk. To compute the resulting distribution, we repeat the hammering sweep operation multiple times on each chunk. We consider the set of bit flip probability distributions of all hammered chunks on a particular DIMM as its fingerprint. We vary the number of times we repeat the hammering sweep operation and the number of times we activate aggressors in different experiments to evaluate their impact on Centauri's fingerprints (§VI-E).

#### VI-B3 Matching phase

Given two fingerprints, we compute the JS divergence on all pairs of bit flip probability distributions between them. We consider the two fingerprints to match (i.e., to correspond to the same DIMM) if the minimum JS divergence value across all pairs is below an empirically determined threshold.

### _How unique are the fingerprints extracted by Centauri?_

For this evaluation, we extract 2 fingerprints from each DIMM in our test bed using the aforementioned methodology. We use the first fingerprint extracted from a particular DIMM as a reference for that DIMM and compare subsequently extracted fingerprints against the reference. In these experiments, we extracted both fingerprints within the space of a few hours on the same day. We did not re-seat the DIMMs in the interim period. We repeated the hammering sweep operation 8 times and activated the aggressors 10,000,000 times. Figure 8(a) shows the recorded minimum JS divergence when matching fingerprints from each of the 36 2Rx8 DIMMs against reference fingerprints from each of them. From the figure, we clearly see that the minimum JS divergence computed among distributions taken from the same DIMMs is significantly lower than the minimum JS divergence computed among distributions taken across different DIMMs. Figure 8(b) and Figure 8(c) show similar plots across 35 1Rx8 DIMMs and 11 1Rx16 DIMMs respectively. We report similar plots on DIMMs from the other manufacturer in Appendix B. The clear separation in JS divergence computed on pairs of fingerprints (bit flip distributions) taken from the same DIMM against those taken from different DIMMs allows us to pick multiple thresholds for JS divergence to uniquely identify DIMMs.
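The sketch below is a minimal illustration of this matching rule, not the implementation used in the paper; it assumes all per-chunk distributions are defined over the same offset space, and the example threshold of 0.6 simply mirrors the common threshold reported later in the evaluation (real thresholds are calibrated empirically).

```python
# Sketch of the matching phase: a fingerprint is a set of per-chunk flip
# distributions; two fingerprints match if any pair of distributions is
# close enough in Jensen-Shannon divergence.
import numpy as np

def flip_distribution(counts: np.ndarray) -> np.ndarray:
    """Normalize per-offset flip counts from repeated sweeps into a distribution."""
    return counts / counts.sum()

def js_divergence(p: np.ndarray, q: np.ndarray) -> float:
    """Jensen-Shannon divergence in bits (bounded by 1.0)."""
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return float(np.sum(a[mask] * np.log2(a[mask] / b[mask])))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def fingerprints_match(fp_a, fp_b, threshold=0.6) -> bool:
    """fp_a, fp_b: lists of per-chunk distributions over the same offset space."""
    best = min(js_divergence(p, q) for p in fp_a for q in fp_b)
    return best < threshold
```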
Fig. 8: Plots showing the distribution of JS divergence values when comparing bit flip distributions obtained from the same pair of DIMMs and across different DIMMs.

By picking appropriate thresholds2, we attain an overall fingerprint accuracy of 99.91%, corresponding to a precision of 100% and recall of 97.06%. Overall, these results show that Centauri has very high discriminative power and can uniquely identify DIMMs across multiple sets of DIMMs with identical configurations.

Footnote 2: We picked thresholds by running the same experiment on a smaller subset of DIMMs.

### _How stable are the fingerprints extracted by Centauri?_

We evaluate the stability of the fingerprints extracted by Centauri over time. Since we have more DIMMs than desktops, we first evaluate Centauri's stability on a set of 10 random DIMMs across both manufacturers (6 and 4 DIMMs respectively) by extracting fingerprints from them once a day for 10 days. We highlight that we do not re-seat DIMMs at any time during this evaluation, which is what we would expect from users in the wild. We repeated the hammering sweep operation 8 times and activated the aggressors 200,000 times. Figure 9 shows the variation in Centauri's accuracy, precision and recall on these DIMMs over time. From the plot we see that all metrics roughly remain constant with some minor fluctuations. Importantly, the plot does not show any trend of decline in the values for any metric, indicating that the fingerprints extracted by Centauri are stable. These metrics were computed using the same threshold for each day, which indicates that the JS divergence values (and the corresponding bit flip probabilities) remain unchanged. We highlight that the stability is not a result of our specific choice for the threshold since we record similar accuracy, precision and recall even with slightly altered thresholds. Motivated by these results, we increased the scale of our evaluation by evaluating on all DIMMs, first over a period of 2 weeks and then over a period of 4 weeks. For these experiments, we had to re-seat DIMMs in the interim period to cover all DIMMs due to the limited number of desktops at our disposal. Figure 10 shows how Centauri's accuracy, precision and recall change over two weeks and over four weeks on our entire set of DIMMs. At a common threshold of 0.6 for JS divergence, we only see minor fluctuations in accuracy and precision, but we see a significant decline in recall. When we tried to change the threshold to improve the recall, we observed that it came at the cost of precision. We were unable to pick a common threshold that gave us high precision and high recall. We suspect that this instability in Centauri's fingerprints is a result of our experimental setup where we re-seat DIMMs. If this is indeed the case, Centauri's fingerprints would be stable in the wild since users rarely re-seat their DIMMs. The ideal way to confirm this would be to evaluate the stability of Centauri's fingerprints on our entire test bed over an extended period of time (4 weeks) without having to re-seat DIMMs. Since this is not feasible due to the limited number of desktops at our disposal, we run an experiment that provides strong evidence that re-seating induces the instability in Centauri's fingerprints. Concretely, we run 3 different experiments on 6 randomly chosen DIMMs from our test bed. In the first experiment, we hammer and extract two fingerprints within the space of a few minutes.
We also extract two fingerprints within the space of a few minutes in the second experiment, but we re-seat the DIMM (i.e., take it out and insert it back) after extracting the first fingerprint. We do the same thing in the last experiment but reboot the desktop after extracting the first fingerprint. Since we cannot re-seat a DIMM without rebooting the device, we run the third experiment to see the impact of rebooting on the stability of our fingerprint. We present the highest accuracy attained, with the corresponding precision and recall, for all 3 experiments in Table III. From the table we see a drastically low value of 50% recall for the experiment where we re-seated the DIMM. The other two experiments show high precision as well as recall. These observations support our suspicion that the instability in Centauri's fingerprints is a result of re-seating the DIMMs. Thus, we can expect the fingerprints extracted by Centauri to be stable in the wild since users rarely re-seat their DIMMs.

Fig. 9: Plot showing the variation in the accuracy, precision and recall of the fingerprints extracted by Centauri on a set of 10 DIMMs over a period of 10 days. We see that the metrics roughly remain constant with minor fluctuations which provides strong evidence that the fingerprints extracted by Centauri are stable.

Fig. 10: Plot showing the variation in the accuracy, precision and recall of the fingerprints extracted by Centauri on our entire test bed of 98 DIMMs over a period of 2 weeks and again over a period of 4 weeks. Unlike what we observed in our previous experiment, there is a significant change in metrics with the recall declining significantly. We suspect that this decline is a result of having to re-seat DIMMs in our experimental setup.

### _How long does Centauri take to extract fingerprints?_

Efficiency, quantified by the time required to generate fingerprints, is another important metric to evaluate a fingerprinting technique [50]. An efficient fingerprinting technique should be able to extract fingerprints in a relatively short duration of time in order to remain practical. When accessing aggressors 10,000,000 times to execute Rowhammer, performing the hammering sweep operation on a single 2 MB chunk takes an average of 20 seconds. Repeating the operation to account for non-determinism in bit flips further increases the time taken by Centauri to extract a fingerprint. Even in the limiting case where we have enough references to cover all 2 MB chunks in a given DIMM, Centauri takes almost 3 minutes to extract a fingerprint when accessing aggressors 10,000,000 times and repeating the hammering sweep 8 times. One way to improve Centauri's efficiency would be to reduce the number of times the aggressors are accessed to trigger bit flips. Another way would be to reduce the number of times we repeat the hammering sweep operation. Employing either approach is subject to making sure that they do not significantly degrade Centauri's fingerprinting accuracy. In this section, we present a comprehensive analysis of the trade-off between fingerprint accuracy and efficiency when running Centauri. Concretely, we present the fingerprint accuracy and the average time taken to extract a fingerprint from one 2 MB chunk across 15 different configurations on all the DIMMs in our test bed. These 15 configurations differ in terms of the number of times we access the aggressors when executing Rowhammer and the number of times we repeat the hammering sweep operation. We consider 5 different values
for the number of accesses to each aggressor ranging from 10 million accesses to 200,000 accesses and 3 different values for the number of times we repeat the hammering sweep operation (8 times, 4 times and 2 times). Figure 11 shows the accuracy, precision, recall and time elapsed in all 15 configurations. These results show that while tuning either parameter to reduce the time taken by Centauri to extract fingerprints comes at the cost of its fingerprinting capability. However, accessing aggressors 1,000,000 times and repeating the hammering sweep operation twice only takes 9.92 seconds to extract fingerprints, while still maintaining a precision of 95.96% and a recall of 84.61% (accuracy of 99.27%). Based on their use case, fingerprinters can tune these parameters to choose between accuracy and efficiency. ### _Can Centauri extract robust fingerprints in presence of external factors?_ In this section, we evaluate the robustness of Centauri's fingerprints to external factors (outside the control of the fingerprint) that can influence the behavior of bit flips. Concretely, we evaluate Centauri's robustness in context of CPU frequency since fewer bits flip at lower frequencies as compared to higher frequencies. The operating frequency of a user's CPU is subject to change [11] and cannot be controlled by the fingerprint. In our experiments, we first extract fingerprints (bit flip distributions) on all DIMMs in our test bed when running the CPU at its highest frequency of 3600 MHz. Then, we extract fingerprints from the same DIMMs when running the CPU at a lower frequency of 2800 MHz. On average, we see that there is 2 orders of magnitude difference in the number of bit flips that trigger at the two frequencies. Comparing the fingerprints extracted at these two different frequencies, Centauri achieves accuracy of 99.09%, corresponding to a precision of 94.2% and recall of 81.56%. These results indicate that Centauri only suffers a modest drop in its ability to extract fingerprints even when matching bit flip distributions extracted at different frequencies. Since some CPU governors dynamically scale the CPU frequency based on the system load (e.g., ondemand [11]), Centauri can further improve its accuracy in such cases by running a CPU-intensive program to exert system load and increase the frequency. Being robust to variations that result from external factors such as CPU frequency also demonstrates the impact of Centauri's design of picking hammering patterns that produce the most number of bit flips. We conduct experiments by intentionally picking a pattern discovered in the templating phase that does not trigger the most number of bit flips. We notice that this pattern generalizes to trigger bit flips on fewer DIMMs when compared to the pattern that triggers the most bit flips. We first run evaluate this pattern on a set of 10 DIMMs where it produces bit flips without altering the frequency. In this case, the pattern has a fingerprint accuracy of 95.92%, precision of 100% and recall of 71.43%. On the same set of DIMMs, this pattern has a recall of 0% when comparing fingerprints (bit flip distributions) at different CPU frequencies, since it was unable to trigger bit flips at the lower frequency. Results in the next section, SSVI-G show that Jaccard similarity (proposed by Rowhammer PUF) has a lower accuracy than Centauri when matching fingerprints across frequencies. 
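For reference, the set-similarity baseline used by Rowhammer PUF boils down to a Jaccard index over the sets of flipped addresses. The sketch below is only an illustration (the address values are made up) and contrasts that baseline with the distribution-based matching used by Centauri.

```python
# Sketch contrasting the two matching strategies compared in the next section:
# Rowhammer-PUF-style Jaccard similarity over sets of flipped addresses versus
# Centauri's divergence between flip-probability distributions.
def jaccard(set_a: set, set_b: set) -> float:
    """Set similarity used by the Rowhammer PUF baseline (1.0 = identical sets)."""
    if not set_a and not set_b:
        return 1.0
    return len(set_a & set_b) / len(set_a | set_b)

# With non-deterministic flips, a single hammering attempt per chunk can yield
# almost disjoint sets on the *same* chunk (cf. Figure 4), driving the Jaccard
# score towards 0 and hurting recall. Averaging many attempts into a probability
# distribution and comparing divergences, as in the matching sketch above, is
# what Centauri does instead.
flips_attempt_1 = {0x1A2B, 0x3C4D, 0x5E6F}   # illustrative offsets only
flips_attempt_2 = {0x5E6F, 0x7081}
print(jaccard(flips_attempt_1, flips_attempt_2))   # 0.25
```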
### _How does Centauri compare against Rowhammer PUF?_

We compare Centauri's fingerprinting accuracy against that of an adapted version of Rowhammer PUF that uses Centauri's techniques to overcome the challenges enforced by the OS for memory allocation as well as Rowhammer mitigations (TRR). Thus, in this section we evaluate Centauri's approach of hammering the same chunks multiple times to extract probability distributions and comparing their divergence to match fingerprints against Rowhammer PUF's approach of hammering each chunk once to extract sets of bit flips and comparing their similarity. When accessing aggressors 1 million times (the optimal number of times to trade-off between accuracy and efficiency per our discussion in §VI-E), Centauri has an accuracy of 99.27%, corresponding to a precision of 95.96% and recall of 84.61%. With the same number of accesses on the same set of DIMMs, Rowhammer PUF has an accuracy of 98.01%, precision of 89.59% and recall of 64.64%. Furthermore, in the presence of external factors outside the fingerprinter's control, such as the CPU frequency, that affect the number of bit flips triggered, Centauri reports an accuracy of 99.09%, precision of 94.2% and recall of 81.56%. On the same set of DIMMs, using Rowhammer PUF to match fingerprints yields an accuracy of 96.3%, precision of 100% and a significantly low recall of 20.09%.

Fig. 11: Plots showing the variation in accuracy, precision, recall and time elapsed to extract fingerprints when varying the number of accesses to trigger bit flips and varying the number of times we repeat the hammering sweep operation.

## VII Discussion

### _Extension to browser and mobile fingerprinting_

So far, we discussed Centauri in the context of device fingerprinting, where a fingerprinter can run native code on a user's desktop. In this section, we discuss how to adapt Centauri to operate in a situation where the user visits a website under the control of the fingerprinter or installs an app developed by the fingerprinter on their mobile phone. In both cases, fingerprinters cannot explicitly flush the cache, which they could do with native code on desktops. Additionally, most Android devices do not support huge pages for the fingerprinter to allocate contiguous memory [53]. SMASH [10] describes self-evicting Rowhammer patterns that can evade TRR and trigger bit flips from JavaScript. While the SMASH paper presents results on CPUs that use the Quad-age LRU [48] cache replacement policy, fingerprinters targeting the web can discover the replacement policies and correspondingly discover self-evicting Rowhammer patterns on other CPUs by running experiments on their own CPUs in Centauri's templating phase. Fingerprinters still have to know their user's CPU microarchitecture to pick the appropriate self-evicting pattern to trigger bit flips. With native code, fingerprinters can merely read this from /proc/cpuinfo, but cannot do so from the browser. Rowhammer.js [17] describes timing side channels to infer a user's CPU microarchitecture. Thus, fingerprinters running a website can first infer the microarchitecture of a user by adopting Rowhammer.js and alter known non-uniform patterns into self-evicting patterns for that microarchitecture using SMASH to trigger bit flips. Once triggered, they can use Centauri's sampling and matching strategy to extract and compare fingerprints. Drammer [53] uses Android's ION memory allocator for uncached access to contiguous memory to trigger bit flips.
However, some devices have disabled access to contiguous memory via ION. More generally, as long as there are techniques to get uncached access to contiguous memory, they can be combined with Centauri to extract fingerprints on mobile. ### _Extension to configurations with multiple DIMMs across channels_ If a user's device has multiple DIMMs across channels, contiguous addresses are interleaved in DIMMs across channels in addition to being interleaved across banks. Thus, while manipulating the available 21 bits in the address of a transparent huge page to set primary aggressors (and to scan for bit flips), we have to make sure that we do not alter the bits that correspond to the channel. For example, in case of 1Rx8 DIMMs on Kaby Lake machines, we can alter the row within a bank by modifying bits above the 18th bit (bit at index 17 with the least significant bit being bit 0) in the address of the huge page. From our experiments with DRAMA [44], we suspect that the channel bits can be derived from a combination of the bits at indices 7, 9, 14, and 173. Thus, we do not change the bit at index 17 to ensure that we do not change the channel. Footnote 3: We suspect that the bits at indices 27 and 28 also contribute to the channel, but they are not relevant since they fall outside the bits we can manipulate ### _Mitigating Centauri_ #### Vi-C1 Inapplicability of common defenses **Standard fingerprinting defenses** Standard mitigations against fingerprinting such as normalization [54] or enforcing permissioned access [42] cannot be employed against Centauri. Our results demonstrate that we can extract unique and stable fingerprints even among homogeneous devices. Normalization as a defense against Centauri would require eliminating process variation in the manufacture of DRAM chips, which is difficult to implement. Since all applications including those that are benign require access to memory, blocking access to memory or requiring user permission to access memory would not be a viable mitigation against Centauri. To the best of our knowledge, triggering additional bit flips to obfuscate or spoof the distributions extracted by Centauri are not viable since they risk hurting benign usage. **Standard Rowhammer defenses** With Centauri, we extracted fingerprints on DIMMs that use in-DRAM TRR to mitigate Rowhammer. Different hammering patterns being able to evade TRR and trigger bit flips on different DIMMs shows that fingerprinters can overcome most TRR implementations. Since we assume that fingerprinters can run experiments on their own devices to discover ways to trigger bit flips, we anticipate that they can also overcome other defenses that may be employed against Rowhammer. For example, DDR5 DIMMs are expected to have in-DRAM ECC [23] (Error Correction Codes) to mitigate Rowhammer. However, existing research [9] has already shown that bit flips can also be triggered on ECC equipped systems. Thus, ECC does not guarantee a defense against Centauri. #### Vi-C2 Potential defenses against Centauri At its core, Centauri relies on observing bit flips produced by Rowhammer to extract fingerprints. Thus, any defense that can prevent bit flips from being triggered or observed would be able to overcome Centauri. **Restricting access to contiguous rows within a bank** Any memory configuration that prevents access to contiguous rows within a single bank of a DIMM can be used to defend against Centauri. 
For example, as a result of interleaving of contiguous memory across channels, a CPU that has 2 channels with each channel having 2 DIMMs would result in contiguous 2 MB chunks not spanning more than one row per bank. First, Such a configuration would make it impossible to execute a double-sided Rowhammer, which is more reliable in producing bit flips. Second, and more importantly, even if a fingerprinter is able to trigger bit flips with such a configuration, they cannot observe them since bit flips are typically triggered in the row adjacent to the row being aggressed. **Preventing bit flips during the manufacturing process** In our experiments, we had difficulty triggering bit flips in DIMMs from a particular manufacturer. Most DIMMs from the manufacturer did not produce any bit flips. When repeatedly sweeping though contiguous 256 MB of memory, we occasionally observed a small number of bit flips (at most 200 bit flips) on 2 DIMMs. The DIMMs from this manufacturer are either robust to Rowhammer in that they do not trigger bit flips or they use a complex implementation of TRR that makes it difficult to consistently trigger bit flips. We were unable to fingerprint these DIMMs in our experiments. **Diversifying the number of TRR implementations** The time taken for Centauri to extract a fingerprint increases as the number of possible TRR implementations increase, thereby decreasing Centauri's efficiency. Fingerprinters do not have a way to know if a given hammering pattern will trigger bit flips on a DIMM without executing the pattern. At best, fingerprinters can use timing side channels (see Appendix C) to determine a DIMM's geometry (number of ranks, width etc.) and limit themselves to only executing patterns discovered on DIMMs with the same geometry. Since multiple DIMMs with the same geometry can have different TRR implementations, fingerprinters may have to attempt multiple patterns to trigger bit flips. Thus, even if fingerprinters have found hammering patterns that can overcome all implementations of TRR, they have to execute all their patterns in the worse case, thereby increasing the time taken to extract fingerprints. ### _Incrementally building up references_ In this section we discuss an alternate design of Centauri that does not leave a distinct footprint by having to allocate multiple transparent huge pages in the initial stages before reaching the limiting case (SSV-C). Fingerprinters can confine their hammering to fewer chunks (say, 2 chunks) per session to incrementally build up their references. In this case, fingerprinters would initially have multiple sets of references for devices without being able to link references that come from the same device. With this approach, fingerprinters would also not be able to fingerprint devices during their initial sessions. Eventually, after aggregating distributions from devices across multiple sessions, fingerprinters would be able to link sets of references to the same device and thereon fingerprint devices across all sessions. We demonstrate this approach with an example in Figure 12. Say, during a user's first session when running Centauri on a new device, the fingerprint obtains the distribution of bit flips in 2 distinct chunks of 2 MB of memory, chunk A and chunk B. Since each chunk has a unique distribution of bit flips, neither chunk would match any existing reference chunk known to the fingerprint. The fingerprint would create a fresh reference for these two chunks. 
In the next session on that device, say the fingerprinter obtains the distributions of 2 other chunks, chunk C and chunk D. Again, since the distribution of bit flips in each chunk is unique, the fingerprinter will not be able to identify that this session corresponds to the same device, and would store them as a separate reference. In the next session with the user, say the fingerprinter obtains the bit flip distributions for chunk B and chunk C. Now, the fingerprinter would be able to fingerprint the device, and also combine the references containing chunks A and B with the references containing chunks C and D as chunks obtained from the same device. In a subsequent session on the same device, say the fingerprinter obtains the bit flip distributions for chunk C and chunk E. For this session too, the fingerprinters would be able to fingerprint the device and also extend their references for the user to contain the distribution of chunk E. Thus, when incrementally building up references, in the long run, fingerprinters would be able to fingerprint users without leaving behind a distinct memory footprint. However, in order to do so, fingerprinters would have to give up being able to fingerprint devices during their initial sessions.

Fig. 12: Visualization of incrementally building up reference fingerprints. In this figure, we assume that the fingerprinters first obtained bit flip distributions of two separate sets of contiguous chunks of memory from the same device across two separate sessions. Each set contains the distribution of bit flips observed on two such chunks. Since the distribution of bit flips on each chunk is unique, the fingerprinters treat these sets as belonging to two different devices. When they subsequently obtain bit flip distributions from chunks that overlap across both sets, they collapse them into a single reference that pertains to the same device.

### _Centauri for fraud detection_

Centauri can significantly strengthen fraud detection by generating fingerprints that can uniquely identify fraudsters' devices even among a population of identical devices over a long period of time. Fraudsters will also find it difficult to alter or spoof the fingerprints extracted by Centauri, since Centauri captures fundamental properties of hardware as fingerprints. However, Centauri's promise in detecting fraudsters comes with a non-zero risk to benign users. While triggering bit flips to extract fingerprints, Centauri could accidentally crash a user's device by flipping a sensitive bit reserved for the OS. In our experience, however, we see that such occurrences are extremely rare. OS vendors can also help mitigate this concern by ensuring that memory allocated to the OS does not physically border memory allocated to other applications. Another risk presented by Centauri is that it could wear out memory modules if it is used to constantly trigger bit flips for fingerprinting. Centauri's approach of triggering bit flips with fewer accesses to aggressors helps mitigate this concern. Such concerns can also be mitigated by only employing other fingerprinting techniques for the common cases and sparingly employing Centauri to only handle the critical cases.

## VIII Conclusion

We presented Centauri to extract unique and stable fingerprints even for devices with identical hardware and software configurations. To this end, Centauri leverages Rowhammer to capture the side-effects of process variation in the underlying manufacturing process of memory modules. Centauri's design
involves a novel sampling strategy to overcome memory allocation constraints, identification of effective hammering patterns to bypass Rowhammer mitigations and trigger bit flips at scale, and handling non-deterministic bit flips through multiple hammering iterations and divergence analysis of probability distributions. Our evaluation of Centauri on 98 DIMMs across 6 sets of identical DRAM modules from two manufacturers showed that it can extract high-entropy and stable fingerprints with an overall accuracy of 99.91% while being robust and efficient. Centauri cannot be trivially mitigated without fundamentally fixing the underlying Rowhammer vulnerability, which - despite existing countermeasures - is expected to escalate as the density of DRAM chips increases in the future.
2306.17716
Single Sample Prophet Inequality for Uniform Matroids of Rank 2
We study the prophet inequality when the gambler has access only to a single sample from each distribution. Rubinstein, Wang and Weinberg showed that an optimal guarantee of 1/2 can be achieved when the underlying matroid has rank 1, i.e. in the case of a single choice. We show that this guarantee can also be achieved in the case of a uniform matroid of rank 2 by a deterministic mechanism, and we show that this is best possible among deterministic mechanisms. We also conjecture that a straightforward generalization of our policy achieves the guarantee of 1/2 for all uniform matroids.
Kanstantsin Pashkovich, Alice Sayutina
2023-06-30T14:58:14Z
http://arxiv.org/abs/2306.17716v1
# Single Sample Prophet Inequality for Uniform Matroids of Rank 2 ###### Abstract We study the prophet inequality when the gambler has access only to a single sample from each distribution. Rubinstein, Wang and Weinberg showed that an optimal guarantee of \(1/2\) can be achieved when the underlying matroid has rank \(1\), i.e. in the case of a single choice. We show that this guarantee can also be achieved in the case of a uniform matroid of rank \(2\) by a deterministic mechanism, and we show that this is best possible among deterministic mechanisms. We also conjecture that a straightforward generalization of our policy achieves the guarantee of \(1/2\) for all uniform matroids. ## 1 Introduction We study the single-sample prophet inequalities (SSPI) for uniform matroids. This is a variation of the prophet inequalities problem where the gambler does not know the distributions \(X_{1},X_{2},\ldots,X_{n}\) for the arriving items, but has access only to a single sample \(s_{1}\sim X_{1}\), \(s_{2}\sim X_{2}\),..., \(s_{n}\sim X_{n}\) from each of the distributions. After getting access to the samples, the gambler is presented realizations \(r_{1}\sim X_{1}\), \(r_{2}\sim X_{2}\),..., \(r_{n}\sim X_{n}\), and needs to decide whether to accept the elements in an online fashion. The problem is to design a mechanism such that the expected value of the items accepted by the gambler is a good approximation of the value of the items accepted by the prophet, where the expectation is calculated with respect to the samples and realizations. Let \(k\) be the rank of the uniform matroid defining the feasibility constraints. We propose the following mechanism for the gambler. Here and later, we assume that all the values considered over all items have a perfect total order. If there are items with the same sample or realization values, we break ties arbitrarily, e.g. based on a randomly sampled \([0,1]\) tie-breaker for each value. We also assume that all the values are nonnegative. Indeed, neither the prophet nor the gambler accepts items with negative values, and so it is sufficient to design a mechanism only in the setting where no item has a negative value. Note that in the case \(k=1\) the mechanism in Algorithm 1 behaves the same way as the \(2\)-competitive mechanism for uniform rank-\(1\) matroids studied in [14]. We prove that the mechanism in Algorithm 1 is \(2\)-competitive also in the case \(k=2\). We conjecture that this mechanism is \(2\)-competitive for \(k>2\) as well. **Theorem 1.1**.: _The mechanism in Algorithm 1 is \(2\)-competitive when \(k=2\)._ Moreover, the model used in the proof of Theorem 1.1 shows that Algorithm 1 is "pointwise" \(2\)-competitive as defined in [3] when \(k=2\). "Pointwise" SSPI inequalities allow samples and realizations for the same item to be correlated. ### Previous works Prophet inequalities were extensively studied in the context of full information about the values' distributions [5], [7], [8], [10], [12], [15]. In [14], it was shown that in the case of single choice, i.e. in the case of uniform matroids of rank \(1\), the optimal possible ratio of \(2\) is achieved in the SSPI setting. Also, in [14] it was shown that when all distributions are identical then for any \(\varepsilon>0\) the ratio \(\approx 0.745\) can be achieved within a factor of \((1+\varepsilon)\) with access to \(O(n)\) samples, where the ratio \(\approx 0.745\) is also the optimal ratio for the full information case.
The results of [14] for uniform matroids of rank \(1\) were crucial for [4] to improve SSPI guarantees for several well-studied downwards closed families. For the choice of \(k\) elements, i.e. in the case of uniform matroids of rank \(k\), a mechanism for SSPI with competitive guarantee \(O(1-\frac{1}{\sqrt{k}})\) was provided in [2]. This asymptotically matches the best known competitive guarantee in the case of prophet inequalities with full information for uniform matroids of rank \(k\) [1]. In [2], the authors provided a blackbox reduction showing that if a family of downwards closed constraints admits an _order oblivious secretary_ mechanism with competitive guarantee \(\alpha\) then the same family admits a "pointwise" SSPI with the same competitive guarantee \(\alpha\). Together with the fact that many known mechanisms for the secretary problem are order oblivious [6], [9], [11], [13], [16], the blackbox reduction led to a series of SSPI mechanisms for different matroids. Recently, in [3] it was shown that the "reverse" blackbox reduction also holds, i.e. if a family of downwards closed constraints admits a "pointwise" SSPI with competitive guarantee \(\alpha\) then the same family admits an order oblivious secretary mechanism with competitive guarantee \(2\alpha\). ### Model We make use of a standard model for SSPI; the same model was used, e.g., in [14]. We assume that for each item \(i\), \(i=1,\ldots,n\) we have two given values \(y_{i}\) and \(z_{i}\), where \(y_{i}>z_{i}\). Then with probability \(1/2\) we have that \((s_{i},r_{i})\) equals \((y_{i},z_{i})\), and with probability \(1/2\) we have that \((s_{i},r_{i})\) equals \((z_{i},y_{i})\). Every \(\alpha\)-competitive mechanism in such a model is also \(\alpha\)-competitive in the general SSPI setting. Let us consider all values \(y_{i}\), \(z_{i}\), \(i=1,\ldots,n\), sorted in descending order. Let this sequence be \(w_{1},w_{2},\ldots,w_{2n}\), i.e. \(w_{1}>w_{2}>\ldots>w_{2n}\). Recall that in the considered model for every \(i\), \(i=1,\ldots,n\) exactly one of the values among \(y_{i}\) and \(z_{i}\) is picked to be equal to \(s_{i}\) while the other is picked to be equal to \(r_{i}\). We refer to the indices in the sequence \(w_{1},w_{2},\ldots,w_{2n}\) as _elements_. Given an element \(j\), \(j=1,\ldots,2n\), if \(w_{j}\) is picked to be equal to \(s_{i}\) for some \(i\), \(i=1,\ldots,n\), we refer to \(j\) as an _S-element_. Similarly, if \(w_{j}\) is picked to be equal to \(r_{i}\) for some \(i\), \(i=1,\ldots,n\), we refer to \(j\) as an _R-element_. Similarly, each element \(j\), \(j=1,\ldots,2n\) can be either a _Y-element_ or a _Z-element_ depending on whether \(w_{j}\) is \(y_{i}\) or \(z_{i}\) for some \(i\), \(i=1,\ldots,n\). We say that two values or two elements in the sequence \(w_{1},w_{2},\ldots,w_{2n}\) are _paired_ if they are \(y_{i}\) and \(z_{i}\) for the same item \(i\), \(i=1,\ldots,n\). Let \(j^{*}\) be the smallest Z-element, and \(k^{*}\) be the second smallest Z-element among \(1,\ldots,2n\). Thus all elements \(\{1,2,\ldots,k^{*}-1\}\setminus\{j^{*}\}\) are Y-elements. Let \(j^{y}\) be the Y-element which is paired with \(j^{*}\), and let \(k^{y}\) be the Y-element which is paired with \(k^{*}\). Naturally, all \(j^{*}\), \(j^{y}\), \(k^{*}\), \(k^{y}\) are distinct and satisfy \(j^{y}<j^{*}\), \(k^{y}<k^{*}\). For the sake of exposition, for \(j=1,\ldots,2n\) we say that a person, i.e. the gambler or the prophet, _accepted element_ \(j\) if this person accepted the item with the value \(w_{j}\).
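To make the model and the gambler's policy concrete, the following is a minimal Python sketch. It assumes, consistent with the analysis of the gambler's gain below (where for \(k=2\) the threshold is the second largest sample), that Algorithm 1 sets the threshold \(T\) to the \(k\)-th largest sample and accepts realizations exceeding \(T\), up to \(k\) of them; the function name is illustrative, and the arrival order here is simply the item order, whereas the analysis allows an adversarial order.

```python
import random

def single_run(pairs, k=2, rng=random):
    """One draw from the model above: pairs is a list of (y_i, z_i) with y_i > z_i.
    A fair coin decides for each item whether (s_i, r_i) = (y_i, z_i) or (z_i, y_i)."""
    samples, realizations = [], []
    for y, z in pairs:
        s, r = (y, z) if rng.random() < 0.5 else (z, y)
        samples.append(s)
        realizations.append(r)
    # Assumed reading of Algorithm 1: threshold T = k-th largest sample,
    # accept online every realization strictly above T, up to k items.
    T = sorted(samples, reverse=True)[k - 1]
    accepted = []
    for r in realizations:  # realizations arrive in item order in this sketch
        if len(accepted) < k and r > T:
            accepted.append(r)
    prophet = sum(sorted(realizations, reverse=True)[:k])
    return sum(accepted), prophet
```

Averaging the two returned values over many draws gives estimates of the gambler's and the prophet's expected gains for a fixed instance.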
### Bad Example Let us show that there is no deterministic mechanism for SSPI that achieves a competitive ratio better than \(2\) on uniform matroids of rank \(2\). Let us assume that the gambler observes three items with the following sample values \(s_{1}=m\), \(s_{2}=m\), \(s_{3}=0\) where \(m>0\). After that the gambler sees two realizations \(r_{1}=m\) and \(r_{2}=m\) before the realization of the third item is revealed. Let us assume that in this scenario the gambler accepts \(\beta\) items before the value of the third item is revealed. If \(\beta\leq 1\) then the mechanism cannot be \(2\)-competitive in the case when the values of the first two items are \(m\) with probability \(1\) and the value of the last item is \(0\) with probability \(1\). Indeed, the expected gain of the prophet in this case is \(2m\), while the gain of the gambler is \(\beta m\). If \(\beta=2\) then the mechanism cannot be \(\alpha\)-competitive for \(\alpha<2\), when the values of the first two items are \(m\) with probability \(1\) and the value of the third item is \(0\) or \(M\) with probabilities \(1/2\) and \(1/2\), respectively, where \(M\gg m\). Indeed, the expected gain of the prophet in this case is at least \(M/2\), while the gain of the gambler is at most \[1/2\cdot(2m)+1/2\cdot(1/2\cdot(2m)+1/2\cdot(m+M))=1/4\cdot M+7/4\cdot m\,.\] Here the outer \(1/2\) accounts for whether the sample of the third item equals \(0\) (the scenario above, in which the gambler has already accepted the two items of value \(m\)) or \(M\) (a scenario in which we simply bound the gambler's gain by the best possible outcome). ### Expected Gain of Prophet We characterize the expected gain of the prophet by computing for each element in the sequence \(w_{1}\), \(w_{2}\),..., \(w_{2n}\) the probability that the prophet accepts this element. **Lemma 1.1**.: _For each \(j=1,\ldots,2n\), the prophet accepts the element \(j\) with probability \(p_{j}\), where_ \[p_{j}:=\begin{cases}j/2^{j}=2\cdot\frac{j}{2^{j+1}}&\text{if}\quad 1\leq j<j^{*}\\ (j-1)/2^{j-1}&\text{if}\quad j=j^{*}\\ 1/2^{j-2}&\text{if}\quad j^{*}<j<k^{*}\\ 1/2^{j-3}&\text{if}\quad j=k^{*}\\ 0&\text{otherwise}\,.\end{cases}\] _Thus, the expected gain of the prophet equals \(\sum_{j=1}^{2n}p_{j}w_{j}\)._ Proof.: The prophet selects the two largest R-elements among \(w_{1}\), \(w_{2}\),..., \(w_{2n}\). Thus, for every \(j\), \(j=1,\ldots,2n\) the item with the value \(w_{j}\) is selected if and only if \(j\) is an R-element and there is at most one R-element among \(1,2,\ldots,j-1\). For elements \(j\) such that \(j<j^{*}\), the event of \(j\) being an R-element and the event of there being at most one R-element among \(1,2,\ldots,j-1\) are independent. The element \(j\) is an R-element with probability \(1/2\), and there is at most one R-element among \(1,2,\ldots,j-1\) with probability \((j-1)\frac{1}{2^{j-1}}+\frac{1}{2^{j-1}}=j/2^{j-1}\). Since those events are independent, the probability of \(j\) being accepted is \(j/2^{j}\). Now consider the element \(j\) such that \(j=j^{*}\). The event of \(j\) being an R-element happens with probability \(1/2\), and implies that the element \(j^{y}\) is an S-element. The rest of the elements among \(1,2,\ldots,j-1\) are independently either R-elements or S-elements. Thus the probability of having at most one R-element among \(1,2,\ldots,j-1\), conditioned on the event of \(j\) being an R-element, is \((j-1)/2^{j-2}\). Thus the probability that \(j\) is selected is \((j-1)/2^{j-1}\). Now consider an element \(j\) such that \(j^{*}<j<k^{*}\). There is exactly one R-element among \(j^{y},j^{*}\). Thus, for the element \(j\) to be accepted the rest of the elements among \(1,2,\ldots,j-1\) have to be S-elements.
This happens with probability \(1/2^{j-3}\), and the element \(j\) is an R-element with probability \(1/2\). Thus the probability that \(j\) is accepted is \(1/2^{j-2}\). Now consider the element \(j\) such that \(j=k^{*}\). The event of \(j\) being an R-element happens with probability \(1/2\), and implies that the element \(k^{y}\) is an S-element. There is also exactly one R-element among \(j^{y},j^{*}\). The remaining elements among \(1,2,\ldots,j-1\) are independently either R-elements or S-elements, and so to have at most one R-element among \(1,2,\ldots,j-1\) all of those remaining elements must be S-elements. Thus the probability that \(j\) is selected is \(1/2^{j-3}\). Finally, for all elements \(j\) such that \(j>k^{*}\), there is exactly one R-element among \(j^{y},j^{*}\), and exactly one R-element among \(k^{y},k^{*}\). Thus there are two R-element among \(1,2,\ldots,j-1\), and so \(j\) is never selected by the prophet. ### Expected Gain of Gambler Recall that the adversary cannot control which elements are R-elements, and which are S-elements. However, the adversary can control in which order R-elements are presented to the gambler. We assume that the adversary behaves in a way which minimizes the gambler's gain. Recall that in Algorithm 1, the gambler determines the threshold \(T\) and accepts elements which are greater than \(T\). Thus, the adversary makes the gambler to accept two smallest R-elements that are larger than \(T\), if those elements exist. In case there are less than two such elements, the adversary makes the gambler accept all R-elements that are larger than \(T\) and no other elements. From now on, we assume that the adversary acts as above, i.e. in a way leading to the worst gain for the gambler. **Lemma 1.2**.: _For each \(j=1,\ldots,2n\), the gambler accepts the element \(j\) with probability at least \(q_{j}\), where_ \[q_{j}:=\begin{cases}(3j-1)/2^{j+2}&\text{if}\quad j\leq j^{*}-2\\ (4j-2)/2^{j+2}&\text{if}\quad j=j^{*}-1\text{ and }k^{*}>j^{*}+1\text{ and }j^{*}-1=j^{y}\\ (4j-3)/2^{j+2}&\text{if}\quad j=j^{*}-1\text{ and }k^{*}>j^{*}+1\text{ and }j^{*}-1\neq j^{y}\\ 4j/2^{j+2}&\text{if}\quad j=j^{*}-1\text{ and }k^{*}=j^{*}+1\\ 3/2^{j+1}&\text{if}\quad j=j^{*}\text{ and }k^{*}>j^{*}+1\\ 4/2^{j+1}&\text{if}\quad j=j^{*}\text{ and }k^{*}=j^{*}+1\\ 3/2^{j}&\text{if}\quad j^{*}<j<k^{*}-1\\ 1/2^{j-2}&\text{if}\quad j^{*}<j<k^{*}\text{ and }j=k^{*}-1\,.\end{cases}\] _Thus, the expected gain of the gambler is at least \(\sum_{j=1}^{2n}q_{j}w_{j}\)._ The key take-away from Lemma 1.2 is that element \(j\), \(j<j^{*}\) is accepted with probability at least \((3j-1)/2^{j+2}\). Element \(j=j^{*}\) is accepted with probability at least \(3/2^{j+1}\), and element \(j\), \(j^{*}<j<k^{*}\) is accepted with probability at least \(3/2^{j}\). However, in some cases for the comparison with the gain of the prophet later we need a stronger bound, for example, in cases when \(k^{*}=j^{*}+1\) or if \(j=j^{*}-1\). Proof of Lemma 1.2.: Let us consider the case analysis based on the position of the element \(j\). In each of the cases, we compute the unconditional probabilities for two events to happen simultaneously, in particular for the gambler to accept the element \(j\) and for the threshold to be equal to \(T\). Afterwards, we provide a desired lower bound \(q_{j}\) by summing up the obtained unconditional probabilities. _Case 1_.: \(j\leq j^{*}-4\). Let us consider the position of the threshold, i.e. the position of the second largest S-element. 1. \(T=w_{j+3}\). 
For the threshold \(T\) to be equal \(w_{j+3}\), the element \(j+3\) is an S-element and exactly one element in 1,..., \(j+2\) is an S-element, i.e. is the largest S-element. Let us now consider the position of the unique S-element in 1,..., \(j+2\). If this unique S-element is not \(j+1\) or \(j+2\), the gambler will not be able to accept the element \(j\) due to the assumption that the adversary minimizes the gambler's profit. Thus, all elements in 1,..., \(j\) have to be R-elements, the element \(j+3\) has to be an S-element, and among \(j+1\) and \(j+2\) exactly one is an R-element and one is an S-element. Also note, that any such S-R status assignment guarantees that \(T\) equals \(w_{j+3}\) and the gambler accepts the element \(j\). Let us now compute the probability of the event that \(T\) equals \(w_{j+3}\) and the gambler accepts the element \(j\). There are two choices for the largest S-element. In each of those scenarios we fix the S-R status of \(j+3\) elements. Thus the probability of the gambler accepting element \(j\) and for the threshold \(T\) to be equal \(w_{j+3}\) is \(2/2^{j+3}\). 2. \(T=w_{j+2}\). The largest S-element is either \(j+1\) or is among 1, 2,..., \(j-2\), \(j-1\). Thus there are \(j\) choices for the largest S-element, in each of these choices element \(j\) will be accepted, and in each of these choices we fix the S-R status of \(j+2\) elements. Thus the probability of the gambler accepting element \(j\) and for the threshold \(T\) to be equal \(w_{j+2}\) is \(j/2^{j+2}\). 3. \(T=w_{j+1}\). The largest S-element is among 1, 2,..., \(j-2\), \(j-1\). Thus there are \(j-1\) choices for the largest S-element, in each of these choices we fix the S-R status of \(j+1\) elements. Thus the probability of the gambler accepting element \(j\) and for the threshold \(T\) to be equal \(w_{j+1}\) is \((j-1)/2^{j+1}\). Hence, the probability that the gambler accepts the element \(j\) equals \[\frac{1}{2^{j+3}}(2+2j+4j-4)=\frac{3j-1}{2^{j+2}}\,.\] _Case 2_.: \(j=j^{*}-3\) Figure 1: An example of an S-R status assignment in subcase 1 of case 1, which makes the gambler accept item \(j\). The threshold is denoted by [T]. Elements accepted by the gambler are denoted by (*). The larger elements are at the left. Let us again consider the position of the threshold, i.e. the position of the second largest \(\mathsf{S}\)-element. 1. \(T=w_{j+1}\). Then the gambler accepts element \(j\) if the largest \(\mathsf{S}\)-element is one of \(1,2,\ldots,j-1\). Thus there are \(j-1\) choices for the largest \(\mathsf{S}\)-element, and in each of those choices we fix the \(\mathsf{S}\)-\(\mathsf{R}\) status of \(j+1\) elements. Thus the probability of the gambler accepting element \(j\) and \(T\) being equal to \(w_{j+1}\) is \((j-1)/2^{j+1}\). 2. \(T=w_{j+2}\). Then the gambler accepts element \(j\) if the largest \(\mathsf{S}\)-element is one of \(1,2,\ldots,j-2,j-1\) or \(j+1\). Thus there are \(j\) choices for the largest \(\mathsf{S}\)-element, in each of these choices we fix the \(\mathsf{S}\)-\(\mathsf{R}\) status of \(j+2\) elements. Thus the probability of the gambler accepting element \(j\) and for the threshold \(T\) to be equal \(w_{j+2}\) is \(j/2^{j+2}\). 3. \(T=w_{j+3}\). Then the gambler accepts element \(j\) if the largest \(\mathsf{S}\)-element is \(j+1\) or \(j+2\). If we fix the element \(j^{*}=j+3\) to be an \(\mathsf{S}\)-element, then the element \(j^{y}\) is an \(\mathsf{R}\)-element. 
Thus if \(j^{y}\) is in \(\{j+1,j+2\}\) there is one choice for the largest \(\mathsf{S}\)-element, and if \(j^{y}\) is not in \(\{j+1,j+2\}\) there are two choices. In each of these choices we fix the \(\mathsf{S}\)-\(\mathsf{R}\) status of the first \(j+3\) elements except for the element \(j^{y}\), since its \(\mathsf{S}\)-\(\mathsf{R}\) status is determined by the \(\mathsf{S}\)-\(\mathsf{R}\) status of element \(j^{*}=j+3\). There is at least one choice for the largest \(\mathsf{S}\)-element, and each such choice happens with probability \(1/2^{j+2}\). Thus the probability for the gambler to accept element \(j\) and for the threshold \(T\) to be equal \(w_{j+3}\) is at least \(1/2^{j+2}\). Hence, the probability that the gambler accepts the element \(j\) is at least \[\frac{1}{2^{j+2}}((2j-2)+j+1)=\frac{3j-1}{2^{j+2}}\,.\] _Case 3_.: \(j=j^{*}-2\), \(k^{*}>j^{*}+1\) 1. \(T=w_{j+1}\). Then the gambler accepts element \(j\) if the largest \(\mathsf{S}\)-element is one of 1, 2,..., \(j-1\). Thus there are \(j-1\) choices for the largest \(\mathsf{S}\)-element, and in each of those choices we fix the \(\mathsf{S}\)-\(\mathsf{R}\) status of \(j+1\) elements. Thus the probability of the gambler accepting element \(j\) and \(T\) being equal to \(w_{j+1}\) is \((j-1)/2^{j+1}\). 2. \(T=w_{j+2}\). Then the gambler accepts element \(j\) if the largest \(\mathsf{S}\)-element is one of 1, 2,..., \(j-2\), \(j-1\) or \(j+1\). Since \(j^{*}=j+2\) is an \(\mathsf{S}\)-element, the element \(j^{y}\) is an \(\mathsf{R}\)-element. If \(j^{y}\in\{1,2,\ldots,j-2,j-1\}\cup\{j+1\}\) there are \(j-1\) choices for the largest \(\mathsf{S}\)-element. In this case, the probability that the gambler accepts element \(j\) and \(T\) is equal to \(w_{j+2}\) is \((j-1)/2^{j+1}\). However if \(j^{y}\not\in\{1,2,\ldots,j-2,j-1\}\cup\{j+1\}\), there are \(j\) choices and so the probability that the gambler accepts element \(j\) and \(T\) is equal to \(w_{j+2}\) is \(j/2^{j+1}\). Thus, the probability that the gambler accepts element \(j\) and \(T\) is equal to \(w_{j+2}\) is at least \((j-1)/2^{j+1}\). 3. \(T=w_{j+3}\). The gambler accepts element \(j\) only if the largest S-element is \(j+1\) or \(j+2\). Recall that exactly one of \(j^{*}=j+2\) and \(j^{y}\) is an S-element. Thus, if \(j^{y}=j+1\) then we have two choices for the largest S-element. If \(j^{y}<j+1\) then we have only one choice for the largest S-element. Thus there is one or two choices for the largest S-element, and each time we fix the S-R status of \(j+3\) elements where two of these elements are paired. Thus, the probability, that the gambler accepts element \(j\) and that \(T\) is equal to \(w_{j+3}\), is \(1/2^{j+2}\) or \(2/2^{j+2}\), and so is at least \(1/2^{j+2}\). Hence, the probability that the gambler accepts the element \(j\) is at least \[\frac{1}{2^{j+2}}\left((2j-2)+(2j-2)+1\right)=\frac{4j-3}{2^{j+2}}\,.\] Note, that \(4j-3\geq 3j-1\) when \(j\geq 2\). So when \(j\geq 2\) we get the desired bound. When \(j=1\), observe that it is not possible that the estimate \((j-1)/2^{j+1}\) is tight in second subcase and the estimate \(1/2^{j+2}\) is tight in third subcase, because for this in second subcase we need \(j^{y}=j+1=2\) and in third subcase \(j^{y}\neq j+1=2\). Thus in case \(j=1\), we can derive a tighter bound. 
The probability that the gambler accepts element \(j\) is at least \[\frac{1}{2^{j+2}}\min((2j-2)+2j+1,(2j-2)+(2j-2)+2)=\frac{4j-2}{2^{j+2}}\geq \frac{3j-1}{2^{j+2}}\,,\] showing that the desired lower bound holds also for \(j=1\). _Case 4._\(j=j^{*}-2\), \(k^{*}=j^{*}+1\) 1. \(T=w_{j+1}\). This subcase is identical to the first subcase in case 3. The probability of the gambler accepting element \(j\) and \(T\) being equal to \(w_{j+1}\) is \((j-1)/2^{j+1}\). 2. \(T=w_{j+2}\). This subcase is identical to the second subcase in case 3. The probability of the gambler accepting element \(j\) and \(T\) being equal to \(w_{j+2}\) is at least \((j-1)/2^{j+1}\). 3. \(T=w_{j+3}\). Then the gambler accepts element \(j\) if the largest S-element is \(j+1\) or \(j+2\). However, observe exactly one of \(j^{*}=j+2\) and \(j^{y}\), as well as exactly one of \(k^{*}=j+3\) and \(k^{y}\), is an S-element. Thus, if \(j^{y}=j+1\) then we have two choices for the largest S-element. If \(j^{y}<j+1\) then we have only one choice for the largest S-element. Thus there is at least one choice, and each choice happens with probability \(1/2^{j+1}\), since we are fixing the S-R status of the first \(j+3\) elements, among which there are paired elements \(j^{*}\), \(j^{y}\) and \(k^{*}\), \(k^{y}\). Then the probability of the gambler accepting element \(j\) and \(T\) being equal to \(w_{j+3}\) is at least \(1/2^{j+1}\). Hence, the probability that the gambler accepts the element \(j\) is at least \[\frac{1}{2^{j+1}}((j-1)+(j-1)+1)=\frac{4j-2}{2^{j+2}}\geq\frac{3j-1}{2^{j+2}},\] where the last inequality follows from \(j\geq 1\). _Case 5_.: \(j=j^{*}-1\), \(k^{*}>j^{*}+1\), \(j^{*}-1=j^{y}\) 1. \(T=w_{j+1}\). Then the gambler accepts element \(j\) if the largest S-element is one of 1, 2,..., \(j-1\). Thus there are \(j-1\) choices for the largest S-element, and in each of those choices we fix the S-R status of 1, 2,..., \(j\), \(j+1\). Note that \(j\) and \(j+1\) are paired. Thus the probability of the gambler accepting element \(j\) and \(T\) being equal to \(w_{j+1}\) is \((j-1)/2^{j}\). 2. \(T=w_{j+2}\). For this subcase, \(j+1\) needs to be the largest S-element. By making \(j+1\) the largest S-element, we fix the S-R status of 1, 2,..., \(j\), \(j+1\), \(j+2\). Note that \(j\) and \(j+1\) are paired. Thus the probability of the gambler accepting element \(j\) and \(T\) being equal to \(w_{j+2}\) is \(1/2^{j+1}\). 3. \(T=w_{j+3}\). Although it is possible for the threshold to be at the position \(j+3\) when item \(j\) is accepted, we omit this subcase since the total probability from the previous subcases is already sufficient. We remark that the current subcase \(T=w_{j+3}\) has different probability depending on whether \(k^{*}=j^{*}+2\) or \(k^{*}\geq j^{*}+3\). Hence, the probability that the gambler accepts the element \(j\) is at least \[\frac{1}{2^{j+1}}\left(2(j-1)+1\right)=\frac{4j-2}{2^{j+2}}\,.\] _Case 6_.: \(j=j^{*}-1\), \(k^{*}>j^{*}+1\), \(j^{*}-1\neq j^{y}\) 1. \(T=w_{j+1}\). Then the gambler accepts element \(j\) if the largest S-element is one of 1, 2,..., \(j-1\) but not \(j^{y}\). Thus there are \(j-2\) choices for the largest S-element, and in each of those choices we fix the S-R status of 1, 2,..., \(j\), \(j+1\). Note that \(j+1\) and \(j^{y}\) are paired. Thus the probability of the gambler accepting element \(j\) and \(T\) being equal to \(w_{j+1}\) is \((j-2)/2^{j}\). 2. \(T=w_{j+2}\). 
Note that \(j+1\) and \(j^{y}\) are paired, so for this subcase element \(j+1\) or \(j^{y}\) need to be the largest S-element. We fix the S-R status of 1, 2,..., \(j\), \(j+1\), \(j+2\), where all elements that are not \(j+2\), \(j+1\) or \(j^{y}\) become R-elements and where the S-R statuses of \(j+1\), \(j^{y}\) are arbitrary. Thus the probability of the gambler accepting element \(j\) and \(T\) being equal to \(w_{j+2}\) is \(1/2^{j}\). 3. \(T=w_{j+3}\). Note that \(j+1\) and \(j^{y}\) are paired. So for this subcase element \(j+1\) needs to be the largest S-element, otherwise \(j\) is not accepted by the gambler. We fix the S-R status of 1, 2,..., \(j\), \(j+1\), \(j+2\), \(j+3\) where elements \(j+1\) and \(j^{y}\) are paired. We remark that the current subcase \(T=w_{j+3}\) has different probability depending on whether \(k^{*}=j^{*}+2\) or \(k^{*}>j^{*}+2\). Independent on whether \(k^{*}=j+2\), the probability of the gambler accepting element \(j\) and \(T\) being equal to \(w_{j+3}\) is at least \(1/2^{j+2}\). Thus, the probability that the gambler accepts element \(j\) is at least \[\frac{1}{2^{j+1}}\left(2(j-2)+2+1/2\right)=\frac{4j-3}{2^{j+2}}\,.\] _Case 7_.: \(j=j^{*}-1\), \(k^{*}=j^{*}+1\), \(j^{*}-1=j^{y}\) 1. \(T=w_{j+1}\). This case is identical to the first subcase of case 5. The probability of the gambler accepting element \(j\) and \(T\) being equal to \(w_{j+1}\) is \((j-1)/2^{j}\). 2. \(T=w_{j+2}\). This case is similar to the second subcase of case 5, but where among 1, 2,..., \(j\), \(j+1\), \(j+2\) we have paired \(j^{*}\), \(j^{y}\) and paired \(k^{*}\), \(k^{y}\). Thus the probability of the gambler accepting element \(j\) and \(T\) being equal to \(w_{j+2}\) is \(1/2^{j}\). Hence, the probability that the gambler accepts element \(j\) is at least \[\frac{1}{2^{j+1}}(2(j-1)+2)=\frac{4j}{2^{j+2}}\,.\] _Case 8_.: \(j=j^{*}-1\), \(k^{*}=j^{*}+1\), \(j^{*}-1\neq j^{y}\) This case is similar to case 6, except that in the first subcase we get the estimate \((j-2)/2^{j}\), in the second \(1/2^{j-1}\) and in the third case 0. Thus, the probability that the gambler accepts the element \(j\) is at least \[\frac{1}{2^{j+1}}\left(2(j-2)+4\right)=\frac{4j}{2^{j+2}}\,.\] Before we move further, observe that in the case \(j=j^{*}\), element \(j\) can only be accepted if it is an R-element. For this element \(j^{y}\) needs to be an S-element. Secondly, an element \(j\) can be accepted only if it exceeds the threshold, which is the second largest S-element Thus \(j^{y}\) is the largest S-element, so unlike in the cases with \(j<j^{*}\), there are no alternative choices for the largest S-element. _Case 9_.: \(j=j^{*}\), \(k^{*}\geq j^{*}+3\) 1. \(T=w_{j+1}\). This fixes the S-R status of \(j+1\) elements 1, 2,..., \(j-1\), \(j\), \(j+1\). Note that among these elements the elements \(j=j^{*}\) and \(j^{y}\) are paired. Thus the probability of the gambler accepting element \(j\) and \(T\) being equal to \(w_{j+1}\) is \(1/2^{j}\) 2. \(T=w_{j+2}\). This fixes the S-R status of \(j+2\) elements 1, 2,..., \(j-1\), \(j\), \(j+1\), \(j+2\). Note that among these elements the elements \(j=j^{*}\) and \(j^{y}\) are paired. Thus the probability of the gambler accepting element \(j\) and \(T\) being equal to \(w_{j+2}\) is \(1/2^{j+1}\) Hence, probability that the gambler accepts the element \(j\) is \(3/2^{j+1}\). _Case 10_.: \(j=j^{*}\), \(k^{*}=j^{*}+2\) 1. \(T=w_{j+1}\). This subcase is identical to first subcase of case 9, so the probability is \(1/2^{j}\) in this subcase. 2. 
\(T=w_{j+2}\). This fixes the S-R status of 1, 2,..., \(j+1\), \(j+2\). Note that among these elements the elements \(j=j^{*}\) and \(j^{y}\) are paired, and also the elements \(j+2=k^{*}\) and \(k^{y}\) are paired. Thus, the probability of the gambler accepting element \(j\) and \(T\) being equal to \(w_{j+2}\) is \(1/2^{j}\). Hence, the probability that the gambler accepts element \(j\) is \(4/2^{j+1}\), which is at least the desired bound \(3/2^{j+1}\). _Case 11_.: \(j=j^{*}\), \(k^{*}=j^{*}+1\) 1. \(T=w_{j+1}\). This fixes the S-R status of elements 1, 2,..., \(j-1\), \(j\), \(j+1\). Note that among these elements the elements \(j=j^{*}\) and \(j^{y}\) are paired, and also the elements \(j+1=k^{*}\) and \(k^{y}\) are paired. Thus, the probability of the gambler accepting element \(j\) and \(T\) being equal to \(w_{j+1}\) is \(1/2^{j-1}\). Hence, the probability that the gambler accepts element \(j\) is \(4/2^{j+1}\). Now we consider the cases when \(j^{*}<j<k^{*}\). For each of these cases, we need to show the desired lower bound \(3/2^{j}\) for the probability of accepting element \(j\). As previously, we proceed with a case analysis based on the position of the threshold \(T\), i.e. the position of the second largest S-element. Observe that there is exactly one S-element among \(\{j^{y},j^{*}\}\). Thus the threshold \(T\) is less than \(w_{j}\) only if there are no other S-elements before \(j\) except for the elements \(j^{y}\) and \(j^{*}\). _Case 12_.: \(j^{*}<j<k^{*}\), \(k^{*}\geq j+3\) 1. \(T=w_{j+1}\). For element \(j\) to be accepted we need to fix the S-R status of \(j-1\) elements \(\{1,\ldots,j+1\}\setminus\{j^{y},j^{*}\}\). Thus the probability of the gambler accepting element \(j\) and \(T\) being equal to \(w_{j+1}\) is \(1/2^{j-1}\). 2. \(T=w_{j+2}\). For element \(j\) to be accepted we need to fix the S-R status of \(j\) elements \(\{1,\ldots,j+2\}\setminus\{j^{y},j^{*}\}\). Thus the probability of the gambler accepting element \(j\) and \(T\) being equal to \(w_{j+2}\) is \(1/2^{j}\). Hence, the probability that the gambler accepts element \(j\) is \(3/2^{j}\). _Case 13_.: \(j^{*}<j<k^{*}\), \(k^{*}=j+2\) 1. \(T=w_{j+1}\). This case is identical to the first subcase of case 12. The probability of the gambler accepting element \(j\) and \(T\) being equal to \(w_{j+1}\) is \(1/2^{j-1}\). 2. \(T=w_{j+2}\). For the element \(j\) to be accepted we need to fix the S-R status of \(j-1\) elements \(\{1,\ldots,j+2\}\setminus\{j^{y},j^{*},k^{y}\}\). The S-R status of element \(k^{y}\) does not need to be fixed because it is paired with \(j+2=k^{*}\). Thus the probability of the gambler accepting element \(j\) and \(T\) being equal to \(w_{j+2}\) is \(1/2^{j-1}\). Hence, the probability that the gambler accepts element \(j\) is \(4/2^{j}\). _Case 14_.: \(j^{*}<j<k^{*}\), \(k^{*}=j+1\) 1. \(T=w_{j+1}\). Again, for the element \(j\) to be accepted we need to fix the S-R status of \(j-2\) elements \(\{1,\ldots,j+1\}\setminus\{j^{y},j^{*},k^{y}\}\). Thus the probability that the gambler accepts element \(j\) and \(T\) is equal to \(w_{j+1}\) is \(1/2^{j-2}\). Hence, the probability that the gambler accepts element \(j\) is \(4/2^{j}\). ### Putting Everything Together In this section, we provide the proof of Theorem 1.1.
Proof of Theorem 1.1.: Let us show that \[2\sum_{j=1}^{2n}q_{j}w_{j}\geq\sum_{j=1}^{2n}p_{j}w_{j}\,.\] Since \(w_{1}\), \(w_{2}\),..., \(w_{2n}\) is a non-increasing sequence of non-negative numbers, it is sufficient to show that for all \(i\), \(i=1,\ldots,2n\) we have \[2\sum_{j=1}^{i}q_{j}\geq\sum_{j=1}^{i}p_{j}\,. \tag{1}\] Observe that it is sufficient to prove (1) only for \(i\leq k^{*}\), since \(p_{j}=0\) when \(j>k^{*}\). **Claim 1.1**.: _If \(j<j^{*}\) or \(j^{*}<j<k^{*}\), then we have \(2q_{j}\geq p_{j}\)._ Consequently, this claim directly proves (1) for \(i\leq j^{*}-1\). Proof of Claim 1.1.: We use Lemma 1.1 and Lemma 1.2. If \(j<j^{*}\), then we have \(p_{j}=j/2^{j}\), and by Lemma 1.2 either \(q_{j}\geq(3j-1)/2^{j+2}\), or \(j\geq 2\) and \(q_{j}\geq(4j-3)/2^{j+2}\); so in either case \(q_{j}\geq(3j-1)/2^{j+2}\). Thus, we have \[q_{j}\geq(3j-1)/2^{j+2}\geq 2j/2^{j+2}=p_{j}/2\,,\] where the second inequality uses \(j\geq 1\). If \(j^{*}<j<k^{*}\), then we have \(p_{j}=4/2^{j}\) and \(q_{j}\geq 3/2^{j}\), and so \(q_{j}\geq p_{j}/2\). **Claim 1.2**.: _If \(k^{*}>j^{*}+1\), then we have \(2q_{k^{*}-1}\geq p_{k^{*}}+p_{k^{*}-1}\)._ Proof of Claim 1.2.: We use Lemma 1.1 and Lemma 1.2 to show \[p_{k^{*}}+p_{k^{*}-1}=\frac{1}{2^{k^{*}-3}}+\frac{1}{2^{k^{*}-3}}=2\cdot\frac{1}{2^{k^{*}-3}}=2q_{k^{*}-1}\,.\] **Claim 1.3**.: _We have \(2(q_{j^{*}-1}+q_{j^{*}})\geq p_{j^{*}-1}+p_{j^{*}}\). Moreover, if \(k^{*}=j^{*}+1\) then we have \(2(q_{j^{*}-1}+q_{j^{*}})\geq p_{j^{*}-1}+p_{j^{*}}+p_{k^{*}}\)._ Proof of Claim 1.3.: We use Lemma 1.1 and Lemma 1.2. If \(k^{*}>j^{*}+1\), then we have \[p_{j^{*}-1}+p_{j^{*}}=\frac{2j^{*}-2}{2^{j^{*}-1}}=2\left(\frac{4j^{*}-7}{2^{j^{*}+1}}+\frac{3}{2^{j^{*}+1}}\right)\leq 2(q_{j^{*}-1}+q_{j^{*}})\,.\] If \(k^{*}=j^{*}+1\), we have \[p_{j^{*}-1}+p_{j^{*}}+p_{k^{*}}=\frac{2j^{*}-2}{2^{j^{*}-1}}+\frac{1}{2^{j^{*}-2}}=2\left(\frac{4j^{*}-4}{2^{j^{*}+1}}+\frac{4}{2^{j^{*}+1}}\right)=2(q_{j^{*}-1}+q_{j^{*}})\,,\] finishing the proof. Combining the claims now yields (1) for every \(i\leq k^{*}\): indices \(j\) with \(j<j^{*}\) or \(j^{*}<j<k^{*}\) are covered term by term by Claim 1.1, while the pair \(j^{*}-1\), \(j^{*}\) can instead be covered jointly by Claim 1.3 (together with \(k^{*}\) when \(k^{*}=j^{*}+1\)) and the pair \(k^{*}-1\), \(k^{*}\) jointly by Claim 1.2 when \(k^{*}>j^{*}+1\), whichever is needed for the prefix at hand. This completes the proof of Theorem 1.1.
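As an illustrative numerical check of Theorem 1.1 (not part of the proof), the following Python sketch enumerates all \(2^{n}\) sample/realization assignments of the model above for random instances, computes the prophet's gain and the gambler's gain under the worst-case adversary used in the analysis of the gambler (the gambler ends up with the two smallest realizations exceeding the threshold), and verifies that twice the gambler's expected gain is at least the prophet's expected gain; all names are illustrative.

```python
import random
from itertools import product

def gains(pairs, flips, k=2):
    # flips[i] = 1 means (s_i, r_i) = (y_i, z_i), otherwise (z_i, y_i)
    samples = [y if f else z for (y, z), f in zip(pairs, flips)]
    reals = [z if f else y for (y, z), f in zip(pairs, flips)]
    T = sorted(samples, reverse=True)[k - 1]           # threshold: k-th largest sample
    prophet = sum(sorted(reals, reverse=True)[:k])     # prophet takes the k largest realizations
    above = sorted(r for r in reals if r > T)
    gambler = sum(above[:k])                           # adversary: gambler gets the k smallest values above T
    return gambler, prophet

def check(n=4, trials=500, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        vals = rng.sample(range(1, 100 * n), 2 * n)    # 2n distinct values, so there are no ties
        pairs = [(max(a, b), min(a, b)) for a, b in zip(vals[::2], vals[1::2])]
        g_total = p_total = 0
        for flips in product((0, 1), repeat=n):        # exact expectation over all coin flips
            g, p = gains(pairs, flips)
            g_total += g
            p_total += p
        assert 2 * g_total >= p_total
    print("2 * E[gambler] >= E[prophet] held on all sampled instances")

check()
```

The same check with larger \(k\) can be used to probe the conjecture that the policy remains \(2\)-competitive for all uniform matroids.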
2302.14703
Improving Expert Specialization in Mixture of Experts
Mixture of experts (MoE), introduced over 20 years ago, is the simplest gated modular neural network architecture. There is renewed interest in MoE because the conditional computation allows only parts of the network to be used during each inference, as was recently demonstrated in large scale natural language processing models. MoE is also of potential interest for continual learning, as experts may be reused for new tasks, and new experts introduced. The gate in the MoE architecture learns task decompositions and individual experts learn simpler functions appropriate to the gate's decomposition. In this paper: (1) we show that the original MoE architecture and its training method do not guarantee intuitive task decompositions and good expert utilization, indeed they can fail spectacularly even for simple data such as MNIST and FashionMNIST; (2) we introduce a novel gating architecture, similar to attention, that improves performance and results in a lower entropy task decomposition; and (3) we introduce a novel data-driven regularization that improves expert specialization. We empirically validate our methods on MNIST, FashionMNIST and CIFAR-100 datasets.
Yamuna Krishnamurthy, Chris Watkins, Thomas Gaertner
2023-02-28T16:16:45Z
http://arxiv.org/abs/2302.14703v1
# Improving Expert Specialization in Mixture of Experts ###### Abstract Mixture of experts (MoE), introduced over 20 years ago, is the simplest gated modular neural network architecture. There is renewed interest in MoE because the conditional computation allows only parts of the network to be used during each inference, as was recently demonstrated in large scale natural language processing models. MoE is also of potential interest for continual learning, as experts may be reused for new tasks, and new experts introduced. The gate in the MoE architecture learns task decompositions and individual experts learn simpler functions appropriate to the gate's decomposition. In this paper: (1) we show that the original MoE architecture and its training method do not guarantee intuitive task decompositions and good expert utilization, indeed they can fail spectacularly even for simple data such as MNIST and FashionMNIST; (2) we introduce a novel gating architecture, similar to attention, that improves performance and results in a lower entropy task decomposition; and (3) we introduce a novel data-driven regularization that improves expert specialization. We empirically validate our methods on MNIST, FashionMNIST and CIFAR-100 datasets. ## 1 Introduction The Mixture of Experts (MoE) architecture was introduced by Jacobs et al. (1991) over 20 years ago. It has since been successfully applied to learning problems such as reinforcement learning (Gimelfarb et al., 2018), transfer learning (Mihai and Lascarides, 2017), building large computationally efficient neural networks for language models and machine translation (Shazeer et al., 2017; Rajbhandari et al., 2022; Yazdani Aminabadi et al., 2022), continual learning (Veniat et al., 2021; Hihn and Braun, 2022) and learning multiple domains, such as image classification, machine translation, and image captioning, concurrently (Kaiser et al., 2017). MoE is the simplest and most successful modular neural network architecture. MoE consists of modules, called _experts_, and a _gate_. The experts and the gate are simple neural networks. The experts compute functions that are useful in different regions of the input space. The output of an expert, for each sample, is either the learnt class distribution for a classification problem or the learnt regression function output for a regression problem. For simplicity we will use classification problems in this paper. The output of the gate is a vector of weights, one for each expert. The weights determine how much an expert contributes towards an MoE's prediction for a sample. This is called _conditional computation_ as only some experts are computed, conditioned on the gate probabilities. Conditional computation is an important feature of an MoE as it makes training and inference faster. Figure 1: Original Mixture of Experts (MoE) architecture with 3 experts and 1 gate. The output of the model is \(\tilde{\vec{y}}=p_{1}\cdot\vec{\sigma}_{1}+p_{2}\cdot\vec{\sigma}_{2}+p_{3}\cdot\vec{\sigma}_{3}\), where \(p_{1}\), \(p_{2}\), \(p_{3}\) are the gate outputs and \(\vec{\sigma}_{1}\), \(\vec{\sigma}_{2}\), \(\vec{\sigma}_{3}\) are the outputs of experts 1, 2 and 3 respectively.
Figure 1 shows the _output mixture model_, which is the original MoE architecture, introduced by Jacobs et al. (1991). In this model the MoE prediction, \(\widehat{y}\), is a weighted sum of the outputs of the experts, \(\widehat{y}=\sum_{i=1}^{M}p_{i}\cdot\vec{o_{i}}\), where \(\vec{o_{i}}\) is the output of expert \(i\), \(p_{i}\) is the gating weight for expert \(i\) and \(M\) is the number of experts. Since there are \(M\) expert networks, the gating network has \(M\) output units. The loss \(L\) of the MoE is then \(L=l(d,\widehat{y})\), where \(d\) is the desired output and \(l\) is a loss function. Since the output is a sum of proportions of the outputs of the experts, the experts are tightly coupled. The _output mixture model_ could seem to be not truly realizing conditional computation. In practice, however, the probabilities for some experts are small enough to be neglected. Those expert outputs need not be computed and so indeed does enable conditional computation. MoE models are of particular interest because of their: 1. faster training due to conditional weight updates and faster inference due to conditional computation during feed forward (Shazeer et al., 2017), 2. transferability of sub-tasks learnt by experts to other tasks (Mihai and Lascarides, 2017). This makes them especially attractive to continual learning (Veinat et al., 2021; Hihn and Braun, 2022), 3. parallelizable expert training (Rajbhandari et al., 2022), 4. ability to solve multi-modal problems with a combination of heterogeneous experts (Kaiser et al., 2017). 5. ability to solve multi-task problems with multi-gate MoE architectures (Ma et al., 2018). The current literature on MoE, however, has concentrated on the performance of the overall model and not on what each expert learns. Our first contribution is our finding and clear presentation of two crucial problems in training MoE models: (1) that original MoE training methods lead to inequitable and unintuitive task decompositions that have both poor error and loss; and (2) how the tasks are distributed among the experts is relevant to both their performance and scalability. Our second contribution is a novel MoE gating architecture, we call _attentive gating architecture_. In current MoE, the expert distribution by the gate for a sample does not depend on the computations of the experts on that sample. This is to allow conditional computation, however, it seems unreasonable for the gate to learn the task decomposition by itself. Both the expert and gate learn sample classification and expert distribution, respectivley, based on the same input distribution. It then seems intuitively reasonable to not duplicate this learning. The attentive gating architecture computes the gate's expert distribution for a given sample, as the attention score, computed with the gate and expert computations for the given sample. The proposed method is analogous to computing the self-attention score, proposed by Bahdanau et al. (2015), of the gate and expert outputs. This is effectively asking the question, **Which experts should the gate attend to for a given sample?** Our experiments show that the attentive gating approach results in lower entropy of the task decomposition without compromising performance. However, since the task decomposition depends on expert computations there is no conditional computation during feed forward when training. 
We show that we can still provide conditional computation during inference by distilling the model, trained with attentive gating, to the original MoE architecture with no loss in performance. MoE trains both experts and the gate 'end-to-end' on the overall loss of the model. The training does not provide any incentive for equitable use of experts, that is, a more balanced and intuitive distribution of samples across experts. We observed that this results in some experts being starved of samples during training. The starved experts, which are allocated no samples or very few samples during training, are effectively not used for inference. An extreme version of this is when the gate selects the same expert for all the samples. Kirsch et al. (2018) refer to this as _module collapse_. When module collapse occurs, the MoE output does not depend on the gate. This is equivalent to using a single model. Our third and last contribution addresses this problem with a data-driven constraint, \(L_{s}\), added as a regularization term to the loss. \(L_{s}\) routes the samples that are similar, determined by a similarity measure, to the same expert and those that are not to different experts. In our experiments we have used the Euclidean distance as the dissimilarity measure (it is a dissimilarity measure because a larger distance indicates dissimilarity). The method could use other (dis)similarity measures; we have not tested any other measures. Our paper is organised as follows: Section 2 discusses the related work; Section 3 defines the information theoretic performance metrics we use to analyse the performance of the different MoE models and their training methods that we use in this paper; Section 4 presents the results of our preliminary experiments to analyse how a task is distributed by the gate among the experts, and the findings in this section are our first contribution; Section 5 introduces our second contribution, a novel _attentive gating MoE architecture_; Section 6 introduces our third contribution, a novel data-driven soft constraint regularization, \(L_{s}\); Section 7 details our experiments with the novel attentive gating architecture and \(L_{s}\) regularization and presents their results; we finally conclude with Section 8. Our repository1 has the code to reproduce all the experiments and results reported in this paper. ## 2 Related Work **Expert specialization in MoE:** Much of the MoE research so far has concentrated on the performance of the MoE model and not on how the task is decomposed between the experts. Recently there has been interest in improving expert specialization through improved task decomposition, as it improves performance and conditional computation (Shazeer et al., 2017). In Mittal et al. (2022) the authors performed experiments similar to ours to compare the specialization of experts trained 'end-to-end' with those trained with a good task decomposition. They arrived at the same conclusion that the original MoE training methods indeed lead to poor expert specialization and that a good task decomposition results in better expert specialization and better performance. Our work, presented as our first contribution, however, pre-dates theirs as it was presented in our earlier work at a NeurIPS 2021 workshop [citation hidden for anonymity]. Mittal et al. (2022) evaluated using synthetic data while we have used real data to arrive at the same results.
**Task specific expert specialization:** There have recently been quite a few approaches to task-specific expert specialization, especially for language and vision tasks (Kudugunta et al., 2021; Riquelme et al., 2021; Lewis et al., 2021; Lepikhin et al., 2021; Fedus et al., 2022; Zhou et al., 2022). In all these approaches routing decisions to experts are based on image and text tokens. Hence, they are task-aware approaches where the experts have to be a specific architecture and can only work with one type of dataset, so they are not well suited for multi-modal learning. For example, Riquelme et al. (2021); Lepikhin et al. (2021); Fedus et al. (2022) have added sparsity to transformer architectures by using MoE in the dense network layer of transformers for vision and language. Our approach is task agnostic, and each of our experts could have a different architecture. **Expert specialization with regularization:** Since the 'end-to-end' MoE training provides no incentive for an equitable sample distribution to the experts, auxiliary losses were added as regularizations by Shazeer et al. (2017); Lewis et al. (2021). The regularization added by Lewis et al. (2021) is specific to their method of routing text tokens to the experts. Shazeer et al. (2017) proposed a more generic \(L_{importance}\) regularization for equitable task distribution. However, as discussed in Section 6, their method simply uses all available experts even when it is not required for the task. Our regularization, \(L_{s}\), discussed in Section 6, is a data-driven approach to equitable task distribution that is a more scalable solution. Their work is the most relevant to ours. **Attentive gating:** To the best of our knowledge our attentive gate architecture is novel. The only other related work we found was by Liu et al. (2020), who have used the attention mechanism in the gate to focus the gate on different aspects of the input and target images. The gate then learns good segmentation of the input images and assigns the different segments to different experts. Their approach is similar to the original MoE where the gate independently decides the tasks to be assigned to the experts by attending to the data. Our approach learns the gate's expert distribution by attending to the experts. ## 3 Information Theoretic Performance Metrics Accuracy or error is not sufficient to measure the performance of an MoE as we are also interested in measuring gating sparsity and expert usage. Here we define the information theoretic performance metrics we use to analyse the training of the MoE. These metrics measure how well the gate distributes the samples to the experts and how well it utilizes the experts. ### Measuring Gating Sparsity Conditional computation is an important feature of the MoE. Sparser gating probabilities are desirable because they result in better conditional computation. The sparsity per sample can be measured by the average per sample expert selection entropy, \(H_{s}\), in Equation 1, over a batch. \(N\) is the number of samples in a batch and \(\vec{p}=(p_{1},p_{2},\ldots,p_{M})\) are the gate probabilities for \(M\) experts, for each sample. A low value of \(H_{s}\) indicates sparse gating probabilities and hence better conditional computation. \[H_{s}=\frac{1}{N}\sum_{i=1}^{N}H(\vec{p}_{i}) \tag{1}\] ### Measuring Expert Utilization Ideally we want the sub-tasks of the task to be distributed equitably between the experts to avoid module collapse.
This will require the average gate probabilities for each of the experts, over all the samples, to be roughly equal. The distribution of the experts over the samples can be measured by the entropy of the average gate probabilities over all samples in a batch, \(H_{u}\), as in Equation 2. A high \(H_{u}\) indicates a more equitable gate probability distribution and hence better utilization of experts. A low \(H_{u}\) indicates unequal utilization of experts. For example, in the case of module collapse, when all samples get sent to the same expert, that expert's average gate probability is \(1.0\). The probabilities of all the other experts will be zero. This will result in \(H_{u}=0\). \[H_{u}=H\left(\frac{1}{N}\sum_{i=1}^{N}\vec{p}_{i}\,\right) \tag{2}\] ### Measuring model output dependency on expert selection We introduce a new metric to measure the dependency of the class distribution \(Y\) on the gate's expert selection distribution \(E\). An equitable gate task decomposition among experts results in a high mutual dependence between \(Y\) and \(E\). In the case of module collapse, one expert does all the work and the gate does not contribute to solving the task. There is then no dependency between \(E\) and \(Y\). In the case where each expert is assigned just one sub-task, the gate does all the work. There is then a higher dependency between \(E\) and \(Y\). Hence, the more equitable the task distribution between the experts the higher the dependence between \(E\) and \(Y\). The mutual dependency between \(E\) and \(Y\) can be measured by computing their mutual information, I(E;Y), as shown in Equation 3, where \(H(E)\) is the marginal entropy of \(E\), \(H(Y)\) is the marginal entropy of \(Y\) and \(H(E,Y)\) is the joint entropy of \(E,Y\). Higher \(I(E;Y)\) values indicate better dependence between \(E\) and \(Y\) and subsequently more equitable task decomposition. \[I(E;Y)\equiv H(E)+H(Y)-H(E,Y) \tag{3}\] Since we do not have the true marginal and joint probabilities of \(E\) and \(Y\), we compute them empirically as detailed in Appendix A. The sample sizes are large enough that we do not introduce significant estimation bias. ## 4 Better Performance with Better Expert Specialization We will now look more closely at what the experts in an MoE model learn and show that the original MoE training approaches cannot find intuitive task decompositions. This results in poor expert specialization with a few experts learning most of the task. We show that intuitive and balanced task decompositions are important because they lead to better expert specialization which in turn improves the performance of MoE models. ### Does original MoE training find intuitive task decompositions? We ran preliminary experiments to analyse how the gate, in the original MoE, distributes a classification task among the experts. Our experiments showed that the original MoE model does not find a balanced and intuitive task decomposition and hence expert usage, even for the simple MNIST (LeCun and Cortes, 2010) learning problem. We trained an MoE model, that has \(5\) experts and \(1\) gate, on \(10,000\) training samples of the MNIST data containing all the \(10\) digits. We chose \(5\) experts as the MNIST dataset has \(10\) classes (sub-tasks). This allows for an intuitive distribution of \(2\) classes per expert. Each expert and the gate is a simple convolutional network with a single convolutional layer and 2 hidden layers with ReLU activation. 
For details of the parameters of the model please refer to Appendix B.1. We trained with the Adam optimizer. The trained model was used to classify \(2,000\) samples of the MNIST test data. Figure 2(a) is an expert selection table. Each cell of the table is the count of samples of the digit that were routed to the expert corresponding to the cell. Figure 2(a) shows that only \(3\) of the \(5\) experts are used. Since the MNIST dataset contains only digits, we thought we should try with a dataset that contains clearly very different sets of images, with the intuition that samples from different datasets would be routed to different experts. Figure 2: Expert selection table of the original MoE model for the (a) MNIST and (b) combined FMNIST and MNIST datasets. We can see that not all experts are used. The task decomposition is not intuitive: in the case of the combined FMNIST and MNIST dataset, expert 2 is used for both FMNIST and MNIST classes. We created such a dataset by combining the FashionMNIST (FMNIST) [11] and MNIST datasets. We chose the first \(6\) classes, \([t-shirt,\,\,trouser,\,\,pullover,\,\,dress,\,\,coat,\,\,sandal]\), from FMNIST and the last \(6\) classes, \([4,\,5,\,6,\,7,\,8,\,9]\), from MNIST and combined the data to create one dataset of \(12\) classes. The model for the combined FMNIST and MNIST dataset has \(6\) experts as there are \(12\) classes, with the expert and gate architectures the same as for the MNIST model. For details of the parameters of the model refer to Appendix B.2. Figure 2(b) shows that the gate surprisingly uses expert \(2\) to learn a mix of classes from FMNIST and MNIST. Hence, we see that an intuitive task decomposition in MoE is not guaranteed even in a seemingly trivial case where the images of FMNIST and MNIST are clearly quite different from each other. In Section 4.2 we also see that such decompositions not only use experts inequitably but also result in poor performance. Let us now analyse the possible reasons for such unintuitive task decompositions. ### Do intuitive task decompositions have better performance? The simplest method to train an MoE is to train the gate and experts at the same time, 'end-to-end', by gradient descent. During training the gating probabilities, for each sample, determine which experts get trained on that sample. That is, gating interacts with training and in effect experts are trained only when they are chosen by the gating network. Existing MoE architectures trained 'end-to-end' do not decompose the task intuitively among the experts, as we saw in Section 4.1. The question we are trying to answer is: does the 'end-to-end' MoE training find a gating decomposition that performs well for the task, even though it seems surprisingly counter-intuitive? Or, is the search for the gating decomposition simply bad? We designed an experiment, summarized in Figure 4, to answer these questions. What we need for this is: (1) a gate trained with un-trained experts, using the original MoE model, resulting in an unintuitive task decomposition as in Section 4.1; and (2) a gate trained with experts pre-trained with custom intuitively plausible partitions of the dataset. We then use each of these two pre-trained gates to train a new set of experts with the same decomposition of the task as the experts the gates were trained with. This enables us to check the performance of the gate task decompositions for an unintuitive partition vs an intuitive partition.
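Both the Section 3 metrics and the expert selection tables used to compare these decompositions (Figures 2 and 3) can be computed directly from the gate outputs. The following is a minimal NumPy sketch, not the code from our repository: it assumes gate probabilities of shape \((N,M)\), uses hard (argmax) routing to estimate the empirical expert distribution (the exact estimator we use is described in Appendix A), and uses base-2 logarithms.

```python
import numpy as np

def moe_diagnostics(gate_probs, labels, n_classes):
    """gate_probs: (N, M) per-sample gate probabilities; labels: (N,) integer class ids.
    Returns H_s (Eq. 1), H_u (Eq. 2), I(E; Y) (Eq. 3) and an expert selection table
    (class x expert counts, as in Figures 2 and 3)."""
    eps = 1e-12
    # Eq. 1: average per-sample entropy of the gate distribution (gating sparsity).
    H_s = float(np.mean(-np.sum(gate_probs * np.log2(gate_probs + eps), axis=1)))
    # Eq. 2: entropy of the average gate distribution (expert utilization).
    mean_p = gate_probs.mean(axis=0)
    H_u = float(-np.sum(mean_p * np.log2(mean_p + eps)))
    # Empirical joint distribution of (selected expert E, class Y) under argmax routing.
    chosen = gate_probs.argmax(axis=1)
    counts = np.zeros((gate_probs.shape[1], n_classes))
    for e, y in zip(chosen, labels):
        counts[e, y] += 1.0
    selection_table = counts.T.astype(int)
    joint = counts / counts.sum()
    p_e, p_y = joint.sum(axis=1), joint.sum(axis=0)
    H_e = -np.sum(p_e * np.log2(p_e + eps))
    H_y = -np.sum(p_y * np.log2(p_y + eps))
    H_ey = -np.sum(joint * np.log2(joint + eps))
    I_ey = float(H_e + H_y - H_ey)   # Eq. 3
    return H_s, H_u, I_ey, selection_table
```

A low \(H_{s}\) together with a high \(H_{u}\) indicates sparse per-sample routing with balanced overall expert usage, which is the regime we are aiming for.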
Firstly, let us define a more intuitive task decomposition for the MNIST dataset and determine if the gate can learn this decomposition. We split the \(10\) digits into \(5\) sets of \(5\) pairs of digits, such as \(\{[0,7],[1,9],[2,4],[3,8],[5,6]\}\). We used \(5\) experts, each of which was trained with only data samples of one of the \(5\) pairs of digits. So the pairs of digits are distributed equally among the experts. Figure 4: Experiment designed to analyse if intuitive task decompositions have better performance. Refer to Table 1(b) for the results of the experiment. Figure 3: Expert selection table of models trained with experts pre-trained on custom splits of the classes: (a) MNIST: {[0,7], [1,9], [2,4], [3,8], [5,6]} and (b) combined FMNIST and MNIST: {[t-shirt, Trouser], [Pullover, Dress], [Coat, Sandal], [4,5], [6,7], [8,9]}. We then fixed the parameters of these pre-trained experts and trained the gate with them. From the gate expert selection table in Figure 3(a), we see that the gate can indeed learn to select the correct expert for each digit and hence learn an intuitive task decomposition. Figure 3(b) shows the gate expert selection table for one split of the combined FMNIST and MNIST dataset, trained in the same way as with the MNIST dataset. We again see that the gate can learn to select the correct expert for each class in the combined dataset. We then fixed the parameters of the pre-trained gate and trained the MoE model with the pre-trained gate and new experts. Both the pre-trained gates decomposed the tasks exactly as in Figures 2(a) and 3(a) respectively for the MNIST dataset and similarly as in Figures 2(b) and 3(b) for the combined FMNIST and MNIST dataset. Hence we see that a gate can learn an intuitive task decomposition. Let us now check the training loss and test error of the models with intuitive and unintuitive task decompositions. Tables 1(a) and 1(b) show the average training loss and average test error, both averaged over \(5\) runs of the experiment, for the MNIST and combined FMNIST and MNIST datasets. We see that the model trained with pre-trained experts has a lower training loss than the model trained with un-trained experts and has a lower error rate for both datasets. The experiment shows that intuitive task decompositions do exist with much better performance. The gate, however, does not learn them when both experts and the gate are jointly trained 'end-to-end'. The gate initially finds a poorly performing and unintuitive task decomposition and reinforces that throughout the training. If we have prior knowledge of a good task decomposition then it would be best to pre-train the experts on these sub-tasks and then train the gate. Typically we do not know a plausible task decomposition and it is what we wish to find, but 'end-to-end' MoE training fails to do so, even in this simple case. ## 5 Attentive Gating MoE Architecture In current MoE the gate learns the expert distribution from the input distribution and the expert learns the classification of the samples based on the input and expert distribution by 'end-to-end' training. We suggest a more intuitively plausible design, shown in Figure 5, that uses the expert's computations in computing the gating distribution. During MoE training, the gate output is the current query or token of interest and the expert outputs are the sequence of tokens that are attended to.
The gate's hidden output, \(G_{1\times h}\) (subscripts are the size of the matrix), is used to compute the _Query_, \(Q_{1\times h}\), as in Equation 4, and the expert hidden outputs, \(E_{i_{1\times h}}\), are used to compute the _Keys_, \(K_{i_{1\times h}}\), as in Equation 5, where \(E_{i}\) is the \(i^{th}\) expert of \(M\) experts in the model. \(h\) is the size of the hidden layers of the experts and the gate. \(W_{q_{h\times h}}\) and \(W_{k_{h\times h}}\) are the query and key weight matrices. The attention score \(A(Q,K)\) (we have dropped the subscripts of \(Q\) and \(K\) here for better readability) is then computed as in Equation 6:

\[Q_{1\times h}=G_{1\times h}\cdot W_{q_{h\times h}} \tag{4}\]
\[K_{i_{1\times h}}=E_{i_{1\times h}}\cdot W_{k_{h\times h}} \tag{5}\]
\[A(Q_{1\times h},K_{M\times h})=softmax\left(\frac{Q_{1\times h}\cdot K_{M\times h}^{T}}{\sqrt{h}}\right) \tag{6}\]

The computed attention \(A(Q,K)\) can then be used to weight the outputs of the experts. Hence, \(A(Q,K)\) gives the gate probabilities of selecting the corresponding expert, used to compute the MoE output and loss. Our experiments in Section 7 show that with the attentive gate the MoE model performs better than the original MoE method but has a similar problem of inequitable expert utilization. Hence, there is a need for a soft constraint that will ensure equitable expert utilization. We discuss the regularization we used to tackle this problem in Section 6.

Table 1: Comparison of average training loss and test error for MoE models: (a) with inequitable task decompositions; and (b) with equitable task decompositions, from the experiment detailed in Figure 4, for the MNIST and combined FMNIST and MNIST datasets.

Figure 5: Attentive gating MoE architecture.

### Distilling attentive gating MoE model for conditional computation

In the attentive gate architecture, gating is dependent on the expert computations during the feed-forward pass. This does not allow for conditional computation during inference. To address this we distill the MoE model, trained in Section 5, into a regular MoE _output mixture model_. We fix the parameters of the experts learnt using the attentive gate, initialise the gate of the new _output mixture model_ to the trained gate parameters and proceed to train the new MoE model and gate.

## 6 Gating with Sample Similarity Regularization

We need a soft constraint to ensure an equitable sample distribution to the experts. Shazeer et al. (2017) proposed the \(L_{importance}\) loss regularization as a soft constraint to assign equal importance to all experts for a batch. \(L_{importance}\) measures the batch-wise coefficient of variation (CV) of the gate output probabilities to avoid module collapse, as in Equation 7. \(\vec{I}=\sum_{x\in X}\vec{p}_{x}\) is an importance factor that measures the relative importance of the expert to the batch with \(X\) samples. \(\vec{p}_{x}\) is the gate's expert distribution for sample \(x\in X\). \(w_{importance}\) is a tunable hyperparameter. \(CV(\vec{I})=\sigma(\vec{I})/\mu(\vec{I})\), where \(\sigma\) is the standard deviation and \(\mu\) is the mean.

\[L_{importance}\left(X\right)=w_{importance}\cdot CV(\vec{I}) \tag{7}\]

The \(L_{importance}\) regularization, however, just aims at using all the available experts equally and not in a way suited to the task. This results in poor scalability, as we show in Section 7.
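For reference, the batch-wise constraint in Equation 7 amounts to only a few lines; the sketch below is a plain PyTorch reading of the formula (with \(w_{importance}\) passed in explicitly), not the authors' implementation.

```python
import torch

def l_importance(gate_probs: torch.Tensor, w_importance: float = 0.1) -> torch.Tensor:
    """Eq. 7: coefficient of variation of the per-expert importance over a batch.
    gate_probs: [N, M] gate distributions p(e|x) for the N samples in the batch."""
    importance = gate_probs.sum(dim=0)           # I = sum_x p_x, one value per expert
    cv = importance.std() / importance.mean()    # CV(I) = sigma(I) / mu(I)
    return w_importance * cv
```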
It seems intuitive and natural to add a data-driven soft constraint based on the properties of the samples in the dataset. This would allow incorporating domain knowledge into the training. Samples belonging to the same task tend to be similar. The hypothesis here is that routing similar samples to the same expert and dissimilar samples to different experts will ensure a cleaner and more equitable task decomposition. With this in mind we propose a data-driven soft constraint by adding a regularization factor, \(L_{s}\), based on some similarity measure of the samples.

\[L_{s}(X)=\frac{1}{(N^{2}-N)}\Bigl[\sum_{x,x^{\prime}}S(x,x^{\prime})-D(x,x^{\prime})\Bigr] \tag{8}\]
\[S(x,x^{\prime})=\frac{1}{M}\sum_{e}\beta_{s}\cdot p(e|x)\cdot p(e|x^{\prime})\cdot\|x-x^{\prime}\|^{2} \tag{9}\]
\[D(x,x^{\prime})=\frac{1}{(M^{2}-M)}\sum_{e\neq e^{\prime}}\beta_{d}\cdot p(e|x)\cdot p(e^{\prime}|x^{\prime})\cdot\|x-x^{\prime}\|^{2} \tag{10}\]

We have used the squared Euclidean distance measure \(\|x-x^{\prime}\|^{2}\) for pairs of samples \(x,x^{\prime}\in X\), where \(X\) is a batch of size \(N\). The purpose of the regularization is to allow the gate to learn expert selection probabilities, \(p(e|x)\), for each sample such that it minimizes the term \(S(x,x^{\prime})\), with similar samples routed to the same expert, and maximises the term \(D(x,x^{\prime})\), with dissimilar samples sent to different experts as in Equation 10, where \(M\) is the number of experts in the model, \(e,e^{\prime}\in E_{M}\) are the experts assigned to samples \(x,x^{\prime}\) respectively, and \(\beta_{s}\), \(\beta_{d}\) are tunable hyperparameters. Our experiments detailed in Section 7 show that \(L_{s}\) regularization performs as well as or better than \(L_{importance}\) regularization, while using fewer experts.

## 7 Experiments

We evaluate our methods on the small MNIST and the much larger CIFAR-100 (Krizhevsky, 2009) datasets. For the MNIST dataset we used an MoE model with \(5\) experts and \(1\) gate. For the CIFAR-100 dataset we used \(20\) experts and \(1\) gate. Each expert for the MNIST dataset has: \(1\) convolutional layer; \(2\) hidden layers with \(ReLU\) activation; and one output layer. The gate has the same architecture as the expert but different parameters. For details of the parameters of the model refer to Appendix B.1. Each expert for the CIFAR-100 dataset has: \(4\) convolutional layers with batch normalization and max pooling; \(2\) hidden layers with \(ReLU\) activation; and one output layer. For details of the parameters of the model refer to Appendix B.3. All models were trained with the Adam optimizer with a \(0.001\) learning rate. We used \(20\) epochs for the MNIST dataset and \(40\) epochs for the CIFAR-100 dataset. Each experiment was run \(10\) times for the MNIST dataset and \(5\) times for the CIFAR-100 dataset. Our baseline for the MoE architecture is the original MoE architecture and training method, the output mixture model. Our baseline for MoE regularization is the \(L_{importance}\) (Shazeer et al., 2017) regularization, which is a generic regularization. Other MoE regularizations in the literature are specific to certain architectures and training methods.
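Before turning to the training protocol, note that the similarity regularization in Equations 8-10 can also be written directly from the formulas; the following is a minimal PyTorch sketch (using flattened samples and excluding the diagonal pairs, which is our reading of the \(N^{2}-N\) normalisation), not the authors' implementation.

```python
import torch

def l_similarity(x: torch.Tensor, gate_probs: torch.Tensor,
                 beta_s: float = 1.0, beta_d: float = 1.0) -> torch.Tensor:
    """Eqs. 8-10: pull similar samples to the same expert, push dissimilar ones apart.
    x: [N, D] flattened samples, gate_probs: [N, M] gate distributions p(e|x)."""
    N, M = gate_probs.shape
    dist = torch.cdist(x, x) ** 2                      # ||x - x'||^2 for every pair
    same = gate_probs @ gate_probs.T                   # sum_e p(e|x) * p(e|x')
    diff = 1.0 - same                                  # sum_{e != e'} p(e|x) * p(e'|x')
    S = (beta_s / M) * same * dist                     # Eq. 9, per pair
    D = (beta_d / (M * M - M)) * diff * dist           # Eq. 10, per pair
    off_diag = ~torch.eye(N, dtype=torch.bool, device=x.device)
    return ((S - D) * off_diag).sum() / (N * N - N)    # Eq. 8
```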
We trained the models for both datasets as follows: (1) single model which has the same architecture as one expert; (2) vanilla or original MoE _output mixture model_ with no regularizations; (3) vanilla MoE with \(L_{importance}\) regularization with different values of \(w_{importance}\); (4) vanilla MoE with \(L_{s}\) regularization with different combinations of values of \(\beta_{s}\) and \(\beta_{d}\); (5) with attentive gating MoE architecture; (6) with attentive gating MoE and \(L_{importance}\) regularization for different values of \(w_{importance}\); (7) with attentive gating and \(L_{s}\) regularization for different combinations of values of \(\beta_{s}\) and \(\beta_{d}\); (8) model distilled from attentive gating MoE with \(L_{importance}\); and (9) model distilled from attentive gating MoE with \(L_{s}\). The values of all the hyperparameters used in the experiments are listed in Appendix C. The experiment results for the MNIST dataset are in Table 2. The experiment results for the CIFAR-100 dataset are in Table 3. The results for each method of training, in the tables, are the performance metrics computed on the test set, with the model that has the minimum training error among the multiple runs for each method. The standard deviation of the test error over the runs is also reported. Tables 2 and 3 show that the attentive gating model performs better than the original MoE. Combined training with the attentive gate and \(L_{importance}\) or \(L_{s}\) regularizations improves expert usage, as indicated by higher \(H_{u}\) values, and improves gate sparsity, as indicated by lower \(H_{s}\) values. We also see that the \(L_{s}\) regularization has a lower error rate than \(L_{importance}\). \(L_{s}\) does as well as or better than \(L_{importance}\) in terms of expert usage with higher values of \(H_{u}\). \(L_{s}\) regularization also has better conditional inference due to lower \(H_{s}\). We also evaluated with the FMNIST dataset. The details and results for the FMNIST dataset are in Appendices B.4 and D. Another discernible improvement of \(L_{s}\) over \(L_{importance}\) is in the number of experts required for the task. \(L_{importance}\) is designed to use all the available experts equitably whether this is required for the task or not. We ran experiments increasing the number of experts from \(5\) to \(15\) for the MNIST dataset, which is more than the number of classes in the MNIST dataset. Figure 6 shows that \(L_{s}\) regularization results in more optimal use of experts while \(L_{importance}\) uses all the experts. This implies models with \(L_{s}\) could have fewer parameters than with \(L_{importance}\). Results with \(10\) experts are in Appendix E.
| **Experiment** | **Error** | **I(E; Y)** | \(\mathbf{H_{s}}\) | \(\mathbf{H_{u}}\) |
|---|---|---|---|---|
| single model | 0.096 ± 0.071 | NA | NA | NA |
| vanilla MoE | 0.038 ± 0.009 | 2.022 | 0.092 | 2.172 |
| vanilla MoE with \(L_{importance}\) | 0.032 ± 0.008 | 2.262 | 0.061 | 2.32 |
| **vanilla MoE with \(L_{s}\)** | **0.029 ± 0.009** | 2.244 | 0.051 | 2.246 |
| attentive gate MoE | 0.033 ± 0.006 | 1.797 | 0.071 | 2.055 |
| attentive gate MoE with \(L_{importance}\) | 0.035 ± 0.005 | 2.26 | 0.055 | 2.266 |
| **attentive gate MoE with \(L_{s}\)** | **0.032 ± 0.006** | 2.275 | 0.039 | 2.321 |
| distilled from attentive gate MoE with \(L_{importance}\) | 0.030 ± 0.007 | 2.301 | 0.036 | 2.32 |
| **distilled from attentive gate MoE with \(L_{s}\)** | **0.028 ± 0.007** | 2.191 | 0.056 | 2.319 |

Table 2: Performance on the test set of the model with the minimum training error for the MNIST dataset. Best results in each category of MoE training approaches are highlighted.

| **Experiment** | **Error** | **I(E; Y)** | \(\mathbf{H_{s}}\) | \(\mathbf{H_{u}}\) |
|---|---|---|---|---|
| single model | 0.575 ± 0.006 | NA | NA | NA |
| vanilla MoE | 0.460 ± 0.010 | 0.967 | 0.306 | 1.023 |
| vanilla MoE with \(L_{importance}\) | 0.483 ± 0.007 | 4.177 | 1.135 | 3.981 |
| **vanilla MoE with \(L_{s}\)** | **0.457 ± 0.012** | 1.424 | 0.381 | 1.279 |
| attentive gate MoE | 0.450 ± 0.006 | 1.792 | 0.463 | 2.178 |
| **attentive gate MoE with \(L_{importance}\)** | **0.447 ± 0.005** | 3.684 | 1.036 | 4.141 |
| attentive gate MoE with \(L_{s}\) | 0.451 ± 0.016 | 3.117 | 0.770 | 3.357 |
| distilled from attentive gate MoE with \(L_{importance}\) | 0.531 ± 0.131 | 3.179 | 1.75 | 3.843 |
| **distilled from attentive gate MoE with \(L_{s}\)** | **0.482 ± 0.065** | 1.605 | 0.718 | 2.541 |

Table 3: Performance on the test set of the model with the minimum training error for the CIFAR-100 dataset. Best results in each category of MoE training approaches are highlighted.

Figure 6: Expert selection table of the MoE model trained with \(L_{s}\) and \(L_{importance}\) regularizations with \(15\) experts.

## 8 Conclusion

In this paper we have clearly shown that intuitive task decompositions by the gate perform better. We introduced a novel MoE model architecture and training method using attentive gating. This method of training computes the gate's expert distribution on a sample from the computations of the experts for that sample. Finally, we introduced a novel data-driven sample similarity regularization, \(L_{s}\), that distributes the samples between the experts based on sample similarity. Our experiments show that training with attentive gating and \(L_{s}\) regularization improves performance, expert specialization and gate sparsity.
2309.09346
Speech-Gesture GAN: Gesture Generation for Robots and Embodied Agents
Embodied agents, in the form of virtual agents or social robots, are rapidly becoming more widespread. In human-human interactions, humans use nonverbal behaviours to convey their attitudes, feelings, and intentions. Therefore, this capability is also required for embodied agents in order to enhance the quality and effectiveness of their interactions with humans. In this paper, we propose a novel framework that can generate sequences of joint angles from the speech text and speech audio utterances. Based on a conditional Generative Adversarial Network (GAN), our proposed neural network model learns the relationships between the co-speech gestures and both semantic and acoustic features from the speech input. In order to train our neural network model, we employ a public dataset containing co-speech gestures with corresponding speech audio utterances, which were captured from a single male native English speaker. The results from both objective and subjective evaluations demonstrate the efficacy of our gesture-generation framework for Robots and Embodied Agents.
Carson Yu Liu, Gelareh Mohammadi, Yang Song, Wafa Johal
2023-09-17T18:46:25Z
http://arxiv.org/abs/2309.09346v1
# Speech-Gesture GAN: Gesture Generation for Robots and Embodied Agents ###### Abstract Embodied agents, in the form of virtual agents or social robots, are rapidly becoming more widespread. In human-human interactions, humans use nonverbal behaviours to convey their attitudes, feelings, and intentions. Therefore, this capability is also required for embodied agents in order to enhance the quality and effectiveness of their interactions with humans. In this paper, we propose a novel framework that can generate sequences of joint angles from the speech text and speech audio utterances. Based on a conditional Generative Adversarial Network (GAN), our proposed neural network model learns the relationships between the co-speech gestures and both semantic and acoustic features from the speech input. In order to train our neural network model, we employ a public dataset containing co-speech gestures with corresponding speech audio utterances, which were captured from a single male native English speaker. The results from both objective and subjective evaluations demonstrate the efficacy of our gesture-generation framework for Robots and Embodied Agents.

## I Introduction

As a result of the ongoing improvement of humanoid robots and computer graphics, conversational embodied agents, including social robots and virtual agents, have emerged as effective instruments for interaction. The ESI (Evaluation of Social Interaction) [1], a human evaluation instrument, identifies important social skills such as approaching, speaking, turn-taking, gazing and gesturing. Therefore, in human-agent interactions, social agents also need social capabilities similar to those of humans. First, human gestures are a form of nonverbal cue used together with utterances in interpersonal interaction. Secondly, researchers revealed that in certain cultures, speech and gestures are tightly linked in time [2]. Therefore, it is crucial to create gestures and integrate them tightly with speech when designing embodied agents. In fact, the danger for embodied agents is a mismatch between verbal and nonverbal information, which may cause considerable unpleasantness for the communicators [3]. Thirdly, gestures may be used to emphasise words, demonstrate purpose, depict things more vividly, and aid understanding of a conversation [4]. In human-robot interaction, it has been discovered that common language gestures strengthen the robot's attraction and prospective contact motivation [5]. However, considering the diversity of embodied agents and the physical limits of robots, it does not seem feasible to manually create gestures for each possible speech. Linguists [6] suggested a categorisation system with four classes: 1) Iconic (expressing an object's features or behaviours); 2) Deictic, or pointing (indicating an object's position); 3) Metaphoric (representing abstract concepts with a concrete form); 4) Beat (keeping with the rhythm of speech). Only the Beat gestures depend on the audio signal (speech acoustics), while the other types of gestures rely on the speech context (speech semantics). Therefore, gesture generation frameworks with a single modal input can lack some types of gestures. Motivated by the accomplishments of GANs (Generative Adversarial Networks) [7] in generative models, we propose a GAN-structured neural network model to generate gestures from speech. We trained our model on a gesture dataset with English speech.
The subjective evaluation demonstrates the proposed model is effective, showing a good performance when compared with the ground truth. Also, the objective evaluation results confirm our model is highly effective when compared with other state-of-the-art gesture generation models. The contribution of our work is two-fold: 1) We propose a novel GAN-based generative framework that can use multimodal inputs to extract semantic and acoustic features as conditional information for adversarial training and generate multiple gestures from the same speech input using different input noises. 2) We provide a comprehensive evaluation of the full model, from objective metrics to subjective ratings, with ablation studies of the outcomes of various designs and crucial modelling options. The rest of this paper is organised as follows: We first introduce the background and related work in Section II. Then, Section III describes our proposed speech-based gesture generation framework, including feature extraction, model architecture and its implementation. Next, Sections IV and V explain the evaluation metrics used for our proposed model and the quantitative results, with validation through an additional user study. Finally, we conclude our work with a brief discussion. ## II Related Work Several gesture generation approaches, ranging from rule-based to innovative data-driven, have been created in recent years. Initially, most approaches were rule-based; however, rule-based approaches result in a repetitious and monotonous experience in long-term human-agent interaction. Recent innovative approaches are data-driven, enabling more variety in gesture production but making it more difficult to adapt to the physical limits of the embodied agents. ### _Rule-based gesture generation_ The primary concept behind rule-based generation approaches is to correlate speech syllables and words with gestures as a straightforward way to produce gestures from speech content [8]. The rules for generating gestures in these studies were hand-defined by specialists [9, 10, 11]. One study [9] derived punctuation marks from a sentence using a dialogue sentence analysis methodology. Using image processing and clustering approaches, one unique study [12] built its own dictionary of speech gestures from internet images, although the processing of gesture production is still governed by rules. For rule-based gesture generation, the greatest drawback is that manually defining a gesture pattern for each word requires an enormous amount of time and effort. Machine learning techniques can address this repetitive and labour-intensive generation of a speech gesture dictionary. ### _Data-driven gesture generation_ Recent data-driven studies focus on learning mapping functions from speech text or speech audio, or both of them, to speech gestures. #### Ii-B1 Gesture Generation with Speech Text Yoon et al. [13] presented a seq2seq-based autoencoder model which employed speech text as input to generate 2D co-speech gestures; they also implemented their model on the NAO robot. Another work [14] also extracted speech text features as input for their probabilistic model. However, both of them observed an unusual mapping issue in which the synthesised audio and produced gestures could not be closely synchronised. #### Ii-B2 Gesture Generation with Speech Audio Hasegawa et al.
[15] extracted MFCCs (Mel-Frequency Cepstral Coefficients) from the input audio as the speech representation; they used a bi-directional LSTM (Long Short-Term Memory) based recurrent neural network to generate co-speech gestures, which then went through a noise filter as a smoothing step. With the same speech gesture database, Kucherenko et al. [16] presented an autoencoder, which is used for representation learning to align the audio with gestures. Ferstl et al. [17] also used bi-directional LSTM regression with adversarial training to generate gestures from acoustic features (MFCCs with audio pitch); they also utilized multiple discriminators in adversarial training to improve the results from the generator. Our proposed approach varies from prior systems in that it generates co-speech gestures using both text transcription and audio utterances. #### Ii-B3 Gesture Generation with Multimodal Input Single modality systems have clear limitations; as mentioned before, the lack of either acoustic or semantic features resulting from a single modal input is currently a significant hurdle to achieving outstanding results. However, multimodality systems could address this problem. Kucherenko et al. [18] proposed the first multimodal-input autoregressive neural network model for co-speech gesture generation. Yoon et al. [19] added speaker identity as a third modal input to achieve style control. ## III Proposed Speech-based Gesture Generation Framework ### _Speech and Gesture Dataset_ Unlike previous studies that used non-English gesture datasets [20], small gesture datasets [21], datasets with low-quality gestures [13] or multi-language datasets [22], our proposed speech-based gesture generation framework is trained with the Trinity Dataset [23], which was captured from a single male actor, a native English speaker, using 20 Vicon motion-capture cameras. This gesture dataset contains 244 minutes of speech and gesture data covering a variety of topics, e.g., daily activities, hobbies and movies. First, we removed the lower-body data, because our work is aimed at co-speech gestures. Then, in order to save training time, for the upper-body data we used 4 joints from the spine, 2 joints from the neck, 3 joints from each of the left and right arms, 1 joint from each shoulder, and 1 joint from the head. In addition, the finger data was removed for two reasons: 1) poor data quality and 2) many common humanoid robots, like NAO and Pepper from SoftBank Robotics, do not have human-like fingers. Finally, we have speech audio utterances in the 44 kHz Waveform Audio (WAV) file format, speech text transcripts in the JavaScript Object Notation (JSON) file format with timestamps, and the corresponding gestures in the Biovision Hierarchy (BVH) file format. ### _Data Pre-processing and Feature Extraction_ Based on the experiments of previous work [18], we employ frame synchronization at 20 FPS (Frames Per Second) during feature extraction. The gesture data in Biovision Hierarchy format consist of Euler angles and offsets of each joint in a hierarchical structure. Unlike previous studies that adopted conversion of Euler angles to absolute positions in 3D coordinates [24], we converted Euler angles to exponential maps [25], because exponential maps are easy to convert back to Euler angles and do not introduce potential discontinuity issues. After frame conversion from 60 FPS to 20 FPS, we get 45 features for each frame of gestures.
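For concreteness, the joint-angle conversion just described can be done with standard tools; the sketch below uses SciPy's rotation utilities and simple frame skipping, with the 'ZXY' Euler order and the random placeholder array being our assumptions, not the dataset's documented settings or the authors' pipeline.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def euler_to_expmap(euler_deg, order="ZXY"):
    """Convert per-joint Euler angles (degrees) to exponential maps (rotation vectors).
    euler_deg: [num_frames, num_joints, 3] -> same shape on return."""
    frames, joints, _ = euler_deg.shape
    rot = R.from_euler(order, euler_deg.reshape(-1, 3), degrees=True)
    return rot.as_rotvec().reshape(frames, joints, 3)

def downsample(motion, src_fps=60, dst_fps=20):
    """Keep every (src_fps // dst_fps)-th frame, e.g. 60 FPS -> 20 FPS."""
    return motion[:: src_fps // dst_fps]

# e.g. 15 selected joints -> 15 * 3 = 45 features per frame after flattening
motion_60fps = np.random.randn(600, 15, 3)                # placeholder for parsed BVH angles
expmap_20fps = downsample(euler_to_expmap(motion_60fps))
features = expmap_20fps.reshape(len(expmap_20fps), -1)    # [frames, 45]
```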
As for the acoustic feature extraction, similar to other state-of-the-art work in speech-based gesture generation [15, 16, 26, 27], in order to align with the gesture features, we get feature vectors in 26 dimensions (for 26 Mel-spaced filterbanks) by calculating the MFCCs of the audio utterance waveform at the same frame rate; MFCCs are a representation of an utterance's short-term power spectrum. However, a sequence of speech audio utterances and its corresponding speech text transcript generally have different lengths. In order to address this problem, we first encoded the words with semantic information as 768-dimensional vectors using the BERT [28] pre-trained model, a state-of-the-art neural network model that uses surrounding text to assist computers in grasping the meaning of ambiguous words in a text. As for the words that do not carry semantic information, we encoded them as fixed vectors with the same dimensions as the BERT features. Then, we used the exact utterance time information of each word to upsample the text features. Therefore, the text and audio feature sequences become aligned and uniform. ### _Problem Formulation_ The problem of co-speech gesture generation from speech can be defined as a mapping function \(\textbf{F}_{Generation}\), which is shown in Equation 1 for a segment of input speech of length \(T\), where \(\textbf{s}_{a}=[s_{a}]_{t=1:T}\) are the features extracted from the speech audio utterances. Likewise, the features extracted from the speech text are \(\textbf{s}_{t}=[s_{t}]_{t=1:T}\), together with input noise **n**. The corresponding result \(\textbf{g}=[\textbf{g}_{t=1:T}]\) can be a sequence of Euler angles of the selected joints in the form \(\textbf{g}_{t}=[pitch_{t}^{i},roll_{t}^{i},yaw_{t}^{i}]_{i=1:J}\), where \(J\) is the number of selected joints. Alternatively, we define \(\textbf{g}=[\textbf{g}_{t=1:T}]\) as a sequence of 3D (three-dimensional) coordinates of the selected joints, with \(\textbf{g}_{t}=[x_{t}^{i},y_{t}^{i},z_{t}^{i}]_{i=1:J}\). The objective of our problem is to maximize the conditional probability \(p(\textbf{g}|\textbf{s})\) so that the generated gestures match well with the given speech input, where **s** is the concatenation of \(\textbf{s}_{a}\) and \(\textbf{s}_{t}\).

\[\textbf{g}=\textbf{F}_{Generation}(\textbf{s}_{a},\textbf{s}_{t},\textbf{n}) \tag{1}\]

### _Model Architecture_

Speech features extracted from audio utterances and text transcripts are used as the condition in our proposed model, which is a conditional GAN-based architecture. Figure 1 shows an overview of the architecture. In the generation step, a random noise **n** from a normal distribution is sampled with the same length as the speech features. Then, the noise, the text embeddings \(\textbf{s}_{t}\) and the MFCC values \(\textbf{s}_{a}\) are concatenated into a feature vector and fed into the generator to get the corresponding sequence of gestures. Specifically, we employed the initial pose for the previous frames to improve continuity during gesture generation. In order to improve the generator, we concurrently trained the discriminator to calculate the difference between the real distribution and the fake distribution under the speech features condition. Next, after getting the sequence of generated gestures, we concatenated the generated gestures or real gestures with the accompanying audio and semantic features and then sent them into the discriminator. The output value shows whether the input gestures were real or fake for the corresponding speech features condition.
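Before detailing the generator, here is a minimal sketch of the acoustic/semantic feature alignment described above, assuming librosa for MFCCs and pre-computed 768-dimensional word vectors with word start/end times; the hop length that yields 20 FPS and the zero vector for silent frames are our assumptions, not the authors' exact feature code.

```python
import numpy as np
import librosa

FPS = 20

def audio_features(wav_path, n_mfcc=26):
    """MFCC features at 20 FPS: one 26-dim vector per gesture frame."""
    y, sr = librosa.load(wav_path, sr=None)
    hop = sr // FPS                                   # e.g. 44100 // 20 = 2205 samples
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc, hop_length=hop)
    return mfcc.T                                     # [num_frames, 26]

def text_features(words, num_frames, dim=768):
    """Repeat each word's embedding over the frames it spans.
    words: list of (start_sec, end_sec, 768-dim vector), e.g. from BERT."""
    feats = np.zeros((num_frames, dim))               # frames with no word keep a fixed (zero) vector
    for start, end, vec in words:
        feats[int(start * FPS):int(end * FPS) + 1] = vec
    return feats
```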
### _Gesture Generator_

Our gesture generator \(G\) generates gestures using encoded semantic and acoustic features as input. The structure of the generator \(G\) is shown in Figure 2. First, we concatenate the text embedding, MFCCs and random noise into a long vector, then send it through a two-layer bi-directional GRU (Gated Recurrent Unit) with 0.2 dropout. Next, the vector passes through the following linear layer with the TanH activation function to reduce the dimensionality of the features. In order to ensure the continuity of generated gestures, we used a few frames of previously generated gestures as condition information to feed back to the FiLM (Feature-wise Linear Modulation) layer [29], as another state-of-the-art work [18] did. Finally, the output layer is a linear layer with the TanH activation function to constrain the range of results.

Fig. 1: The architecture of the proposed gesture generation model.

Fig. 2: Gesture Generator.

The layer details of the gesture generator are shown in Table I, where \(C_{in}\), \(C_{out}\) are the dimensions of the in and out channels, and \(L_{num}\) is the number of GRU layers. ### _Adversarial Scheme_ In order to optimize our gesture generator, a discriminator \(D\) is used in our adversarial scheme. Figure 3 illustrates the structure of our discriminator. First, the sequence of generated gestures from the generator, the text embeddings and the MFCCs each individually go through two linear layers: one with the Leaky ReLU activation function and the next one without an activation function. Inspired by the work [30], we take the vector of the concatenated gesture, audio and text features and then feed it into five layers of a 1D convolutional block, each of which consists of one 1D convolutional layer with Leaky ReLU and layer normalization, finally followed by an extra 1D convolutional layer. Next, the vector passes through two linear layers with Leaky ReLU for dimensionality reduction. At the end of the discriminator, using a sigmoid activation function, the result is compressed between 0 and 1. These values determine whether the input gestures are real and well-matched with the condition features. The layer details of the discriminator are shown in Table II, where \(k\), \(s\) and \(p\) are kernel size, stride and padding, respectively. ### _Training_ The losses listed below are used to train the proposed framework. The gesture generator is trained using the loss \(\mathbf{L}_{G}\) in Equation 2, while the loss \(\mathbf{L}_{D}\) in Equation 6 is used for training the discriminator.

\[\mathbf{L}_{G}=\alpha\cdot\mathbf{L}_{G}^{mse}+\beta\cdot\mathbf{L}_{G}^{continuity}+\lambda\cdot\mathbf{L}_{G}^{WGAN} \tag{2}\]

\[\mathbf{L}_{G}^{mse}=\frac{1}{n}\sum_{i=1}^{n}(\mathbf{g}_{i}-\hat{\mathbf{g}}_{i})^{2} \tag{3}\]

\[\mathbf{L}_{G}^{continuity}=\frac{1}{n}\sum_{i=1}^{n}(\mathbf{S}_{i}-\hat{\mathbf{S}}_{i})^{2} \tag{4}\]

\[\mathbf{L}_{G}^{WGAN}=-\frac{1}{n}\sum_{i=1}^{n}D(\mathbf{s}_{a},\mathbf{s}_{t},\hat{\mathbf{g}}_{i}) \tag{5}\]

\[\mathbf{L}_{D}=\frac{1}{n}\sum_{i=1}^{n}D(\mathbf{s}_{a},\mathbf{s}_{t},\hat{\mathbf{g}}_{i})-\frac{1}{n}\sum_{i=1}^{n}D(\mathbf{s}_{a},\mathbf{s}_{t},\mathbf{g}_{i}) \tag{6}\]

Here \(\mathbf{s}_{a}\), \(\mathbf{s}_{t}\) represent the speech audio and text features, respectively. Specifically, \(n\) is the total duration of the gesture sequence, and \(\mathbf{g}_{i}\) and \(\hat{\mathbf{g}}_{i}\) are the \(i\)th original gesture and \(i\)th generated gesture, respectively.
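These loss terms translate almost directly into code; the following is a minimal PyTorch reading of Equations 2-6 (with the speed \(\mathbf{S}\) taken as the frame-to-frame difference of the gesture sequence, which is our assumption), not the authors' training script.

```python
import torch

def generator_loss(D, s_a, s_t, g_real, g_fake, alpha=1.0, beta=0.6, lam=0.3):
    """L_G = alpha * MSE + beta * continuity + lambda * WGAN term (Eqs. 2-5)."""
    mse = torch.mean((g_real - g_fake) ** 2)                    # Eq. 3
    speed_real = g_real[:, 1:] - g_real[:, :-1]                 # frame-wise speed
    speed_fake = g_fake[:, 1:] - g_fake[:, :-1]
    continuity = torch.mean((speed_real - speed_fake) ** 2)     # Eq. 4
    wgan = -torch.mean(D(s_a, s_t, g_fake))                     # Eq. 5
    return alpha * mse + beta * continuity + lam * wgan

def discriminator_loss(D, s_a, s_t, g_real, g_fake):
    """Eq. 6: minimising drives D(real) up and D(fake) down (WGAN critic)."""
    return torch.mean(D(s_a, s_t, g_fake)) - torch.mean(D(s_a, s_t, g_real))
```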
Using the MSE (mean squared error) in Equation 3 and the continuity loss in Equation 4, we reduced the gap between the original gestures in the training samples and the matching generated gestures while training our gesture generator. The loss \(\mathbf{L}_{G}^{continuity}\) can be construed as the mean squared error of the speed difference between the \(i\)th original gesture speed \(\mathbf{S}_{i}\) and the \(i\)th generated gesture speed \(\hat{\mathbf{S}}_{i}\). The adversarial losses \(\mathbf{L}_{G}^{WGAN}\) in Equation 5, where \(G\) is the generator, and \(\mathbf{L}_{D}\), where \(D\) is the discriminator, come from the WGAN (Wasserstein Generative Adversarial Network) [31], an improved generative model that makes training more stable compared with the traditional GAN model. As in GAN training, \(\mathbf{L}_{G}\) and \(\mathbf{L}_{D}\) are alternately used to update the gesture generator and the discriminator. The result of the trained \(D()\) is 1 for original gestures and 0 for generated (fake) gestures.

Fig. 3: Discriminator.

We split the Trinity dataset into three parts: 84% for the training set (205 minutes), 7.4% for the validation set (18 minutes), and 8.6% for the test set (21 minutes), and every set has its own audio, text transcript, and co-speech motion files. We trained the proposed model for 100 epochs. The batch size was 64, while the learning rate was 0.0001. The optimizer for both the gesture generator and the discriminator is Adam with \(\beta_{1}=0.9\) and \(\beta_{2}=0.999\). The weights for the loss functions (\(\alpha=1\), \(\beta=0.6\), \(\lambda=0.3\)) were set experimentally. The model was trained for approximately 7 hours on a GPU (NVIDIA RTX 3070) with a CPU (Intel 12900K). For a 30-second speech input, the overall compilation time from loading the speech input to feature extraction to final motion file generation takes about 12.3 seconds in total, whether the pre-trained model is loaded on the CPU or on the GPU. ## IV Evaluation Metrics ### _Subjective Evaluation_ Our user study was delivered via an anonymous online questionnaire with video clips1. The questionnaire asked participants to rate statements from strongly disagree (1) to strongly agree (7) after watching gesture videos. We made 10 sets of videos using different speeches. Each set contains two 10 s video clips: the ground truth and the generated gestures from our proposed model. The order in which the videos appear is random, and the entire questionnaire takes about 15 minutes to complete. Our user study is supported by UNSW Research Ethics Compliance Support2. Recruited via social media, 20 native English speakers (13 male, 7 female, mean = 24.1, standard deviation = 1.8 years old) participated in our user study. Figure 5 presents the results.

Footnote 1: Sample from proposed group and sample from GT group

Footnote 2: HC No: HC220411

A two-tailed t-test was used to determine if there was a statistically significant difference in the scores of the GT and proposed groups. Although the mean rating scores of the proposed model are lower than the ground truth, especially in semantic consistency, there was no statistically significant difference for any of these three criteria. For naturalness, between the ground truth group (M = 5.41, SD = 1.52) and the proposed group (M = 5.33, SD = 1.56), \(t\) = 0.6210, \(p\) = 0.5349, and the result is not significant at \(p<\) 0.05.
For time consistency, between the ground truth group (M = 5.40, SD = 1.64) and the proposed group (M = 5.26, SD = 1.59), \(t\) = 0.9317, \(p\) = 0.3520, and the result is not significant at \(p<\) 0.05. For semantic consistency, between the ground truth group (M = 5.22, SD = 1.73) and the proposed group (M = 4.99, SD = 1.70), \(t\) = 1.48494, \(p\) = 0.1382, and the result is not significant at \(p<\) 0.05. Overall, by conventional criteria, we selected a significance threshold of \(p\) = 0.05. We observed that all \(p\) values for the different criteria are greater than 0.05, which indicates that the observed differences between the means of the proposed model and the ground truth may well be the result of chance. We have no basis in the data to infer that the population means of the proposed model and the GT group are different, because of the lack of proof of a difference. Hence, the difference is considered to be not statistically significant. These results suggest that the performance of the proposed model is similar to the ground truth. ## VI Ablation Study In this section, we conducted two ablation studies. One evaluates the difference between various input speech features, and the other focuses on various framework structures. Both of them are evaluated objectively. ### _Audio Features Experiments_ Previous data-driven methods for gesture generation tend to use MFCCs, prosodic features and Mel spectrograms as speech audio features. In order to get a better understanding of the impact of the audio feature type, we proposed five models that used different feature inputs. Detailed settings and results are shown in Table V. As in the quantitative evaluation, we trained each type of model for 100 epochs. From the results, we found the MFCCs-based model got the best result in the RMSE and Jerk metrics, and a suboptimal result in the Acceleration metric. Although the MFCCs + Prosodic-based model achieved the best performance in the Acceleration metric when compared with the ground truth, its result was only slightly better than that of the MFCCs-based model. Hence, the MFCCs-based model is the best one, being much closer to the ground truth than the other models we trained.

Fig. 4: Qualitative results.

Fig. 5: Results of the user study.

### _Framework Structures Experiments_

In this section, based on the results from the first ablation study, we proposed five framework variants, as described in Table VI, in order to get a better understanding of the proposed framework in detail by eliminating key structures of the full gesture generator. The results are presented in Table VII. Changing any structure of our proposed framework causes lower results on the RMSE metric. We note that the results are similar for no-GRU compared to the full model, and the reasons could be: 1) The full model may have been too complex for the task. Removing the GRU layer may have resulted in a simpler model that still captures the relevant information from the data. 2) The efficacy of the model may not be significantly affected if the other layers are very good at catching the necessary patterns of the data. In this instance, the lack of the GRU layer might not affect the other layers' ability to accurately reflect the data. Nevertheless, although no-GRU obtained similar results, the full model produced the best overall performance, especially in RMSE.
Removing the speech audio input caused a higher Jerk than the ground truth, while removing the speech text input resulted in the lowest Acceleration among all frameworks. ## VII Conclusion We propose a new framework that can generate sequences of joint angles from the speech text and speech audio utterances. Based on a conditional GAN network, the proposed neural network model learns the relationship between the co-speech gestures and both semantic and acoustic features from the speech input. In order to train our neural network model, we employ a dataset of co-speech gestures with corresponding speech audio utterances, which was captured from a single male native English speaker. Unlike most previous works, our model has the capability to generate continuous gestures associated with the acoustics and semantics of speech. The results from both objective and subjective evaluations demonstrate the efficacy of our gesture generation framework for robots and embodied agents. ## Acknowledgment We thank the Commonwealth for funding this work through an "Australian Government Research Training Program Scholarship".
2309.11509
Using causal inference to avoid fallouts in data-driven parametric analysis: a case study in the architecture, engineering, and construction industry
The decision-making process in real-world implementations has been affected by a growing reliance on data-driven models. We investigated the synergetic pattern between the data-driven methods, empirical domain knowledge, and first-principles simulations. We showed the potential risk of biased results when using data-driven models without causal analysis. Using a case study assessing the implication of several design solutions on the energy consumption of a building, we proved the necessity of causal analysis during the data-driven modeling process. We concluded that: (a) Data-driven models' accuracy assessment or domain knowledge screening may not rule out biased and spurious results; (b) Data-driven models' feature selection should involve careful consideration of causal relationships, especially colliders; (c) Causal analysis results can be used as an aid to first-principles simulation design and parameter checking to avoid cognitive biases. We proved the benefits of causal analysis when applied to data-driven models in building engineering.
Xia Chen, Ruiji Sun, Ueli Saluz, Stefano Schiavon, Philipp Geyer
2023-09-11T13:54:58Z
http://arxiv.org/abs/2309.11509v1
# Using causal inference to avoid fallouts in data-driven parametric analysis: a case study in the architecture, engineering, and construction industry ###### Abstract The decision-making process in real-world implementations has been affected by a growing reliance on data-driven models. We investigated the synergetic pattern between the data-driven methods, empirical domain knowledge, and first-principles simulations. We showed the potential risk of biased results when using data-driven models without causal analysis. Using a case study assessing the implication of several design solutions on the energy consumption of a building, we proved the necessity of causal analysis during the data-driven modeling process. We concluded that: (a) Data-driven models' accuracy assessment or domain knowledge screening may not rule out biased and spurious results; (b) Data-driven models' feature selection should involve careful consideration of causal relationships, especially colliders; (c) Causal analysis results can be used as an aid to first-principles simulation design and parameter checking to avoid cognitive biases. We proved the benefits of causal analysis when applied to data-driven models in building engineering.

## 1 Introduction

In recent decades, successful implementations of machine learning (ML) methods, with the momentum of growing data volume, have brought the data-driven approach into various engineering domains. Together with empirical domain knowledge analysis and first-principles simulations, ML methods have become a handy tool for both academic research and industrial application (Bertolini et al., 2021; LeCun et al., 2015; Raschka et al., 2020). Due to their end-to-end learning behavior, good generalization performance, and fast prediction response, they are favored by researchers and engineers, and are gradually being integrated as a decision-making or analysis assistance tool in the architecture, engineering, and construction (AEC) industry (Dimiduk et al., 2018; Marcher et al., 2020; Seyedzadeh et al., 2018). The advantage of ML's wide adaptability comes from its ability to directly capture hidden patterns from the data during training by minimizing the error, instead of explicitly modeling the physical process with domain knowledge context. However, a prerequisite for proper performance in their modeling process is the assumption that all input variables are independent, or even _independent and identically distributed_ (i.i.d.) (Scholkopf, 2022), by default. That is, the probability distribution of each value (variable) should have no dependence on other values. In reality, however, especially in engineering domains, a case usually requires considering different factors in an interdisciplinary manner. For instance, during the building design or construction phase, the objectives commonly involve building energy performance, environmental impact, cost, occupants' comfort, etc., simultaneously. The well-known mantra in statistics, "_Correlation does not imply causation_" (Aldrich, 1995; Pearl and Mackenzie, 2018), is not sufficiently considered in engineering scenarios (Chakraborty and Elzarka, 2019; Hegde and Rokseth, 2020) when ML methods are used. Unlike first-principles simulations, which encode causal relationships between variables in explicit physical equations, data-driven processes do not include this information. Lacking this process understanding might lead to false implementation and reliability issues for engineers and domain experts.
This false implementation situation raises the risk of biased results and spurious conclusions because ML methods rely heavily on the information carried from the distribution of observed data and large predefined sets (Scholkopf et al., 2021). In this study, we propose a synergetic framework. This framework integrates empirical domain knowledge from human experts, simulations, and data-driven methods. Our aim is to promote their combined use in general engineering analysis. We employ a real-world building engineering scenario in the design phase. In this scenario, we highlight a potential "fallout" situation that could arise in a data-driven modeling analysis, followed by the introduction of the causal analysis process. We show the need for causal dependency checks among variables during the data-driven process for two main reasons. First, fitting data through data-driven methods without considering causal dependencies carries potentially biased estimates. They result in spurious conclusions and risks in engineering scenarios. These limitations are present regardless of the type of machine learning methods, and cannot be eliminated via model accuracy improvement. Secondly, in engineering scenario analysis, the discovery of causal dependencies and the construction of a causal skeleton are practical tools. They help to cross-validate data with domain knowledge, examine whether potential cognitive biases exist in the simulation process, and aid in knowledge discovery. We believe these tools create a crucial link between data-driven methods and human reasoning in design and engineering processes. ## 2 Framework and methodologies ### Synergetic Framework between Experience, Simulation, and Data-driven Methods In engineering, the tools we use for modeling and decision-making can be classified into three main categories: empirical domain knowledge, first-principles simulation, and data-driven models:

* **Empirical domain knowledge** is a quick, intuitive information set. However, it is limited by personal competence and often lacks reproducibility.
* **First-principles simulation** is a process based on symbolic abstraction, using mathematical equations and physical/chemical laws to govern the behavior of a system. By starting from basic principles and building up to an understanding of complex phenomena, first-principles simulations are also referred to as "white-box models".
* **Data-driven method** is a computational process based on available data rather than theoretical principles or physical laws. These processes employ ML algorithms, statistical models, and data analysis techniques to extract patterns and relationships from datasets. These patterns are then used to make predictions or generate insights about the system, functioning as "black-box models".

Table 1 illustrates the main advantages and disadvantages of these three major categories we rely on in engineering. In engineering scenarios, we possess, reuse, and iterate on invariant patterns that can be applied to many cases. These patterns form what is known as knowledge and experience (Chen et al., 2022). For instance, the case of the sinking library1 updates our consideration of the relationship between building type/usage and building structural engineering. In first-principles simulations, the relationships between these variables are naturally embedded into symbolic formulas and numerical modeling processes as knowledge. However, this type of information input is absent in the data-driven process.
We propose that the data-driven method should include an additional, transferable piece of information: causal dependencies among variables. We illustrate this idea in Figure 1, demonstrating how causal dependencies extracted from the data interact with experiential domain knowledge, first-principles simulations, and data-driven approaches in a synergetic manner. In Figure 1, red arrows indicate how causal relationships interact with other engineering modeling approaches. Causality is commonly confused with correlation, but the former presents a different interpretation from observational data: it analyzes the asymmetric change and response between cause and effect, aids in analyzing interventional scenarios and counterfactuals, and answers "what-if" questions. This reasoning ability is essential for informative and sequential decision-making support. Additionally, the extracted causality information provides a feedback loop for users to validate and update their domain knowledge, fostering unbiased modeling. ### Causality Causality research has become a critical topic and has made substantial contributions across various fields with the widespread adoption of data-driven methods in the past decade (Scholkopf, 2022; Spirtes, 2010). Causal inference examines parameters or properties, considering cause-effect logical sequences to avoid unrealistic conclusions. For a systematic discussion of causal inference research, we refer readers to the works of Pearl (Pearl, 2009), Spirtes et al. (Spirtes, 2010; Spirtes et al., 2000), and Peters et al. (Peters et al., 2017). Our previous research (Chen et al., 2022) introduced causal inference into the energy-efficient building design process, using a four-step framework that combined causal structure finding and causal effect estimation. In this study, we aim to demonstrate the importance of checking causal dependencies in the context of the general AEC domain. This section briefly clarifies foundational ideas related to causal analysis. **Causal finding algorithms** are methods for identifying and returning equivalence classes of proper causal structure based on observational data in an unsupervised, data-driven manner. Essentially, they distinguish asymmetries in sampling distributions to identify feature dependencies and causal directions. Typical causal structure finding algorithms based on observational data fall into three categories: constraint-based, score-based, and hybrid (Kalisch and Buhlmann, 2014). In this study, we chose one of the typical score-based methods with a greedy mechanism (DeVore and Temlyakov, 1996), Greedy Equivalence Search (GES) (Chickering, 2002); a minimal code sketch of running GES on observational data is given after the DAG overview below. **Directed Acyclic Graphs (DAGs)** are graph diagrams composed of variables (nodes) connected via unidirectional arrows (paths) to depict hypothesized causal relationships (Judea, 2010). A causal skeleton DAG with a fixed structure embeds the causal dependencies of given data. A DAG demonstration in the building engineering domain is presented in Figure 2. Three major types of DAG structure combinations are:

* **Directed path** denotes a directed edge \(x \rightarrow y\) from \(x\) (cause) to \(y\) (effect). Intuitively, it means that \(y\) is directly influenced by the status of \(x\); altering \(x\) by external intervention would also alter \(y\).
* **Backdoor path** exists between two variables in a confounding structure where the common cause is not controlled (Figure 2, left), or between two variables in a collider structure where the common effect is controlled; the variables connected by such a backdoor path have a non-causal association, which can lead to potential bias through a distorted association.
* **Closed path** exists in collider structures where two variables have the same effect (Figure 2, right). Unlike directed and backdoor paths, this path is causally irrelevant: there is no causal path between the two variables via the collider structure, unless the common effect is controlled.

| | **Advantages** | **Disadvantages** |
|---|---|---|
| **Empirical domain knowledge** | No extra efforts needed for modeling; foundation for scientific inquiry and hypothesis testing | Rule of thumb, heavily relies on personal ability; limited extent and reliability in non-standard cases |
| **First-principles simulation** | Good interpretability; flexible in modeling details | Time-consuming in detailed simulation; modeling efforts required in each new scenario |
| **Data-driven method** | Fast response in prediction; universal approximator; end-to-end learning behavior | Black-box, trustfulness issues; data-hungry for training |

Table 1: The characteristics of relying on empirical domain knowledge, first-principles simulation, and the data-driven approach for engineering modeling.

**DAG rules** are principled structural guidelines that enable users to investigate cases for identifying appropriate sets of covariates in complex DAGs and for removing structural bias through adjustments, e.g., d-Separation, the backdoor criterion (Pearl et al., 2000), and their extensions. DAGs, often defined by prior knowledge, could be incomplete (Guo et al., 2020). In the development of a causal diagram, users utilize their best available prior knowledge to set up the most plausible causal diagram. Subsequently, they adhere to strict DAG rules to identify the causal dependencies between given exposure inputs and the target outcome from the case. In the remaining content, all DAGs are generated and modified by DAGitty (Textor, 2015; Textor et al., 2016).

Figure 1: Illustration of the potentially synergetic nature of the three main engineering modeling processes. Causal dependencies extracted from data represent a type of invariant, transferable knowledge, which plays a vital role in offering a feedback loop and interacting with the user's empirical domain knowledge. Beyond the data and domain knowledge inputs, the causal dependency information contributes to validation of first-principles simulations and to unbiased estimation/reasoning for data-driven methods. The red arrows indicate how the causal relationships interact with other engineering modeling approaches. A "fallout" situation in this context refers to an instance in causal analysis where an indirect, biased relationship exists between an exposure (or cause) and an outcome (or effect), primarily due to the presence of a backdoor path or the opening of a closed path.
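As promised above, the causal-finding step can be made concrete with a short script that runs a score-based search on a table of observations; it assumes the open-source `causal-learn` package and its GES interface (an assumption about tooling, since the paper itself only names the GES algorithm), and the file name is hypothetical, so treat the exact calls as illustrative.

```python
import pandas as pd
from causallearn.search.ScoreBased.GES import ges  # assumed causal-learn API

# Observational design/simulation data: one column per variable, one row per sample.
df = pd.read_csv("parametric_runs.csv")            # hypothetical file name
variables = list(df.columns)

record = ges(df.to_numpy())                        # score-based greedy equivalence search
graph = record["G"]                                # estimated causal graph (CPDAG)

print(variables)
print(graph)                                       # inspect edges / orientations
```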
### Machine Learning In this study, we focus on ML methods applied to supervised learning tasks, which typically involve addressing a classification or regression problem with labeled data. To ensure that the fallout is irrelevant to the type of data-driven methods used, we examined three mainstream ML methodologies (Singh et al., 2016): tree-based models (Clark and Pregibon, 2017), kernel machines (Hofmann et al., 2008), and neural networks (LeCun et al., 2015), which are mechanistically different and widely applied in engineering domains (Seyedzadeh et al., 2018; Chakraborty and Elzarka, 2019; Hegde and Rokseth, 2020). Brief introductions to their mechanisms are given in the Appendix. Beyond these methods, the evaluation of uncertainties is critical for supporting the decision-making process (Chen and Geyer, 2022; Tian et al., 2018), leading us to include a probabilistic, tree-based, gradient-boosting surrogate model - NGBoost (Duan et al., 2020) - in our case study. Instead of generating the output as a point prediction, the design of NGBoost incorporates a predictive uncertainty quantification process, offering insights into the output range for a given set of feature inputs in a data-driven manner. ## 3 Case study ### Scenario Setup We studied the effect of different designs on energy use for heating (Energy Usage Intensity of heating, _EUI Heating_) by varying _insulation standards_ and _heating systems_. To prepare our dataset, we utilized a parametric office building simulation model. This model represents a realistic design space by incorporating a wide range of configurations for building components and zones to train our ML models (training data). The causal reasoning within this space is validated by a real-world design project from our previous research (Chen et al., 2022) (test case): a mixed-usage, four-floor building known as Building.Lab, located on a tech campus in Regensburg, Germany. We simulated three sets of thermal characteristics to explore design variations in insulation values. These were based on existing standards: the 2020 German Energy Act for Buildings (_GEG_), Net Zero Energy Building (_NZEB_), and _Passive House_. These standards, from baseline to high, have different requirements for components' thermal conductivity (U-values), with a higher standard indicating better building thermal behavior and less energy loss. We also configured three typical building heating systems: _boiler_, air-sourced heat pump (_ASHP_), and district heating (_DH_). For the modeling tool, we used Grasshopper (McNeel et al., 2022), with Honeybee (Ladybug Tools, 2021) serving as a high-level simulation interface for EnergyPlus.

Figure 2: Causal confounder and collider examples in the context of the architectural engineering domain. Failing to identify the causal relationship causes spurious associations (backdoor paths) and biased results. Left: confounder bias when the common cause is not controlled; Right: collider bias when the common effect is controlled.

In terms of data-driven modeling approaches, as discussed in Section 2.3, we applied Decision Tree (DT), Support Vector Machine for Regression (SVR), Artificial Neural Network (ANN, with the Multi-Layer Perceptron chosen as a basic variation), and NGBoost across all scenarios. We applied three metrics to facilitate performance comparison across different numerical scales of results: Normalized Root Mean Square Error (NRMSE), Symmetric Mean Absolute Percentage Error (SMAPE), and Coefficient of determination (R-squared or \(R^{2}\)).
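The models and metrics just listed can be sketched with standard scikit-learn estimators plus the `ngboost` package; the snippet below is a minimal illustration with assumed hyperparameters and placeholder arrays (`X_train`, `y_train`, `X_test`, `y_test`), and the range-based NRMSE normalisation is one common choice rather than necessarily the one used in the study.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_squared_error
from ngboost import NGBRegressor

def nrmse(y_true, y_pred):
    return np.sqrt(mean_squared_error(y_true, y_pred)) / (y_true.max() - y_true.min())

def smape(y_true, y_pred):
    return np.mean(2 * np.abs(y_pred - y_true) / (np.abs(y_true) + np.abs(y_pred)))

models = {
    "DT": DecisionTreeRegressor(),
    "SVR": SVR(),
    "ANN": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000),
    "NGBoost": NGBRegressor(),
}

# X_train, y_train, X_test, y_test: simulated design features and EUI heating targets
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(name, r2_score(y_test, pred), nrmse(y_test, pred), smape(y_test, pred))
```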
We chose \(R^{2}\) as our primary reference. The reasoning behind this choice and detailed interpretations of these three metrics are available in [Chicco et al., 2021]. Table 2 lists the input features from the simulation, their ranges, and the corresponding test case setting. To avoid the extrapolation problem (which arises when the test case sample falls outside of the training dataset's convex hull [Balestriero et al., 2021]), all feature values in the test case are within the range of the training data. We fitted and fine-tuned the ML models with the training data to achieve good generalization performance, and used them later to predict different scenarios in the test case, in which all values are extracted from the Building.Lab project in a real-world context. Further information regarding the modeling configuration, data generation process, and training strategy of the data-driven models is available in the Appendix. With the set training data and test case, we first set up two scenarios: * _Scenario I_: Full-scale modeling with all input features for EUI Heating prediction as the benchmark. * _Scenario II_: Modeling with a reduced, manually selected feature set - features chosen by domain knowledge, or only some features being observable/available during data collection. Scenario I presents an ideal case in research or engineering, demonstrating how the data-driven process helps to provide analytical insights into potential design scenarios. However, in real-world cases, data is rarely as complete as in an ideal scenario due to the presence of unobserved factors, the need for simplification because of expensive data collection and computation efforts, or subjective manual filtering by end-users using their own domain knowledge or analytical tools. In Scenario II, we illustrate the potential risks of introducing subjective bias associated with such incomplete data: We selected the following input features that are typically cared for by architects or engineers in the building \begin{table} \begin{tabular}{l c c} \hline \hline **Building feature / Variable** & **Training data range** & **Test case setting** \\ \hline _Orientation [\({}^{\circ}\)]_ & [0, 180] & 12.5 \\ _Number of Floors_ & [1, 10] & 4 \\ _Floor Height [m]_ & [2.8, 4.5] & 3.48 \\ _Open Office: Heating Setpoint [\({}^{\circ}\)C]_ & [21, 24] & 22 \\ _Open Office: Air Change Rate (ACH) [1/h]_ & [4, 6] & 4 \\ _Open Office: People Per Area (PPA) [people/m\({}^{2}\)]_ & [0.05, 0.2] & 0.15 \\ _Volume [m\({}^{3}\)]_ & [4400, 146000] & 6807 \\ _Area\({}^{1}\) [m\({}^{2}\)]_ & [1300, 36000] & 1956 \\ _Construction Area\({}^{2}\) [\%]_ & [3, 11.5] & 6 \\ _Window to Wall Ratio North [-]_ & [0, 0.7] & 0.5 \\ _Window to Wall Ratio East [-]_ & [0, 0.7] & 0.45 \\ _Window to Wall Ratio South [-]_ & [0, 0.7] & 0.34 \\ _Window to Wall Ratio West [-]_ & [0, 0.7] & 0.23 \\ _Insulation Standard_ & base, medium, high & Unknown \\ _Heating System_ & Boiler, ASHP\({}^{3}\), DH\({}^{4}\) & Unknown \\ _Energy Usage Intensity (EUI) Heating [kWh/m\({}^{2}\)a]_ & [14.6, 327.1] & Unknown \\ \hline \hline \end{tabular} * \({}^{1}\) _Gross floor area; \({}^{2}\) Areas covered by walls, columns, or any structural elements;_ * \({}^{3}\) _ASHP: air-sourced heat pump; \({}^{4}\) DH: district heating;_ \end{table} Table 2: Ranges of the training data features and values extracted from the test case. All values in the test case are extracted from the Building.Lab project for the case study.
design phase for energy performance evaluation (Marcher et al., 2020; Chen et al., 2022; Roman et al., 2020): _Open Office: Heating Setpoint, Open Office: ACH, Open Office: PPA, Volume, Area, and Window to Wall Ratios_. In both scenarios, ML models are fitted and evaluated using the training data, then used to predict the output with the test case inputs plus different insulation standard and heating system combinations. ### Benchmark and Fallout Table 3 presents the performance results of the different models fitted with the training data under both scenario settings. The results demonstrate the model capabilities in this training case; all ML methods trained with the full input features show acceptable performance. The \(R^{2}\) of all models is above 0.85, while ANN and NGBoost reach an accuracy above 0.95. With the masked feature setting but the same training process as in Scenario I, the results show only a minor performance decrease in Scenario II: all models maintain their accuracy (\(R^{2}\)) above 0.8, with ANN and NGBoost remaining around 0.9. We even observed a slight performance improvement for SVR in Scenario II. The NRMSE and SMAPE results also align with this interpretation (see Appendix). Next, the test case, with variations of insulation standard and heating system, is fed into the trained models for both scenarios. We illustrate the corresponding results for the different variation combinations in Figure 3. Based on the result of Scenario I (Figure 3(a), right), we concluded the following insights: 1. The test case prediction results from ANN and NGBoost are the most similar to each other; they also achieve better accuracy in the training evaluation. 2. The choice of the heating system is the factor that affects the EUI Heating the most, with the air-source heat pump (ASHP) system requiring the least energy consumption, and the boiler system the most. 3. Regardless of heating system variation, higher building component thermal standards contribute to reducing total energy consumption, as expected. With almost the same accuracy performance, the test case prediction result in Scenario II displays unusual patterns that contradict domain intuition, as shown in Figure 3(b). Although the choice of the heating system still shows a deterministic impact on EUI Heating, the trend for the insulation standard variation is reversed: the difference between the building insulation standards is either barely noticeable or even presents an inverted trend. Within the same heating system choice, a higher insulation standard results in more energy consumption for heating. This opposing trend even appears in the ANN, which achieves 0.94 in \(R^{2}\) during performance evaluation. Furthermore, we observed a drastic increase in the uncertainty range in the output of NGBoost compared to Scenario I (see the orange scatter distributions in Figure 3). Based on the result from Scenario II, **wrong conclusions** could easily be drawn, potentially misguiding the decision-making process in real-world projects or research, e.g.: _"In this case, insulation standard choices are unimportant, or adapting a lower insulation standard could help to reduce the energy usage of the building."_ This conclusion drawn from Scenario II clearly conflicts with the result from Scenario I and with common knowledge. We refer to Scenario II as a case of biased estimation or a fallout. This fallout is directly linked to potential economic and energy loss, as well as risks if implemented in real-world engineering construction scenarios.
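For concreteness, the following sketch outlines how such a scenario comparison can be assembled in code. This is not the authors' implementation: the file name, column names, encoding choices, and the NGBoost dependency are assumptions made only for illustration.

```python
# Sketch only: fit the four model types and predict the test case across all
# insulation-standard and heating-system combinations.
from itertools import product

import pandas as pd
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from ngboost import NGBRegressor  # assumed installed; provides probabilistic predictions

df = pd.read_csv("training_data.csv")              # hypothetical export of the 918 samples
features = ["HeatingSetpoint", "ACH", "PPA", "Volume", "Area",
            "WWR_N", "WWR_E", "WWR_S", "WWR_W",
            "InsulationStandard", "HeatingSystem", "ConstructionArea"]
X = pd.get_dummies(df[features])                   # one-hot encode categorical inputs
y = df["EUI_Heating"]

# Note: inputs are not scaled here; a real pipeline would standardize them for SVR/ANN.
models = {"DT": DecisionTreeRegressor(), "SVR": SVR(),
          "ANN": MLPRegressor(max_iter=2000), "NGBoost": NGBRegressor()}
for m in models.values():
    m.fit(X, y)

# Test case: fixed Building.Lab inputs, varied insulation standard and heating system.
base = {"HeatingSetpoint": 22, "ACH": 4, "PPA": 0.15, "Volume": 6807,
        "Area": 1956, "ConstructionArea": 6,
        "WWR_N": 0.5, "WWR_E": 0.45, "WWR_S": 0.34, "WWR_W": 0.23}
rows = [dict(base, InsulationStandard=ins, HeatingSystem=hs)
        for ins, hs in product(["GEG", "NZEB", "PassiveHouse"],
                               ["Boiler", "ASHP", "DH"])]
X_test = pd.get_dummies(pd.DataFrame(rows)).reindex(columns=X.columns, fill_value=0)
for name, m in models.items():
    print(name, m.predict(X_test).round(1))
```

The only difference between the two scenarios in such a pipeline is the `features` list, which is what makes the resulting bias easy to overlook.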
Given that the cost of implementing higher insulation standards in buildings is typically an important factor, this misleading conclusion could lead to decisions that reduce or underestimate the required investment. Such uncertain performance in the analysis could cause severe trust issues when adopting data-driven methods in engineering scenarios and decision-making processes. This is because real-world scenarios are less likely to provide complete data without hidden variables. This risk is largely independent of the modeling approach and cannot be ruled out by standard performance evaluation alone. \begin{table} \begin{tabular}{c c c} \hline \hline **Model** & **R2 (Scenario I)** & **R2 (Scenario II)** \\ \hline _Decision Tree_ & 0.86 & 0.81 \\ _SVR_ & 0.87 & 0.87 \\ _ANN_ & 0.96 & 0.94 \\ _NGBoost_ & 0.95 & 0.88 \\ \hline \hline \end{tabular} \end{table} Table 3: 5-fold cross-validation performance result comparison of different models: Scenario I & II performance evaluation. As the only difference between the two scenarios is the feature selection, a closer examination of the inputs, more specifically a causal dependency analysis, is necessary. ### Causal Dependencies Analysis From a causal inference perspective, hidden relationships among the input features cause the biased outcomes observed in Scenario II. Similar cases have been discussed in medical statistics research (Patil et al., 1981). In this section, we demonstrate that, for the AEC domain, causal discovery can aid designers and engineers in comprehensively examining whether hidden relationships have been neglected and, by controlling them accordingly, in avoiding subjective bias and biased estimation. For a more intuitive engineering interpretation and evaluation, we expand upon Figure 3 and present a coherent causal dependencies analysis process to demonstrate that the analysis helps avoid the fallout situation, as shown in Figure 4. Figure 3: Test case prediction result based on: (a) Scenario I trained with full-scale features; (b) Scenario II trained with masked features selected manually based on domain knowledge. In both subgraphs, the left part shows the selected features with the set exposures (treatment inputs we want to vary) and outcome for the scenario, while the right part is the prediction result on the test case: the y-axis lists different combinations of insulation standard and heating system settings, while the x-axis gives the EUI Heating prediction result from the different models (by different markers). The first step of the causal dependencies analysis is causal discovery, which is responsible for extracting a causal skeleton from the training data in an unsupervised manner. The skeleton and the process itself provide a critical nexus connecting data-driven results with domain knowledge validation through causal skeleton pruning. Figure 4: Causal dependencies analysis process; the dotted box corresponds to the content of Figure 3. (a), (b): Causal structure finding via GES: knowledge extraction based on the training dataset. Minor skeleton adjustments via domain knowledge are marked in orange; (c), (d), and (e): Scenario I; (f), (g) and (h): Scenario II: Blocking Construction Area would close the direct causal path from _Insulation Standard \(\rightarrow\) Construction Area \(\rightarrow\) EUI Heating_, and open a biasing path from _Insulation Standard \(\rightarrow\) Area \(\rightarrow\) Volume \(\rightarrow\) EUI Heating_, which leads to a spurious conclusion; (i), (j) and (k): Corrected Scenario II with no biasing path. In our case study, the pruning
process is relatively straightforward, as demonstrated in Figure 4(b); only minor adjustments (marked in orange) are made to the original skeleton generated by GES: 1. Adding a causal dependency (arrow) from _Window to Wall Ratio (WWR)_ to _EUI Heating_, since the causal connection between these two variables is slightly indirect. This is because we manually merged all WWRs into one for a simplified illustration. 2. Replacing the bidirectional arrow between _Number of Floors_ and _Area_ with a unidirectional arrow, as the number of floors is typically a variable given by urban regulations that determine the feasible floor area on a specific site. Subsequent to the setup of the causal skeleton, the exposure inputs (_Insulation Standard_ and _Heating System_) and the target outcome (_EUI Heating_) are integrated into the skeleton, thereby establishing the causal flow, as illustrated in Figure 4(e). Based on the skeleton and scenario setting, we identified three crucial intermediate features: _Window to Wall Ratio, Volume,_ and _Construction Area_. These features have direct causal connections to the target outcome and simultaneously carry causal dependencies with other features within the model. Among these three features, _Construction Area_ is the most important: it is the only feature that shares a common cause with the outcome (_EUI Heating_), and this common cause is one of the exposure inputs (_Insulation Standard_). This is expected given that the construction area is an input in the EUI estimation. The fact that it shares a cause with the outcome means that blocking (excluding) the _Construction Area_ would close the causal path _Insulation Standard \(\rightarrow\) Construction Area \(\rightarrow\) EUI Heating_ and open a biasing path (a detour connection from exposure to outcome): _Insulation Standard \(\rightarrow\) Area \(\rightarrow\) Volume \(\rightarrow\) EUI Heating_ (Figure 4(h)). This explains the unusual prediction results in Scenario II under variations of the Insulation Standard. To correctly estimate the direct effect of the _Insulation Standard_ on _EUI Heating_, we should either include the feature _Construction Area_ in the model to keep the causal path open, or exclude _Construction Area_, _Area_, and _Volume_ together to avoid the biasing path. In other words, causal dependencies exist between the building insulation standard, construction area, building area, and volume; controlling the intermediate one while varying the rest leads to a biased sampling situation. From an engineering domain perspective, this causal finding is derivable and withstands cross-validation against domain knowledge, as the construction area serves as a common effect reflecting the configuration of the building area and the building insulation standard. It is important to note that a larger building area and volume do not necessarily result in a proportional increase in the construction area: for instance, the thickness of internal (non-loadbearing) walls and facades within the same insulation standard remains unchanged, so as the total building area expands, the proportion of construction area correspondingly shrinks. Meanwhile, higher building insulation standards correlate with better thermal insulation behavior of the building facades, and better insulation typically equates to thicker construction, hence the increase in construction area.
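The collider mechanism described above can be reproduced with a toy simulation (a deliberately abstract sketch, not the paper's data or variables): two independent causes jointly determine a common effect, and selecting or controlling on that effect induces a spurious association between the causes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two independent causes (loosely analogous to insulation quality and building size).
insulation = rng.normal(size=n)
size = rng.normal(size=n)

# A common effect, loosely analogous to construction area: driven by both causes plus noise.
construction = 0.7 * insulation + 0.7 * size + rng.normal(scale=0.5, size=n)

# Unconditionally, the two causes are (nearly) uncorrelated.
print("corr overall:              ", np.corrcoef(insulation, size)[0, 1].round(3))

# Conditioning on the common effect (restricting to a narrow slice of it)
# opens the collider path and induces a spurious negative association.
mask = np.abs(construction) < 0.2
print("corr given the common effect:", np.corrcoef(insulation[mask], size[mask])[0, 1].round(3))
```

The first correlation is essentially zero, while the second is clearly negative, mirroring how fixing some features and dropping a shared effect can distort the apparent relationship between an exposure and the outcome.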
Although we consider the _Construction Area_ not to directly affect the _EUI Heating_ when we vary the insulation standards, removing this feature from the model means that the model implicitly samples it across the possible ranges of the training data (refer to Table 2) and hence cancels out the consequential changes of the _Insulation Standard_ while building _Area_ and _Volume_ are fixed, leading to more biased samples. ### Validation Building upon the conclusion from the causal dependencies analysis above, we can state: _"To properly investigate the causal effect from the Insulation Standard to EUI, the Construction Area should not be ignored for an unbiased effect estimation."_ With the same features selected as in Scenario II, _Construction Area_ is additionally included. The corresponding performance with the updated feature set is given in Table 4, while the test case prediction result is illustrated in Figure 4(j). Notably, with only a slight decrease in accuracy compared to the performance in Scenario I (Table 3), the prediction trend and uncertainty ranges of the EUI Heating align with the output of Scenario I again. ### Occam's Razor for Knowledge Discovery: Identifying the Minimal Sufficient Adjustment Set Causal discovery analysis can also contribute to determining the minimal number of required variables thanks to the concept of "minimal sufficient adjustment sets". A causal DAG helps to answer the following common question in the data-driven process: _"Which variables (features) should we include in our model to get an unbiased estimate of the effect?"_ A "minimal sufficient adjustment set" refers to the smallest set of variables that need to be adjusted for to reliably estimate a causal effect. These sets can be identified manually [Greenland et al., 1999, Shrier and Platt, 2008] or with a computer package [Textor et al., 2016]. In this context, the well-known concept of Occam's razor is appropriate for the causal model preference [Pearl et al., 2000]. Taking our case as an example, one minimal sufficient adjustment set includes _Construction Area_, _Floor Height_, and _Volume_. A skeleton illustration is given in Figure 5. As a result, we observe a similar unbiased trend in the case prediction as in Scenario I (Figure 3a). Combined with the prediction result, we recognize the potential for knowledge discovery in engineering scenarios by interpreting the features present in the minimal sufficient adjustment set. Finally, it is essential to point out that DAGs and the minimal sufficient adjustment set solely provide identification information to ensure unbiased estimation, rather than addressing estimation performance. In engineering contexts, this data-driven process needs to relate to domain knowledge and thus be given context by the task-specific scenario for further analysis. ## 4 Discussion We utilize a fallout case to demonstrate an easily identifiable error when using data-driven models. However, identifying such errors can be much more challenging for designers in many cases, potentially leading to distrust of data-driven methods. While these easily identifiable errors primarily appear in data-driven methods, similar risks of biased information exist when using first-principles simulations. First-principles simulations, extensively developed by numerous engineers and experts, carry their own biases [Rakitta and Wernery, 2021, Klotz, 2011, Zalewski et al., 2017].
The difference is that these biases are often hidden or subtle due to the established and extensively developed nature of these simulations. Cognitive biases [Minsky, 1991], which refer to systematic errors in thinking that affect people's decisions and judgments, can also cause such fallout situations. An example of a cognitive bias relevant in this context is confirmation bias, where engineers might favor information (e.g., a familiar type of design pattern, system deployment, or validation method) that confirms their preexisting beliefs or hypotheses while ignoring or downplaying contrary evidence. This bias leads to a skewed acquisition or utilization of personal domain knowledge. Considering the potential for cognitive biases, simulation results also bear fallout risks and often lack an appropriate adjustment mechanism. \begin{table} \begin{tabular}{c l} \hline \hline **Model** & \(\mathbf{R^{2}}\) \\ \hline _Decision Tree_ & 0.81 \\ _SVR_ & 0.90 \\ _ANN_ & 0.96 \\ _NGBoost_ & 0.90 \\ \hline \hline \end{tabular} \end{table} Table 4: 5-fold cross-validation performance result comparison of different models: Validation Scenario. Figure 5: Minimal sufficient adjustment set based on the case: with _Floor Height, Volume_ and _Construction Area_ as extra inputs, the model generates unbiased estimation with sufficient information from the dataset. In this context, causal analysis serves as a useful tool for identifying potential biases in prior data, thus building a bridge that links and reinforces domain knowledge with data-driven methods. We argue that data-driven methods and first-principles simulations are not inherently conflicting. Rather, combining them may offer a practical solution to manage and mitigate the risk of biased outcomes. While managing cognitive biases is crucial, another significant aspect to consider is the process of feature selection. In the context of causal analysis, it may seem that the more features (input variables) are involved in the modeling process, the more comprehensive the causal skeleton should be. However, simply feeding more features into the modeling process does not necessarily improve accuracy. We perceive this as a trade-off between precision and accuracy in describing the case: * More detailed features formalize a good representation of the target case, reducing uncertainty with a more accurate description, but also raise the risk of biased variation analysis. * Using fewer detailed features certainly reduces the risk of biased result analysis; however, an overly simple feature representation might overlook important factors that affect the result and lead to incorrect conclusions. ## 5 Conclusion The evolution of engineering analysis methodologies has fostered synergetic interaction among data, domain knowledge, simulations, and data-driven methods. Our case study highlights the potential pitfalls of relying solely on data-driven methods without incorporating causal analysis. We demonstrated that it is critical to examine causal relationships when performing a data-driven analysis to avoid misleading results. Consequently, we advocate for more attention to and involvement in causal inference analysis in the engineering community. Moreover, we believe that extracting invariant and transferable information from data is crucial for bridging the gap between domain knowledge, simulations, and data-driven methods in engineering and for transcending the limitations of individual capabilities.
## 6 Acknowledgement We gratefully acknowledge the German Research Foundation (DFG) support for funding the project under grant GE 1652/3-2 in the Researcher Unit FOR 2363 and under grant GE 1652/4-1 as a Heisenberg professorship. ## 7 Appendix ### Mechanism Introduction of Machine Learning Methods _Tree-based models_ seek to identify optimal split points in the data to enhance prediction accuracy. The term "tree" refers to a decision tree, which forms the foundation of tree-based models. The decision tree algorithm identifies which data feature to split on and when to cease splitting based on information gain criteria (i.e., minimizing entropy in data split). While straightforward to interpret, decision trees are generally weak predictors. Enhanced ensemble methods such as bagging, random forest, boosting (Dietterich, 2000), and gradient boosting (Natekin and Knoll, 2013) have been adapted to improve performance but lead to less interpretable behavior. _Kernel machines_ utilize a linear classifier to address non-linear problems by defining a separating hyperplane to fit in data and make predictions. A kernel corresponds to a dot product in a typically high-dimensional feature space (Hofmann et al., 2008). In this space, estimation methods are linear, and all formulations are made in terms of kernel evaluations, thereby avoiding explicit computation in the high-dimensional feature space. _Neural networks_ comprise input, hidden, and output layers, where each layer is a group of neurons, loosely modeling the neurons in a biological brain. The connections between neurons (also called nodes) carry associated weights/biases. The data is fed into the network and passes through all neurons with activation functions (which add non-linearity to the output) in the forward propagation to produce output. The backpropagation mechanism (LeCun et al., 1988) updates neuron weights/biases according to the difference between prediction and output (loss function evaluation). ### Modeling Configuration for Generating Training Data The test case is a mixed-usage 4-floor building named Building.Lab on a tech campus in Regensburg, Germany (Chen et al., 2022b). The function of this 1,956 m\({}^{2}\) building is office and seminar use as well as housing, which consists of four above-ground stories and one underground level with a concrete skeleton structure. For supporting decision-making in energy-efficient building design, we developed a parametric model of an office building in a generic H-shape that covers a wide configuration variety of building components and zones. We varied this model to generate a representative training dataset for well-generalizing models on the target scenarios covering the design space characteristics of the case and similar buildings for performance evaluation. An illustration of the data generation process is given in Figure 6. For the variation of building insulation standards, we simulated three component thermal characteristic sets based on real-world building energy standards and, from low to high: 2020 German Energy Act for Buildings (GEG), Net Zero Energy Building (NZEB), and Passive House. The standards have different requirements for components' thermal conductivity (U-values), as presented in Table 5. As for heating systems, three typical building energy systems are simulated: boiler, air-source heat pump (ASHP), and district heating (DH). All systems have been modeled with convective hot water baseboards as their secondary energy system. 
The hot water loop temperature was 50\({}^{\circ}\)C for the air-sourced heat pump system variant and 80\({}^{\circ}\)C for the boiler and district heating system variants. The piping system was modeled as adiabatic. The heating setpoint scales a typical office hour schedule to a new target setpoint. During off-work hours (starting from 6 pm), only 75% of the setpoint is set. Starting at 6 am, setpoints are increased hourly to 85%, 95%, and 100%. The minimum heating temperature is set to 21\({}^{\circ}\)C as we referred to the national standard DIN EN 16798-1 [Beu], and we intend to find sustainable and high-performing solutions (all options to be inside category I with PPD\(<\)6%). As the comfort temperature is 22\({}^{\circ}\)C \(\pm\) 2K for environments below 16\({}^{\circ}\)C, we chose 21-24\({}^{\circ}\)C. In this simulation model, no cooling system and mechanical ventilation were modeled. The zone ventilation was only set by the air change rate per hour based on exterior air volume demands set from DIN EN 16798-1. Figure 6: Automatic data generation process with parametric modeling: a generic H-shape office building. The parameter ranges are determined with the consideration of covering the test case scenario and densely sampled with variations. Each sample is fed iteratively into the energy simulation pipeline composited by Grasshopper, Python, and intermediate models. 918 samples were generated as the training dataset. \begin{table} \begin{tabular}{p{113.8pt} p{56.9pt} p{56.9pt} p{56.9pt}} \hline **Insulation standard of U-Values in building components** & **Base: GEG (2020 German Energy Act for Buildings)** & **Medium: NZEB (Net Zero Energy Building)** & **High: Passive House** \\ \hline **Base plate** & 0.2625 & 0.206 & 0.15 \\ **Roof** & 0.15 & 0.135 & 0.12 \\ **Exterior wall, bearing, above ground** & 0.21 & 0.18 & 0.15 \\ **Exterior wall, bearing, under ground** & 0.2625 & 0.206 & 0.15 \\ **Window** & 0.975 & 0.888 & 0.8 \\ \hline \end{tabular} \end{table} Table 5: Different insulation standard requirements for building component thermal characteristics [W/m\({}^{2}\)K] To validate the simulation result, we sampled the generated data (Training data) by different insulation standards and heating systems, as presented in Table 6 and Table 7, respectively. ### Training Process and Result Validation During the model training process, a hyperparameter grid-search strategy with 5-fold cross-validation (Refaeilzadeh et al., 2009) is applied for fitting data scheme changes in each scenario for all ML models. From an intuitive understanding, it means the same model with all hyperparameter setting combinations are cross evaluated within the 80/20 split training data, to compare and ensure the models' best performance for test case validation. The results analysis by three evaluation metrics in all scenarios is presented in Table 8.
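A minimal sketch of such a grid-search tuning loop with scikit-learn is shown below; the hyperparameter grid, synthetic placeholder data, and the ANN choice are illustrative assumptions, not the values or code used in the study.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPRegressor

# Placeholder data standing in for the 918 simulated samples (features x, target y).
rng = np.random.default_rng(0)
X = rng.normal(size=(918, 15))
y = X @ rng.normal(size=15) + rng.normal(size=918)

# 80/20 split of the training data, as described in the training strategy.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

param_grid = {                                   # illustrative grid only
    "hidden_layer_sizes": [(64,), (64, 64), (128, 64)],
    "alpha": [1e-4, 1e-3, 1e-2],
}
search = GridSearchCV(MLPRegressor(max_iter=2000), param_grid,
                      cv=5, scoring="r2")        # 5-fold cross-validation
search.fit(X_train, y_train)
print("best params:", search.best_params_, "cv R2:", round(search.best_score_, 3))
print("held-out R2:", round(search.score(X_test, y_test), 3))
```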
2309.10617
Intelligent Debris Mass Estimation Model for Autonomous Underwater Vehicle
Marine debris poses a significant threat to the survival of marine wildlife, often leading to entanglement and starvation, ultimately resulting in death. Therefore, removing debris from the ocean is crucial to restore the natural balance and allow marine life to thrive. Instance segmentation is an advanced form of object detection that identifies objects and precisely locates and separates them, making it an essential tool for autonomous underwater vehicles (AUVs) to navigate and interact with their underwater environment effectively. AUVs use image segmentation to analyze images captured by their cameras to navigate underwater environments. In this paper, we use instance segmentation to calculate the area of individual objects within an image. We use YOLOv7 in Roboflow to generate a set of bounding boxes for each object in the image with a class label and a confidence score for every detection. A segmentation mask is then created for each object by applying a binary mask to the object's bounding box. The masks are generated by applying a binary threshold to the output of a convolutional neural network trained to segment objects from the background. Finally, refining the segmentation mask for each object is done by applying post-processing techniques such as morphological operations and contour detection, to improve the accuracy and quality of the mask. The process of estimating the area of instance segmentation involves calculating the area of each segmented instance separately and then summing up the areas of all instances to obtain the total area. The calculation is carried out using standard formulas based on the shape of the object, such as rectangles and circles. In cases where the object is complex, the Monte Carlo method is used to estimate the area. This method provides a higher degree of accuracy than traditional methods, especially when using a large number of samples.
Mohana Sri S, Swethaa S, Aouthithiye Barathwaj SR Y, Sai Ganesh CS
2023-09-19T13:47:31Z
http://arxiv.org/abs/2309.10617v3
# Intelligent Debris Mass Estimation Model for Autonomous Underwater Vehicle ###### Abstract Marine debris has detrimental effects on marine life, including entanglement and ingestion by marine organisms. Estimating the mass of marine debris is essential to understand the severity and extent of its impact on marine aquaculture. The methodology involves a comparative analysis of the YOLO algorithms and their performance, enabling future researchers to study and select the appropriate model for their specific needs. In this paper, we use instance segmentation to calculate the area of individual objects within an image, using YOLOv7 in Roboflow. YOLOv7 is a fast and accurate object detection algorithm, capable of processing up to 160 frames per second with the highest accuracy of 56.8% among well-known object detectors. To perform instance segmentation, we use YOLOv7 in Roboflow to generate a set of bounding boxes for each object in the image, with a class label and a confidence score for every detection. A segmentation mask is then created for each object by applying a binary mask to the object's bounding box. The masks are generated by applying a binary threshold to the output of a convolutional neural network trained to segment objects from the background. Finally, the segmentation mask for each object is refined by applying post-processing techniques such as morphological operations and contour detection to improve the accuracy and quality of the mask. Estimating the area from instance segmentation involves calculating the area of each segmented instance separately and then summing the areas of all instances to obtain the total area. The calculation is carried out using standard formulas based on the shape of the object, such as rectangles and circles. In cases where the object is complex, the Monte Carlo method is used to estimate the area. This method provides a higher degree of accuracy than traditional methods, especially when using a large number of samples. Computer vision, Debris, Marine debris, Debris Mass Estimation, Autonomous Underwater Vehicles, YOLO algorithms, Instance segmentation, Machine learning. ## I Introduction Coastal pollution caused by marine debris has disastrous consequences for ecosystems, human health, and marine life. Marine debris has a profound and serious impact on marine ecosystems. Marine animals are harmed through ingestion, entanglement, and habitat destruction; mistaking debris for food leads to internal injuries and even death. Debris smothers coral reefs and other habitats, which can disrupt ecosystems and food chains, leading to cascading ecological effects. Marine debris persists for decades or even centuries in the environment, resulting in a long-lasting presence of pollutants in marine aquaculture. The consequences of marine debris extend beyond immediate harm: chronic exposure to debris leads to physiological and behavioral changes in marine organisms, debilitating their reproductive systems and overall wellness. The impacts of marine debris also extend beyond wildlife to include hidden adverse effects on human health. The United Nations' 2023 Sustainable Development Goals (SDG) submission highlights that annual plastic production surged over the past six decades, from 1.5 million tonnes in the 1950s to a staggering 288 million tonnes in 2012, with East Asia, Europe, and North America as the major contributors.
Global approximations suggest that in 2010 about 275 million tonnes of waste were generated by 192 coastal countries, with 4.8 to 12.7 million tonnes ending up in marine environments [1]. According to the United Nations Environment Programme (UNEP), 60 major Indian cities produce 15,343 tonnes of waste daily, which is disposed of into the South Asian seas. Data from the 2022 Swachh Sagar, Surakshit Sagar campaign disclosed that the Indian coastline accumulates around 0.98 metric tonnes of debris per kilometer. The United Nations Environment Programme (UNEP)'s fourth meeting in November 2020 revealed that 90 million plastic medical masks, a consequence of the COVID-19 pandemic, contributed further to the marine debris crisis. As per a report in THE HINDU updated on November 18, 2018, Bindu Sulochanan, a marine ecologist at Mangalore's Central Marine Fisheries Research Institute (CMFRI), has found plastic in the stomachs of various species since 2009. Researchers at the Central Marine Fisheries Research Institute (CMFRI) found plastic in species such as mackerel near Mangalore, yellowfin tuna near Kochi, and anchovies off Alappuzha's coast. In 2014, Gujarat's Sasan Gir Forest Department examined a deceased Longman's beaked whale weighing a ton, discovering four large plastic bags blocking its digestive system and highlighting the severe consequences of plastic debris for marine life [2]. On October 31, 2020, a sperm whale that had consumed 64 pounds of plastic waste, including debris such as ropes and netting, was discovered on the Murcian coast of Spain. This event recalls a 2016 occurrence in which fishing gear and a car engine cover were found in the stomachs of beached sperm whales along Germany's North Sea coast [3]. The article "Ghosts of the Gulf: Marine Debris a Threat to Corals in the Gulf of Mannar," by Aathira Perinchery, published on 18 January 2021 in Mongabay India, discussed how abandoned fishing gear and plastic debris endanger the Gulf of Mannar's corals, particularly the delicate Acropora, causing breakage and harm. It also noted that the resilient Thoothukudi coral reefs have recovered from bleaching events. The Lakshadweep reefs in the Arabian Sea raise similar concerns, as highlighted by scientist Rohan Arthur [4]. The scientific study titled "Microplastic Pollution in Seawater and Marine Organisms across the Tropical Eastern Pacific and Galapagos" by Alonzo Alfaro-Nunez, published on February 25, 2022, revealed that marine debris occupied 453,000 square kilometers in the Tropical Eastern Pacific and Galapagos. Microplastic particles were found in 240 specimens of 16 species of fish, squid, and shrimp consumed by humans, collected along the coast. Among the species studied, carnivorous organisms displayed the highest microplastic presence at 77% in their digestive systems, followed by planktivores at 63% and detritivores at 20%. The giant squid, Dosidicus gigas, exhibited the highest prevalence at 93%, followed by Alopias pelagicus and Coryphaena hippurus, both at 87%. These findings highlighted the alarming levels of microplastic pollution along the Pacific equatorial coast and mark the first documented case of microplastics in marine organisms consumed by humans in that region [5]. The Great Pacific Garbage Patch, located in the North Pacific between California and Hawaii, is a vast region of marine debris.
Researchers have detected up to 750,000 plastic pieces per square kilometer (1.9 million per square mile) there, and more than 200,000 pieces of debris per square kilometer (520,000 per square mile) in the Atlantic garbage patch [6]. A British Broadcasting Corporation (BBC) article from December 5, 2021, reported that marine creatures were found living on 90% of the debris examined in this region. A study by Dr Linsley Haram of the Smithsonian Environmental Research Center emphasizes that lasting habitats are formed on roughly 79,000 tonnes of enduring plastic [7]. This alarming figure is predicted to almost triple by the year 2040 and could surge dramatically to 33 billion tons by 2050. Hence, to mitigate this alarming situation, marine debris monitoring plays a vital role in evaluating pollution characteristics and determining suitable actions for pollution control. Artificial intelligence technologies such as computer vision have great potential for detecting marine debris in marine science. Using computer vision for the detection and identification of plastic objects in the marine environment helps to uncover the accurate extent of pollution, which is essential for proposing corrective actions. This paper introduces the integration of artificial intelligence (AI) algorithms with AUVs equipped with advanced sensors, cameras, and imaging systems for the purpose of estimating marine debris mass through instance segmentation techniques, employing different versions of the YOLO (You Only Look Once) algorithm. Further, the torque required by the AUV's motor to pull the debris is also determined. The collected data are uploaded to a cloud platform and the output is displayed in the form of a website. This approach helps to accurately detect the amount of debris present in aquatic environments, enabling measurements of debris size, shape, and density, aiding more precise mass calculations, and allowing continuous, real-time monitoring of debris accumulation to evaluate the efficacy of cleanup and ensure a cleaner future for our oceans. This paper explains the techniques employed for mass estimation and for embedding the AI algorithm in the AUV, along with a thorough exploration of the conducted experiments and their corresponding findings. ## II Literature Review Autonomous underwater vehicles (AUVs) contribute effectively to detecting and removing marine debris. Numerous deep-learning algorithms have been evaluated for visually detecting marine debris. A large dataset of debris is annotated and used to train convolutional neural networks for object detection. To suit real-time applications, the model is evaluated on various platforms [8]. Achieving rapid detection and identification of autonomous underwater vehicles (AUVs) and cooperative objects remains a challenge due to the complexities of the underwater environment. This study employs the YOLO (You Only Look Once) algorithm to identify underwater debris, as it performs well in target detection accuracy and recognition speed. Image enhancement techniques such as histogram equalization and the contrast-limited adaptive histogram equalization (CLAHE) algorithm are applied to enhance the images. These enhanced images are used to train both the YOLOv2 and YOLOv3 networks. The experimental outcomes revealed that the combination of the YOLOv3 network and the contrast-limited adaptive histogram equalization (CLAHE) algorithm effectively fulfills the criteria for rapid and accurate recognition in the detection and identification of underwater vehicles [9].
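As an illustration of the CLAHE-style enhancement referenced above, a minimal OpenCV sketch (parameter values and file names are illustrative, not those of the cited work) applies contrast-limited adaptive histogram equalization to the lightness channel of an underwater image:

```python
import cv2

img = cv2.imread("underwater_frame.jpg")             # hypothetical input image
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)            # operate on lightness only
l, a, b = cv2.split(lab)

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
l_eq = clahe.apply(l)                                  # contrast-limited equalization

enhanced = cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)
cv2.imwrite("underwater_frame_clahe.jpg", enhanced)
```

Applying the equalization only to the lightness channel avoids the color shifts that full-channel histogram equalization often introduces in murky underwater footage.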
To overcome the difficulties of assessing the characteristics of marine debris in hard-to-reach places, a method was developed using a segmentation model and images obtained by unmanned aerial vehicles (UAVs). The conventional statistical estimation method overestimated coastal debris items by 6741 (\(\pm\)1960.0) compared with the mapping method. The developed method offered a segmentation model with an F1-score of approximately 0.74 for estimating a covered area of around 177.4 m\({}^{2}\) [10]. In underwater environments, the conventional stereo visual SLAM (simultaneous localization and mapping) technique relies on trackable features for camera positioning and mapping, but these dependable points are often absent. To enhance the precision of vision-based localization systems in underwater environments, an innovative approach was introduced: by integrating point and line data, a stereo point-and-line SLAM (PL-SLAM) algorithm was investigated that enhances localization and was validated through precise experiments using an AR marker [11]. To estimate plastic marine debris (PMD) volumes, a new strategy was introduced that combines unmanned aerial vehicle (UAV) surveys with deep-learning-based image processing. A 3D model and an ortho-image of the beach are formed using Structure-from-Motion software employing data from UAVs. This enabled image-based edge detection for PMD volume calculation. It offered a rapid, precise, and objective alternative to subjective beach surveys by providing PMD volume estimation error below 5% [12]. Another work focused on optimizing AUV vision for real-time and low-light object detection. It achieved efficiency enhancements in the state-of-the-art EfficientDet object detectors of up to 2.6% AP across different levels without increasing the graphics processing unit (GPU) latency. It also introduced a new dataset for detecting in-water debris and trained the improved detectors on it. The effectiveness and speed of two low-light underwater image enhancement strategies were also evaluated [13]. In another study, the YOLO algorithm is enhanced by post-processing techniques such as non-maximum suppression (NMS) to refine the detected object bounding boxes, removing duplicate or overlapping detections, and by optimizing the network architecture. The proposed method demonstrated encouraging outcomes in debris detection accuracy and computational efficiency. One limitation of the approach is that it depends on clear, high-resolution underwater images for ideal performance, which is not always feasible in real-world scenarios; future work should address the challenges posed by low visibility and varying environmental conditions in underwater debris detection [14]. In another article, the authors proposed a multi-stage algorithm that combined image preprocessing, feature extraction, and classification techniques. One limitation of the approach is that it struggled to detect small or partially buried debris objects. Future research should explore techniques for improving the algorithm's performance in these challenging scenarios to enhance underwater debris detection capabilities [15]. ## III Methodologies The AUV is launched after checking the working status of its sensors, cameras, propulsion, and communication systems. To facilitate autonomous movement underwater, the navigation and control systems are initialized. After being launched from a suitable platform, the AUV records images and videos of the submerged environment and classifies them.
[16] AUVs like the REMUS 6000, from a subsidiary of Kongsberg Maritime and designed with the Naval Oceanographic Office, use dual-frequency side-scan sonar and synthetic aperture sonar (SAS) to assemble a 2D image of the seafloor and the objects resting on it via sound waves. The REMUS 6000 uses standard sensors and a high-intensity camera for capturing high-resolution images of marine debris. [17] The Iver4 900 unmanned underwater vehicle (UUV), developed and manufactured by L3Harris OceanServer, uses dual-frequency side-scan sonar to capture and process detailed images of marine debris by detecting the echoes of emitted signals bouncing off the seafloor. Its Inertial Navigation System (INS) offers accurate positioning, orientation, and velocity data, ensuring precise AUV navigation for targeted marine debris detection. The Sound Velocity Profiler (SVP) sensor measures the speed of sound in water, facilitating precise marine debris identification through correct signal interpretation. Similarly, [18] the Bluefin-21 AUV utilizes its standard payloads (side-scan sonar, sub-bottom profiler, and multi-beam echo-sounder) to detect marine debris. These sensors capture detailed images of the seafloor and underwater objects, potentially identifying marine debris based on its distinct acoustic signature and shape. The camera system enhances marine debris detection by capturing high-resolution black-and-white images of the seafloor and any debris present. These AUVs determine the type of marine debris by applying computer vision techniques to the data stored on the AUV's onboard storage system, which can be visualized and analyzed in real time or post-mission using specialized software tools. [19] EvoLogics' SONOBOT 5 uncrewed surface vehicle, released in March 2023, employs single-beam/multibeam echosounders, side-scan sonar, and a high-definition (HD) camera to capture images and videos. Notably, it introduces Object Recognition (OR), an onboard AI-based system that swiftly identifies and highlights objects from raw side-scan sonar or video output, operating even during a mission. Neural network algorithms handle sonar data processing in real time on dedicated hardware. A cloud-based ecosystem enhances Object Recognition (OR), providing updates and allowing users to upload datasets to train recognition of new objects. Raw data is analyzed onboard instantly, showcasing an integration of advanced technology that advances marine exploration, surveillance, and object recognition. In this paper, to obtain a more accurate classification of debris and to estimate its mass, instance segmentation is employed to calculate the area of individual objects within an image, leveraging different iterations of the YOLO algorithm within the Roboflow framework. In instance segmentation, the area of each mask is determined by counting the number of pixels in the mask. This involves creating a binary mask for each object in the image, where the pixels that correspond to the object are set to 1 and the background pixels are set to 0. After creating the binary mask, the area of the object is calculated by counting the number of pixels in the mask that have a value of 1; this pixel count corresponds to the area of the object in pixels. The size of each pixel in the image is typically determined by the imaging system or camera that captured the image. Eq. (1) describes the estimation of pixel size from the sensor size, number of pixels, distance to the object, and focal length of the camera.
\[\text{PixelSize}=\left(\frac{\text{SensorSize}}{\text{No. of Pixels}}\right)\times\left(\frac{\text{DistToObject}}{\text{FocalLength}}\right) \tag{1}\] \[\text{Area of Each Mask}=\text{PixelSize}\times\text{No. of Pixels in Each Mask} \tag{2}\] The total area of identified debris in a frame is obtained by calculating the area of each segmented mask of each class separately and then summing the areas of all the masks. In cases where the object is complex, the Monte Carlo method is used to estimate the area. To estimate the area of an object using the Monte Carlo method, place random points (e.g., 100) inside a rectangle of known area and count the number of points that lie within the object. The area of the object is proportional to the number of points that lie inside it and is given by Eq. (3): \[\text{Area of Object}=\text{Area of Rectangle}\times\frac{\text{No. of Points Inside Object}}{\text{Total No. of Points in Rectangle}} \tag{3}\] Once the area is obtained, the volume of the marine debris is calculated using standard formulas for regular shapes; for irregular shapes, it is estimated using underwater Light Detection and Ranging (LiDAR) and photogrammetry. The computed mass is not the actual mass of the object; however, it is helpful for estimating the approximate amount of debris deposited on the seabed. Several other mass estimation techniques have also been developed and applied for fish and coral reef biomass estimation. This helps cleaning robots predetermine the load they can collect and carry back to the surface. Mathematical models are also used to estimate the force required to move debris based on factors such as size, shape, water flow speed, viscosity, and density. Uncertainty may exist in these estimates due to factors such as water resistance and turbulence, which can be accounted for by including parameters such as the Reynolds number, which describes the flow regime of the water (Eq. (4)), and the turbulence intensity of the water. \[Re=\frac{\rho\nu L}{\mu} \tag{4}\] where \(Re\) is the Reynolds number, \(\rho\) is the density of the fluid, \(\nu\) is the velocity of the fluid, \(L\) is the characteristic length of the object or flow, and \(\mu\) is the dynamic viscosity of the fluid. Encoders are used to measure a motor's rotational speed and calculate the torque being produced, which can be used to determine the force applied to the debris. The torque is calculated using Eq. (5): \[\text{Torque}=k\times I\times(V-k^{\prime}\times w) \tag{5}\] where \(k\) is a constant, \(I\) is the current flowing through the motor, \(V\) is the voltage applied to the motor, \(w\) is the motor's rotational speed measured in radians per second, and \(k^{\prime}\) is a constant that depends on the motor's design. Once the torque is calculated, it is used to determine the force applied to the debris using Eq. (6): \[\text{Force}=\frac{\text{Torque}}{\text{Radius}} \tag{6}\] Acoustic Doppler Current Profilers (ADCPs) are used to detect and measure the velocity of the water and the size and shape of underwater debris in order to estimate the debris mass. The data is collected and preprocessed to remove noise and inconsistencies before feature extraction and model training. For feature extraction, Principal Component Analysis (PCA) is used. PCA is a useful technique for dimensionality reduction, identifying patterns in large underwater datasets, reducing noise, and identifying the most important features in a dataset, which helps to identify characteristics such as the presence of marine life, the effects of anthropogenic noise, and patterns in the distribution of objects or environmental changes.
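Returning to the area estimation in Eqs. (2) and (3), the following minimal sketch (all numeric values and the placeholder shape are illustrative assumptions, not measurements from the paper) computes a mask's area from its pixel count and estimates a complex shape's area with Monte Carlo sampling:

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Eq. (2): area of a segmentation mask from its pixel count ---
mask = np.zeros((480, 640), dtype=np.uint8)
mask[100:300, 200:450] = 1                     # placeholder binary mask of one object
pixel_area_cm2 = 0.05 ** 2                     # assumed ground area covered by one pixel [cm^2]
mask_area_cm2 = pixel_area_cm2 * mask.sum()    # pixel area x number of mask pixels
print("mask area [cm^2]:", mask_area_cm2)

# --- Eq. (3): Monte Carlo area estimate for a complex shape ---
def inside_shape(x, y):
    """Placeholder irregular shape: union of two overlapping discs."""
    return (x - 3) ** 2 + (y - 3) ** 2 < 4 or (x - 5) ** 2 + (y - 4) ** 2 < 2.25

rect_w, rect_h = 8.0, 7.0                      # bounding rectangle of known area
n_points = 100_000                             # more samples -> lower estimation error
xs = rng.uniform(0, rect_w, n_points)
ys = rng.uniform(0, rect_h, n_points)
hits = sum(inside_shape(x, y) for x, y in zip(xs, ys))
area_estimate = rect_w * rect_h * hits / n_points
print("Monte Carlo area estimate:", round(area_estimate, 2))
```

Here the pixel count plays the role of "No. of Pixels in Each Mask" in Eq. (2) (with the pixel term interpreted as the ground area one pixel covers), and the hit ratio plays the role of the point ratio in Eq. (3).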
PCA transforms the data into a new coordinate system in which the first principal component explains the maximum variation in the data and each subsequent component explains the remaining variance in order of importance. The instance segmentation model is tested and validated using a separate set of data to evaluate its performance. The performance metrics include mean Average Precision (mAP), mean Intersection over Union (mIoU), and F1-score. Once the instance segmentation model is trained and validated, data logging is performed for every prediction by the AUV's embedded system, which interfaces with the sensors and stores the data in memory. The XBee module, equipped with a gateway device, receives data from the XBee network and acts as a bridge between the local XBee network and the internet. It forwards data from the XBee network to the cloud platform and uses the cloud platform's application programming interface (API) to send the data to the cloud. The collected data is uploaded to the Google Cloud platform, which offers a range of tools for complex data analysis and processing, including BigQuery, Cloud Dataflow, and Cloud Dataproc. Uploading data to Google Cloud ensures that the data is backed up and can be recovered in case of data loss. Finally, the output is obtained in the form of a website for real-time monitoring of the computed data. Figure 1 illustrates the block diagram of the proposed methodology. Fig. 1: Block Diagram of Proposed Methodology ### _Estimation Outcomes_ The volume and density are estimated for the debris found. The results are illustrated in Table I. ### _Computer Vision_ Computer vision is dominating the current era, and a great deal of research is being carried out by numerous researchers in this field. It is a field of artificial intelligence and computer science that focuses on enabling computers to interpret and understand visual information from the world. It involves developing algorithms and techniques to extract meaningful information from images and videos, instructing machines to grasp and analyze a high-level understanding of visual content. Its subfields include scene or object recognition, object detection, video tracking, object segmentation, pose and motion estimation, scene modeling, and image restoration. Leveraging the capabilities of computer vision, precise detection and estimation of the mass of marine debris can be achieved. By harnessing advanced image processing algorithms and pattern recognition techniques, computer vision enables the automated identification and categorization of various types of marine debris in aquatic environments. The application of computer vision to marine debris mass estimation not only expedites the data collection process but also minimizes human intervention, reducing potential biases and errors. This technology empowers researchers, environmentalists, and policymakers with real-time, data-driven insights to make informed decisions for marine conservation and protection. This paper focuses on object detection, instance segmentation, and their relevant subfields as the most important and popular tasks of computer vision. #### Iii-B1 Object Detection Object detection is a significant field in the domain of computer vision. It plays a primary role in the Intelligent Debris Mass Estimation Model for Autonomous Underwater Vehicles (AUVs), enabling the vehicle to identify, accurately detect, and localize debris objects, leading towards a competent marine debris mass estimation model.
In this paper, among the available object detection algorithms, different versions of the YOLO algorithm are used for detecting marine debris and estimating its mass, since YOLO can process images at a rapid rate of 45 frames per second (FPS), achieves roughly twice the mean Average Precision (mAP), and shows high detection accuracy compared to other real-time systems, making it a strong choice for real-time processing. It works on the principle of dividing an image into an S x S grid, where each grid cell predicts m bounding boxes. The network generates class probabilities and bounding box offset values for these boxes. By choosing the bounding boxes with class probabilities above a specific threshold, the object is located within the image. It compares the detected debris instances with known debris categories and identifies anomalies relative to the expected debris types. #### Iii-B2 YOLOv3 YOLOv3 introduced a new architecture called "Darknet-53", which consists of 53 convolutional layers, allowing finer feature extraction and representation of objects. It enables detection at multiple resolutions by making use of three different scales in the architecture. For smaller objects, it further introduced the idea of "feature pyramid networks" (FPN) to collect information from different scales and to enhance detection performance. The FPN lets the model detect objects of different sizes more efficiently by combining high-resolution features from early layers with low-resolution features from deeper layers. To improve accuracy, it refined the YOLO architecture and offered multi-scale detection, achieving state-of-the-art accuracy with real-time detection. It developed a more resilient loss function, the focal loss, to address the problem of class imbalance in object detection. YOLOv3 showed a remarkable enhancement in accuracy, with an mAP of around 79-82% on the Pascal VOC dataset compared to YOLOv2. It achieved an average precision (AP) of 36.2% and AP50 of 60.6% at 20 FPS on the MS COCO dataset, which was state of the art at the time and 2x faster than comparable detectors [20]. #### Iii-B3 YOLOv4 YOLOv4 employed a modified CSPDarknet53 backbone architecture incorporating a Cross-Stage Partial Network (CSP) design to enhance feature extraction accuracy and speed. It introduced the concepts of bag-of-freebies and bag-of-specials to enhance model performance and achieved state-of-the-art accuracy on the COCO benchmark while maintaining real-time or near-real-time performance. YOLOv4 introduced the Spatial Attention Module (SAM) to enhance feature representation and detection accuracy. It adopted diverse training strategies, including DropBlock regularization, class-balanced loss, and focal loss, addressing generalization and challenging object detection cases. Advanced data augmentation techniques, such as mosaic data augmentation and mixup augmentation, improved the model's adaptability to complex scenes and variations. Weighted Residual Connections (WRC) were introduced to enhance gradient flow and training convergence by employing weighted connections. YOLOv4 also proposed an ensemble approach, leveraging multiple models to enhance detection accuracy.
YOLOv4 harnessed features like Weighted-Residual-Connections (WRC), Cross-Stage-Partial-connections (CSP), Cross mini-Batch Normalization (CmBN), Self-adversarial-training (SAT), Mish activation and more to achieve state-of-the-art results, reaching 43.5% AP (65.7% AP50) on the MS COCO dataset at 65 FPS on a Tesla V100 in real-time [21]. \begin{table} \begin{tabular}{|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|} \hline **TYPES OF DEBRIS** & **DIMENSIONS (cm)** & **VOLUME (cm\({}^{3}\))** & **DENSITY (g/cm\({}^{3}\))** \\ \hline Plastic beverage bottles & 6 x 6 x 23 & 829 & 1.4 \\ \hline Plastic bags & 10 x 7 x 4 & 280 & 1.2 \\ \hline Food containers & 16 x 12 x 7 & 1344 & 1.35 \\ \hline Glass beverage bottles & 7 x 7 x 26 & 1274 & 2.5 \\ \hline Fishing gear & 210 x 15 x 10 & 31500 & 7 \\ \hline Wood & 41 x 10 x 14 & 5740 & 6.5 \\ \hline Tyres & 19 x 10 x 40 & 7600 & 2.2 \\ \hline \end{tabular} \end{table} TABLE I: Volume and density estimations of Marine debris #### Iii-B4 YOLOv5 YOLOv5 was developed independently, inspired by the YOLO architecture. YOLOv5 is not endorsed by the original authors, but it gained prominence for enhancing speed, simplicity, and accuracy in object detection. Its streamlined design incorporated a single network head that yielded quicker training and inference. YOLOv5 introduced an efficient model with anchor-free prediction and a single-scale methodology. Instead of using predefined anchor boxes, YOLOv5 employed a CenterNet-style object detection approach directly. The integration of an EfficientNet backbone further improved feature extraction and attained competitive accuracy. It considerably reduced the model size and inference duration. Evaluated on the MS COCO test-dev 2017 dataset, YOLOv5x achieved an AP of 50.7% with an image size of 640 pixels. Using a batch size of 32, it can achieve a speed of 200 FPS on an NVIDIA V100. Using a larger input size of 1536 pixels, YOLOv5 achieves an AP of 55.8% [22]. #### Iii-C5 YOLOv6 YOLOv6 stands out as one of the most accurate object detectors, illustrated by YOLOv6 Nano achieving a 35.6% mAP on the COCO dataset and sustaining over 1200 FPS on an NVIDIA Tesla T4 Graphics Processing Unit (GPU) with a batch size of 32. The achievement is attributed to novel approaches like reparameterized backbones, model quantization, and diverse augmentations. Unlike its predecessors, YOLOv6 employs an anchor-free method for object detection, enhancing generalization and reducing post-processing time. The model architecture features a revamped reparameterized backbone and neck, utilizing Varifocal loss (VFL) for classification and Distribution Focal loss (DFL) for detection. YOLOv6's strategies, like prolonged training, quantization, and knowledge distillation, render it optimal for real-time industrial use, boasting 51% faster speed due to significantly fewer priors. The EfficientRep backbone, comprising RepBlock, RepConv, and CSPstackRep blocks, underpins YOLOv6 [23]. #### Iii-C6 YOLOv7 YOLOv7 is among the fastest and most accurate real-time object detection models for computer vision tasks. The model is important for distributed real-world computer vision applications. The integration of YOLOv7 with BlendMask is used to perform instance segmentation. For this purpose, the YOLOv7 object detection model was fine-tuned on the MS COCO instance segmentation dataset and trained for 30 epochs. YOLOv7 provides greatly improved real-time object detection accuracy without increasing the inference costs.
YOLOv7 surpasses all previous object detectors in terms of both speed and accuracy, ranging from 5 FPS to as much as 160 FPS. The YOLOv7 algorithm achieves the highest accuracy among all other real-time object detection models while achieving 30 FPS or higher using a GPU V100. Comparison with other real-time object detectors YOLOv7 achieves state-of-the-art (SOTA) performance. Source Compared to the best performing Cascade-Mask R-CNN models, YOLOv7 achieves 2% higher accuracy at a dramatically increased inference speed (509% faster). This is impressive because such Region-based Convolutional Neural Network(R-CNN) versions use multi-step architectures that previously achieved significantly higher detection accuracies than single-stage detector architectures. YOLOv7 outperforms YOLO, YOLOX, Scaled-YOLOv4, YOLOv5, End-to-End Object Detection with Transformers (DETR), Vision Transformers (ViT), Adapter-B, and many more object detection algorithms in speed and accuracy [24]. #### Iii-C7 YOLOv8 YOLOv8 is a versatile deep learning model introduced by Ultraltics. YOLOv8 offers capabilities in object detection, instance segmentation and image classification. Its training speed exceeds the conventional two-stage object detection models. A distinctive feature of YOLOv8 is its anchor-free design. It reduces the volume of box predictions and quickens Non-maximum Suppression (NMS) processing. YOLOv8 employs mosaic augmentation during training. Due to future drawbacks it is deactivated in the final ten epochs. YOLOv8 model can be operated via command line interface (CLI) or can be installed as a PIP package. It includes various integrations for tasks such as labeling, training, and deployment. YOLOv8 offers five scaled variants: YOLOv8n (nano), YOLOv8s (small), YOLOv8m (medium), YOLOv8l (large), and YOLOv8x (extra large). YOLOv8x achieved an AP of 53.9% at an image size of 640 pixels comparing to YOLOv5's 50.7% on the same input size. In the experiments conducted on the MS COCO dataset test-dev 2017 it achieved a speed of 280 FPS on an NVIDIA A100 and TensorRT [25]. ### _Drawbacks_ Accurate measurement of object size is necessary for estimating the mass in realtion with Object detection. The grid-based methodology of YOLO resulted in uncertain localization and object boundaries specifically for complex shaped debris. Debris overlapped within a single grid cell resulted in incorrect detection of merged debris as it has the possibility to accumulate and cluster in the marine realm. It detects using a single bounding box that may envelops both objects making it difficult to differentiate and classify individual objects. Due to its grid structure it struggled to detect tiny debris and not adequately capture fine details and classify them. The aspect ratios of the fixed bounding box did not adequately handle the objects with extreme aspect ratios. So in some cases YOLO's real-time focus prioritized speed over accuracy. ### _Instance Segmentation_ Instance segmentation is pivotal in the Intelligent Debris Mass Estimation Model for Autonomous Underwater Vehicles (AUVs) for accurately detecting, segmenting, and estimating debris in underwater settings. Unlike conventional object detection, this advanced computer vision technique not only identifies objects but also precisely outlines individual object boundaries. Instance segmentation algorithms such as YOLO v5, YOLOv7, and YOLOv8 effectively locate debris instances in captured images. 
These algorithms are trained on extensive annotated underwater datasets and use deep learning to recognize diverse debris types. In contrast to traditional approach, instance segmentation provides pixel-level classification which is crucial for precise mass estimation by delineating object shapes, including irregularities and overlaps. YOLO instance segmentation ensures precise object boundaries and spatial arrangements. Overlapping objects are handled adeptly by YOLO, distinguishing individual instances in the same area. Segmentation masks are created by applying binary masks and thresholds. They are refined through post-processing for improved accuracy [26]. #### Iii-A1 YOLOv5 The YOLOv5 instance segmentation models are renowned for remarkable speed and precision in real-time instance segmentation tasks. It comprises of an object detection head and the ProtoNet to generate prototype masks for segmentation, similar to an FCN with SiLU activations. Detection layers operate at three different scales, each yielding three anchor boxes and ProtoNet outputs prototype masks. The final convolutional detection heads have 351 channels differing from the standard 255. To secure confinement masks are clipped to bounding. YOLOv5 Extra Large (yolov5x-seg) achieves 41.4 mask mAP on A100 GPU with TensorRT running at 833 FPS with 1.2ms latency being the top-performing model. All models trained for 300 epochs on COCO dataset using NVIDIA A100 GPU. YOLOv5 exceeds ResNet101-backed models by perfoming faster. #### Iii-A2 YOLOv7 The YOLOv7 object detection model underwent fine-tuning using the MS COCO instance segmentation dataset and their training extended to 30 epochs. The combination of YOLOv7 and BlendMask enhances the instance segmentation capabilities of YOLOv7. This training process yielded cutting-edge real-time instance segmentation results. The incorporation of YOLOv7 into YOLO-Pose opens up opportunities for keypoint detection in Pose Estimation. The further refinement of YOLOv7-W6 model focused in detecting people in the MS COCO keypoint detection dataset and obtained state-of-the-art real-time pose estimation performance. #### Iii-A3 YOLOv8 The YOLOv8-Seg model is an evolution of the YOLOv8 object detection model that performs instance segmentation of the provided image. The CSPDarknet53 feature extractor is the foundation of the YOLOv8-Seg model at its core. In place of the traditional YOLO neck architecture this model adopted a novel C2f module. Two segmentation heads are responsible for predicting the instance segmentation masks for the given image. Similar to YOLOv8 this model contains five detection modules and a prediction layer. The YOLOv8-Seg model demonstrated state-of-the-art results while maintaining high speed and efficiency. ## IV Experiments and Results To measure the yolo model's performance, several sets of experiments were carried out on a trash-can dataset of images containing variety of objects. To ensure optimum efficiency for object detection and instance segmentation tasks the models were configured with clear-cut backbone architecture and hyperparameters. Certain hyperparameters like learning rate, batch size, and regularization parameters were carefully chosen to impact the model's learning. The dataset was cautiously annotated that displayed classes with precise bounding box coordinates and their respective class labels. On the basis of objects shapes and sizes in the dataset the anchor box sizes and aspect ratios were selected. 
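Because the pixel-level masks described above are what ultimately feed the size, and hence mass, estimate, it is worth sketching how a predicted binary instance mask is reduced to a physical area. The pixel-to-centimetre scale below is an assumed calibration value standing in for what would come from the camera geometry or LiDAR range, not a number taken from this work.

```python
import numpy as np

# Minimal sketch: turn a predicted binary instance mask into a physical
# cross-sectional area. The scale factor is an assumed calibration value.

def mask_area_cm2(mask: np.ndarray, cm_per_pixel: float) -> float:
    """mask: HxW array of 0/1 values; returns the masked area in cm^2."""
    pixel_count = int(mask.sum())
    return pixel_count * cm_per_pixel ** 2

if __name__ == "__main__":
    # Toy 6x6 mask standing in for a thresholded model prediction.
    mask = np.zeros((6, 6), dtype=np.uint8)
    mask[2:5, 1:4] = 1                      # a 3x3 blob of "debris" pixels
    print(mask_area_cm2(mask, cm_per_pixel=0.5), "cm^2")
```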
Data Augmentation Techniques such as cropping, flipping, rotation, and others were applied to images in each batch before inputting it to the model to enhance the model's generalization by introducing a broad range of variations. The dataset was iteratively processed multiple times using optimization algorithms, including stochastic gradient descent, to fine-tune the model's weights. To monitor the model's performance throughout training, the dataset's subset was set up for validation purpose. If the performance on the validation set began to downturn during the process, the training was halted early to avoid overfitting. After training the models' performance are evaluated using metrics - Precision (P), Recall (R), mean Average Precision [email protected], [email protected] :0.95 and F1 score. These metrics measure the model's efficiency to accurately identify and segment objects within the images refining their real-world relevance. ### _Dataset and Annotations_ The [27] TrashCan dataset, introduced by Hong et al.,was utilized for the study and annotated using Roboflow, as it is a robust platform for annotating datasets for various computer vision tasks. The dataset images were uploaded and annotation process started by creating masks, polygons and bounding boxes to label the objects. To verify whether the images are properly prepared, image preprocessing techniques like resizing and augmentation are applied. For annotating bounding boxes around the objects are drawn in object detection. For instance segmentation, annotation tools like masks or polygons is used for precisely tracing the edges of each individual object in the image. The mask for each instance is a binary image, where pixels within the mask correspond to the object, while pixels outside the mask depict the background. After the annotation process class labels are assigned to each annotated objects. This step determines the class of each object, helping the model in learning to distinguish between different object classes. The annotation accuracy is reviewed and verified by validation process. Once the process is complete, the annotated dataset is exported in compatible formats such as YOLO Darknet, YOLO V3 Keras, YOLO V4 Pytorch, YOLO V5 Pytorch,, YOLO V7 Pytorch and YOLOV8. The exported datatset is integrated into the YOLO Algorithms and the model attained 97.2% mAP. ### _Object Detection_ #### Iv-B1 YOLOv3 YOLOv3 demonstrated precision(P) and recall(R) scores of 0.9028 and 0.954 respectively. This dual success emphasized the model's ability to minimize false positives while capturing a substantial number of true positives. YOLOv3 showcased an impressive mAP score, attesting to its efficiency in detecting trash cans across diverse scenarios. At an IoU threshold of 0.5, the model achieved an [email protected] of 0.9632. Over a range of IoU thresholds from 0.5 to 0.95, it attained 0.7822, highlighting its adaptability to various object overlaps. The F1 score of 0.9276 underscored YOLOv3's equilibrium between precision and recall, encapsulating its comprehensive detection prowess. The results of object detection using YOLOv3 are displayed in Figure 2. #### Iv-B2 YOLOv5 YOLOv5 exhibited precision(P) and recall(R) results of 0.965 and 0.9602, respectively. This dual accomplishment highlighted the model's capacity to minimize false positives while effectively capturing a substantial count of true positives. 
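The F1 scores quoted throughout this section follow from the reported precision and recall through the usual harmonic mean, F1 = 2PR/(P+R). The short check below recomputes the YOLOv3 and YOLOv5 object-detection values from the numbers given in the text; the same relation applies to the remaining models.

```python
# Sanity check: recompute F1 from the precision/recall pairs reported
# in the text (values copied from this section, not re-measured).

def f1_score(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

reported = {
    "YOLOv3": (0.9028, 0.9540),   # text quotes F1 = 0.9276
    "YOLOv5": (0.9650, 0.9602),   # text quotes F1 = 0.9624
}

for model, (p, r) in reported.items():
    print(f"{model}: F1 = {f1_score(p, r):.4f}")
```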
YOLOv5 demonstrated an impressive mean Average Precision (mAP) affirming its effectiveness in identifying trash can across a wide array of scenarios. At an Intersection over Union (IoU) threshold of 0.5, the model attained an [email protected] of 0.9668. Across a range of IoU thresholds spanning from 0.5 to 0.95, it achieved a score of 0.804, showcasing its flexibility in accommodating various levels of object overlap. The F1 score of 0.9624 underscored YOLOv5's equilibrium between precision and recall, encapsulating its comprehensive detection capabilities. #### Iv-B3 YOLOv7 YOLOv7 displayed precision and recall scores of 0.9762 and 0.9626 correspondingly. This twofold achievement highlighted the model's aptitude for minimizing false positives while successfully capturing a significant number of true positives. YOLOv7 presented an exceptional mean Average Precision (mAP) score, affirming its efficacy in recognizing trash cans across diverse scenarios. With an Intersection over Union (IoU) threshold of 0.5, the model accomplished an [email protected] of 0.9708. Spanning a range of IoU thresholds from 0.5 to 0.95, it achieved a value of 0.808, underscoring its versatility in accommodating varying degrees of object overlap. The F1 score of 0.9692 highlighted YOLOv7's balance between precision and recall, encapsulating its all-encompassing detection prowess. The outcome of object detection with YOLOv7 is presented in Figure 3. #### Iv-B4 YOLOv8 The YOLOv8 model showcased impressive precision and recall metrics, with a precision (P) score of 0.913 and a recall (R) score of 0.9548. These values underscore the model's precision and its capacity to effectively reduce both false positives and false negatives. YOLOv8's performance was especially noteworthy in terms of the mean average precision, where it achieved an outstanding mAP score. To be specific, at an intersection over union (IoU) threshold of 0.5, the [email protected] score stood at 0.967. Furthermore, across a range of IoU thresholds from 0.5 to 0.95, the [email protected]:0.95 score reached an impressive 0.832, highlighting its consistent and dependable performance across varying levels of object overlap. The F1 score, a well-balanced metric considering both precision and recall, reached 0.9333. This accomplishment validates the model's ability to strike a harmonious balance between these two critical aspects, demonstrating its comprehensive and adaptable performance. The graphical representation of the object detection results obtained using YOLOv8 is showcased in Figure 4. The results of object detection using YOLOv8 are showcased in Figure 5. ### _Instance segmentation_ #### Iv-C1 YOLOv5 The YOLOv5 instance segmentation model exhibited impressive precision and recall outcomes. It attained a precision (P) score of 0.9762 and a recall (R) score of 0.9626, emphasizing its accuracy in minimizing false positives and false negatives. YOLOv5's performance excelled in mean average precision evaluation, achieving a notable mAP score. Notably, at an intersection over union (IoU) threshold of 0.5, the [email protected] score reached 0.9722. Moreover, across IoU thresholds spanning from 0.5 to 0.95, the [email protected]:0.95 demonstrated consistency by achieving 0.7574 underscoring its reliability across diverse object overlap scenarios. 
The F1 \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|} \hline **Class** & **Images** & **Labels** & **P** & **R** & **mAP@\_5** & **mAP@\_5:95** \\ \hline All & 117 & 152 & 0.903 & 0.954 & 0.963 & 0.783 \\ \hline Crab & 117 & 16 & 0.843 & 1 & 0.988 & 0.832 \\ \hline Fish & 117 & 37 & 0.983 & 1 & 0.995 & 0.814 \\ \hline Machines & 117 & 58 & 0.914 & 0.914 & 0.944 & 0.737 \\ \hline Trash & 117 & 41 & 0.871 & 0.902 & 0.926 & 0.748 \\ \hline \end{tabular} \end{table} TABLE II: YOLOv3 Object detection experiment outcome Fig. 4: YOLOv8 Object detection graph outcome Fig. 3: YOLOv7 Object detected outcome Fig. 2: YOLOv3 Object detection graph outcome score, a holistic metric that combines precision and recall reached 0.9692. This success further underscores the model's capacity to strike a harmonious balance between precision and recall, accentuating its adaptability and all-encompassing performance. Figure 6 illustrates the findings of Instance segmentation achieved through YOLOv5. #### V-C2 YOLOv7 The YOLOv7 instance segmentation model yielded impressive outcomes concerning precision and recall. It secured a precision (P) rating of 0.632 and a recall (R) score of 0.726, illustrating its precision in minimizing both false positives and false negatives. YOLOv7's prowess was most evident in its mean average precision (mAP) evaluation, where it achieved a remarkable mAP score. Specifically, at an intersection over union (IoU) threshold of 0.5, the [email protected] score reached 0.7218. Furthermore, across IoU thresholds spanning from 0.5 to 0.95, the [email protected]:0.95 consistently demonstrated reliability by attaining 0.517, spotlighting its steadfastness across various scenarios of object overlap. The F1 score, an all-encompassing metric that amalgamates precision and recall, achieved 0.6756. This achievement further \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|} \hline **Class** & **Images** & **Labels** & **P** & **R** & **[email protected]** & **[email protected]:95** \\ \hline All & 117 & 152 & 0.913 & 0.955 & 0.967 & 0.832 \\ \hline Crab & 117 & 16 & 0.874 & 1 & 0.978 & 0.82 \\ \hline Fish & 117 & 37 & 0.992 & 1 & 0.995 & 0.92 \\ \hline Machines & 117 & 58 & 0.837 & 0.914 & 0.947 & 0.713 \\ \hline Trash & 117 & 41 & 0.949 & 0.905 & 0.948 & 0.876 \\ \hline \end{tabular} \end{table} TABLE V: YOLOv8 Object detection experiment outcome Fig. 5: YOLOv8 Object detected outcome \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|} \hline **Class** & **Images** & **Labels** & **P** & **R** & **[email protected]** & **[email protected]:95** \\ \hline All & 117 & 152 & 0.976 & 0.963 & 0.971 & 0.808 \\ \hline Crab & 117 & 16 & 0.98 & 1 & 0.995 & 0.845 \\ \hline Fish & 117 & 37 & 0.997 & 1 & 0.995 & 0.841 \\ \hline Machines & 117 & 58 & 0.956 & 0.948 & 0.966 & 0.802 \\ \hline Trash & 117 & 41 & 0.972 & 0.902 & 0.927 & 0.744 \\ \hline \end{tabular} \end{table} TABLE IV: YOLOv7 Object detection experiment outcome Fig. 6: YOLOv5 Instance segmented outcome underscores the model's capacity to maintain a harmonious equilibrium between precision and recall, demonstrating its versatility and comprehensive performance. The depiction of YOLOv7 object detection results can be observed in Figure 7. #### Iv-B3 YOLOv8 : The YOLOv8 instance segmentation model delivered impressive precision and recall results. It achieved a precision (P) score of 0.908 and a recall (R) score of 0.915, emphasizing its precision in effectively minimizing both false positives and false negatives. 
YOLOv8's excellence was particularly evident in the mean average precision (mAP) assessment, where it secured a noteworthy mAP score. In detail, at an intersection over union (IoU) threshold of 0.5, the model attained an [email protected] score of 0.955. Furthermore, across a range of IoU thresholds spanning from 0.5 to 0.95, the [email protected]:0.95 consistently demonstrated its reliability by achieving 0.758, highlighting its consistent performance across diverse scenarios of object overlap. The F1 score, an all-encompassing metric that combines precision and recall, reached 0.9114. This accomplishment further underscores the model's ability to maintain a harmonious equilibrium between precision and recall, accentuating its adaptability and comprehensive performance. The representation of the results from YOLOv8 object detection is depicted in Figure 8. ## V Conclusion In this paper different versions of YOLO object detection and instance segmentation models performance and their respective Precision (P), Recall (R), mean Average Precision (mAP)@0.5, [email protected] :0.95 and F1 score are obtained. The comparative study of these models accuracy are done in which the instance segmentation models provided better accuracy in detecting marine debris making it a robust model to precisely estimate the mass of the marine debris. By using the Monte Carlo method, underwater LiDAR, photogrammetry, standard formulas for volume and density the mass of the marine debris is obtained. The Reynolds number, Archimedes principle, Sutherland's equation are calculated for obtaining the torque value to pull the load and it is given to the motor. These collected data are stored in the Google Cloud and the data obtained are presented in the website format. Figure 9 illustrates the output of the proposed website. \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|} \hline **Class** & **Images** & **Labels** & \multicolumn{4}{c|}{**Box**} & \multicolumn{4}{c|}{**Mask**} \\ \hline & Parameters & P & R & mAP @ 50 & mAP@50:95 & P & R & mAP@50 & mAP@50:95 \\ \hline All & 117 & 152 & 0.971 & 0.963 & 0.971 & 0.808 & 0.976 & 0.963 & 0.972 & 0.757 \\ \hline Crab & 117 & 16 & 0.995 & 1 & 0.995 & 0.845 & 0.98 & 1 & 0.995 & 0.691 \\ \hline Fish & 117 & 37 & 0.995 & 1 & 0.995 & 0.841 & 0.997 & 1 & 0.995 & 0.835 \\ \hline Machines & 117 & 58 & 0.966 & 0.948 & 0.966 & 0.802 & 0.956 & 0.948 & 0.966 & 0.726 \\ \hline Trash & 117 & 41 & 0.927 & 0.902 & 0.927 & 0.744 & 0.972 & 0.902 & 0.933 & 0.778 \\ \hline \end{tabular} \end{table} TABLE VI: YOLOv5 instance segmentation experiment outcome Fig. 8: YOLOv8 Instance segmented outcome Fig. 7: YOLOv7 Instance segmented outcome Fig. 9: Outcome of the Proposed Website
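The hydrodynamic quantities named in the conclusion, the Reynolds number, the Archimedes principle, and the torque handed to the pulling motor, can be written in their standard textbook forms as a rough sketch (Sutherland's temperature correction to the viscosity is omitted here). The numerical inputs below, such as towing speed, drum radius, and water properties, are purely illustrative assumptions and are not the values used in this work.

```python
# Standard textbook relations sketched with assumed, illustrative inputs.
G = 9.81                      # m/s^2
RHO_WATER = 1000.0            # kg/m^3, fresh-water value (assumed)
MU_WATER = 1.0e-3             # Pa*s, dynamic viscosity near 20 C (assumed)

def reynolds_number(speed, length, rho=RHO_WATER, mu=MU_WATER):
    """Re = rho * v * L / mu for flow past an object of size L."""
    return rho * speed * length / mu

def submerged_weight(volume_m3, rho_object):
    """Net downward force (N): object weight minus Archimedes buoyancy."""
    return (rho_object - RHO_WATER) * volume_m3 * G

def required_torque(pull_force, drum_radius):
    """Torque (N*m) a winch motor must supply: force times lever arm."""
    return pull_force * drum_radius

if __name__ == "__main__":
    vol = 7600e-6                               # tyre volume from Table 1, in m^3
    print("Re:", reynolds_number(speed=0.5, length=0.4))
    f = submerged_weight(vol, rho_object=2200)  # tyre density 2.2 g/cm^3
    print("Net load: %.1f N, torque: %.2f N*m" % (f, required_torque(f, 0.05)))
```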
2302.00010
Structure Formation Paradigm and Axion Quark Nugget dark matter model
We advocate an idea that ``non-baryonic" dark matter in form of nuggets made of standard model quarks and gluons (similar to the old idea of the Witten's strangelets) could play a crucial role in structure formation. The corresponding macroscopically large objects, which are called the axion quark nuggets (AQN) behave as {\it chameleons}: they do not interact with the surrounding material in dilute environment, but they become strongly interacting objects in sufficiently dense environment. The AQN model was invented long ago with a single motivation to explain the observed similarity $\Omega_{\rm DM}\sim \Omega_{\rm visible}$ between visible and DM components. This relation represents a very generic feature of this framework, not sensitive to any parameters of the construction. We argue that the strong visible-DM interaction may dramatically modify the conventional structure formation pattern at small scales to contribute to a resolution of a variety of interconnected problems (such as Core-Cusp problem, etc) which have been a matter of active research and debates in recent years. We also argue that the same visible-DM interaction at small scales is always accompanied by a broad band diffuse radiation. We speculate that the recently observed excesses of the UV emission by JWST at high redshifts and by GALEX in our own galaxy might be a direct manifestation of this AQN-induced radiation. We also speculate that the very same source of energy injection could contribute to the resolution of another long standing problem related to the Extragalactic Background Light (EBL) with known discrepancies in many frequency bands (from UV to optical, IR light and radio emissions).
Ariel Zhitnitsky
2023-01-31T19:00:00Z
http://arxiv.org/abs/2302.00010v1
# Structure Formation Paradigm and Axion Quark Nugget dark matter model ###### Abstract We advocate an idea that "non-baryonic" dark matter in form of nuggets made of standard model quarks and gluons (similar to the old idea of the Witten's strangelets) could play a crucial role in structure formation. The corresponding macroscopically large objects, which are called the axion quark nuggets (AQN) behave as _chameleons_: they do not interact with the surrounding material in dilute environment, but they become strongly interacting objects in sufficiently dense environment. The AQN model was invented long ago with a single motivation to explain the observed similarity \(\Omega_{\rm DM}\sim\Omega_{\rm visible}\) between visible and DM components. This relation represents a very generic feature of this framework, not sensitive to any parameters of the construction. We argue that the strong visible-DM interaction may dramatically modify the conventional structure formation pattern at small scales to contribute to a resolution of a variety of interconnected problems (such as Core-Cusp problem, etc) which have been a matter of active research and debates in recent years. We also argue that the same visible-DM interaction at small scales is always accompanied by a broad band diffuse radiation. We speculate that the recently observed excesses of the UV emission by JWST at high redshifts and by GALEX in our own galaxy might be a direct manifestation of this AQN-induced radiation. We also speculate that the very same source of energy injection could contribute to the resolution of another long standing problem related to the Extragalactic Background Light (EBL) with known discrepancies in many frequency bands (from UV to optical, IR light and radio emissions). keywords: dark matter, galaxy formation, axion + Footnote †: journal: Journal of LaTeX Templates ## 1 Introduction Observational precision data gathered during the last quarter of century have guided the development of the so called concordance cosmological model \(\Lambda\)CDM of a flat universe, \(\Omega\simeq 1\), wherein the visible hadronic matter represents only \(\Omega_{B}\simeq 0.05\) a tiny fraction of the total energy density, see recent review [1], and interesting historical comments [2]. Most of the matter component of the universe is thought to be stored in some unknown kind of cold dark matter, \(\Omega_{DM}\simeq 0.25\). The largest contribution \(\Omega_{\Lambda}\simeq 0.70\) to the total density is cosmological dark energy with negative pressure, another mystery which will not be discussed here. There is a fundamental difference between dark matter and ordinary matter (aside from the trivial difference dark vs. visible). Indeed, DM played a crucial role in the formation of the present structure in the universe. Without dark matter, the universe would have remained too uniform to form the galaxies. Ordinary matter could not produce fluctuations to create any significant structures because it remains tightly coupled to radiation, preventing it from clustering, until recent epochs. On the other hand, dark matter, which is not coupled to photons, would permit tiny fluctuations to grow for a long, long time before the ordinary matter decoupled from radiation. Then, the ordinary matter would be rapidly drawn to the dense clumps of dark matter and form the observed structure. 
The required material is called the Cold Dark Matter (CDM), and the obvious candidates are weakly interacting fundamental particles of any sort which are long-lived, cold and collision-less, see old review article [3] advocating this picture, and more recent review article [4] with extended list of updates. While this model works very well on large scales, a number of discrepancies have arisen between numerical simulations and observations on sub-galactic scales, see e.g. recent reviews [4; 5] and references on original papers therein. Such discrepancies have stimulated numerous alternative proposals including, e.g. Self-Interacting dark matter, Self-Annihilating dark matter, Decaying dark matter, and many others, see [4; 5] and references therein. There are many other cosmological/astrophysical observations which apparently also suggest that the standard assumption (that the dark matter made of absolutely stable and "practically non-interacting" fundamental particles) is oversimplified. Some of the observations that may be in conflict with the standard viewpoint are: \(\bullet\) Core-Cusp Problem. The disagreement of the observations with high resolution simulations is alleviated with time, but some questions still remain [4; 5]. \(\bullet\) Missing Satellite Problem. The number of dwarf galaxies in the Local group is smaller than predicted by collision-less cold dark matter (CCDM) simulations. This problem is also becoming less dramatic with time but some questions still remain [4; 5]. \(\bullet\) Too-Big-to-Fail Problem. This problem is also becoming less dramatic with time but some questions still remain [4; 5]. The problems mentioned above1 occur as a result of comparison of the N-body simulations with observations. Therefore, one could hope that these problems could be eventually resolved when better and more precise simulations (for example accounting for the baryon feedback processes, such as gas cooling, star formation, supernovae, active galactic nuclei) become available. However, there are some observations which are not based on comparison with N-body simulations, and which are very hard to understand within conventional CCDM paradigm. We list below some of such puzzling observations. This list is far from being complete, see footnote 1. Footnote 1: There are many more similar problems and very puzzling observations. We refer to the review papers [4; 5] on this matter. There are also different, but related observations which apparently inconsistent with conventional picture of the structure formation, and which will be mentioned in section 4. \(\bullet\) DM-Visible Matter correlation shown on Fig.1 in ref. [5] for the normal Spirals, dwarf Spirals, low surface brightness and the giant elliptical galaxies is very hard to interpret unless DM interacts with SM particles, see also earlier works [6; 7] where such relations had been originally discussed. \(\bullet\) Another manifestation of the DM-Visible Matter correlation is presented on Fig.3 in ref. [5] which shows that the density kernel \(K_{c}(r)\) defined as \[K_{c}(r)\equiv[\rho_{\rm DM}(r)\cdot\rho_{\rm visible}(r)]\sim{\rm const}\ \ \ \ \ \ \ \ \ \ {\rm at}\ \ \ \ \ r\simeq r_{0} \tag{1}\] is almost a constant for all Spiral galaxies when computed at a specific point \(r\simeq r_{0}\), which roughly coincides with observed size of the core. In fact, \(K_{c}(r_{0})\) varies by only factor of 2 or so when masses vary by several orders of magnitude. 
The density itself may also vary by several orders of magnitude for different galaxies of different masses and sizes. Both these observations unambiguously suggest that the DM and visible matter components somehow know about each other, and start to interact strongly at small scales \(r<r_{0}\), while DM behaves as conventional CCDM for large scales \(r>r_{0}\). The main goal of the present studies is to argue that the aforementioned discrepancies (and many other related problems referred to in footnote 1.) may be alleviated if dark matter is represented in form of the composite, nuclear density objects, the so-called Axion Quark Nuggets (AQN). The AQN DM model was suggested long ago in [8] with a single motivation to explain the observed similarity between the DM and the visible matter densities in the Universe, i.e. \(\Omega_{\rm DM}\sim\Omega_{\rm visible}\), which is very generic and model-independent consequence of the construction, see below. This model is very similar in spirit to the well-known Witten's quark nuggets [9] with several novel features which resolve the previous fundamental problems of the construction [9], to be discussed in next section. For now, in this Introduction we want to make only two important comments on the AQN dynamics which are relevant for the present work: 1. The AQNs behave as chameleon-like particles during the epoch of the structure formation with \(z\in(5-15)\) when re-ionization epoch starts and ionization fraction is expected to be large. This is because the AQN properties strongly depend on environment as we discuss in section 3. They have all features of ordinary CCDM in the very dilute environment. However, they become strongly interacting objects when the ordinary visible matter density becomes sufficiently high, which is indeed the case in central regions of the galaxies. The interaction with surrounding material becomes essential in this case. Precisely this feature explains the observed correlation (1) as we shall argue below. The same visible-DM interaction generates EM radiation in many frequency bands, including the UV emission. One could speculate that the recent JWST observations [10; 11; 12; 13], which apparently detect some excess of the UV radiation from red-shifted galaxies could be a direct manifestation of this UV radiation. 2. The very same interaction of the visible-DM components may lead to many observable effects at present epoch at \(z=0\) as well, including the excessive UV radiation. In fact, the AQNs may be responsible for explanation of the mysterious and puzzling observation [14; 15; 16] suggesting that there is a strong component of the diffuse far-ultraviolet (FUV) background which is very hard to explain by conventional physics in terms of the dust-scattered starlight. Indeed, the analysis carried out in [14; 15; 16] disproves this conventional picture by demonstrating that the observed FUV radiation is highly symmetric being independent of Galactic longitude in contrast with highly asymmetric localization of the brightest UV emitting stars. It has been suggested in [17] that the puzzling radiation could be originated from the AQN nuggets which indeed are uniformly distributed. 
It is important to emphasize that in this work with the main goal to study the AQN-induced effects which may impact the structure formation at \(z\in(5-15)\) when ionization fraction \(x_{e}\) is expected to be sufficiently large, we use the same basic parameters which had been previously used for variety of different applications in dramatically different environments, including ref. [17] with explanation of the observed puzzling FUV emission at present time. In both cases (present time and high redshifted epoch) the physics is the same and is determined by the coupling \([\rho_{\rm DM}(r)\cdot\rho_{\rm visible}(r)]\), which essentially enters the observed correlation (1). Our presentation is organized as follows. In the next section 2 we overview the basic elements of the AQN construction with main focus on the key features relevant for the present studies. Section 3 represents the main technical portion of this work where we argue that AQNs may dramatically modify the domain in parametrical space when cooling is sufficiently efficient and galaxies may form. In section 4.1 we explain how the observed correlation (1) could emerge within the AQN scenario. In section 4.2 we argue that the AQN-induced processes always accompany the UV radiation. We further speculate that the recent JWST observations [10; 11; 12; 13] could be a direct manifestation of this UV radiation. The JWST observations at large \(z\) are in fact very similar to mysterious FUV studies at present time [14; 15; 16] as reviewed in 4.3. Finally, we conclude with section 5 where we list a number of other mysterious observations in dramatically different environments (during BBN epoch, dark ages, and at present time: on the galactic, Solar and Earth scales) which could be explained within the same framework with the same set of parameters. We also suggest many new tests, which are based on qualitative, model-independent consequences of our proposal, and which can substantiate or refute this proposal. ## 2 The AQN dark matter model We overview the fundamental ideas of the AQN model in subsection 2.1, while in subsection 2.2 we list some specific features of the AQNs relevant for the present work. ### The basics The AQN construction in many respects is similar to the Witten's quark nuggets, see [9; 18; 19]. This type of DM is "cosmologically dark" as a result of smallness of the parameter relevant for cosmology, which is the cross-section-to-mass ratio of the DM particles. This numerically small ratio scales down many observable consequences of an otherwise strongly-interacting DM candidate in form of the AQN nuggets. There are several additional elements in the AQN model in comparison with the older well-known and well-studied theoretical constructions [9; 18; 19]. First, there is an additional stabilization factor for the nuggets provided by the axion domain walls which are copiously produced during the QCD transition. This additional element helps to alleviate a number of problems with the original Witten's model2. Secondly, the nuggets can be made of _matter_ as well as _antimatter_ during the QCD transition. Footnote 2: In particular, a first-order phase transition is not a required feature for nugget formation as the axion domain wall (with internal QCD substructure) plays the role of the squeezer. Another problem of the old construction [9; 18; 19] is that nuggets likely evaporate on the Hubble time-scale. 
For the AQN model, this is not the case because the vacuum-ground-state energies inside (the color-superconducting phase) and outside the nugget (the hadronic phase) are drastically different. Therefore, these two systems can coexist only in the presence of an external pressure, provided by the axion domain wall. This should be contrasted with the original model [9; 18; 19], which is assumed to be stable at zero external pressure. This difference has dramatic observational consequence- the Witten’s nugget will turn a neutron star (NS) into the quark star if it hits the NS. In contrast, a matter type AQN will not turn an entire star into a new quark phase because the quark matter in the AQNs is supported by external axion domain wall pressure, and therefore, can be extended only to relatively small distance \(\sim m_{a}^{-1}\), which is much shorter than the NS size. The presence of the antimatter nuggets in the AQN framework is an inevitable and the direct consequence of the \(\mathcal{CP}\) violating axion field which is present in the system during the QCD time. As a result of this feature the DM density, \(\Omega_{\rm DM}\), and the visible density, \(\Omega_{\rm visible}\), will automatically assume the same order of magnitude densities \(\Omega_{\rm DM}\sim\Omega_{\rm visible}\) irrespectively to the parameters of the model, such as the axion mass \(m_{a}\). This feature represents a generic property of the construction [8] as both component, the visible, and the dark are proportional to one and the same fundamental dimensional constant of the theory, the \(\Lambda_{\rm QCD}\). We refer to the original papers [20; 21; 22; 23] devoted to the specific questions related to the nugget's formation, generation of the baryon asymmetry, and survival pattern of the nuggets during the evolution in early Universe with its unfriendly environment. We also refer to a recent brief review article [24] which explains a number of subtle points on the formation mechanism, survival pattern of the AQNs during the early stages of the evolution, including the Cosmic Microwave Background (CMB) Big Bang Nucleosynthesis (BBN), and recombination epochs. We conclude this brief review subsection with Table 1 which summarizes the basic features and parameters of the AQNs. The parameter \(\kappa\) in Table 1 is introduced to account for the fact that not all matter striking the nugget will annihilate and not all of the energy released by annihilation will be thermalized in the nuggets. The ratio \(\Delta B/B\ll 1\) in the Table implies that only a small portion \(\Delta B\) of the total (anti)baryon charge \(B\) hidden in form of the AQNs get annihilated during big-bang nucleosynthesis (BBN), Cosmic Microwave Background (CMB), or post-recombination epochs (including the galaxy and star formation), while the dominant portion of the baryon charge survives until the present time. Independent analysis [28] and [27] also support our original claims as cited in the Table 1 that the anti-quark nuggets survive the BBN and CMB epochs. Finally, one should mention here that the AQN model with the same set of parameters may explain a number of other puzzling observations in dramatically different environments (during BBN epoch, dark ages, and at present time: on the galactic, Solar and Earth scales) as highlighted in concluding section 5. 
### When the AQNs start to interact in the galactic environment For our present work, however, the most relevant studies are related to the effects which may occur when the AQNs made of antimatter propagate in the environment with sufficiently large visible matter density \begin{table} \begin{tabular}{c c} \hline \hline Property & Typical value or feature \\ \hline AQN’s mass \(\,[M_{N}]\) & \(M_{N}\approx 16\,g\,(B/10^{25})\)[24] \\ baryon charge constraints \(\,[B]\) & \(B\geq 3\cdot 10^{24}\)[24] \\ annihilation cross section \(\,[\sigma]\) & \(\sigma\approx\kappa\pi R^{2}\simeq 1.5\cdot 10^{-9}\mathrm{cm}^{2}\cdot\kappa(R/2. 2\cdot 10^{-5}\mathrm{cm})^{2}\) \\ density of AQNs \(\,[n_{\rm AQN}]\) & \(n_{\rm AQN}\sim 0.3\cdot 10^{-25}\mathrm{cm}^{-3}(10^{25}/B)\)[24] \\ survival pattern during BBN & \(\Delta B/B\ll 1\)[25; 26; 27; 28] \\ survival pattern during CMB & \(\Delta B/B\ll 1\)[25; 27; 29] \\ survival pattern during post-recombination & \(\Delta B/B\ll 1\)[23] \\ \hline \hline \end{tabular} \end{table} Table 1: Basic properties of the AQNs adopted from [30]. \(\rho_{\rm visible}(r)\) entering (1). In this case the annihilation processes start and a large amount of energy will be injected to surrounding material, which may be manifested in many different ways. What is more important for the present studies is that the same annihilation processes will dramatically reduce the ionization portion of the material \(x_{e}\) during the galaxy formation at a redshift \(z\in(5-15)\) because the ions are much more likely to interact with the AQNs in comparison with neutral atoms due to the long-ranged Coulomb attraction. The related computations on the AQN-visible matter interaction originally have been carried out in [31] in application to the galactic neutral environment at present time with a typical density of surrounding baryons of order \(n_{\rm galaxy}\sim{\rm cm}^{-3}\) in the galaxy, similar to the density to be discussed in the present work at a redshift \(z\in(5-15)\). We review the computations [31] with few additional elements which must be implemented in case of propagation of the AQN when galaxies just starting to form. We draw the AQN-structure on Fig 1, where we use typical parameters from the Table 1. There are several distinct length scales of the problem: \(R\sim 10^{-5}\) cm represents the size of the nugget filled by quark matter with \(B\sim 10^{25}\) in CS phase. Much larger scale \(R_{\rm DW}\sim m_{a}^{-1}\) describes the axion DW surrounding the quark matter. The axion DW has the QCD substructure surrounding the quark matter and which has typical width of order \(R_{\rm QCD}\sim 10^{-13}\)cm. Finally, there is always electro-sphere which represents a very generic feature of quark nuggets, including the Witten's original construction. In case of antimatter-nuggets the electro-sphere comprises the positrons. The typical size of the electrosphere is order of \(10^{-8}\)cm, see below. When the AQN enters the region of the baryon density \(n\) the annihilation processes start and the internal temperature increases. A typical internal temperature \(T\) of the AQNs for very dilute galactic environment can be estimated from the condition that the radiative output must balance the flux of energy onto the nugget \[F_{\rm tot}(T)(4\pi R^{2})\approx\kappa\cdot(\pi R^{2})\cdot(2\ {\rm GeV})\cdot n\cdot v_{\rm AQN}, \tag{2}\] Figure 1: AQN-structure (not in scale), adopted from [32]. 
The dominant portion of the energy \(\sim 2\) GeV produced as a result of a single annihilation process inside the anti-nugget is released in form of the bremsstrahlung radiation with frequencies \(\omega\leq T\), see description and notations in the main text. where \(n\) represents the baryon number density of the surrounding material, and \(F_{\rm tot}(T)\) is total surface emissivity, see below. The left hand side accounts for the total energy radiation from the AQN's surface per unit time while the right hand side accounts for the rate of annihilation events when each successful annihilation event of a single baryon charge produces \(\sim 2m_{p}c^{2}\approx 2\) GeV energy. In Eq. (2) we assume that the nugget is characterized by the geometrical cross section \(\pi R^{2}\) when it propagates in environment with local baryon density \(n\) with velocity \(v_{\rm AQN}\sim 10^{-3}c\). The factor \(\kappa\) accounts for large theoretical uncertainties related to the annihilation processes of the (antimatter) AQN colliding with surrounding material. The total surface emissivity due to the bremsstrahlung radiation from electrosphere at temperature \(T\) has been computed in [31] and it is given by \[F_{\rm tot}\approx\frac{16}{3}\frac{T^{4}\alpha^{5/2}}{\pi}\sqrt[4]{\frac{T}{m }}\,, \tag{3}\] where \(\alpha\approx 1/137\) is the fine structure constant, \(m=511\,\)keV is the mass of electron, and \(T\) is the internal temperature of the AQN. One should emphasize that the emission from the electrosphere is not thermal, and the spectrum is dramatically different from blackbody radiation. From (2) one can estimate a typical internal nugget's temperature when density \(n\) assumes the typical values \(n\sim{\rm cm}^{-3}\) relevant for this work: \[T\sim 0.4\ {\rm eV}\cdot\left(\frac{n}{{\rm cm}^{-3}}\right)^{\frac{4}{17}} \cdot\left(\frac{v_{\rm AQN}}{10^{-3}c}\right)^{\frac{4}{17}}\cdot\kappa^{ \frac{4}{17}}. \tag{4}\] It strongly depends on unknown parameter \(\kappa\) as mentioned above. In case which is relevant for our studies when the surrounding material is a highly ionized plasma the parameter \(\kappa\) effectively gets much larger as the AQN (being negatively charged, see below) attracts more positively charged ions from surrounding material. This attraction consequently effectively increases the cross section and the rate of annihilation, eventually resulting in a larger value of \(T\). Another feature which is relevant for our present studies is the ionization properties of the AQN. Ionization, as usual, occurs in a system as a result of the high internal temperature \(T\). In our case of the AQN characterized by temperature (4) a large number of weakly bound positrons \(\sim Q\) from the electrosphere get excited and can easily leave the system. The corresponding parameter \(Q\) can be estimated as follows: \[Q\approx 4\pi R^{2}\int_{0}^{\infty}n(z,T){\rm d}z\sim\frac{4\pi R^{2}}{\sqrt{2 \pi\alpha}}\left(mT\right)\left(\frac{T}{m}\right)^{\frac{4}{4}}, \tag{5}\] where \(n(z,T)\) is the local density of positrons at distance \(z\) from the nugget's surface, which has been computed in the mean field approximation in [31] and has the following form \[n(z,T)=\frac{T}{2\pi\alpha}\frac{1}{(z+\bar{z})^{2}},\ \ \ \ \bar{z}^{-1}\approx\sqrt{2\pi\alpha}\cdot m \cdot\left(\frac{T}{m}\right)^{\frac{1}{4}}, \tag{6}\] where \(\bar{z}\) is the integration constant is chosen to match the Boltzmann regime at sufficiently large \(z\gg\bar{z}\). 
Numerical studies [33] support the approximate analytical expression (6). Numerically, the number of weakly bound positrons can be estimated from (5) as follows: \[Q\approx 1.5\cdot 10^{6}\left(\frac{T}{{\rm eV}}\right)^{\frac{5}{4}}\left( \frac{R}{2.25\cdot 10^{-5}{\rm cm}}\right)^{2}. \tag{7}\] In what follows we assume that, to first order, that the finite portion of positrons \(\sim Q\) leave the system as a result of the complicated processes mentioned above, in which case the AQN as a system acquires a negative electric charge \(\sim-|e|Q\) and get partially ionized as a macroscopically large object of mass \(M\simeq m_{p}B\). The ratio \(eQ/M\sim 10^{-19}e/m_{p}\) characterizing this object is very tiny. However, the charge \(Q\) itself is sufficiently large being capable to capture (with consequent possibility of annihilation) the positively charged protons from surrounding ionized plasma. The corresponding capture radius \(R_{\rm cap}(T)\) can be estimated from the condition that the potential energy of the attraction (being negative) is the same order of magnitude as kinetic energy of the protons from highly ionized gas with \(x_{e}=1\) which is characterized by external temperature \(T_{\rm gas}\), i.e. \[\frac{\alpha Q(T)}{R_{\rm cap}(T)}\sim\frac{m_{p}v^{2}}{2}\sim T_{\rm gas}\ \ \Rightarrow\ \ \ R_{\rm cap}(T)\simeq 0.2\ {\rm cm}\left(\frac{T}{{\rm eV}}\right)^{\frac{5}{4}}\cdot\left(\frac{{\rm eV }}{T_{\rm gas}}\right), \tag{8}\] where \(Q\) is estimated by (5), (7). One should emphasize that \(R_{\rm cap}(T)\) depends on both temperatures, the internal \(T\) through the charge \(Q(T)\) as given by (5), (7) and external gas temperature \(T_{\rm gas}\) which is essentially determined by the typical velocities of particles in plasma and can be identified with virial temperature of particles in galaxy. Important point here is that \(R_{\rm cap}(T)\gg R\) such that effective cross section being \(\pi R_{\rm cap}^{2}\) is dramatically larger than geometric size \(\pi R^{2}\) entering (2) which would be the case if gas is represented by neutral atoms. In our relation (8) we also neglected the Debye screening effect as \(\lambda_{D}\gg R_{\rm cap}\) for all relevant values of parameters, where \(\lambda_{D}\) is defined as usual \[\lambda_{D}\approx\sqrt{\frac{T_{\rm gas}}{4\pi\alpha n}}\sim 0.7\cdot 10^{3}\ {\rm cm}\left(\frac{T_{\rm gas}}{{\rm eV}}\right)^{1/2}. \tag{9}\] Precisely this feature of ionization of the AQN characterized by electric charge \(Q(T)\) dramatically enhances the visible-DM interaction in highly ionized environment when cosmologically relevant ratio (\(\sigma/M\)) from table 1 becomes large. We illustrate this enhancement with the following estimates of this ratio for the neutral (\(x_{e}=0\)) and highly ionized (\(x_{e}=1\)) environments: \[\frac{\sigma(x_{e}=0)}{M_{N}}\sim\frac{\pi R^{2}}{M_{N}}\sim 10^{-10}\frac{{ \rm cm}^{2}}{{\rm g}},\qquad\qquad\frac{\sigma(x_{e}=1)}{M_{N}}\sim\frac{\pi R_ {\rm cap}^{2}}{M_{N}}\sim\frac{{\rm cm}^{2}}{{\rm g}}\left(\frac{T}{10\ {\rm eV}}\right)^{\frac{5}{2}}\cdot\left(\frac{{\rm eV}}{T_{\rm gas}} \right)^{2}. \tag{10}\] We conclude this brief overview section on previous works on the AQNs with the following remark: all the parameters of the model as reviewed above had been applied previously for dramatically different studies, in very different circumstances as highlighted in concluding section 5. 
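For orientation, the chain of estimates (2)-(4), (7), (8) and (10) can be reproduced numerically with a few lines of code. The sketch below is a direct transcription of those formulae with \(\kappa=1\) and the illustrative environment \(n\sim{\rm cm^{-3}}\), \(v_{\rm AQN}\sim 10^{-3}c\), \(T_{\rm gas}\sim{\rm eV}\) quoted in the text; it is meant only as a consistency check and is not part of the original analysis [31].

```python
import numpy as np
from scipy.optimize import brentq

# Minimal numerical sketch of Eqs. (2)-(4), (7), (8) and (10): the internal
# temperature T follows from balancing the surface emissivity (3) against
# the annihilation energy flux, and then feeds the ionized charge Q, the
# capture radius R_cap and the cross-section-to-mass ratio.  kappa = 1 and
# the environment values below are illustrative benchmarks only.

ALPHA = 1.0 / 137.0
M_E   = 511.0e3                         # electron mass, eV
HBAR  = 6.582e-16                       # eV * s
HBARC = 1.973e-5                        # eV * cm
EV4_TO_FLUX = 1.0 / (HBARC**2 * HBAR)   # converts eV^4 to eV cm^-2 s^-1
M_AQN_GRAMS = 16.0                      # nugget mass for B = 10^25

def f_tot(T_eV):
    """Surface emissivity of Eq. (3), in eV cm^-2 s^-1."""
    return (16.0 / 3.0) * (T_eV**4 * ALPHA**2.5 / np.pi) \
           * (T_eV / M_E)**0.25 * EV4_TO_FLUX

def internal_T(n_cm3, v_cm_s=3.0e7, kappa=1.0):
    """Solve the energy balance of Eq. (2) for the internal temperature."""
    rhs = kappa * 2.0e9 * n_cm3 * v_cm_s / 4.0     # eV cm^-2 s^-1
    return brentq(lambda T: f_tot(T) - rhs, 1e-4, 1e4)

def charge_Q(T_eV):
    """Number of ionized positrons, Eq. (7)."""
    return 1.5e6 * T_eV**1.25

def r_capture(T_eV, T_gas_eV):
    """Capture radius of Eq. (8) in cm, from alpha*Q/R_cap ~ T_gas."""
    return ALPHA * charge_Q(T_eV) / T_gas_eV * HBARC

if __name__ == "__main__":
    T = internal_T(n_cm3=1.0)                  # ~0.4 eV, cf. Eq. (4)
    rc = r_capture(T_eV=10.0, T_gas_eV=1.0)    # ~ few cm, cf. Eq. (8)
    print(f"T ~ {T:.2f} eV,  Q ~ {charge_Q(T):.1e}")
    print(f"R_cap ~ {rc:.1f} cm,  sigma/M ~ {np.pi * rc**2 / M_AQN_GRAMS:.1f} cm^2/g")
```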
We do not modify any fundamental parameters of this model by applying this new DM scenario in next section 3 to the problem on structure formation. In particular we shall argue that some puzzling observations such as (1) could be a direct consequence of such visible- DM interaction (10), when the AQNs indeed behave as _chameleon_ like particles. To be more precise, the effective cross section (10) is highly sensitive to the density of the surrounding environment \(n\), its temperature \(T_{\rm gas}\) and its ionization features \(x_{e}\). The corresponding parameters affect the strength of visible-DM interaction and its key element, \(R_{\rm cap}(T)\), which itself depends on the environment according to (4) and (8). ## 3 DM-visible matter coupling Our basic claim of this section is that the structure formation may be dramatically affected by the DM in form of the AQNs which strongly interact with the gas of particles. The study of the structure formation dynamics is obviously a prerogative of the numerical simulations, which is the main technical tool for quantitive analysis. The goal of this work is less ambitious as we want to demonstrate in a pure qualitative analytical way few important characteristics (such as portion of the ionization of the gas, \(x_{e}\)) which may be dramatically affected by the interaction of the AQNs with surrounding plasma during the structure formation. To demonstrate the importance of these key novel qualitative effects is very instructive to compare the analytical formula of our studies with corresponding conventional expressions which are normally used in numerical simulations. Therefore, in what follows we use analytical formulae from the textbook [34] as the benchmark to be compared with corresponding AQN-based expressions. We shall demonstrate that the DM-visible matter coupling will play the dominant role in some circumstances. The corresponding effects may dramatically modify the conventional picture of the structure formation. Our analytical studies obviously do not replace a full scale numerical simulations. However, the comparison with conventional formulae [34] obviously show the basic trend which may crucially modify some elements of the standard picture on structure formation. These modifications have in fact, many common consequences and manifestations with previously introduced ad hoc models which were coined as Self-Interacting dark matter, Self-Annihilating dark matter, Decaying dark matter etc. Our comment here is that the AQN model was invented (in contrast to all ad hoc models mentioned above) for dramatically different purposes as overviewed in Sect. 2.1 with drastically different motivation, not related in anyway to the structure formation problems being the main topic of this work. Nevertheless, there are many model-independent consequences of the AQN construction which may dramatically affect the dynamics at small scales as we argue below in section 3.2. ### Galaxy Formation. The basics picture. Notations. The conventional picture of the structure formation assumes that CCDM particles have undergone violent relaxation such that the asymptotic density distribution \(\rho_{\rm DM}(r)\) can be approximated by an isothermal sphere [34]. If effects of baryon's cooling are ignored the dynamical evolution of the baryons will be similar to that of DM particles. However, the collapse of baryons develops shocks and the gas get reheated to a temperature \(T_{\rm gas}\) at which the pressure balance can prevent further collapse. 
The corresponding temperature can be estimated as follows [34]: \[\frac{3\rho_{\rm gas}T_{\rm gas}}{2m_{p}}\simeq\frac{\rho_{\rm gas}v^{2}}{2} \qquad\Rightarrow\qquad T_{\rm gas}\simeq\frac{m_{p}v^{2}}{3},\quad{\rm where} \quad\frac{v^{2}}{r}\simeq\frac{GM(r)}{r^{2}}, \tag{11}\] where for an order of magnitude estimates and for simplification we assumed that the gas is entirely consist the hydrogen, though the He fraction could be relatively large. The velocity \(v\) entering (11) is the circular velocity not to be confused with mean-square velocity \(\sigma_{v}^{2}=1/2v^{2}\). The \(T_{\rm gas}\) as defined by (11) is essentially the virial temperature \(T_{\rm vir}\), but we prefer to use notation \(T_{\rm gas}\) as our goal is to study the microscopical interaction of this gas with the AQNs in what follows. As the temperature \(T_{\rm gas}\) becomes sufficiently high the cooling processes must be taken into account. If the cooling processes are sufficiently effective the collapse may proceed further to form more tightly bound object. The corresponding evolution is entirely determined by the relative values of the dynamical time scale \(t_{\rm dyn}\) and the cooling time scale \(t_{\rm cool}\) defined as follows [34]: \[t_{\rm dyn}\simeq\frac{\pi}{2}\left(\frac{R^{3}}{2GM}\right)^{\frac{1}{2}} \simeq 5\cdot 10^{7}{\rm yr}\left(\frac{{\rm GeV}\cdot{\rm cm}^{-3}}{\rho_{ \rm total}}\right)^{\frac{1}{2}},\quad t_{\rm cool}\equiv\frac{E}{E}\simeq \frac{3}{2}\frac{n_{\rm gas}T_{\rm gas}}{\Lambda(T_{\rm gas})},\quad\rho_{ \rm total}\equiv(\rho_{\rm gas}+\rho_{\rm DM}) \tag{12}\] where \(n_{\rm gas}\) is the number density of the material such that \(\rho_{\rm gas}\equiv m_{p}n_{\rm gas}\), while \(\Lambda(T_{\rm gas})\) is the cooling rate which has the meaning of energy emitted from unit volume per unit time with dimensionality [\({\rm erg}\cdot{\rm cm}^{-3}\cdot{\rm s}^{-1}\)]. In all these estimates, including (12) we simplify things and ignore the He portion of the gas as we already mentioned above. The cooling rate is dominated by two processes: the energy loss due to the bremsstrahlung [34] \[\epsilon_{\rm brem}\simeq 1.4\cdot 10^{-27}\frac{{\rm erg}}{{\rm cm}^{3}\cdot{ \rm s}}\left(\frac{T_{\rm gas}}{1K}\right)^{1/2}\left(\frac{n_{\rm gas}}{{\rm cm }^{-3}}\right)^{2}x_{e}^{2}, \tag{13}\] and the loss due to the atomic collisions [34] \[\epsilon_{\rm coll}\simeq 7.5\cdot 10^{-19}\frac{{\rm erg}}{{\rm cm}^{3}\cdot{ \rm s}}\left(\frac{n_{\rm gas}}{{\rm cm}^{-3}}\right)^{2}x_{e}(1-x_{e})\exp \left(-\frac{E_{0}}{T_{\rm gas}}\right), \tag{14}\] where \(x_{e}\) is the ionization fraction and \(E_{0}=13.6~{}{\rm eV}\). Both rates \(\epsilon_{\rm brem}\) and \(\epsilon_{\rm coll}\) dramatically depend on ionization portion of the gas, the \(x_{e}(T_{\rm gas})\) which itself is function of the gas temperature \(T_{\rm gas}\). The corresponding parameter \(x_{e}(T_{\rm gas})\) is determined by the relative strength of two competing processes: the collisional ionization which is characterized by the time scale \(t_{i}\) and the recombination with time scale \(t_{r}\). The corresponding time scales can be estimated as follows [34]: \[t_{i} \simeq \frac{2.2\cdot 10^{14}{\rm s}}{(1-x_{e})}\left(\frac{10^{5}K}{T_{ \rm gas}}\right)^{1/2}\exp\left(\frac{E_{0}}{T_{\rm gas}}\right), \tag{15}\] \[t_{r} \simeq \frac{5.8\cdot 10^{19}{\rm s}}{(x_{e})}\left(\frac{T_{\rm gas}}{10^{5 }K}\right)^{2/3}. 
\tag{16}\] Equating (15) and (16) determines the ionization fraction \(x_{e}(T_{\rm gas})\) as a function of \(T_{\rm gas}\), \[x_{e}(T_{\rm gas})\simeq\left[1+2.8\cdot 10^{-6}\left(\frac{10^{5}K}{T_{\rm gas }}\right)^{7/6}\exp\left(\frac{E_{0}}{T_{\rm gas}}\right)\right]^{-1}. \tag{17}\] This expression for \(x_{e}(T_{\rm gas})\) can be substituted to formulae for \(\epsilon_{\rm brem}\) and \(\epsilon_{\rm coll}\) as given by (13) and (14) correspondingly to estimate the key parameter, the cooling rate \(\Lambda(T_{\rm gas})\) entering formula (12). Assuming that \(x_{e}(T_{\rm gas})\simeq 1\) one can expand (17) to arrive to the following simplified expression for the cooling rate which is not valid for low temperatures when \(x_{e}\) strongly deviates from unity and expansion is not justified, \[\Lambda(T_{\rm gas})\simeq 10^{-24}\frac{\rm erg}{{\rm cm}^{3}\cdot{\rm s}} \left[0.44\left(\frac{T_{\rm gas}}{10^{5}K}\right)^{1/2}+2.1\left(\frac{T_{ \rm gas}}{10^{5}K}\right)^{-7/6}\right],\quad(\epsilon_{\rm coll}+\epsilon_{ \rm brem})\equiv\left(\frac{n_{\rm gas}}{{\rm cm}^{-3}}\right)^{2}\Lambda(T_{ \rm gas}). \tag{18}\] In this formula the first term in the brackets is due to bremsstrahlung radiation \(\epsilon_{\rm brem}\) while the second term is the result of atomic collisions \(\epsilon_{\rm coll}\). The cooling time scale \(t_{\rm cool}\) can be estimated from (12) as follows \[t_{\rm cool}\equiv\frac{E}{\dot{E}}\simeq 2.5\cdot 10^{6}{\rm yr}\left(\frac{{ \rm cm}^{-3}}{n_{\rm gas}}\right)\left[\left(\frac{T_{\rm gas}}{10^{5}K} \right)^{-1/2}+4.8\left(\frac{T_{\rm gas}}{10^{5}K}\right)^{-13/6}\right]^{-1 }, \tag{19}\] where we literally use expression for \(\Lambda(T_{\rm gas})\) as given by (18). The numerical value in the brackets for the second term (which describes the cooling due to the atomic collisions \(\epsilon_{\rm coll}\)) slightly deviates from the corresponding formula from the textbook [34]. This is because the expression in [34] was modified to better fit with the numerical simulation results. We opted to keep the original expressions (18) as our main goal is the comparison of the AQN-induced mechanism with conventional mechanism to pinpoint the dramatic qualitative deviations from the standard cooling processes as given by (19). The estimate for the cooling time scale \(t_{\rm cool}\) allows us to compare it with the dynamical time scale \(t_{\rm dyn}\) as given by (12), where matter density should be understood as the _total_ matter density of the material, including the DM portion. It is convenient to define the ratio \(\mathcal{R}\) as follows: \[\mathcal{R}\equiv\frac{t_{\rm cool}(\rho_{\rm gas})}{t_{\rm dyn}(\rho_{\rm total })}\leq 1,\qquad\rho_{\rm total}\equiv(\rho_{\rm gas}+\rho_{\rm DM}), \tag{20}\] when \(t_{\rm cool}(\rho_{\rm gas})\) depends exclusively from hadronic gas component \(\rho_{\rm gas}\) in CCDM treatment, while \(t_{\rm dyn}(\rho_{\rm total})\) depends on the total density of the material, including the DM component. If \(\mathcal{R}\) is smaller than unity the cloud will cool rapidly, and gas will undergo (almost) a free fall collapse. Fragmentation into smaller units occurs because smaller mass scales will become gravitationally unstable. The key parameter which determines the parametrical space when the clouds will continue to collapse defines the region when the galaxies (and stars) may form. Precisely this parameter \(\mathcal{R}\) governs the evolution of the system. 
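To make these benchmark numbers easy to reproduce, the following short Python sketch (ours, not taken from [34]) evaluates the conventional ionization fraction (17), the cooling time (19), the dynamical time (12) and the resulting ratio \(\mathcal{R}\) of (20) for a few illustrative temperatures. The Boltzmann factor \(\exp(E_{0}/T_{\rm gas})\) is read as \(\exp(E_{0}/k_{B}T_{\rm gas})\), and the benchmark densities \(n_{\rm gas}=1~{\rm cm}^{-3}\), \(\rho_{\rm DM}=5~{\rm GeV\cdot cm^{-3}}\) are illustrative choices, not unique values.

```python
import numpy as np

K_B = 8.617e-5   # Boltzmann constant in eV/K
E0  = 13.6       # hydrogen ionization energy in eV

def x_e_conventional(T_gas):
    """Ionization fraction of Eq. (17): collisional ionization vs recombination."""
    return 1.0 / (1.0 + 2.8e-6 * (1e5 / T_gas)**(7/6) * np.exp(E0 / (K_B * T_gas)))

def t_cool_yr(T_gas, n_gas=1.0):
    """Conventional cooling time of Eq. (19), in years (n_gas in cm^-3)."""
    t5 = T_gas / 1e5
    return 2.5e6 / n_gas / (t5**(-0.5) + 4.8 * t5**(-13/6))

def t_dyn_yr(rho_total):
    """Dynamical time of Eq. (12), in years (rho_total in GeV/cm^3)."""
    return 5e7 / np.sqrt(rho_total)

# Benchmark cloud: n_gas = 1 cm^-3 (rho_gas ~ 1 GeV/cm^3) and rho_DM = 5 GeV/cm^3
for T in (1e5, 3e5, 1e6):
    R = t_cool_yr(T, n_gas=1.0) / t_dyn_yr(1.0 + 5.0)
    print(f"T_gas = {T:.0e} K:  x_e = {x_e_conventional(T):.6f},  R = {R:.3f}")
```

For these illustrative parameters \(\mathcal{R}<1\) throughout the range \(10^{5}K\lesssim T_{\rm gas}\lesssim 10^{6}K\), i.e. the collapse criterion (20) is met, and \(x_{e}\) stays very close to unity, which is the regime where the expansion leading to (18) is justified.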
One can estimate the typical masses, typical sizes, and the typical temperatures \(T_{\rm gas}\) of the clouds for this collapse to happen based on analysis of domain where \(\mathcal{R}\leq 1\). The corresponding studies are obviously a prerogative of the numerical N-body simulations, far outside of the scope of the present work, though some qualitative estimates can be also made [34]. For the purposes of the present work we use the condition (20) as the boundary in parametrical space which specifies the domain when the galaxies may form. We will specifically pinpoint some key modifications of this parameter \(\mathcal{R}\) when the AQNs are present in the system and dramatically modify this domain. Our crucial observation is that \(t_{\rm cool}\) will depend on both: the visible \(\rho_{\rm gas}\) and dark matter \(\rho_{\rm DM}\) components, which qualitatively modifies condition (20). The corresponding analysis represents the topic of the next section 3.2. One should note that all parameters and formulae we use in this section as benchmarks are taken from the textbook [34]. They are obviously outdated in comparison with the standard parameters being used in more recent papers. Nevertheless, we opted to use the parameters and numerics literally from [34] to pinpoint the key differences (in comparison with the conventional treatment of the problem) which occur when the AQN-induced interaction is taken into account. It is very instructive to understand a precise way how the new physics enters and modifies the standard picture in a qualitative parametrical way. ### Galaxy Formation. The AQN-induced modifications. As we discussed in previous section the cooling rate is very sensitive to the ionization fraction \(x_{e}(T_{\rm gas})\) as given by (13) and (14). In conventional picture this factor \(x_{e}(T_{\rm gas})\) is determined by two competing processes as represented by expression (17). The main goal of this section is to demonstrate that this estimate for \(x_{e}(T_{\rm gas})\) is dramatically modified in the presence of the AQNs. As a result the expression for the cooling rate \(t_{\rm cool}\) as given by (19) is also changed. These changes lead to drastic modification of the domain governed by parameter \(\mathcal{R}\leq 1\) when the structure formation can be formed. One of the main qualitative consequences of these modifications is emergence of the relations such as (1) reflecting a strong visible-DM coupling, which was one of the motivations for the present studies. The main physics process which leads to such dramatic variations can be explained as follows. The AQNs start to interact with surrounding material, mostly protons when the density of the gas becomes sufficiently high, around (1 cm\({}^{-3}\)). The corresponding interaction is strongly enhanced in the ionized plasma due to the long-ranged Coulomb attraction between (negatively charged) AQNs and protons. This enhancement of the visible- DM interaction may dramatically _decrease_ the ionization fraction \(x_{e}\) of plasma due to the capturing (with consequent annihilation) of the protons from plasma and subsequent emission of positrons from AQN's electrosphere. These positrons will eventually annihilate with free electrons from the plasma. The corresponding time scale \(t_{\rm AQN}\) for this process could be dramatically smaller than the recombination time scale \(t_{r}\) estimated by (16). 
As a result the AQN-proton annihilation becomes the dominant processes which dramatically reduces the ionization fraction \(x_{e}\) of plasma. As explained above the ionization fraction directly affects the domain of the parametrical space where \(\mathcal{R}\leq 1\) which describes the region where the galaxies may form. Now we proceed with corresponding estimates. First, we estimate the number of protons being captured by AQNs per unit volume per unit time, \[\frac{dn}{dtdV}\approx\pi R_{\rm cap}^{2}(T)\cdot n_{p}\cdot n_{\rm AQN}\cdot v _{\rm AQN}, \tag{21}\] where \(R_{\rm cap}(T)\) is the capture radius determined by condition (8) when the protons from the surrounding plasma are captured by the negatively charged AQNs and will be eventually annihilated inside the nugget's core. The formula (21) explicitly shows the key role of the ionized media as the cross section for annihilation of the neutral atoms is dramatically smaller as \(R\ll R_{\rm cap}(T)\). By dividing expression (21) by gas number density \(n\) we arrive to estimation for the frequency of the capturing (with consequent annihilation) of the protons. It is more convenient to represent this as time scale \(t_{\rm AQN}\) for capturing of a proton from plasma by AQNs: \[t_{\rm AQN}\approx\left[\pi R_{\rm cap}^{2}(T)\cdot n_{\rm AQN}\cdot v_{\rm AQN }\cdot x_{e}\right]^{-1},\quad n_{p}\equiv x_{e}n_{\rm gas},\quad n_{\rm AQN }(r)\approx\frac{\rho_{\rm DM}(r)}{m_{p}\langle B\rangle}, \tag{22}\] where \(v_{\rm AQN}\approx v\) is the same order of magnitude as the circular velocity as both are determined by pure gravitational forces (11). The time scale \(t_{\rm AQN}\) plays the key role in our discussions which follow because it competes with the recombination time scale (16) when both processes decrease the ionization fraction \(x_{e}\). Numerically the time scale \(t_{\rm AQN}\) can be estimated as follows: \[t_{\rm AQN}\approx\frac{0.5\cdot 10^{19}{\rm s}}{(x_{e})}\left(\frac{T_{ \rm gas}}{10^{5}K}\right)^{3/2}\cdot\left(\frac{T}{10^{5}K}\right)^{-5/2}\cdot \left(\frac{\rho_{\rm DM}(r)}{{\rm GeV}\cdot{\rm cm}^{-3}}\right)^{-1}, \tag{23}\] which is almost one order of magnitude faster than the recombination time scale \(t_{r}\) as estimated by (16) for typical parameters \(T_{\rm gas}\sim 10^{5}K\). One should comment here that the external temperature of the plasma \(T_{\rm gas}\) and internal temperature \(T\) of the AQN's electrosphere are different parameters of the system, though in the galactic environment they may assume similar numerical values. One should also note here that the recombination process and the capturing of the proton by AQNs with its consequent annihilation inside the nugget work in the same direction by decreasing3 the ionization fraction \(x_{e}\). Footnote 3: The non-relativistic positrons emitted from AQNs with typical energies of order \(\sim T\) will be quickly annihilated by surrounding electrons from plasma as the cross section of the annihilation of slow electrons and positrons is of order \(\pi a_{B}^{2}\sim 10^{-16}{\rm cm}^{2}\), and annihilation occurs on the scales much shorter than kpc. The photons emitted due to these annihilation processes will leave the system as the corresponding cross section is very small \(\sim\pi r_{e}^{2}\sim 10^{-24}{\rm cm}^{2}\) where \(r_{e}\equiv\alpha/m_{e}\) is the electron classical radius, and the corresponding mean free path is much longer than kpc scale. 
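As a quick numerical check of this comparison (our sketch; the choice \(T_{\rm gas}=T=10^{5}K\) and \(\rho_{\rm DM}=1~{\rm GeV\cdot cm^{-3}}\) is purely illustrative), the capture time (23) can be set against the recombination time (16):

```python
def t_recomb_s(T_gas, x_e=1.0):
    """Recombination time scale of Eq. (16), in seconds."""
    return 5.8e19 / x_e * (T_gas / 1e5)**(2/3)

def t_aqn_s(T_gas, T_int, rho_dm=1.0, x_e=1.0):
    """AQN proton-capture time scale of Eq. (23), in seconds;
    T_int is the internal nugget temperature, rho_dm the DM density in GeV/cm^3."""
    return 0.5e19 / x_e * (T_gas / 1e5)**1.5 * (T_int / 1e5)**(-2.5) / rho_dm

# For T_gas = T = 1e5 K and rho_DM = 1 GeV/cm^3 the proton capture by AQNs
# is roughly an order of magnitude faster than radiative recombination:
print(t_aqn_s(1e5, 1e5) / t_recomb_s(1e5))   # ~ 0.09
```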
To estimate the ionization fraction \(x_{e}(T_{\rm gas})\) when the AQN annihilation processes are operational one should equalize \[t_{i}^{-1}=t_{r}^{-1}+t_{\rm AQN}^{-1} \tag{24}\] which replaces the condition \(t_{i}=t_{r}\) leading to previous expression (17) for \(x_{e}(T_{\rm gas})\). To simplify the problem we consider the region when \(t_{\rm AQN}\ll t_{r}\) and the recombination process can be ignored. The corresponding condition is: \[t_{\rm AQN}\ll t_{r}\quad\Rightarrow\quad\left(\frac{\rho_{ \rm DM}(r)}{{\rm GeV}\cdot{\rm cm}^{-3}}\right)\cdot\left(\frac{T_{\rm gas}}{ 10^{5}K}\right)^{-\frac{5}{6}}\cdot\left(\frac{T}{10^{5}K}\right)^{\frac{5}{2} }\gg 0.1, \tag{25}\] which implies that for sufficiently high DM density \(\rho_{\rm DM}\sim 5m_{p}n(r)\) with \(n(r)\sim{\rm cm}^{-3}\) or/and sufficiently low gas temperature \(T_{\rm gas}\leq 10^{6}K\) the AQN-induced processes dominate, in which case the ionization fraction \(x_{e}(T_{\rm gas})\) is entirely determined by the competition of the collisional ionization time scale \(t_{i}\) and \(t_{\rm AQN}\). Equating \(t_{\rm AQN}\) and \(t_{i}\) we arrive to the following estimate for \(x_{e}(T_{\rm gas})\): \[x_{e}(T_{\rm gas})\simeq\left[1+4.4\cdot 10^{-5}\left(\frac{10^{5}K}{T_{ \rm gas}}\right)^{2}\cdot\left(\frac{T}{10^{5}K}\right)^{\frac{5}{2}}\cdot \left(\frac{\rho_{\rm DM}(r)}{{\rm GeV}\cdot{\rm cm}^{-3}}\right)\cdot\exp \left(\frac{E_{0}}{T_{\rm gas}}\right)\right]^{-1}\ \ ({\rm AQN-induced}). \tag{26}\] This expression replaces the previous formula for \(x_{e}(T_{\rm gas})\) in conventional scenario (17). Assuming that the condition (25) is satisfied and \(x_{e}(T_{\rm gas})\approx 1\) one can expand (26) to arrive to the following expression for \((1-x_{e})\) entering the cooling rate due to the atomic collisions (14): \[[1-x_{e}(T_{\rm gas})]\simeq 4.4\cdot 10^{-5}\left(\frac{10^{5}K}{T_{ \rm gas}}\right)^{2}\cdot\left(\frac{T}{10^{5}K}\right)^{\frac{5}{2}}\cdot \left(\frac{\rho_{\rm DM}(r)}{{\rm GeV}\cdot{\rm cm}^{-3}}\right)\cdot\exp \left(\frac{E_{0}}{T_{\rm gas}}\right). \tag{27}\] Now we can substitute this expression for \((1-x_{e})\) to the formula (14) for the cooling rate due to the atomic collisions: \[\epsilon_{\rm coll}\simeq 3.4\cdot 10^{-23}\frac{{\rm erg}}{{\rm cm}^{3} \cdot{\rm s}}\left(\frac{n_{\rm gas}}{{\rm cm}^{-3}}\right)^{2}\cdot\left( \frac{T_{\rm gas}}{10^{5}K}\right)^{-2}\cdot\left(\frac{T}{10^{5}K}\right)^{ \frac{5}{2}}\cdot\left(\frac{\rho_{\rm DM}(r)}{{\rm GeV}\cdot{\rm cm}^{-3}} \right)\ \ ({\rm AQN-induced}). \tag{28}\] At the same time the expression for cooling due to the bremsstrahlung radiation \(\epsilon_{\rm brem}\) remains the same as it is not sensitive to \(x_{e}\) as long as it is close to unity. As a result, the expression for the cooling rate \(\Lambda(T_{\rm gas})\) assumes the form \[\Lambda(T_{\rm gas})\simeq 10^{-24}\frac{{\rm erg}}{{\rm cm}^{3} \cdot{\rm s}}\left[0.44\left(\frac{T_{\rm gas}}{10^{5}K}\right)^{\frac{1}{2}}+ 34\left(\frac{T_{\rm gas}}{10^{5}K}\right)^{-2}\left(\frac{T}{10^{5}K} \right)^{\frac{5}{2}}\left(\frac{\rho_{\rm DM}(r)}{{\rm GeV}{\rm cm}^{-3}} \right)\right], \tag{29}\] where the first term due to the bremsstrahlung radiation remains the same as in the conventional treatment (18), while the second term describing the atomic collisions is dramatically larger by one order of magnitude than in (18). 
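The size of this enhancement can be checked directly by contrasting the conventional ionization fraction (17) with the AQN-induced expression (26); the sketch below is ours, with \(\exp(E_{0}/T_{\rm gas})\) again read as \(\exp(E_{0}/k_{B}T_{\rm gas})\) and with illustrative parameters \(T=10^{5}K\), \(\rho_{\rm DM}=1~{\rm GeV\cdot cm^{-3}}\).

```python
import numpy as np

K_B, E0 = 8.617e-5, 13.6   # Boltzmann constant (eV/K) and ionization energy (eV)

def x_e_conventional(T_gas):
    """Eq. (17): ionization balanced against radiative recombination."""
    return 1.0 / (1.0 + 2.8e-6 * (1e5 / T_gas)**(7/6) * np.exp(E0 / (K_B * T_gas)))

def x_e_aqn(T_gas, T_int, rho_dm=1.0):
    """Eq. (26): ionization balanced against proton capture by the AQNs,
    valid when condition (25) holds, i.e. t_AQN << t_r."""
    return 1.0 / (1.0 + 4.4e-5 * (1e5 / T_gas)**2 * (T_int / 1e5)**2.5
                  * rho_dm * np.exp(E0 / (K_B * T_gas)))

# Neutral fraction (1 - x_e), which controls the collisional cooling (14):
T = 1e5
print(1.0 - x_e_conventional(T))                  # ~ 1.4e-5
print(1.0 - x_e_aqn(T, T_int=1e5, rho_dm=1.0))    # ~ 2.1e-4
```

The neutral fraction grows by roughly a factor of 16, which is the same enhancement that turns the coefficient 2.1 in (18) into 34 in (29).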
The basic reason for this difference is that the cooling rate due to the atomic collisions is proportional to the density of the neutral atoms, \(n_{H}\propto(1-x_{e})\), which increases in the presence of the AQNs in comparison with the conventional case due to the mechanism described at the very beginning of this section. The increase of the cooling rate \(\Lambda(T_{\text{gas}})\) leads to a consequent dramatic modification of the cooling time scale \(t_{\text{cool}}\) as defined by (19). In the presence of the AQNs in the system it gets modified as follows: \[t_{\text{cool}}\equiv\frac{E}{\dot{E}}\simeq 2.5\cdot 10^{6}\text{yr}\left(\frac{\text{cm}^{-3}}{n_{\text{gas}}}\right)\left[\left(\frac{T_{\text{gas}}}{10^{5}K}\right)^{-1/2}+77\left(\frac{T_{\text{gas}}}{10^{5}K}\right)^{-3}\left(\frac{T}{10^{5}K}\right)^{\frac{5}{2}}\left(\frac{\rho_{\text{DM}}(r)}{\text{GeV}\cdot\text{cm}^{-3}}\right)\right]^{-1}, \tag{30}\] where the first term in the brackets, due to the bremsstrahlung radiation, remains the same as in the conventional treatment (19), while the second term, describing the atomic collisions, is dramatically enhanced in comparison with (19) for the same reasons described above. There are two key points here. First, \(t_{\text{cool}}\) depends on \(\rho_{\text{DM}}(r)\), which is a highly nontrivial new qualitative effect because in the conventional CCDM picture any cooling effects fundamentally cannot depend on \(\rho_{\text{DM}}(r)\), as these effects are entirely determined by the visible baryonic matter in the form of the gas. The second point is that for temperatures \(T_{\text{gas}}\leq 10^{5}K\) the cooling time scale \(t_{\text{cool}}\) is one order of magnitude shorter than in the conventional treatment (19). In fact, the AQN-induced processes remain the dominant cooling mechanism up to \(T_{\text{gas}}\simeq 10^{6}K\) for \(\rho_{\text{DM}}\approx 5\rho_{\text{gas}}\), and could remain the dominant mechanism even for higher temperatures. The dramatic modifications in \(t_{\text{cool}}\) imply that the key parameter \(\mathcal{R}\) will also experience crucial qualitative changes in defining the domain where structures may form, \[\mathcal{R}\equiv\frac{t_{\text{cool}}(\rho_{\text{gas}},\rho_{DM})}{t_{\text{dyn}}(\rho_{\text{total}})}\leq 1,\qquad\rho_{\text{total}}\equiv(\rho_{\text{gas}}+\rho_{\text{DM}}). \tag{31}\] Indeed, in contrast with the original definition (20), the cooling time \(t_{\text{cool}}(\rho_{\text{gas}},\rho_{DM})\) entering (31) now depends explicitly on both \(\rho_{\text{gas}}\) and \(\rho_{DM}\), as expression (30) explicitly shows. We conclude this section with the following generic comment. We have made a large number of technical assumptions in this section to simplify the analysis and to argue that the visible-DM interaction may dramatically modify some parameters, such as \(x_{e}\) and the cooling time \(t_{\text{cool}}\), as a result of the AQN-induced processes. It was not the goal of this work to perform full scale simulations and modelling, which are well beyond the scope of the present work. However, we observed a number of qualitative features of the system which are known to occur, but cannot be easily understood within conventional models of structure formation.
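For completeness, the modified cooling time (30) and the ratio (31) can be evaluated in the same way as their conventional counterparts (19) and (20); the parameter values in the sketch below are again illustrative, and this estimate is no substitute for the simulations discussed above.

```python
import numpy as np

def t_dyn_yr(rho_total):
    """Dynamical time of Eq. (12), in years (rho_total in GeV/cm^3)."""
    return 5e7 / np.sqrt(rho_total)

def t_cool_conv_yr(T_gas, n_gas=1.0):
    """Conventional cooling time of Eq. (19), in years."""
    t5 = T_gas / 1e5
    return 2.5e6 / n_gas / (t5**(-0.5) + 4.8 * t5**(-13/6))

def t_cool_aqn_yr(T_gas, T_int, n_gas=1.0, rho_dm=5.0):
    """AQN-modified cooling time of Eq. (30), in years; T_int is the internal
    nugget temperature, rho_dm the local DM density in GeV/cm^3."""
    t5, ti5 = T_gas / 1e5, T_int / 1e5
    return 2.5e6 / n_gas / (t5**(-0.5) + 77.0 * t5**(-3.0) * ti5**2.5 * rho_dm)

# Ratio R of Eq. (31) with and without the AQN term, for rho_DM = 5 rho_gas:
for T in (1e5, 3e5, 1e6):
    R_aqn  = t_cool_aqn_yr(T, T_int=1e5) / t_dyn_yr(6.0)
    R_conv = t_cool_conv_yr(T) / t_dyn_yr(6.0)
    print(f"T_gas = {T:.0e} K:  R_aqn = {R_aqn:.2e},  R_conv = {R_conv:.2e}")
```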
In the next section 4 we briefly overview some observational consequences of the AQN framework which are hard to understand within conventional models, but could be easily understood within a new paradigm in which the baryonic and DM constituents become strongly interacting components of the system.

## 4 Observable consequences of the visible-DM interaction. The New Paradigm.

As we reviewed in Sect.2, the AQN model is dramatically distinct from conventional DM proposals as the central elements of the DM configurations are the same (anti)quarks and gluons from QCD which represent an inherent part of the Standard Model (SM). Therefore, the AQNs become strongly interacting objects in a sufficiently dense environment. In other words, the AQN behaves as a _chameleon_: it does not interact with the surrounding material in a dilute neutral environment, but it becomes a strongly interacting object in a sufficiently dense environment. Therefore, the AQN framework essentially represents an explicit realization of the New Paradigm, in which the visible and DM building blocks become strongly interacting components if some conditions are met. It must be contrasted with the conventional CCDM paradigm, in which these distinct components, by definition, never couple. The main purpose of this section is to briefly overview a number of qualitative consequences of this new paradigm. The corresponding properties listed below are not very sensitive to any specific numerical values of the parameters used in the previous sections. Instead, these novel features represent inherent features of the new framework. In the next subsection 4.1 we explain in a qualitative way how an observed correlation such as (1) could emerge in the New Paradigm. In section 4.2 we argue that the new paradigm inevitably implies emission of additional energy in different frequency bands, including the UV radiation. Apparently, there are several recent hints from JWST, see e.g. [10; 11; 12; 13], that such an excess of radiation has indeed been observed at large \(z\). In section 4.3 we argue that the very same effects could be responsible for the mysterious diffuse UV radiation [14; 15; 16] at the present time \(z=0\), as suggested in [17].

### How the observed correlation (1) could emerge in the New Paradigm?

To simplify our qualitative analysis we consider the domain in the parametrical space where the AQN-induced term dominates the conventional cooling due to the bremsstrahlung radiation. The corresponding condition can be estimated from (30) as follows: \[\left(\frac{T_{\rm gas}}{10^{6}K}\right)^{-\frac{5}{2}}\left(\frac{T}{10^{5}K}\right)^{\frac{5}{2}}\left(\frac{\rho_{\rm DM}(r)}{\rm GeV\cdot cm^{-3}}\right)\gg 4.1, \tag{32}\] which is satisfied in the region with \(T_{\rm gas}\leq 10^{6}K\) and \(\rho_{\rm DM}\geq 5\rho_{\rm gas}\) with our benchmark density \(n_{\rm gas}\approx 1~\rm cm^{-3}\). This parametrical region largely overlaps with the condition (25) that the AQN-induced processes dominate the conventional recombination effects in the computation of the ionization fraction \(x_{e}\), such that all our simplifications and estimates remain consistent.
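The region where the AQN-induced term in (30) dominates the bremsstrahlung term can be mapped out with one line of arithmetic; the following sketch (ours, with illustrative parameter values) evaluates the left-hand side of (32).

```python
def aqn_vs_bremsstrahlung(T_gas, T_int, rho_dm):
    """Left-hand side of condition (32): the AQN-induced collisional term
    dominates the bremsstrahlung term when this quantity is well above ~4.1."""
    return (T_gas / 1e6)**(-2.5) * (T_int / 1e5)**2.5 * rho_dm

# Benchmark point of the text: T_gas = 1e6 K, T = 1e5 K, rho_DM = 5 GeV/cm^3
print(aqn_vs_bremsstrahlung(1e6, 1e5, 5.0))   # ~ 5, marginally above 4.1
print(aqn_vs_bremsstrahlung(3e5, 1e5, 5.0))   # ~ 1e2, deep in the AQN-dominated regime
```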
Therefore, assuming the condition (32) is satisfied, our basic requirement (31) defining the domain \(\mathcal{R}\leq 1\) where galaxies may form can be written in the following simple way \[\frac{\left[\tilde{\rho}_{\rm DM}\cdot\tilde{\rho}_{\rm gas}\right]}{\sqrt{\tilde{\rho}_{\rm DM}+\tilde{\rho}_{\rm gas}}}\left(\frac{T_{\rm gas}}{10^{6}K}\right)^{-3}\left(\frac{T}{10^{5}K}\right)^{\frac{5}{2}}\geq 6.5,\ \ \ \ \tilde{\rho}_{\rm DM}\equiv\left(\frac{\rho_{\rm DM}(r)}{\rm GeV\cdot cm^{-3}}\right),\ \ \ \ \tilde{\rho}_{\rm gas}\equiv\left(\frac{\rho_{\rm gas}(r)}{\rm GeV\cdot cm^{-3}}\right), \tag{33}\] where we use formula (12) for \(t_{\rm dyn}\) and expression (30) for \(t_{\rm cool}\), assuming the AQN-induced term dominates according to the condition (32). The crucial point here is that the visible-DM interaction, representing the key element of a new paradigm, explicitly manifests itself in formula (33), which strongly resembles the structure of the correlation (1) inferred from the observations long ago, see the review [5]. The coupling of the visible and dark components is explicitly present in the system, and formula (33) is a direct consequence of this interaction. It is important to emphasize that the condition (33) is local in nature, as it depends on \([\tilde{\rho}_{\rm DM}(r)\cdot\tilde{\rho}_{\rm gas}(r)]\) in the region \(r\leq r_{0}\) where the densities are sufficiently large, roughly \(\rho_{\rm DM}(r)\sim\rho_{\rm gas}(r)\sim 10^{-24}(\rm g\cdot cm^{-3})\) in cgs units. For \(r\gg r_{0}\) the visible-DM interaction can be ignored and the AQNs behave in all respects as CCDM. The locality of the condition (33) implies that the region \(r_{0}\) where the visible-DM interaction becomes the dominant element of the system does not depend on the size of the cloud which is about to collapse to form a galaxy, nor on its mass, which is precisely the claim of [5] as stated in (1). The parameter \(r_{0}\) was identified with the size of the core in [5], while in our microscopic treatment of the system the scale \(r_{0}\) is identified with condition (33). Therefore, the core formation in the AQN framework can be interpreted as a result of the strong visible-DM interaction when condition (33) starts to be satisfied at \(r\approx r_{0}\). The temperature \(T_{\rm gas}\) entering (33) can be thought of as the virial temperature as defined by (11), which indeed assumes values in the range \(T_{\rm gas}\simeq(10^{5}-10^{6})K\). Another parameter \(T\) entering (33) is the internal temperature of the AQNs, and should not be confused with \(T_{\rm gas}\). This parameter is very hard to estimate, as reviewed in section 2, but it must also lie in the range \(T\simeq(10^{5}-10^{6})K\) for the environment under consideration. We shall not elaborate in detail on this matter in the present work.

### The AQNs as the UV emitters in early galaxies

The processes which lead to the correlation (33) as discussed above are always accompanied by radiation in many different frequency bands, as we discuss below. Indeed, the total amount of energy being produced per unit time is determined by the right hand side of (2). This energy will be released into space in many different forms, including the axion and neutrino emissions. However, the dominant portion of the emission will be in the form of radiation from the electrosphere according to (3). The spectrum of the radiation is very broad, \(\omega\leq T\), as computed in [31] and depicted in Fig. 1. This is the dominant radiation process.
There is also annihilation of the emitted positrons with electrons from plasma as mentioned in footnote 3. However, the total released energy due to these annihilation processes is obviously suppressed by factor \(m_{e}/m_{p}\ll 1\). In what follows we estimate the total amount of energy being produced as a result of the annihilation processes during the galaxy formation. We start from expression (21) for number of annihilation events per unit time per unit volume. We multiply this expression by factor \(2m_{p}c^{2}\simeq 2\) GeV to arrive to estimate for energy being produced as a result of these annihilation processes: \[\frac{dE}{dtdV}\approx\pi R_{\rm cap}^{2}(T)\cdot n\cdot n_{\rm AQN}\cdot v_{ \rm AQN}\cdot x_{e}\cdot(2\ {\rm GeV}) \tag{34}\] By dividing expression (34) by gas number density \(n\) we arrive to estimation of the energy being released per unit time per single proton (or hydrogen atom) in plasma. To estimate the total energy being released by this mechanism we have to multiply the estimate (34) by the Hubble time at redshift \(z\). Thus, we arrive to an order of magnitude estimate for the total energy released by the AQNs due to the annihilation events during the Hubble time per single proton (or hydrogen atom) in plasma: \[\frac{dE}{dt}H^{-1}\sim 4\cdot 10^{-7}\ {\rm GeV}\cdot x_{e}\cdot\left(\frac{T_ {\rm gas}}{10^{6}K}\right)^{-3/2}\cdot\left(\frac{T}{10^{5}K}\right)^{5/2} \cdot\left(\frac{\rho_{\rm DM}(r)}{10^{-3}\ {\rm GeV}\cdot{\rm cm}^{-3}}\right), \tag{35}\] which of course represents a tiny portion (\(\sim 10^{-7}\)) in comparison with \(m_{p}c^{2}\). In estimate (35) we use \(H^{-1}\simeq 10^{9}\)yr. This estimate suggests that the total amount of the DM as well as the typical size of the AQNs will not be affected during the Hubble time due to the annihilation processes. Few comments are in order. First of all, an order of magnitude estimate (35) should be considered as an upper limit for the energy released due to the annihilation events. Indeed, there are many other forms of the emission such as the axion and neutrino emissions. Furthermore, a typical internal temperature \(T\) of the AQNs in peripheral regions of the galaxy is well below than \(10^{5}K\) as the number of annihilation events in these regions is very tiny, which also decreases the estimate (35). The peripheral regions do not contribute to the emission at all, as the AQNs behave as conventional CCDM particles outside of the cores, as we already mentioned. All these effects obviously further reduce the total energy being released due to the annihilation events. The most important comment here is as follows. The dominant portion of the electromagnetic emission (35) is very broadband with typical frequencies around \(\omega\leq T\simeq 10^{5}K\). As a result, the UV radiation is expected to occur from the AQN-induced processes as a typical internal temperature could reach values \(T\simeq 10\) eV or even higher. This UV emission is very generic consequence of the AQN framework, and always occurs in addition to the UV emission from the stars, which is considered to be conventional source of the UV radiation in galaxies. The intensity of this emission in the AQN framework should be about the same order of magnitude as the observed excess of the diffuse UV emission in our galaxy, see next subsection 4.3. It is interesting to note that there are several recent hints from JWST, see e.g. [10; 11; 12; 13], suggesting that the excess of the UV radiation from red-shifted galaxies has been indeed observed. 
In fact, it has been argued that the UV luminosity would need to be boosted by about a factor of \(\sim 2.5\) to match the observations at \(z\sim 11\) according to [10]. If these observations will be confirmed by future analysis it could support our interpretation that the observed excess of the UV emission could be due to the AQN annihilation processes. ### The AQNs as the UV emitters at present time Our claim that the excess of the UV emission must be present in all galaxies can be tested in our own galaxy by studying the detail characteristics of the diffuse UV emission. In fact such studies had been recently carried out in [14; 15; 16]. The corresponding results are very hard to understand if interpreted in terms of the conventional astrophysical phenomena when the dominant source of the diffused UV background is the dust-scattered radiation of the UV emitting stars. The analysis [14; 15; 16] very convincingly disproves this conventional picture. The arguments are based on a number of very puzzling observations which are not compatible with standard picture. We mention here just two of these mysterious observations: 1. The diffuse radiation is very uniform in both hemispheres, see Figs 7-10 in [14]. This feature should be contrasted to the strong non-uniformity in distribution of the UV emitting stars; 2. The diffuse radiation is almost entirely independent of Galactic longitude. This feature must be contrasted with localization of the brightest UV emitting stars which are overwhelmingly confined to the longitude range \(180^{0}-360^{0}\). These and several similar observations strongly suggest that the diffuse background radiation can hardly originate in dust-scattered starlight. The authors of [14] conclude that the source of the diffuse FUV emission is unknown -that is the mystery that is referred to in the title of the paper [14]. We proposed in [17] that this excess in UV radiation is the result of the dark matter annihilation events within the AQN dark matter model. The excess of the UV radiation observed at \(z=0\) and studied in [14; 15; 16] has precisely the same nature and originated from the same source in form of the dark matter AQNs as advocated in this work for red-shifted galaxies as overviewed in previous subsection 4.2. The proposal [17] is supported by demonstrating that intensity and the spectral features of the AQN induced emissions are consistent with the corresponding characteristics of the observed excess [14; 15; 16] of the UV radiation. The excess of the UV radiation measured by GALEX over its bandpass (1380-2500)\(\AA\) varies between \((300-1800)\cdot[\text{photons cm}^{-2}\ \text{s}^{-1}\text{sr}^{-1}\AA^{-1}]\) depending on the galactic latitude, see Fig. 14 in [14]. One can represent this measurement in conventional units as follows \[I_{\nu}^{\text{FUV}}\approx(300-1800)\cdot\int_{1380}^{2500}\text{d}\lambda \frac{\text{h}\nu}{\text{cm}^{2}\cdot\text{s}\cdot\text{sr}\cdot\AA}\approx(3.6-22)\cdot 10^{-6}\ \frac{\text{erg}}{\text{cm}^{2}\cdot\text{s}\cdot\text{sr}}\approx(3.6-22) \frac{\text{nW}}{\text{m}^{2}\cdot\text{sr}}, \tag{36}\] where the photon's count was multiplied by factor \(h\nu=hc/\lambda\) and integrated over its bandpass (1380-2500)\(\AA\) assuming the flat spectrum. The observed intensity (36) is consistent with the AQN proposal [17]. 
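The conversion in (36) is a simple quadrature over the bandpass; the short sketch below (ours) reproduces the quoted range assuming, as in the text, a flat photon spectrum.

```python
import numpy as np

H_PLANCK = 6.626e-27   # erg * s
C_LIGHT  = 2.998e10    # cm / s

def band_intensity(counts, lam_min=1380.0, lam_max=2500.0):
    """Energy flux [erg / (cm^2 s sr)] for a flat photon spectrum of `counts`
    photons / (cm^2 s sr Angstrom) over [lam_min, lam_max] in Angstrom:
    integral of counts * h c / lambda d(lambda) = counts * h c * ln(lam_max / lam_min)."""
    hc_per_angstrom = H_PLANCK * C_LIGHT / 1e-8   # erg * Angstrom
    return counts * hc_per_angstrom * np.log(lam_max / lam_min)

print(band_intensity(300.0))    # ~ 3.5e-6 erg / (cm^2 s sr)
print(band_intensity(1800.0))   # ~ 2.1e-5 erg / (cm^2 s sr)
```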
We expect that a similar intensity could contribute to the UV emission (in addition to the luminosity from the UV emitting stars) of the red-shifted galaxies as mentioned at the very end of the previous subsection 4.2. We conclude this section with the following generic comment. We advocate a new paradigm that the visible and DM components become strongly interacting components if some conditions are met. This should be contrasted with conventional CCDM paradigm when DM and visible matter components never interact. The new paradigm has many observable consequences, such as emergence of the correlations (1) mentioned in subsection 4.1 and excess of the diffuse UV emission (along with radiation in other frequency bands) as highlighted in subsections 4.2 and 4.3. There are many more mysterious and puzzling observations at dramatically different scales in different systems which also suggest that a new source of energy apparently is present in variety of systems, from galactic scale to the Sun and Earth, which is the topic of the Concluding comments of section 5. ## 5 Concluding comments and Future Developments The presence of the _antimatter_ nuggets4 in the system implies, as reviewed in Sect.2, that there will be annihilation events (and continues injection of energy at different frequency bands, from UV to the radio bands) leading to large number of observable effects on different scales: from Early Universe to the galactic scales to the Sun and the terrestrial rare events. Footnote 4: We remind the readers that the antimatter in this framework appears as natural resolution of the long standing puzzle on similarity between visible and DM components, \(\Omega_{\text{DM}}\sim\Omega_{\text{visible}}\) irrespectively to the parameters of the model. This feature is a result of the dynamics of the \(\mathcal{CP}\) violating axion field during the QCD formation period, see Sect. 2 for review. No any other DM models, including the original Witten’s construction [9] can provide such similarity between these two matter components of the Universe without fine-tunings. The structure formation dynamics, which is the topic of this work, is obviously a prerogative of the numerical simulations. However, our goal in this work was to pinpoint the elements in the dynamics of the galaxy formation where AQN-induced processes dominate the dynamics, and dramatically modify the structure at small scales. In Sect. 5.1 below we list the basic claims of our studies. In Sect. 5.2 we list some possible new tests which can substantiate or refute this new paradigm. Finally, in Sect. 5.3 we describe several other mysterious and puzzling observations, which can be understood within the same AQN framework, and which indirectly support our proposal. The evidences mentioned in Sect. 5.3 are collected in dramatically different environments such as the Early Universe, post-recombination epoch, solar corona, Earth's atmosphere. ### Basic results. Our basic results can be summarized as follows. We argued in Sect. 4.1 that the observed correlation such as (1) could naturally emerge in the AQN framework. The condition (31) defining the domain when galaxies may form (33) assumes dramatically different structure because the cooling time \(t_{\rm cool}(\rho_{\rm gas},\rho_{DM})\) entering (31) now depends explicitly on both, the \(\rho_{\rm gas}\) and \(\rho_{DM}\), which is qualitatively distinct feature from conventional picture when \(t_{\rm cool}(\rho_{\rm gas})\) could only depend on the visible baryonic component. 
The basic technical reason for such a dramatic modification of the cooling rate (in comparison with conventional estimates) is the decrease of the ionization fraction \(x_{e}\) in the presence of the dark matter AQNs at small scales. Our hope is that many of the puzzling problems listed in Sect.1 (such as the Core-Cusp problem, etc.), including the correlation (1), may find their resolution if this new feature of the system is incorporated in future simulations of structure formation. The processes which lead to the condition (33) where the galaxies may form will always be accompanied by the injection of some energy in different frequency bands (from radio to UV) in the same parts of the galaxies due to the annihilation processes in those regions. The estimate (35) provides an upper limit for the total released energy during the Hubble time per single proton (hydrogen atom). In particular, one can argue that the excess of the UV emission which is apparently observed in red-shifted galaxies and in our own galaxy could be related to these annihilation processes, as mentioned in sections 4.2 and 4.3 respectively.

### New possible tests

The main element of the AQN framework is that all new effects are determined by the line of sight \(\Gamma\) which includes both the DM and visible matter distributions: \[\Phi_{\Gamma}\propto\int_{\Gamma}dl\,\rho_{\rm gas}(l)\rho_{\rm DM}(l), \tag{37}\] which is an inevitable feature of the framework, in which the DM consists of AQNs made from the same strongly interacting quarks and gluons that the baryonic matter is made of. The coefficient of proportionality (the strength of the interaction) is very sensitive to the environment. It becomes strong at small scales when the density of the gas is relatively large. It is a highly non-linear effect, as emphasized in the overview section 2. Eq. (37) is precisely the coupling we used in all our estimates starting with (21). Exactly this interaction leads to all the dramatic consequences at small scales mentioned above in subsection 5.1. The coupling (37) should be contrasted with conventional WIMP-like models in which the DM and visible components do not couple. Some modifications of the DM models, such as Self-Interacting dark matter, Self-Annihilating dark matter, Decaying dark matter, and many others, depend on the DM distribution, but not on the visible component. Therefore, some specific morphological correlations of the DM and visible matter distributions originating from (37) can be explicitly studied in the future. The same formula (37) essentially determines the intensity and the spectral features of the emission due to this visible-DM coupling. In particular, as we mentioned in section 4.3, the well-established excess of the diffuse UV emission [14; 15; 16], which cannot be explained by conventional astrophysical sources, could be naturally understood within the AQN framework [17]. In this case one can study the morphology of the DM and visible matter distributions as well as the ionization features of the clouds along the line of sight \(\Gamma\)[35]. Furthermore, one could expect a similar emission to occur in red-shifted galaxies (of course with rescaling of the corresponding frequency bands for non-vanishing \(z\)), as mentioned in 4.2. This is because the source of the emission excess in red-shifted galaxies and in our own galaxy is one and the same and originates from the same AQN annihilation processes.
The luminosity of the AQN-induced FUV emission was estimated in [17] and is consistent with the observed intensity as given by (36), while conventional WIMP-like models generate an intensity which is 17 orders of magnitude smaller than observed [14].

### Other (indirect) evidence for a new Paradigm

There are many hints (outside the galactic scale) suggesting that the annihilation events (which are an inevitable feature of this framework) may indeed have taken place in the early Universe as well as in the present epoch. In particular, the AQNs do not affect the BBN production of H and He, but might be responsible for a resolution of the "Primordial Lithium Puzzle" due to the large electric charge \(Z=3\) of lithium, see [26] for the details. In addition to the excessive UV radiation mentioned at the very end of Sect. 4, the AQNs may also help to alleviate the tension between standard model cosmology and the recent EDGES observation of a stronger than anticipated 21 cm absorption feature, as argued in [29]. The AQNs might also be responsible for the famous long-standing "Solar Corona Mystery" [36; 37], where the so-called "nanoflares" conjectured by Parker long ago [38] are identified with the annihilation events in the AQN framework. The corresponding very rare AQN-induced events on Earth cannot be studied by any conventional DM instruments today because of their small sizes: the corresponding AQN flux is at least 20 orders of magnitude smaller than the WIMP flux due to the very heavy nugget mass, as reviewed in Section 2. However, cosmic ray (CR) laboratories with a typical size of 100 km are capable of studying such rare events. In fact, there are several unusual and mysterious observations of CR-like events which might be related to AQNs propagating in the atmosphere. In particular, these include the ANITA observation [39; 40] of two anomalous events with noninverted polarity, which can be explained within the AQN framework [41]. They also include the Multi-Modal Clustering Events observed by HORIZON 10T [42; 43], which are impossible to understand in terms of CR events, but which could be interpreted in terms of AQN annihilation events in the atmosphere, as argued in [44]. Similar mysterious CR-like events have also been recorded by AUGER [32] and the Telescope Array [45]. The CR-like events can also manifest themselves in the form of acoustic and seismic signals [30], and could in principle be recorded if dedicated instruments are present in the same area where the CR detectors are located. In this case the synchronization between different types of instruments could play a vital role in the discovery of the DM. The same DM source in the form of the AQNs could resolve a number of different, but related puzzles. In particular, the same AQNs could be the source of the ionizing radiation which is known to be present well above the galactic plane [46]. Furthermore, the same DM source in the form of the AQNs may also contribute to the resolution of another long-standing problem related to the Extragalactic Background Light (EBL). Indeed, it has been known for some time that the conventional measurements cannot be explained by diffuse galaxy halos or intergalactic stars. The discrepancy could be as large as a factor of \(\sim(2-3)\) or even more, see e.g. the recent reviews [47; 48]. Our comment here is that the AQNs may fill this shortfall, as energy injection at different scales is an inevitable feature of this construction, since the _antimatter_ nuggets (along with matter nuggets) represent the DM density in this framework, see Sect 2.
Furthermore, the spectrum of the emission is very broad and includes UV, optical, IR light, and even the radio frequency bands. On the observational side, there are indeed a numerous number of hints suggesting the excessive diffuse radiation in many frequency bands, from UV to the radio emissions. As the latest examples one could mention the observed "anomalous flux" [49] of the COB excess. One could also mention that the observed widths of Ly-\(\alpha\) forest absorption lines are much wider compared to conventional numerical simulations. Observations suggest that there is a non-canonical heating process in IGM neglected in simulations such that an additional energy \(\sim 6.9\) eV per baryon is required to match the observations, which can be done e.g. with dark photon DM model [50]. Another recent example is the observed excess in radio frequency bands, \(\nu\in(150\ {\rm MHz},8.4\ {\rm GHz})\) where significant discrepancy remains as large as factor \(\sim 5\)[51]. Our comment here is as follows. Every single mysterious and puzzling effect mentioned above can be in principle explained with some specifically designed model with specifically chosen parameters such as [50]. In contrast, the AQN model was invented long ago [8] with dramatically different motivation for drastically different purposes. Nevertheless, a large number of puzzling, mysterious and anomalous events mentioned above could be simultaneously explained within the same framework with the same set of parameters. In particular, the required energy injection to explain the observed widths of Ly-\(\alpha\) forest could be naturally explained by the AQN annihilation processes with very broad spectrum and total energy estimated by (35). The same energy injection could be responsible for EBL excesses in different frequency bands, as mentioned above. We conclude this work with the following final comment. We advocate an idea that the basic paradigm on nature of DM should be changed: from old paradigm (when DM is non-baryonic weakly interacting particles) to new paradigm when DM is baryonic and strongly interacting composite system, made from (anti)quarks and gluons of the Standard Model as reviewed in Sect.2. The AQNs in this framework behave as _chameleon_-like particles: they behave as conventional DM components in low density environment, and become strongly interacting macroscopically large objects in relatively high density environment. The new paradigm has many consequences which are mentioned above and in Sect.4, and which are consistent with all presently available cosmological, astrophysical, satellite and ground-based observations. In fact, it may even shed some light on the long standing puzzles and mysteries as mentioned above in Sect.5.3. The structure formation dynamics, which is the topic of this work, is obviously a prerogative of the numerical simulations as we mentioned many times in the text. The goal of the present work with analytical estimates is to pinpoint the exact places and elements in the dynamics of the galaxy formation where AQN-induced processes become dominating and lead to a dramatic deviation at small scales from the conventional paradigm. Therefore, with this work we advocate the community to incorporate this new crucial element on visible -DM interaction (37) in the numerical simulations. 
If future observations along with numerical simulations (which would incorporate the visible -DM interaction) will confirm and substantiate the basic consequences of this work as listed above and in Sect.4 it would represent a strong argument suggesting that the resolution of two long standing puzzles in cosmology - the nature of the DM and the matter-antimatter asymmetry of our Universe- are intimately linked. The corresponding deep connection is automatically implemented and incorporated in the AQN framework by its construction as briefly overviewed in Sect.2. ## Acknowledgements I am thankful to Joel Primack for long and very useful discussions on many different aspects of the new paradigm advocated in the present work. I am also thankful to Ludo Van Waerbeke for collaboration on many completed and ongoing projects related to this new paradigm and for very useful explanation on how the astro/cosmology community (astro-ph in terms of the arXiv nomenclature) operates, which is very different from hep-ph physics community practices. This research was supported in part by the Natural Sciences and Engineering Research Council of Canada.
2309.13478
CA-PCA: Manifold Dimension Estimation, Adapted for Curvature
The success of algorithms in the analysis of high-dimensional data is often attributed to the manifold hypothesis, which supposes that this data lie on or near a manifold of much lower dimension. It is often useful to determine or estimate the dimension of this manifold before performing dimension reduction, for instance. Existing methods for dimension estimation are calibrated using a flat unit ball. In this paper, we develop CA-PCA, a version of local PCA based instead on a calibration of a quadratic embedding, acknowledging the curvature of the underlying manifold. Numerous careful experiments show that this adaptation improves the estimator in a wide range of settings.
Anna C. Gilbert, Kevin O'Neill
2023-09-23T21:06:17Z
http://arxiv.org/abs/2309.13478v2
# CA-PCA: Manifold Dimension Estimation, Adapted for Curvature ###### Abstract The success of algorithms in the analysis of high-dimensional data is often attributed to the manifold hypothesis, which supposes that this data lie on or near a manifold of much lower dimension. It is often useful to determine or estimate the dimension of this manifold before performing dimension reduction, for instance. Existing methods for dimension estimation are calibrated using a flat unit ball. In this paper, we develop CA-PCA, a version of local PCA based instead on a calibration of a quadratic embedding, acknowledging the curvature of the underlying manifold. Numerous careful experiments show that this adaptation improves the estimator in a wide range of settings. ## 1 Introduction Much of modern data analysis in high dimensions relies on the premise that data, while embedded in a high-dimensional space, lie on or near a submanifold of lower dimension. This allows one to embed the data in a space of lower dimension while preserving much of the essential structure, with benefits including faster computation and data visualization. This lower dimension, hereafter referred to as the intrinsic dimension (ID) of the underlying manifold, often enters as a parameter of the dimension-reduction scheme. For instance, in each of the Johnson-Lindenstrauss-type results for manifolds by [13] and [4] the target dimension depends on the ID. Furthermore, the ID is a parameter of popular dimension reduction methods such as t-SNE [28] and multidimensional scaling [12, 16]. Therefore, it may be beneficial to estimate the ID before running further analysis since compressing the data too much may destroy underlying structure and it may be computationally expensive to re-run algorithms with a new dimension parameter, if such an error is even detectable. [6] uses their ID estimator to obtain sample complexity bounds for Generative Adversarial Networks (GANs). An interesting direct use of ID is found in [25], in which it is interpreted as a measure of complexity or variability for basketball plays. To similar effect, [3] uses intrinsic dimension to measure the complexity of the space of observed neural patterns in the human brain. The literature on ID estimators is vast, and we refer the reader to [8] and [9] for a comprehensive review. We highlight recent progress of [14], as well as [18], [6], and [22], which prove results about the number of samples needed to estimate the ID with a given probability. For our current analysis, we focus on the use of principle components analysis (PCA) as an ID estimator. There are a variety of ways this has been done (see [7, 17, 19, 24, 29]), but each version of local PCA works roughly by finding the eigenvalues of the covariance matrix for a data point and its nearest neighbors. The ID is then estimated using the fact that we expect the eigenvalues to be high for eigenvectors which are close to the tangent space of the underlying manifold and the eigenvalues to be low for eigenvectors which are nearly orthogonal to the tangent space. We expect \(d\) eigenvalues to be larger than the rest, where \(d\) is the ID. The remaining \(D-d\) eigenvalues will be small, but often nonzero due to effects of curvature or error in measurement. Local PCA and other ID estimators are often calibrated via a subspace or unit ball intersected with a subspace. 
While a manifold may be locally well-approximated by its tangent space near a point, in practice one may not always have enough sampled data to zoom in sufficiently close. _Our key insight is that if one expects data to lie on a manifold with nontrivial curvature rather than a subspace, we should calibrate the ID estimator using a manifold with nontrivial curvature rather than a subspace. In particular, our main contribution is to calibrate a version of local PCA to a quadratic embedding, producing a new ID estimator, which we deem curvature-adjusted PCA (CA-PCA)._ Provided the underlying manifold is \(C^{2}\)-smooth, it will be well-approximated by a quadratic embedding, in fact, better approximated than by its tangent plane. In principle, this insight could be applied to any number of existing ID estimators. We choose to apply it to a single estimator for which the adjustment is relatively simple, then test the new version extensively. In particular, we adapt a version of local PCA found in [22] for the curvature of a manifold and apply the new version to various examples of data sampled from manifolds. We note that this adaption of local PCA necessitates an entirely novel analysis. The main benefit of our estimator is that we are better able to estimate the ID in cases where the sample size is small. It allows one to consider either a larger number of nearest neighbors for the ID estimation as it adjusts for the neighborhood starting to "go around" the manifold or a smaller number of nearest neighbors with the increased power to distinguish between whether the variance in eigenvalues is due to curvature or statistical noise. Our method achieves this increase in accuracy by regularizing fit of the eigenvectors of the data to a curvature adjusted benchmark. The use of quadratic embeddings for data lying on or near a manifold is not new. It was used to approximate a manifold (of known dimension) in [1] and [11]. [21] approximate manifolds with spherelets. [27] consider the application of PCA to neighborhoods of a manifold modeled via quadratic embedding to analyze estimation of tangent space, yet does not consider ID estimation. However, to the best of our knowledge, this is the first time quadratic embeddings have been used in combination with PCA to estimate the ID of a manifold. Our paper is outlined as follows. In Section 2, we describe the version of local PCA found in [22]. In Section 3, we compute the limiting distribution of eigenvalues expected for the covariance matrix of a quadratic embedding, which is then used to derive our test. The formal calculations are saved for Appendix A. Experiments on data sampled from manifolds, both synthetic and simulated, are described in Section 4. Discussion is in Section 5. ## 2 Background and Problem Setup Let \(\{x_{1},...,x_{k}\}\) be a sample of points in \(\mathbb{R}^{D}\). The covariance matrix \(\Sigma[x_{1},...,x_{k}]\) of this sample is constructed as follows. Let \(\bar{x}=\frac{1}{k}\sum_{i=1}^{k}x_{i}\) and define \[\hat{\Sigma}[x_{1},...,x_{k}]=\frac{1}{k-1}\sum_{i=1}^{k}(x_{i}-\bar{x})(x_{i} -\bar{x})^{T},\] where \(\bar{x}\) and each \(x_{i}\) are interpreted as column vectors. A continuous version \(\Sigma[\mu]\) may be computed for finite measures \(\mu\) on \(\mathbb{R}^{D}\) by replacing the summation with integration and dividing by the total volume of \(\mu\). Observe that \(\hat{\Sigma}[x_{1},...,x_{k}]\) and \(\Sigma[\mu]\) are always symmetric, positive semidefinite matrices. 
Let \(\vec{\lambda}\hat{\Sigma}[x_{1},...,x_{k}]\) denote a vector consisting of the eigenvalues of \(\hat{\Sigma}[x_{1},...,x_{k}]\) in decreasing order and similarly for \(\vec{\lambda}\Sigma[\mu]\). The idea behind using local PCA for ID estimation is that if \(x_{1},...,x_{k}\) are sampled from a small neighborhood where the underlying manifold is well-approximated by its \(d\)-dimensional tangent space, then one expects the first \(d\) elements of \(\vec{\lambda}\hat{\Sigma}[x_{1},...,x_{k}]\) to be much larger than the last \(D-d\) elements. There are many ways to translate this observation into practice; see citations in the Introduction for reference. Here, we focus on a formulation of the test described by [22] which requires no human judgment or arbitrary threshold cutoffs; presents the possibility of a simple modification to adjust for curvature of the underlying manifold; and is supported by evidence from our experiments as well as proof of statistical convergence [22]. **Lemma 2.1** ([22]).: _Let \(W\) be a \(d\)-dimensional subspace of \(\mathbb{R}^{D}\) (\(D\geq d\)) and let \(\nu\) denote the \(d\)-dimensional Lebesgue measure on \(W\) intersected with the unit ball of \(\mathbb{R}^{D}\). Then_ \[\vec{\lambda}\Sigma[\nu]:=\vec{\lambda}(d,D):=\frac{1}{d+2}(\underbrace{1,...,1}_{d\text{ times }},\underbrace{0,...,0}_{D-d\text{ times }}).\] An elementary argument shows that for \(r>0\) and \(v\in\mathbb{R}^{D}\), \[\frac{1}{r^{2}}\vec{\lambda}\hat{\Sigma}[rx_{1}-v,...,rx_{k}-v]=\vec{\lambda} \hat{\Sigma}[x_{1},...,x_{k}].\] Thus, given points sampled from a \(d\)-dimensional ball of radius \(r\) centered away from the origin in \(\mathbb{R}^{D}\), we expect \(1/r^{2}\) times the associated eigenvalues to be close to \(\vec{\lambda}(d,D)\). Let \(X\subset\mathbb{R}^{D}\) be a collection of points, presumably on or near a \(d\)-dimensional manifold embedded in \(\mathbb{R}^{D}\). Let \(x\in X\) and \(\{x_{1},...,x_{k}\}\) be the neighbors of \(x\) in \(X\) lying within distance \(r\) of \(x\). Then, the test described in [22] determines an estimated ID \(\hat{d}\) at \(x\) by \[\hat{d}=\operatorname*{argmin}_{d}\left\|\frac{1}{r^{2}}\vec{\lambda}\hat{ \Sigma}[x_{1},...,x_{k}]-\vec{\lambda}(d,D)\right\|_{2}.\] ## 3 Theoretical Analysis to set the stage for CA-PCA Before we specify the CA-PCA algorithm, we establish several theoretical results that help us set the stage for the algorithm. We delineate the limiting distribution of the eigenvalues of the covariance matrix for the uniform distribution on a Riemannian manifold of dimension \(d\). ### Limiting Distribution of Eigenvalues Given a \(C^{2}\), \(d\)-dimensional manifold \(\mathcal{M}\subset\mathbb{R}^{D}\) and a point \(p\in\mathcal{M}\), there exists an orthonormal set of coordinates \((x_{1},...,x_{D})\) for \(\mathbb{R}^{D}\) such that \(\mathcal{M}\) is locally the graph of a \(C^{2}\) function \(F:\mathbb{R}^{d}\rightarrow\mathbb{R}^{D-d}\). Without loss of generality, we take \(p\) to be the origin in \(\mathbb{R}^{D}\). Since \(F\) is well-approximated by its Taylor series of order 2, we consider the quadratic embedding \[Q:(x_{1},...,x_{d})\mapsto(x_{1},...,x_{d},Q_{1}(x_{1},...,x_{d}),...,Q_{D-d}( x_{1},...,x_{d})), \tag{1}\] where \(Q_{j}:\mathbb{R}^{d}\rightarrow\mathbb{R}\) is a quadratic form of the form \(Q_{j}(x)=x^{T}M_{j}x\) for a symmetric \(d\times d\) matrix \(M_{j}\) (\(1\leq j\leq D-d\)). We denote the eigenvalues of \(M_{j}\) as \(\lambda_{j,1},...,\lambda_{j,d}\). 
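As a concrete illustration of the baseline test of [22] and of the quadratic model just introduced, the following Python sketch (ours; the matrix \(M_{1}\), the radius and the sample size are arbitrary choices, sampling uniformly in the parameter disk only approximates the uniform measure \(\mu_{Q}\), and the ambient radius of the resulting neighborhood is only approximately \(r=0.5\)) estimates the ID of points lying on the graph of a quadratic embedding with \(d=2\), \(D=3\).

```python
import numpy as np

def benchmark_spectrum(d, D):
    """Covariance eigenvalues of the uniform d-ball in R^D (Lemma 2.1)."""
    lam = np.zeros(D)
    lam[:d] = 1.0 / (d + 2)
    return lam

def estimate_id(points, r):
    """Baseline estimator of [22]: pick the d whose flat-ball benchmark is
    closest (in l2) to the covariance spectrum rescaled by 1/r^2."""
    D = points.shape[1]
    lam = np.sort(np.linalg.eigvalsh(np.cov(points, rowvar=False)))[::-1]
    errs = [np.linalg.norm(lam / r**2 - benchmark_spectrum(d, D))
            for d in range(1, D + 1)]
    return int(np.argmin(errs)) + 1

# Points on the graph of Q_1(x) = x^T M_1 x with d = 2, D = 3.
rng = np.random.default_rng(0)
M1 = np.diag([0.4, -0.2])                         # symmetric matrix defining Q_1
u = rng.normal(size=(5000, 2))
u /= np.linalg.norm(u, axis=1, keepdims=True)     # random directions in R^2
u *= 0.5 * np.sqrt(rng.uniform(size=(5000, 1)))   # ~uniform in the disk of radius 0.5
pts = np.column_stack([u, np.einsum('ni,ij,nj->n', u, M1, u)])

lam = np.sort(np.linalg.eigvalsh(np.cov(pts, rowvar=False)))[::-1] / 0.5**2
print(lam)                      # roughly (0.25, 0.25, small): the small entry is curvature
print(estimate_id(pts, r=0.5))  # expected estimate: 2
```

The third rescaled eigenvalue is small but nonzero even though the true ID is 2; it is precisely this curvature-induced contribution that the calibration developed in Section 3 is designed to account for.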
Denote the graph of \(Q\) as \(\mathcal{M}_{Q}\) and the \(d\)-dimensional Riemannian volume form on \(\mathcal{M}_{Q}\) by \(d\mu_{Q}\). Our goal is to compute the eigenvalues of the covariance matrix for the uniform (with respect to \(\mu_{Q}\)) distribution on \(\mathcal{M}_{Q}\cap B_{r}(0)\) for some radius \(r>0\). By rescaling, take \(r=1\). We denote this matrix \(\Sigma\). For this purpose, we define \(S:\mathbb{R}^{d}\rightarrow\mathbb{R}\), the density function with respect to \(d\mu_{Q}\), as \(d\mu_{Q}(x_{1},...,x_{d})=S(x_{1},...,x_{d})dx\), where \(dx\) is the Lebesgue measure on \(\mathbb{R}^{d}\). Define quadratic forms \(\tilde{Q}_{j}(x)=x^{T}M_{j}^{2}x\), and note that the eigenvalues of the matrix \(M_{j}^{2}\) are \(\lambda_{j,1}^{2},...,\lambda_{j,d}^{2}\). Let \(\Lambda=\max_{1\leq k\leq D-d}\left\|M_{k}\right\|\). **Lemma 3.1**.: _Under the above assumptions and notation, the density function of \(d\mu_{Q}\) is_ \[S(x)=1+2\sum_{j=1}^{D-d}\tilde{Q}_{j}(x)+O(\Lambda^{4}). \tag{2}\] Proof.: Given (1), \[S(x)=\sqrt{|\det(g_{ij}(x))_{1\leq i,j\leq d}|},\] where \(g_{ij}(x)=g_{i}(x)\cdot g_{j}(x)\) and \[g_{i}(x)=\left(0,...,1,...,0,\frac{\partial Q_{1}}{\partial x_{i}}(x),..., \frac{\partial Q_{D-d}}{\partial x_{i}}(x)\right).\] Since \(\nabla Q_{j}(x)=2M_{j}x\), we have \[g_{i}(x)=\left(0,...,1,...,0,(2M_{1}e_{i})\cdot x,...,(2M_{D-d}e_{i})\cdot x \right),\] where \(e_{1},...,e_{d}\) is the standard ordered basis of \(\mathbb{R}^{d}\). Thus, \[g_{ij}(x)=\delta_{ij}+\sum_{k=1}^{D-d}(2M_{k}e_{i}\cdot x)(2M_{k}e_{j}\cdot x), \tag{3}\] and the trace of the matrix \((g_{ij})=(g_{ij})_{1\leq i,j,\leq d}\) is \[\mathrm{Tr}(g_{ij}(x))=d+\sum_{j=1}^{D-d}\sum_{i=1}^{d}\left|2M_{j}e_{i}\cdot x \right|^{2}. \tag{4}\] Consider a symmetric \(d\times d\) matrix \(M\) with columns \(m_{1},...,m_{d}\) (thus \(Me_{i}=m_{i}\)). Then, \[x^{T}M^{2}x=(x^{T}M^{T})\cdot(Mx)=\|Mx\|^{2}=\sum_{i=1}^{d}|m_{i}\cdot x|^{2}. \tag{5}\] Plugging (5) applied to \(M=M_{j}\) (\(1\leq j\leq D-d\)) into (4), \[\mathrm{Tr}(g_{ij}(x))_{1\leq i,j\leq d}=d+4\sum_{j=1}^{D-d}\tilde{Q}_{j}(x). \tag{6}\] By (3), the difference of \((g_{ij}(x))_{1\leq i,j\leq d}\) and the identity matrix has entries of order \(O(\Lambda^{2})\). Thus, the eigenvalues of \((g_{ij}(x))_{1\leq i,j\leq d}\) each differ from \(1\) by \(O(\Lambda^{2})\). That is, for fixed \(x_{0}\), and denoting the eigenvalues of \((g_{ij}(x_{0}))_{1\leq i,j\leq d}\) by \(\mu_{1},...,\mu_{d}\), \[\mu_{i}=1+c_{i},\] where \(c_{i}=O(\Lambda^{2})\) and \[\sum_{i}c_{i}=4\sum_{j=1}^{D-d}\tilde{Q}_{j}(x_{0}).\] Thus, \[\det(g_{ij}(x))=\prod_{i=1}^{d}(1+c_{i})=1+4\sum_{j=1}^{D-d}\tilde{Q}_{j}(x)+O (\Lambda^{4}).\] By the Taylor expansion \(\sqrt{1+t}=1+\frac{1}{2}t+O(t^{2})\), we have the desired conclusion. Recall, our goal is to compute the eigenvalues of the covariance matrix \(\Sigma\). Let \[R=\{x\in\mathbb{R}^{d}:|x|^{2}+\sum_{j=1}^{D-d}(Q_{j}(x))^{2}\leq 1\}\] and \[\tilde{S}(x)=\frac{S(x)}{\int_{R}S(x)dx}.\] Then, each of the entries of \(\Sigma\) is of one of the following forms 1. \(I_{1}(i)=\int_{R}(x_{i}-\bar{x}_{i})^{2}\tilde{S}(x)dx\)\((1\leq i\leq d)\) 2. \(I_{2}(i,j)=\int_{R}(x_{i}-\bar{x}_{i})(x_{j}-\bar{x}_{j})\tilde{S}(x)dx\)\((1\leq i\neq j\leq d)\) 3. \(I_{3}(i,j)=\int_{R}(x_{i}-\bar{x}_{i})(Q_{j}(x)-\bar{Q}_{j})\tilde{S}(x)dx\)\((1\leq i\leq d,1\leq j\leq D-d)\) 4. \(I_{4}(i)=\int_{R}(Q_{i}(x)^{2}-\bar{Q}_{i}^{2})\tilde{S}(x)dx\)\((1\leq i\leq D-d)\) 5. 
\(I_{5}(i,j)=\int_{R}(Q_{i}(x)-\bar{Q}_{i})(Q_{j}(x)-\bar{Q}_{j})\tilde{S}(x)dx\)\((1\leq i\neq j\leq D-d)\), where \[\bar{x}_{i}=\int_{R}x_{i}\tilde{S}(x)dx,\ \ \ \ \ \bar{Q}_{i}=\int_{R}Q_{i}(x) \tilde{S}(x)dx.\] Since \(x_{i}\) is an odd function and \(R\) is a region symmetric about the origin, we immediately see that \(\bar{x}_{i}=0\) for \(1\leq i\leq d\). Furthermore, \(Q_{j}(x)-\bar{Q}_{j}\) is an even function and \(x_{i}-\bar{x}_{i}=x_{i}\) is odd, so \(I_{3}(i,j)=0\) for all \(1\leq i\leq d,1\leq j\leq D-d\). As a result, \(\Sigma\) is a block diagonal matrix with an upper block \(\Sigma_{1}\) of size \(d\times d\) and a lower block \(\Sigma_{2}\) of size \((D-d)\times(D-d)\), that is \[\Sigma=\left[\begin{array}{c|c}\Sigma_{1}&0\\ \hline 0&\Sigma_{2}\\ \end{array}\right]. \tag{7}\] In [27], the authors consider separately the "uncorrelated case," corresponding to taking \(I_{5}(i,j)=0\) for all \(1\leq i\neq j\leq D-d\) above. One could use this assumption to calculate the eigenvalues of \(\Sigma_{2}\) explicitly (\(\frac{2}{(d+2)^{2}}A_{j}\), as will be shown in the proof of Proposition 3.2); however, only the sum of these eigenvalues will be needed for comparison with the eigenvalues of \(\Sigma_{1}\), so we avoid making this assumption ourselves. This motivates the statement of our main proposition. To simplify the following expressions, we fix the notation \[A_{j}=\sum_{k=1}^{d}\lambda_{j,k}^{2},\ \ A=\sum_{j=1}^{D-d}A_{j}=\sum_{j,k} \lambda_{j,k}^{2}\] and \[B_{j}=\left(\sum_{k=1}^{d}\lambda_{j,k}\right)^{2},\ \ B=\sum_{j=1}^{D-d}B_{j}= \sum_{j}\left(\sum_{k}\lambda_{j,k}\right)^{2}.\] Observe that each of \(A,A_{j},B,B_{j}\) is \(O(\Lambda^{2})\). **Proposition 3.2**.: _Let \(\Sigma_{1},\Sigma_{2}\) be as above. Denote the eigenvalues of \(\Sigma_{1}\) by \(\lambda_{1},...,\lambda_{d}\). Then \(\mbox{Tr}(\Sigma_{1})\) is_ \[\frac{d}{d+2}+\frac{4d-2d^{2}}{(d+2)^{3}(d+4)}A-\frac{3d+4}{(d+2)^{2}(d+4)}B+O (\Lambda^{4}) \tag{8}\] _and \(\mbox{Tr}(\Sigma_{2})\) is_ \[\frac{2}{(d+2)^{2}}A+O(\Lambda^{3}). \tag{9}\] _Furthermore, for \(1\leq i\leq d\), \(\lambda_{i}\) lies between_ \[\frac{1}{d+2}-\frac{20d^{2}+82d+76}{(d+2)^{3}(d+4)}A-\frac{11d+24}{2(d+2)^{2}( d+4)}B \tag{10}\] _and_ \[\frac{1}{d+2}+\frac{5d^{2}+18d+24}{(d+2)^{3}(d+4)}A+\frac{d}{2(d+2)^{2}(d+4)}B \tag{11}\] _up to an error of \(O(\Lambda^{4})\)._ The bounds in (10) and (11) are not sharp. Our method of proof ignores much potential cancellation; however, it still reveals that \(|\frac{1}{d+2}-\lambda_{i}|\leq O(d^{-2}\sum_{j=d+1}^{D}\lambda_{j}^{2})\). In order to determine \(\mbox{Tr}(\Sigma_{1})\) and \(\mbox{Tr}(\Sigma_{2})\), we only need the diagonal entries of \(\Sigma\), that is, the integrals \(I_{1}(i)\) and \(I_{4}(j)\). To determine the bounds on \(\lambda_{1},...,\lambda_{d}\), we compute \(I_{1}(i)\) with \(x_{i}\) replaced by an arbitrary direction which may be chosen to maximize or minimize the integral. We perform these computations in Appendix A. The method is long, but a straightforward application of integration in radial coordinates, integration of polynomial functions over the sphere, truncation of Taylor series, and Lemma 3.1. Since \(B\) may not generally be calculated as a function of \(A\), we have not yet achieved an explicit relation between \(\mbox{Tr}(\Sigma_{1})\) and \(\mbox{Tr}(\Sigma_{2})\). We take our "best guess" of how they may relate to be the expectation of what occurs when the eigenvalues are of random sign. Specifically, let be i.i.d. 
variables taking on value 1 and -1, each with probability \(1/2\). Let \(\lambda_{i,j}=\epsilon_{i,j}\alpha_{i,j}\), where \(\alpha_{i,1},...,\alpha_{i,d}\) are fixed. Then, \[\mathbb{E}B_{j}=\mathbb{E}(\sum_{i=1}^{d}\epsilon_{i,j}\lambda_{i,j})^{2}= \mathbb{E}\sum_{1\leq i_{1},i_{2}\leq d}\epsilon_{i_{1},j}\epsilon_{i_{2},j} \lambda_{i_{1},j}\lambda_{i_{2},j}=\sum_{i=1}^{d}\lambda_{i,j}^{2}=A_{j}.\] Thus, for the purpose of deriving our test in Subsection 3.2 we will assume \(A_{j}=B_{j}\) for all \(1\leq j\leq D-d\). In particular, \(A=B\). Since in practice \(D-d\) is often large, it may be reasonable to use expectation as above. Substituting \(A=B\) into (8), \[\text{Tr}(\Sigma_{1})=\frac{d}{d+2}-\frac{5d^{2}+6d+8}{(d+2)^{3}(d+4)}A \tag{12}\] with average eigenvalue \[\frac{1}{d+2}-\frac{5d^{2}+6d+8}{d(d+2)^{3}(d+4)}A. \tag{13}\] ### CA-PCA Algorithm ``` Input:\(X,N,k\) for\(n=1,...,N\)do Sample \(x\in X\) uniformly at random Choose \(\{x_{1},...,x_{k+1}\}\)\((k+1)\)-nearest neighbors to \(x\) Set \(r=(\|x-x_{k}\|_{2}+\|x-x_{k+1}\|_{2})/2\) Form \(\hat{\Sigma}[x_{1},...,x_{k}]\) Calculate eigenvalues \[(\hat{\lambda}_{1},...,\hat{\lambda}_{D})=\frac{1}{r^{2}}\vec{\lambda}\hat{ \Sigma}[x_{1},...,x_{k}]\] for\(d=1,...,D\)do Compute \((\lambda_{1}^{(d)},...,\lambda_{D}^{(d)})\) by substituting \(\hat{\lambda}_{1},...,\hat{\lambda}_{D}\) into (14) endfor Solve \[\hat{d}^{(n)} =\operatorname*{argmin}_{1\leq d\leq D}\|\vec{\lambda}(d,D)-( \lambda_{1}^{(d)},...,\lambda_{D}^{(d)})\|_{2}\] \[+\frac{1-\delta_{D}(d)}{D-d}\|(\hat{\lambda}_{1},...,\hat{ \lambda}_{D})-(\lambda_{1}^{(d)},...,\lambda_{D}^{(d)})\|_{1}\] endfor Return Mean \((\hat{d}^{(1)},...,\hat{d}^{(N)})\) ``` **Algorithm 1** CA-PCA Proposition 3.2 provides a formula for \(\sum_{i=1}^{d}\lambda_{i}=\text{Tr}(\Sigma_{1})\). Ideally, we would like to know the individual values of \(\lambda_{i}\) to compare to the eigenvalues coming from the sampled points. However, in practice, we are only given the sampled eigenvalues so we will assume that each \(\lambda_{i}\) is equal. This is a reasonable assumption given the bounds in (10) and (11). Furthermore, the sampled eigenvalues may differ from each other due to statistical noise, so it may be faulty to assume the difference is due to curvature effects in \(\mathcal{M}\). Thus, we assume \[\lambda_{i}=\frac{1}{d+2}-c_{h}(d)A,1\leq i\leq d,\] where \(c_{h}(d)=\frac{5d^{2}+6d+8}{d(d+2)^{3}(d+4)}\) by (13). We assume \(\Lambda\) is small so we may ignore the \(O(\Lambda^{4})\) terms. Letting \(c_{l}(d)=\frac{1}{(d+2)^{2}}\), rewrite (9) as \[\sum_{j=d+1}^{D}\lambda_{j}=c_{l}(d)\sum_{j=1}^{D-d}A_{j-d}=c_{l}(d)A.\] By substitution, \[\frac{1}{d+2}=\lambda_{i}+\frac{c_{h}(d)}{c_{l}(d)}\sum_{j=d+1}^{D}\lambda_{j} =\lambda_{i}+c(d)\sum_{j=d+1}^{D}\lambda_{j}.\] by setting \(c(d)=c_{h}(d)/c_{l}(d)=(5d^{2}+6d+8)/(d(d+2)(d+4))\). Given \(x\in X\), let \(x_{1},...,x_{k+1}\) denote the \(k+1\) nearest neighbors of \(x\), in increasing order of distance from \(x\). Let \(r=(\|x-x_{k}\|_{2}+\|x-x_{k+1}\|_{2})/2\) and determine sample eigenvalues \(\hat{\lambda}_{1}\geq...\geq\hat{\lambda}_{D}\) from the matrix \(\frac{1}{r^{2}}\hat{\Sigma}[x_{1},...,x_{k}]\). Under our small \(\Lambda\) assumption we expect that as the number of samples increases, \((\hat{\lambda}_{1},...,\hat{\lambda}_{d})\) will converge to the eigenvalues of \(\Sigma_{1}\) and \((\hat{\lambda}_{d+1},...,\hat{\lambda}_{D})\) will converge to the eigenvalues of \(\Sigma_{2}\). 
Thus, we will treat \((\hat{\lambda}_{1},...,\hat{\lambda}_{d})\) as "coming from" \(\Sigma_{1}\) and expect \[\lambda_{i}^{(d)}:=\hat{\lambda}_{i}+c(d)\sum_{j=d+1}^{D}\hat{\lambda}_{j} \approx\frac{1}{d+2},1\leq i\leq d. \tag{14}\] Setting \(\lambda_{j}^{(d)}=0\) for \(d+1\leq j\leq D\), we are tempted to choose \(1\leq d\leq D\) to minimize the quantity \[\|\vec{\lambda}(d,D)-(\lambda_{1}^{(d)},...,\lambda_{D}^{(d)})\|_{2}=\sqrt{ \sum_{i=1}^{d}\Big{(}\frac{1}{d+2}-\lambda_{i}^{(d)}\Big{)}^{2}}. \tag{15}\] However, a small amount of statistical noise can easily lead us to pick the wrong value of \(d\). For example, suppose our data are sampled from \(\mathbb{R}^{2}\) and determine the eigenvalues \((0.21,0.15)\). Taking \(d=2\), the value of (15) is \(\|(0.25,0.25)-(0.21,0.15)\|_{2}=.1077\). Taking \(d=1\), then we add \(19/15*0.15\) back to \(0.21\) to get \((\lambda_{1}^{(1)},\lambda_{2}^{(1)})=(0.4,0)\), then compute (15) to be \(\|(1/3,0)-(.4,0)\|_{2}=0.0667\). We pick the dimension corresponding to the smaller error; thus, \(d=1\). However, the second eigenvalue \(0.15\) in this example corresponds to a very high curvature of our manifold. For the quadratic embedding \(x\mapsto(x,cx^{2})\), our model predicts \(c=\sqrt{27/40}\), in which case we are working at larger scales relative to our manifold, violating small \(\Lambda\) assumptions. Thus, we choose \(1\leq d\leq D\) to minimize \[\|\vec{\lambda}(d,D)-(\lambda_{1}^{(d)},...,\lambda_{D}^{(d)})\|_{2}+\frac{1- \delta_{D}(d)}{D-d}\|(\hat{\lambda}_{1},...,\hat{\lambda}_{D})-(\lambda_{1}^{ (d)},...,\lambda_{D}^{(d)})\|_{1}, \tag{16}\] where \(\delta_{D}(D)=1\) and \(\delta_{D}(d)=0\) for \(d\neq D\). (In the case \(d=D\), we take 0/0=0.) Here, the second term in (16) is taken to represent an implied notion of the curvature of the manifold. We use the \(\ell^{1}\) norm since the lower eigenvalues have an additive effect in modifying the upper ones. Furthermore, a \(1/(D-d)\) weight is chosen for this second term to make it represent the average of the lowest \(D-d\) eigenvalues, or in other words, the average of the curvatures of all of the \(Q_{j}\). If our data were to be remeasured with an additional feature/dimension, we would need to include an additional \(Q_{j}\) to describe the embedding; however, this shouldn't change the ID nor the eigenvalues of the other \(Q_{j}\). One may consider this modification as analogous to LASSO, in which one attempts to minimize an \(\ell^{2}\) error in conjunction with the size of the coefficients. ## 4 Experiments ### Outline We ran CA-PCA on a number of examples of point clouds, each contained in \(\mathbb{R}^{D}\) for some \(D\) and (presumably) sampled from manifolds. The results were compared with the approach from [22] (PCA) and the maximum likelihood estimator from [20] (LB). To fully compare CA-PCA to all other existing, competitive ID estimators would be onerous. [8] provides five criteria for a successful estimator: computaional feasability, robustness to multiscaling, robustness to high dimension, a work envelope, and accuracy. Among eleven estimators, none is the clear favorite as different tests satisfy different criteria. For this reason, and to reflect our main insight, we emphasize a comparison between CA-PCA and PCA. However, we still compare CA-PCA with LB to demonstrate that our estimator is at least competitive with another popular version. Given a point cloud \(X\subset\mathbb{R}^{D}\), a random point of \(X\) was sampled, denoted \(x\). 
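The per-neighborhood computation just outlined (take the \(k\) nearest neighbors of \(x\), rescale the covariance eigenvalues by \(1/r^{2}\), and apply the selection rule (14)-(16)) might be sketched as follows; the function names are ours, and the averaging over \(N\) sampled points from Algorithm 1 is omitted.

```python
import numpy as np

def c(d):
    # curvature-transfer coefficient c(d) = c_h(d) / c_l(d) from Subsection 3.2
    return (5 * d**2 + 6 * d + 8) / (d * (d + 2) * (d + 4))

def ca_pca_local_estimate(lam_hat):
    """CA-PCA dimension estimate from rescaled sample eigenvalues (decreasing order)."""
    lam_hat = np.asarray(lam_hat, dtype=float)
    D = lam_hat.size
    best_d, best_obj = 1, np.inf
    for d in range(1, D + 1):
        adjusted = np.zeros(D)
        adjusted[:d] = lam_hat[:d] + c(d) * lam_hat[d:].sum()   # eq. (14)
        benchmark = np.zeros(D)
        benchmark[:d] = 1.0 / (d + 2)                           # the vector \lambda(d, D)
        obj = np.linalg.norm(benchmark - adjusted)              # first term of (16)
        if d < D:                                               # curvature penalty in (16)
            obj += np.abs(lam_hat - adjusted).sum() / (D - d)
        if obj < best_obj:
            best_d, best_obj = d, obj
    return best_d

def rescaled_local_eigenvalues(X, x, k):
    """Rescaled covariance eigenvalues of the k nearest neighbors of x in X."""
    dists = np.linalg.norm(X - x, axis=1)
    order = np.argsort(dists)
    neighbors = X[order[1:k + 1]]                               # k nearest neighbors of x
    r = (dists[order[k]] + dists[order[k + 1]]) / 2
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered / (k - 1)
    return np.sort(np.linalg.eigvalsh(cov))[::-1] / r**2
```

On the two-eigenvalue example above, `ca_pca_local_estimate([0.21, 0.15])` returns 2: the \(\ell^{1}\) penalty for \(d=1\) (namely \(0.19+0.15=0.34\)) outweighs its smaller fitting error.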
The \(k\) nearest neighbors of \(x\) were computed and used to run principle components analysis, from which \(D\) eigenvalues were obtained. These eigenvalues were normalized by multiplying by \(1/r^{2}\), where \(r\) was chosen by taking the arithmetic mean of the distance of the \(k\)-th nearest neighbor of \(x\) and the distance of the \((k+1)\)-st nearest neighbor of \(x\). The CA-PCA test was run to determine an estimated ID. This process was repeated by sampling \(N\) points randomly with replacement from \(X\) and all of the estimated dimensions were averaged to produce a final estimate. An average estimated ID was then computed over a range of values of \(k\), although the same points \(x\) were sampled for each \(k\) to reduce time spent computing the nearest neighbors. Our results are depicted by graphing the averaged estimated ID as a function of \(k\) and comparing to the actual ID \(d\). \(N=200\) unless otherwise stated. The same process was used for the PCA and LB tests, except for the LB test no computation or rescaling of eigenvalues was needed, merely distances between points. Also, the same randomly sampled points \(x\) were used for each test. In the case of point clouds in \(\mathbb{R}^{D}\) for large \(D\) (see Isomap faces and airplane photos below), the \(D-k\) tail eigenvalues equal to zero were removed, in effect replacing the normalizing factor \(\frac{1}{D-d}\) by \(\frac{1}{k-d}\). When possible, manifolds are chosen to be without boundary, as points sampled from boundaries may produce a consistent underestimation of the ID, introducing a bias which may interfere with simple comparison of PCA and CA-PCA. ### Some Synthetic Data One possible parametrization for a Klein bottle embedded in \(\mathbb{R}^{4}\) is given by \[x =(a+b\cos(v))\cos(u)\] \[y =(a+b\cos(v))\sin(u)\] \[z =b\sin(v)\cos(u/2)\] \[w =b\sin(v)\sin(u/2)\] for \(u,v\in[0,2\pi)\). Choosing \(a=10\) and \(b=5\), \(400\) points were sampled via the uniform distribution for \(u,v\) on \([0,2\pi)^{2}\). Running the three tests on these \(400\) points, we see that both PCA and CA-PCA appear to converge to the correct dimension of \(2\), although the latter is much faster (see Figure 0(a)). The LB estimator fails to converge, though it is much closer to \(2\) than \(3\). On this note, it is also clear that CA-PCA provides an estimated dimension closer to \(2\) than \(1\) for smaller values of \(k\) than PCA. We randomly sampled 5,000 points from the unit sphere \(S^{7}\subset\mathbb{R}^{8}\) with respect to the uniform measure. Here, CA-PCA converges to the true dimension of \(7\) much faster than PCA (see Figure 0(b)). While LB reaches the true dimension first, the estimate decreases (gets worse) as \(k\) increases. For \(S^{7}\) and the Klein bottle, CA-PCA consistently produces a higher estimated ID than PCA. One may suspect that this behavior partially explains some of the faster convergence; however, the next example shows that CA-PCA can produce a lower estimate than PCA. Consider the parametrization of a curve in \(\mathbb{R}^{8}\) via the map \[\theta\mapsto(\cos\theta,\sin\theta,\cos 2\theta,...,\cos 4\theta,\sin 4 \theta),\theta\in[0,2\pi).\] A point cloud in \(\mathbb{R}^{8}\) was determined by sampling \(100\) points from this curve, uniformly randomly over \(\theta\in[0,2\pi)\). The results of the tests are shown in Figure 0(c). 
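For reference, the three synthetic samples above can be generated along the following lines (a sketch; variable names are ours, and the parameters are sampled uniformly as stated):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_klein_bottle(n, a=10.0, b=5.0):
    u = rng.uniform(0, 2 * np.pi, n)
    v = rng.uniform(0, 2 * np.pi, n)
    return np.column_stack([
        (a + b * np.cos(v)) * np.cos(u),
        (a + b * np.cos(v)) * np.sin(u),
        b * np.sin(v) * np.cos(u / 2),
        b * np.sin(v) * np.sin(u / 2),
    ])

def sample_unit_sphere(n, ambient_dim):
    g = rng.normal(size=(n, ambient_dim))
    return g / np.linalg.norm(g, axis=1, keepdims=True)

def sample_trig_curve(n):
    theta = rng.uniform(0, 2 * np.pi, n)
    return np.column_stack([f(k * theta) for k in range(1, 5) for f in (np.cos, np.sin)])

klein_bottle = sample_klein_bottle(400)      # d = 2, embedded in R^4
sphere = sample_unit_sphere(5000, 8)         # d = 7, embedded in R^8
curve = sample_trig_curve(100)               # d = 1, embedded in R^8
```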
As the number of nearest neighbors increases and the collection of nearest neighbors wraps further around the curve, each of the tests provides an estimated ID increasingly higher than the true ID of \(1\). However, CA-PCA does so at a slower rate than PCA and provides a lower ID than PCA for each value of \(k\).

Figure 1: Results for Synthetic Data

We next move to some more difficult examples, where both the dimension and codimension are higher than in most previous examples. In Figure 1(a), we have the results of the tests applied to 20,000 points sampled randomly from \(SO(5)\) with respect to the Haar measure and viewed as 5x5 matrices (thus as elements of \(\mathbb{R}^{5\times 5}\cong\mathbb{R}^{25}\)). Here, CA-PCA overshoots the dimension for larger values of \(k\) and becomes a worse predictor than PCA. However, it still gets closer to the true dimension of 10 for low values of \(k\) and stays within a reasonable range.

The next example is a Lie group embedded in \(\mathbb{R}^{15}\), constructed by taking the direct sum of the three-dimensional \(SO(3)\) viewed as a subset of \(\mathbb{R}^{3\times 3}\cong\mathbb{R}^{9}\) and the 3-torus viewed as a subset of \(\mathbb{R}^{6}\) through the parameterization \[(\theta_{1},\theta_{2},\theta_{3})\mapsto(\sin\theta_{1},\cos\theta_{1},\sin \theta_{2},\cos\theta_{2},\sin\theta_{3},\cos\theta_{3}).\] Results for 20,000 randomly sampled points are depicted in Figure 1(b). CA-PCA fails to converge to the true dimension of 6, although it remains close even for small values of \(k\) and outperforms PCA at every value of \(k\) while competing with LB.

In Figure 2, we have plotted the average estimated dimension over a large number of points. Since there is plenty of "wiggle room" with the dimension and codimension so high, one may ask if the estimators consistently approximate the true dimension, or if they do so in the average only after providing alternating over- and underestimates. Figure 3 shows the standard deviation of the estimated dimensions for each test at fixed values of \(k\). Since these are particularly small for both \(SO(5)\) and \(SO(3)\oplus\mathbb{T}^{3}\), the estimators, in particular PCA and CA-PCA, have consistently approximated the true ID in these cases.

Figure 2: Synthetic Manifolds in Higher Dimensions

Figure 3: Standard Deviation of Estimates for Synthetic Manifolds in Higher Dimensions

### Some Simulated Data

An .stl file (3D image) of an airplane was rotated at a random angle around the \(z\)-axis before being projected onto the \(x\)-\(z\) plane to determine a two-dimensional "photograph" of the airplane. Repeating this process, we obtained 200 images of size 432 by 288 pixels, viewed as vectors in \(\mathbb{R}^{432\times 288}\cong\mathbb{R}^{124,416}\). (A similar approach was taken in [2].) The expected dimension of this point cloud is 1, given that the images were generated via a 1-dimensional group of symmetries. Results are depicted in Figure 3(a). As one can see, both PCA and CA-PCA provide close to the "correct" answer for small values of \(k\); however, for \(k>13\), the estimated dimension for PCA blows up while for CA-PCA it remains between 1 and approximately 2. Considering only PCA, one may suspect that the point cloud does not lie on or near a manifold, or consider an estimated dimension so far from the true value that the estimator does not provide any benefits. In contrast, with CA-PCA the results are certainly consistent with data coming from a manifold.
While the estimated dimension may be 2, a small amount of error assumed by the user might give them the range of 1 to 3, which may be sufficient.

We obtained 10,000 images of an airplane by applying a random element of \(SO(3)\) (uniformly with respect to Haar measure) before projecting into two dimensions. The results of the ID estimators are shown in Figure 3(b). Here, it is clear that CA-PCA is not always able to magically estimate the dimension in such cases; however, it remains better than the other tests.

Figure 4: Results for Simulated Data

The Isomap face database consists of 698 greyscale images of size 64 by 64 pixels and has been used in [20, 23, 26] and many other works. The images are of an artificial face under varying illumination, vertical orientation, and horizontal orientation. Translating the images into vectors in \(\mathbb{R}^{64\times 64}\cong\mathbb{R}^{4096}\), we obtain the results in Figure 3(c). One may expect the true ID to be 3, due to the three parameters which vary in the construction of the photos. However, each of those three parameters has a clear upper and lower bound. For instance, there are photos with the head facing left, right, and center, but none with it facing backwards. By similar reasoning regarding the vertical orientation and brightness, we expect the Isomap faces to have a similar structure to a three-dimensional unit cube. The problem of determining the dimension of a manifold with boundary presents challenges distinct from those arising in dealing with manifolds without boundary. See [5, 10] for more on this problem.

For comparison, we construct a point cloud through random selection of 698 points from the unit cube in \(\mathbb{R}^{3}\) and run the same tests, producing Figure 4(a). Here, all three tests give ID estimates less than 3. While this time it is CA-PCA which is highest and closest to 3, this still seems to suggest that we expect a dimension a little less than 3 for the Isomap faces. While the variations in the vertical and horizontal orientations of the face may be comparable in magnitude, it is unclear whether the variation in the brightness is the same. For this reason, we repeat the above with the unit cube replaced by \([0,1]\times[0,1]\times[0,5]\), with results in Figure 4(b). Again, the estimated dimension is a little under 3.

We note that CA-PCA does tend to get closer to the 'true' dimension of 3, if one chooses to assign this value to a manifold with boundary. It is possible this is no coincidence; that given a sampled point at the center of a face of the unit cube, CA-PCA is better able to determine that the third and lowest eigenvalue is _not_ due to curvature since this would require the first two eigenvalues to be smaller than observed. However, more investigation is needed to determine if CA-PCA truly analyzes manifolds with boundary differently.

Given the flexibility CA-PCA has to "move tail eigenvalues over to higher ones," one may wonder if it is better able to estimate the dimension of the Isomap faces (and other sets of images) by consistent underestimation in the setting \(k\ll D\). To address this potential issue, we generated 698 random 64x64 grey-scale images (elements of the unit cube in \(\mathbb{R}^{4096}\)) and applied the three ID estimators. Results for PCA and CA-PCA are shown in Figure 4(c). (The LB estimator output values over 300 so its results are not depicted for readability of the graph.)
Here, CA-PCA barely provides a lower estimate than PCA, suggesting its performance on the Isomap faces was truly due to proper estimation of the dimension rather than consistent underestimation.

Figure 5: For Comparison with Isomap Faces

### Analysis of Non-Manifolds

Next, we see what happens when we apply the tests to another object which is not a manifold, in this case the union of two manifolds of different dimensions. We sample 200 points from the unit sphere in \(\mathbb{R}^{4}\) and 200 points from the circle determined by the equations \((x_{1}-4)^{2}+x_{2}^{2}=16,x_{3}=0,x_{4}=0\). Running tests on the union of the two datasets gives Figure 5(a). Considering the average, it appears the estimators tend to a value of 2, the average of the dimension of the 3-sphere and that of the circle. However, the standard deviation of the estimates shown in Figure 5(b) demonstrates that there are many occurrences of 1's and 3's. For comparison, consider the standard deviations for the Klein bottle, shown in Figure 5(c). Thus, CA-PCA, plus PCA and perhaps LB, may potentially assist in determining if data comes from a manifold.

Figure 6: Analysis of Non-Manifolds

### Robustness to Error

Lastly, we test our estimator for robustness by introducing error into the samples. Error was added to each point by independently adding to each coordinate a random number sampled uniformly from the interval \((-\epsilon,\epsilon)\) for some choice of \(\epsilon\). Figure 6(a) shows the results of the tests when this error was introduced to 400 points on the Klein bottle with \(\epsilon=1\) (compare the results to those in Figure 0(a) and \(\epsilon\) to the choice of \(a=10,b=5\)). The same method was applied to 20,000 points from \(SO(5)\) with \(\epsilon=.1\), with the results depicted in Figure 6(b). In both cases of error, CA-PCA's estimates are slightly further away from the true ID, yet still close and competitive with those from PCA. (Compare the results to those of Figure 1(a).)

Figure 7: Synthetic Manifolds with Error

## 5 Conclusion

As we have seen, the use of local PCA as an ID estimator may be improved in a variety of settings by taking into account curvature of the underlying manifold while generally providing good estimates and competing with LB. CA-PCA is merely one of many potential applications of curvature analysis to ID estimators. In particular, calculations in the Appendix show that the \(d\)-dimensional volume of a ball of radius \(r\) intersected with a \(d\)-dimensional manifold \(\mathcal{M}\) is proportional to \(r^{d}+cr^{d+2}\), where \(c\) is a constant depending on \(\mathcal{M}\) (see Subsection A.3). There are many ID estimators derived using a volume proportional to \(r^{d}\) or related notions like average distances from the center of a ball. It would be interesting to see if any of these tests could be improved by taking curvature into account.

Another place for potential further study is that of ID estimation for manifolds with boundary. In our experiments, CA-PCA gets closer to the "true" ID of manifolds with boundary than PCA or LB, though more analysis is needed to determine if this is more than a coincidence.

## Appendix A Integral Computations

### Some Elementary Computations

Let \(\sigma(x)\) denote the (non-normalized) surface measure on the sphere induced by Lebesgue measure on \(\mathbb{R}^{d}\). We will often write \(\sigma(x)\) when \(x\in\mathbb{R}^{d}\) and \(\sigma(\theta)\) when \(\theta\in S^{d-1}\), the unit sphere in \(\mathbb{R}^{d}\).
The following elementary fact can be found in [15], for instance. **Theorem A.1**.: _Let \(P(x)=x_{1}^{\alpha_{1}}\cdots x_{d}^{\alpha_{d}}\) for \(\alpha_{1},...,\alpha_{d}\in\{0,1,2,...\}\). Write \(\beta_{j}=\frac{1}{2}(\alpha_{j}+1)\). Then, if all \(\alpha_{j}\) are even,_ \[\int_{S^{d-1}}P(x)d\sigma(x)=\frac{2\Gamma(\beta_{1})\cdots\Gamma(\beta_{d})} {\Gamma(\beta_{1}+...+\beta_{d})}.\] _If any \(\alpha_{j}\) are odd, then the above integral is zero._ For our purposes, we will need the values \(\Gamma(1/2)=\sqrt{\pi},\Gamma(3/2)=\frac{\sqrt{\pi}}{2},\Gamma(5/2)=\frac{3 \sqrt{\pi}}{4}\), and \(\Gamma(7/2)=\frac{15\sqrt{\pi}}{8}\) and the identity \(\Gamma(x+1)=x\Gamma(x)\). As particular instances of Theorem A.1, \[\int_{S^{d-1}}d\sigma(x)=\frac{2\pi^{d/2}}{\Gamma(d/2)} \tag{17}\] \[\int_{S^{d-1}}x_{i}^{2}d\sigma(x)=\frac{\pi^{d/2}}{\Gamma(d/2+1)} \tag{18}\] \[\int_{S^{d-1}}x_{i}^{2}x_{j}^{2}d\sigma(x)=\frac{\pi^{d/2}}{2\Gamma(d/2+2)} \tag{19}\] \[\int_{S^{d-1}}x_{i}^{4}d\sigma(x)=\frac{3\pi^{d/2}}{2\Gamma(d/2+2)} \tag{20}\] \[\int_{S^{d-1}}x_{i}^{2}x_{j}^{2}x_{k}^{2}d\sigma(x)=\frac{\pi^{d/2}}{4\Gamma( d/2+3)} \tag{21}\] \[\int_{S^{d-1}}x_{i}^{4}x_{j}^{2}d\sigma(x)=\frac{3\pi^{d/2}}{4\Gamma(d/2+3)} \tag{22}\] \[\int_{S^{d-1}}x_{i}^{6}d\sigma(x)=\frac{15\pi^{d/2}}{4\Gamma(d/2+3)} \tag{23}\] **Lemma A.2**.: _Let \(Q:\mathbb{R}^{d}\rightarrow\mathbb{R}\) be a quadratic form with eigenvalues \(\mu_{1},...,\mu_{d}\). Then,_ \[\int_{S^{d-1}}Q(\theta)d\theta=\frac{\pi^{d/2}}{\Gamma(d/2+1)}\sum_{k}\mu_{k} \tag{24}\] _Also,_ \[\int_{S^{d-1}}Q(\theta)^{2}d\theta=\left(\frac{1}{2}\left(\sum_{k=1}^{d}\mu_ {k}\right)^{2}+\sum_{k=1}^{d}\mu_{k}^{2}\right)\frac{\pi^{d/2}}{\Gamma(d/2+2)}. \tag{25}\] Proof.: To establish (24), we rotate coordinates so the eigenvectors of \(Q\) are \(e_{1},...,e_{d}\) and use (18): \[\int_{S^{d-1}}Q(\theta)d\theta =\int_{S^{d-1}}\sum_{k=1}^{d}\mu_{k}x_{k}^{2}d\theta\] \[=\sum_{k=1}^{d}\mu_{k}\int_{S^{d-1}}x_{k}^{2}d\theta\] \[=\frac{\sqrt{\pi}^{d}}{\Gamma(d/2+1)}\sum_{k=1}^{d}\mu_{k}.\] For (25), we repeat the same rotation of coordinates and use (19) and (20): \[\int_{S^{d-1}}Q(\theta)^{2}d\theta =\int_{S^{d-1}}\sum_{k=1}^{d}\mu_{k}x_{k}^{2}d\theta\] \[=\sum_{1\leq k,l\leq d}\mu_{k}\mu_{l}\int_{S^{d-1}}x_{k}^{2}x_{l} ^{2}d\theta\] \[=\sum_{k=1}^{d}\mu_{k}^{2}\left[\frac{3}{2}\frac{\sqrt{\pi}^{d}} {\Gamma(d/2+2)}\right]+\sum_{k\neq l}\mu_{k}\mu_{l}\left[\frac{1}{2}\frac{ \sqrt{\pi}^{d}}{\Gamma(d/2+2)}\right]\] \[=\left(\frac{1}{2}(\sum_{k}\mu_{k})^{2}+\sum_{k}\mu_{k}^{2} \right)\frac{\sqrt{\pi}^{d}}{\Gamma(d/2+2)}\] ### The Region First, let \[q(x)=\sum_{j=1}^{D-d}Q_{j}(x)^{2}\] and note \(q(x)=O(\Lambda^{2})\) for \(|x|\leq 1\). Writing \(x=r\theta\), with \(r\geq 0\) and \(\theta\in S^{d-1}\), let \(r=r(\theta)\) denote the positive real solution to \[r^{2}+q(r\theta)=r^{2}+r^{4}q(\theta)=1.\] By the quadratic formula and Taylor expansion \(\sqrt{1+t}=1+\frac{t}{2}+O(t^{2})\), \[r^{2}(\theta) =\frac{-1+\sqrt{1+4q(\theta)}}{2q(\theta)}\] \[=\frac{-1+1+2q(\theta)-2q(\theta)^{2}+O(q(\theta)^{3})}{2q(\theta)}\] \[=1-q(\theta)+O(\Lambda^{4}).\] Thus, \[r(\theta)=1-\frac{q(\theta)}{2}+O(\Lambda^{4}). \tag{26}\] and \[r(\theta)^{m}=1-\frac{mq(\theta)}{2}+O(\Lambda^{4}). \tag{27}\] This approximation will be used throughout this section as we integrate over spherical coordinates \((r,\theta)\) and need to find the limits of our integral in \(r\). 
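The closed-form sphere integrals (17)-(19) above are easy to sanity-check numerically. The following sketch (assuming SciPy for the Gamma function) compares Monte Carlo estimates of (18) and (19) against the stated values:

```python
import numpy as np
from scipy.special import gamma

rng = np.random.default_rng(0)
d, n = 5, 200_000

# uniform sample on S^{d-1}
theta = rng.normal(size=(n, d))
theta /= np.linalg.norm(theta, axis=1, keepdims=True)

surface_area = 2 * np.pi ** (d / 2) / gamma(d / 2)                      # (17)

mc_18 = surface_area * np.mean(theta[:, 0] ** 2)
exact_18 = np.pi ** (d / 2) / gamma(d / 2 + 1)                          # (18)

mc_19 = surface_area * np.mean(theta[:, 0] ** 2 * theta[:, 1] ** 2)
exact_19 = np.pi ** (d / 2) / (2 * gamma(d / 2 + 2))                    # (19)

print(mc_18, exact_18)   # the two values agree to a few decimal places
print(mc_19, exact_19)
```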
### Total Surface Area To begin the computation of the volume of \(\mathcal{M}\cap B_{1}(0)\), we switch to polar coordinates, substitute the expression of \(S(x)\) in (2), and integrate with respect to the radius \[\int_{R}S(x)dx =\int_{S^{d-1}}\int_{0}^{r(\theta)}S(r\theta)r^{d-1}drd\theta\] \[=\int_{S^{d-1}}\int_{0}^{r(\theta)}\left(1+2\sum_{j=1}^{D-d} \tilde{Q}_{j}(r\theta)\right)r^{d-1}drd\theta+O(\Lambda^{4})\] \[=\int_{S^{d-1}}\int_{0}^{r(\theta)}r^{d-1}+2r^{d+1}\sum_{j=1}^{D- d}\tilde{Q}_{j}(\theta)drd\theta\] \[=\int_{S^{d-1}}\frac{1}{d}r(\theta)^{d}+\frac{2}{d+2}r(\theta)^{ d+2}\sum_{j=1}^{D-d}\tilde{Q}_{j}(\theta)d\theta.\] By (27), \[\int_{R}S(x)dx =\int_{S^{d-1}}\frac{1}{d}(1-\frac{dq(\theta)}{2})+\frac{2}{d+2} (1-\frac{(d+2)q(\theta)}{2})\sum_{j=1}^{D-d}\tilde{Q}_{j}(\theta)d\theta+O( \Lambda^{4})\] \[=\int_{S^{d-1}}\frac{1}{d}-\frac{q(\theta)}{2}+\frac{2}{d+2} \sum_{j=1}^{D-d}\tilde{Q}_{j}(\theta)d\theta+O(\Lambda^{4})\] By (17), (24), and (25), we integrate over \(S^{d-1}\) and find \[\int_{R}S(x)dx= \frac{2\pi^{d/2}}{d\Gamma(d/2)}-\frac{\pi^{d/2}}{2\Gamma(d/2+2)} \sum_{j}\left(\frac{1}{2}(\sum_{k}\lambda_{j,k})^{2}+\sum_{k}\lambda_{j,k}^{2 }\right)\] \[+\frac{2\pi^{d/2}}{(d+2)\Gamma(d/2+1)}\sum_{j,k}\lambda_{j,k}^{2 }+O(\Lambda^{4})\] \[= \frac{\pi^{d/2}}{\Gamma(d/2)}\left[\frac{2}{d}-\frac{2}{(d+2)(d+ 4)}\left(A+\frac{1}{2}B\right)+\frac{4}{(d+2)^{2}}A\right]+O(\Lambda^{4})\] \[= \frac{\pi^{d/2}}{\Gamma(d/2)}\left[\frac{2}{d}-\frac{1}{(d+2)(d+ 4)}B+\frac{2(d+6)}{(d+2)^{2}(d+4)}A\right]+O(\Lambda^{4})\] ### Normalizing with Respect to Surface Area In this subsection, we take a quick detour to show how Taylor series expansion will be used to divide by \(\int_{R}S(x)dx\) in normalizing the relevant integrals. note how division by total surface area will occur. We use the expansion \[\frac{a_{0}+a_{1}t+O(t^{2})}{b_{0}+b_{1}t+O(t^{2})} =\left(\frac{a_{0}}{b_{0}}+\frac{a_{1}}{b_{0}}t+O(t^{2})\right) \frac{1}{1+\frac{b_{1}}{b_{0}}t+O(t^{2})} \tag{28}\] \[=\left(\frac{a_{0}}{b_{0}}+\frac{a_{1}}{b_{0}}t+O(t^{2})\right) \left(1-\frac{b_{1}}{b_{0}}t+O(t^{2})\right)\] (29) \[=\frac{a_{0}}{b_{0}}+\frac{a_{1}}{b_{0}}t-\frac{a_{0}b_{1}}{b_{0} ^{2}}t+O(t^{2}) \tag{30}\] to estimate these ratios (matching \(t\) with \(\Lambda^{2}\), roughly) ### The Means By Lemma 3.1 and integrating in the radial direction, \[\int_{R}Q_{i}(x)S(x)dx =\int_{S^{d-1}}\int_{0}^{r(\theta)}Q_{i}(x)\left[r^{d-1}+2r^{d+1} \sum_{j=1}^{D-d}\tilde{Q}_{j}(\theta)\right]\left(1+2\sum_{j=1}^{D-d}\tilde{Q}_ {j}(r\theta)\right)drd\theta\] \[+O(\Lambda^{4})\] \[= \int_{S^{d-1}}\int_{0}^{r(\theta)}Q_{i}(x)r^{d-1}drd\theta+O( \Lambda^{3})\] \[= \int_{S^{d-1}}\int_{0}^{r(\theta)}r^{d+1}Q_{i}(\theta)drd\theta+O (\Lambda^{3})\] \[= \int_{S^{d-1}}\frac{1}{d+2}Q_{i}(\theta)r(\theta)^{d+2}d\theta+O (\Lambda^{3}).\] Thus, applying (27) and (24), \[\int_{R}Q_{i}(x)S(x)dx =\int_{S^{d-1}}\frac{1}{d+2}Q_{i}(\theta)d\theta+O(\Lambda^{3})\] \[=\frac{\pi^{d/2}}{(d+2)\Gamma(d/2+1)}\sum_{j=1}^{d}\lambda_{i,j} +O(\Lambda^{3}).\] Upon normalizing, we find \[\bar{Q}_{i}=\frac{\int_{R}Q_{i}(x)S(x)dx}{\int_{R}S(x)dx}=\frac{1}{d+2}\sum_{ j=1}^{d}\lambda_{i,j}+O(\Lambda^{2}).\] Note that we only need this computation up to first-order accuracy since we will only need the above expression to multiply it by other terms depending on \(\Lambda\) in the computation of \(I_{4}\). ### Upper Trace Let \(\theta^{*}\in S^{d-1}\) be arbitrary. 
We begin as with the computation of total surface area, converting to polar coordinates and integrating in \(r\) first to get \[\int_{R}(x\cdot\theta^{*})^{2}S(x)dx =\int_{S^{d-1}}\int_{0}^{r(\theta)}(r\theta\cdot\theta^{*})^{2} \left[r^{d-1}+2r^{d+1}\sum_{j=1}^{D-d}\tilde{Q}_{j}(\theta)\right]drd\theta+O( \Lambda^{4})\] \[=\int_{S^{d-1}}(\theta\cdot\theta^{*})^{2}\int_{0}^{r(\theta)}r^{ d+1}+2r^{d+3}\sum_{j=1}^{D-d}\tilde{Q}_{j}(\theta)drd\theta+O(\Lambda^{4})\] \[=\int_{S^{d-1}}(\theta\cdot\theta^{*})^{2}\left[\frac{1}{d+2}r( \theta)^{d+2}+\frac{2}{d+4}r(\theta)^{d+4}\sum_{j=1}^{D-d}\tilde{Q}_{j}(\theta )\right]d\theta+O(\Lambda^{4})\] By (27), may rewrite the above as \[\int_{S^{d-1}}(\theta\cdot\theta^{*})^{2}\left[\frac{1}{d+2}(1-\frac{(d+2)q( \theta)}{2})+\frac{2}{d+4}(1-\frac{(d+4)q(\theta)}{2})\sum_{j=1}^{D-d}\tilde{ Q}_{j}(\theta)\right]d\theta+O(\Lambda^{4}), \tag{31}\] or \[\int_{R}(x\cdot\theta^{*})^{2}S(x)dx=\int_{S^{d-1}}(\theta\cdot\theta^{*})^{2 }\left[\frac{1}{d+2}-\frac{q(\theta)}{2}+\frac{2}{d+4}\sum_{j=1}^{D-d}\tilde{ Q}_{j}(\theta)\right]d\theta+O(\Lambda^{4}) \tag{32}\] By taking \(\theta^{*}=e_{i}\) (\(1\leq i\leq d\)) and summing over \(i\), we obtain the sum of the first \(d\) eigenvalues. By the identity \(\sum_{i=1}^{d}(\theta\cdot e_{i})^{2}=1\) for \(\theta\in S^{d-1}\), we have \[\sum_{i}I_{1}(i)= \int_{S^{d-1}}\left[\frac{1}{d+2}-\frac{q(\theta)}{2}+\frac{2}{d +4}\sum_{j=1}^{D-d}\tilde{Q}_{j}(\theta)\right]d\theta+O(\Lambda^{4})\] \[= \frac{2\pi^{d/2}}{(d+2)\Gamma(d/2)}-\frac{\pi^{d/2}}{2\Gamma(d/2 +2)}\sum_{j}\left(\frac{1}{2}(\sum_{k}\lambda_{j,k})^{2}+\sum_{k}\lambda_{j,k }^{2}\right)\] \[+\frac{2\pi^{d/2}}{(d+4)\Gamma(d/2+1)}\sum_{j,k}\lambda_{j,k}^{2} +O(\Lambda^{4})\] \[= \frac{\pi^{d/2}}{\Gamma(d/2)}\left[\frac{2}{d+2}-\frac{1}{d(d+2) }B+\frac{2}{(d+2)(d+4)}A\right]+O(\Lambda^{4}).\] To complete our computation of the upper trace, we divide by the total surface area \[\sum_{i}\frac{I_{1}(i)}{V}= \frac{\frac{\pi^{d/2}}{\Gamma(d/2)}\left[\frac{2}{d+2}-\frac{1}{d(d +2)}B+\frac{2}{(d+2)(d+4)}A\right]+O(\Lambda^{4})}{\frac{\pi^{d/2}}{\Gamma(d/2)} \left[\frac{2}{d}-\frac{1}{(d+2)(d+4)}B+\frac{2(d+6)}{(d+2)^{2}(d+4)}A\right]+ O(\Lambda^{4})}\] \[= \frac{d}{d+2}-\frac{1}{2(d+2)}B+\frac{d}{(d+2)(d+4)}A+\frac{d^{2} }{2(d+2)^{2}(d+4)}B-\frac{d^{2}(d+6)}{(d+2)^{3}(d+4)}A\] \[+O(\Lambda^{4})\] \[= \frac{d}{d+2}+\frac{4d-2d^{2}}{(d+2)^{3}(d+4)}A-\frac{3d+4}{(d+2) ^{2}(d+4)}B+O(\Lambda^{4}).\] ### Bounds for Upper Eigenvalues Here, we will determine the lower and upper bounds for the eigenvalues of \(\Sigma_{1}\) (found in (10) and (11) by approximating the expression in (32) without particular choice of \(\theta^{*}\). Fix \(1\leq j\leq D-d\) and perform a rotation so the eigenvectors of \(\tilde{Q}_{j}\) to be \(e_{1},...,e_{d}\) (keeping \(\theta^{*}=(y_{1},...,y_{d})\) arbitrary). 
By ignoring any terms with odd degrees of \(x_{k}\) and applying (19) and (20) \[\int_{S^{d-1}}(\theta\cdot y)^{2}\tilde{Q}_{j}(\theta)d\theta =\sum_{k}^{d}y_{k}^{2}\lambda_{j,k}^{2}\int_{S^{d-1}}x_{k}^{4}dx+ \sum_{k\neq l}y_{k}^{2}\lambda_{j,l}^{2}\int_{S^{d-1}}x_{k}^{2}x_{l}^{2}dx \tag{33}\] \[=\sum_{k}y_{k}^{2}\lambda_{j.k}^{2}\left(\frac{3}{2}\frac{\pi^{d/ 2}}{\Gamma(d/2+2)}\right)+\sum_{k\neq l}y_{k}^{2}\lambda_{j,l}^{2}\left(\frac{ 1}{2}\frac{\pi^{d/2}}{\Gamma(d/2+2)}\right)\] (34) \[\leq\sum_{k}y_{k}^{2}\sum_{l}\lambda_{j,l}^{2}\left(\frac{3}{2} \frac{\pi^{d/2}}{\Gamma(d/2+2)}\right)\] (35) \[=\frac{3}{2}\frac{\pi^{d/2}}{\Gamma(d/2+2)}\sum_{k}\lambda_{j,k}^ {2} \tag{36}\] Similarly, \[\int_{S^{d-1}}(\theta\cdot y)^{2}\tilde{Q}_{j}(\theta)d\theta\geq\frac{1}{2} \frac{\pi^{d/2}}{\Gamma(d/2+2)}\sum_{k}\lambda_{j,k}^{2}. \tag{37}\] Repeating the above strategy, we bound \[\int_{S^{d-1}}(\theta\cdot y)^{2}Q_{j}(\theta)^{2}d\theta=\int_{S^{d-1}}\left( \sum_{k=1}^{d}y_{k}^{2}x_{k}^{2}\right)\left(\sum_{l=1}^{d}\lambda_{j,l}x_{l} ^{2}\right)\left(\sum_{m=1}^{d}\lambda_{j,m}x_{m}^{2}\right)d\theta.\] Expanding the above and applying (21), (22), and (23) gives \[\frac{\pi^{d/2}}{\Gamma(d/2+3)}\left(\sum_{k}\frac{15y_{k}^{2}\lambda_{j,k}^{ 2}}{4}+\sum_{k\neq l\neq m\neq k}\frac{y_{k}^{2}\lambda_{j,l}\lambda_{j,m}}{4 }+2\sum_{k\neq l}\frac{3y_{k}^{2}\lambda_{j,k}\lambda_{j,l}}{4}+\sum_{k\neq l }\frac{3y_{k}^{2}\lambda_{j,l}^{2}}{4}\right).\] Thus, \[\int_{S^{d-1}}(\theta\cdot y)^{2}Q_{j}(\theta)^{2}d\theta \leq\frac{\pi^{d/2}}{\Gamma(d/2+3)}\left(\sum_{k}\frac{15y_{k}^{2}} {4}\sum_{l}\lambda_{j,l}^{2}+\sum_{k}y_{k}^{2}\frac{3}{2}\left((\sum_{l}\lambda _{j,l})^{2}+\sum_{l}\lambda_{j,l}^{2}\right)\right) \tag{38}\] \[=\frac{\pi^{d/2}}{\Gamma(d/2+3)}\left(\frac{21}{4}\sum_{l}\lambda_ {j,l}^{2}+\frac{3}{2}(\sum_{l}\lambda_{j,l})^{2}\right) \tag{39}\] Here, we simply acknowledge \[\int_{S^{d-1}}(\theta\cdot y)^{2}Q_{j}(\theta)^{2}d\theta\geq 0. 
\tag{40}\] Subbing (36) and (40) into (32), we find \[\int_{R}(x\cdot\theta^{*})^{2}S(x)dx \leq\int_{S^{d-1}}(\theta\cdot\theta^{*})^{2}\frac{1}{d+2}d \theta+\frac{2}{d+4}\frac{3}{2}\frac{\pi^{d/2}}{\Gamma(d/2+2)}\sum_{j,k} \lambda_{j,k}^{2}+O(\Lambda^{4})\] \[=\int_{S^{d-1}}x_{1}^{2}\frac{1}{d+2}d\theta+\frac{3}{d+4}\frac{ \pi^{d/2}}{\Gamma(d/2+2)}A+O(\Lambda^{4})\] \[=\frac{1}{d+2}\frac{\pi^{d/2}}{\Gamma(d/2+1)}+\frac{3}{d+4}\frac{ \pi^{d/2}}{\Gamma(d/2+2)}A+O(\Lambda^{4})\] \[=\frac{\pi^{d/2}}{\Gamma(d/2)}\left[\frac{2}{d(d+2)}+\frac{12}{d (d+2)(d+4)}A\right]+O(\Lambda^{4})\] To normalize the upper bound we divide the above by \(V\) to get \[\frac{\int_{R}(x\cdot\theta^{*})^{2}S(x)dx}{\int_{R}S(x)dx} \leq\frac{\frac{\pi^{d/2}}{\Gamma(d/2)}\left[\frac{2}{d(d+2)}+ \frac{12}{d(d+2)(d+4)}A\right]+O(\Lambda^{4})}{\frac{\pi^{d/2}}{\Gamma(d/2)} \left[\frac{2}{d}-\frac{1}{(d+2)(d+4)}B+\frac{2(d+6)}{(d+2)^{2}(d+4)}A\right] +O(\Lambda^{4})}\] \[=\frac{1}{d+2}+\frac{6}{(d+2)(d+4)}A\] \[-\frac{1}{d+2}\left[-\frac{d}{2(d+2)(d+4)}B+\frac{d(d+6)}{(d+2)^ {2}(d+4)}A\right]+O(\Lambda^{4})\] \[=\frac{1}{d+2}+\frac{5d^{2}+18d+24}{(d+2)^{3}(d+4)}A+\frac{d}{2(d +2)^{2}(d+4)}B+O(\Lambda^{4})\] Subbing (37) and (39) into (32), we see \[\int_{R}(x\cdot\theta^{*})^{2}S(x)dx \geq\frac{1}{d+2}\frac{\pi^{d/2}}{\Gamma(d/2+1)}-\frac{\pi^{d/2}}{ \Gamma(d/2+3)}\left(\frac{21}{4}\sum_{j}\sum_{l}\lambda_{j,l}^{2}+\frac{3}{2} \sum_{j}(\sum_{l}\lambda_{j,l})^{2}\right)\] \[+\frac{1}{d+4}\frac{\pi^{d/2}}{\Gamma(d/2+2)}\sum_{j}\sum_{k} \lambda_{j,k}^{2}+O(\Lambda^{4})\] \[=\frac{1}{d+2}\frac{\pi^{d/2}}{\Gamma(d/2+1)}-\frac{\pi^{d/2}}{ \Gamma(d/2+3)}\left(\frac{21}{4}A+\frac{3}{2}B\right)\] \[+\frac{\pi^{d/2}}{\Gamma(d/2+3)}\frac{1}{2}A+O(\Lambda^{4})\] \[=\frac{\pi^{d/2}}{\Gamma(d/2)}\left[\frac{2}{d(d+2)}-\frac{8}{d( d+2)(d+4)}\left(\frac{19}{4}A+\frac{3}{2}B\right)\right]+O(\Lambda^{4}).\] To obtain the lower bound on the eigenvalues, we divide by total surface area: \[\frac{\int_{R}(x\cdot\theta^{*})^{2}S(x)dx}{\int_{R}S(x)dx}\geq \frac{\frac{2}{d(d+2)}-\frac{8}{d(d+2)(d+4)}\left(\frac{19}{4}A+ \frac{3}{2}B\right)+O(\Lambda^{4})}{\frac{2}{d}-\frac{1}{(d+2)(d+4)}B+\frac{2 (d+6)}{(d+2)^{2}(d+4)}A+O(\Lambda^{4})}\] \[= \frac{1}{d+2}-\frac{19}{(d+2)(d+4)}A-\frac{6}{(d+2)(d+4)}B+\frac{ d}{2(d+2)^{2}(d+4)}B\] \[-\frac{d(d+6)}{(d+2)^{3}(d+4)}A+O(\Lambda^{4})\] \[= \frac{1}{d+2}-\frac{20d^{2}+82d+76}{(d+2)^{3}(d+4)}A-\frac{11d+24 }{2(d+2)^{2}(d+4)}B+O(\Lambda^{4})\] ### Lower Eigenvalues We repeat the strategy of prior computations, taking advantage of the fact that \(Q_{j}(x)^{2}-\bar{Q}_{j}^{2}=O(\Lambda^{2})\) so the \(O(\Lambda^{2})\) terms in \(S(x)\) turn into higher-order error. Similarly, we are able to take \(r(\theta)\approx 1\) once we have integrated in \(r\). 
\[\int_{R}(Q_{j}(x)^{2}-\bar{Q}_{j}^{2})S(x)dx =\int_{R}Q_{j}(x)^{2}-\bar{Q}_{j}^{2}dx+O(\Lambda^{4})\] \[=\int_{S^{d-1}}\int_{0}^{r(\theta)}r^{d-1}\left[r^{2}Q_{j}(\theta )^{2}-\bar{Q}_{j}^{2}\right]drd\theta+O(\Lambda^{4})\] \[=\int_{S^{d-1}}\frac{1}{d+2}r(\theta)^{d+2}Q_{j}(\theta)^{2}- \frac{1}{d}r(\theta)^{d}\bar{Q}_{j}^{2}d\theta+O(\Lambda^{4})\] \[=\int_{S^{d-1}}\frac{1}{d+2}Q_{j}(\theta)^{2}-\frac{1}{d}\bar{Q}_ {j}^{2}d\theta+O(\Lambda^{4})\] By Lemma A.2 and substituting our value of \(\bar{Q}_{i}\), \[\int_{R}(Q_{j}(x)^{2}-\bar{Q}_{j}^{2})S(x)dx= \left(\frac{1}{2(d+2)}(\sum_{k}\lambda_{j,k})^{2}+\frac{1}{d+2} \sum_{k}\lambda_{j,k}^{2}\right)\frac{\pi^{d/2}}{\Gamma(d/2+2)}\] \[-\frac{1}{d}\frac{2\pi^{d/2}}{\Gamma(d/2)}\left(\bar{Q}_{i} \right)^{2}+O(\Lambda^{3})\] \[= \left(\frac{1}{2(d+2)}(\sum_{k}\lambda_{j,k})^{2}+\frac{1}{d+2} \sum_{k}\lambda_{j,k}^{2}\right)\frac{\pi^{d/2}}{\Gamma(d/2+2)}\] \[-\frac{1}{d}\frac{2\pi^{d/2}}{\Gamma(d/2)}\left(\frac{1}{d+2} \sum_{j=1}^{d}\lambda_{i,j}+O(\Lambda^{2})\right)^{2}+O(\Lambda^{3})\] \[= \frac{\pi^{d/2}}{\Gamma(d/2)}\left[\frac{2}{d(d+2)^{2}}B_{j}+ \frac{4}{d(d+2)^{2}}A_{j}-\frac{2}{d(d+2)^{2}}B_{j}\right]+O(\Lambda^{3})\] \[= \frac{\pi^{d/2}}{\Gamma(d/2)}\frac{4}{d(d+2)^{2}}A_{j}+O(\Lambda ^{3}).\] We normalize to get \[\frac{\int_{R}(Q_{i}(x)-\bar{Q}_{i})^{2}S(x)dx}{\int_{R}S(x)dx} =\frac{\frac{4}{d(d+2)^{2}}A_{j}+O(\Lambda^{3})}{\frac{2}{d}- \frac{1}{(d+2)(d+4)}B+\frac{2(d+6)}{(d+2)^{2}(d+4)}A+O(\Lambda^{4})}\] \[=\frac{2}{(d+2)^{2}}A_{j}+O(\Lambda^{3}).\] Lastly, summing over \(i\) gives (9). ## Acknowledgments We would like to thank Yariv Aizenbud for sharing code used to generate the airplane photos for our experiments.
2309.10171
Specification-Driven Video Search via Foundation Models and Formal Verification
The increasing abundance of video data enables users to search for events of interest, e.g., emergency incidents. Meanwhile, it raises new concerns, such as the need for preserving privacy. Existing approaches to video search require either manual inspection or a deep learning model with massive training. We develop a method that uses recent advances in vision and language models, as well as formal methods, to search for events of interest in video clips automatically and efficiently. The method consists of an algorithm to map text-based event descriptions into linear temporal logic over finite traces (LTL$_f$) and an algorithm to construct an automaton encoding the video information. Then, the method formally verifies the automaton representing the video against the LTL$_f$ specifications and adds the pertinent video clips to the search result if the automaton satisfies the specifications. We provide qualitative and quantitative analysis to demonstrate the video-searching capability of the proposed method. It achieves over 90 percent precision in searching over privacy-sensitive videos and a state-of-the-art autonomous driving dataset.
Yunhao Yang, Jean-Raphaël Gaglione, Sandeep Chinchali, Ufuk Topcu
2023-09-18T21:40:08Z
http://arxiv.org/abs/2309.10171v1
# Specification-Driven Video Search via Foundation Models and Formal Verification ###### Abstract The increasing abundance of video data enables users to search for events of interest, e.g., emergency incidents. Meanwhile, it raises new concerns, such as the need for preserving privacy. Existing approaches to video search require either manual inspection or a deep learning model with massive training. We develop a method that uses recent advances in vision and language models, as well as formal methods, to search for events of interest in video clips automatically and efficiently. The method consists of an algorithm to map text-based event descriptions into linear temporal logic over finite traces (LTL\({}_{f}\)) and an algorithm to construct an automaton encoding the video information. Then, the method formally verifies the automaton representing the video against the LTL\({}_{f}\) specifications and adds the pertinent video clips to the search result if the automaton satisfies the specifications. We provide qualitative and quantitative analysis to demonstrate the video-searching capability of the proposed method. It achieves over 90 percent precision in searching over privacy-sensitive videos and a state-of-the-art autonomous driving dataset. ## Introduction The increasing abundance of video data enables users to search for events of interest, but existing approaches to searching through videos are either inefficient or require manual inspection. Various cameras, such as security or vehicle dash cameras, gather terabytes of video data, and manually searching for events (e.g., vehicle crashes) over such large-scale data is impractical. Existing automated video search approaches require either excessive human annotations [1] or massive training for neural networks [1, 1], which are inefficient. Furthermore, none of these approaches provide guarantees on the correctness of their search results. We develop a method that uses recent advances in vision and language models, as well as formal methods, to search for events of interest in video efficiently with probabilistic guarantees. The method includes an algorithm that maps texts to formal specifications and an algorithm to construct automata encoding information from videos, as illustrated in Figure 1. In practice, many events of interest are expressed in natural language. We accordingly design an algorithm that sends the text-based event description to a text generation model, e.g., GPT-series, and queries the model for the formal specifications and propositions describing the event. This algorithm converts texts to an interpretable format that we can use to search through videos. Meanwhile, we design an algorithm to construct automata encoding the video information. The algorithm utilizes a vision-language model to determine whether the event described by the proposition happens in the video, with a score indicating the model's confidence. The algorithm calibrates the confidence to the prediction accuracy of the search query using uncertainty quantification techniques for deep learning models. It uses the calibrated confidence to compute the probability that the automaton satisfies the specifications. Hence, we can obtain the probability of each video satisfying the event description during video search and add videos with probabilities above 0.5 to our search result. In contrast to existing approaches, this method is fully automated, computationally inexpensive, and can provide probabilistic guarantees through formal verification. 
We provide case studies and quantitative analysis on privacy-sensitive videos and the state-of-the-art autonomous driving dataset to demonstrate the video-searching capability of the proposed pipeline. We use the ground truth annotations from the datasets to evaluate the proposed pipeline, and the search results achieve over 90 percent precision. Figure 1: Demonstration of searching an event of interest—described in texts—within a given video. ## Related Work **Symbolic Representations.** Video searching or understanding typically requires the videos to be converted into some symbolic representations encoding the video information. Existing works [12, 13] utilize object detection models to form symbolic representations of videos that encode object information (e.g., existence, features). Some other works [11, 13, 14] construct graphs that represent each event or object as a temporal sequence of spatial information, which improves the interpretability of the raw data. However, none of the existing video representations are capable of formal verification. We introduce a video representation that can be formally verified against the provided specifications. Another work [11] uses a deep learning model to classify images and builds a deterministic finite automaton representing an image sequence through the classification results. It then verifies the automaton against temporal logic specifications. This work neither considers potential image classification errors nor provides probabilistic guarantees. In contrast, we incorporate the confidence scores returned by the foundation model to provide probabilistic guarantees. In addition to Umili et al., we propose a method to map natural language to temporal logic specifications. **Video Understanding.** Video applications like video searching require understanding the content of the videos. Many works focus on short-form video understanding [14, 15, 16], i.e., understanding videos less than five seconds, and long-form video understanding [13, 14, 15, 16], e.g., movie question-answering. The existing works interpret the videos to detect actions in the video [13, 14, 15], track objects [16, 17], or detect shot transitions [15]. However, these works do not provide any guarantees on their video understanding outcomes. The pipeline we proposed provides guarantees by formally verifying videos against the text-based descriptions of events of interest. **Formal Verification.** Formal verification is a technique to prove or disprove the satisfaction of a system with respect to certain formal specifications or properties. Such a technique requires a formal representation of the system, such as a finite-state automaton (FSA) or a Markov Decision Process (MDP). Bai et al. [14] propose a method to construct MDPs from driving behaviors and verify the safety of the behaviors. Bai et al. [12] use MDPs to represent robotic control systems and verify the safety of the system. Yang et al. [16] construct FSAs for textual task knowledge extracted from large language models. None of the existing works apply to videos alone. We take advantage of the emerging foundation models to build formal representations of videos. 
## Preliminaries **Probabilistic Automaton.** We formally define a _probabilistic automaton_ (PA) [14] as a tuple \(\mathcal{A}=\langle Q,q_{0},F,L,\delta,\lambda\rangle\), where \(Q\) is the set of states, \(q_{0}\in Q\) is the initial state, \(F\in Q\) is the set of acceptance states, \(L\) is the set of state labeling symbols, \(\lambda:Q\to L\) is the label function, and \(\delta:Q\times Q\rightarrow\{0,1\}\) is the transition function indicating whether a given transition is allowed by the automaton. Each transition (\(q\xrightarrow{\sigma}q^{\prime}\)) associates with a probability \(\mathbb{P}\), which means if we are at state \(q\), we choose the transition \(q\xrightarrow{\sigma}q^{\prime}\) with a probability \(\mathbb{P}\). Note that for every state, the probabilities of all the outgoing transitions of this state with the input symbol \(\sigma\) sum up to 1. We define a set of atomic propositions \(P\) for the state labeling symbols \(L\coloneqq 2^{P}\). The propositional logic formula is based on these atomic propositions. We then define a _trajectory_ as a sequence of labels of states and the input symbols of the transitions \[\lambda(q_{0}),\lambda(q_{1}),...,\lambda(q_{n})\text{, where }q_{i}\in Q.\] The trajectory starts from the initial state \(q_{0}\) and ends at one of the acceptance states \(q_{n}\in F\). Each trajectory is associated with a probability, which is the product of all the probabilities of the state transitions in the trajectory. **Linear Temporal Logic over Finite Traces.** Temporal logic represents propositional and first-order logical reasoning with respect to time. _Linear temporal logic over finite traces_ (LTL\({}_{f}\)) [15] is a form of temporal logic that deals with finite sequences, i.e., finite-length trajectories. The syntax of LTL\({}_{f}\) formulas is defined as \[\varphi\coloneqq p\in P\mid\neg\varphi\mid\varphi_{1}\wedge\varphi_{2}\mid \operatorname{\mathcal{O}}\varphi\mid\varphi_{1}\operatorname{\mathcal{U}} \varphi_{2}.\] LTL\({}_{f}\) consists of all operations from propositional logic, such as AND (\(\wedge\)), OR (\(\vee\)), XOR (\(\oplus\)), NOT (\(\neg\)), IMPLY (\(\rightarrow\)), etc., and the following temporal operations: * Always (\(\square\,\phi_{1}\)): \(\phi_{1}\) is true for every step in the trajectory. * Sometimes (\(\lozenge\,\phi_{1}\)): \(\phi_{1}\) is true for at least one step in the trajectory. * Next (\(\operatorname{\mathcal{O}}\phi_{1}\)): \(\phi_{1}\) is true in the next step. * Until (\(\phi_{1}\operatorname{\mathcal{U}}\phi_{2}\)): \(\phi_{1}\) has to be true until \(\phi_{2}\) becomes true, \(\phi_{2}\) has to be true at one of the future steps. \(\phi_{1}\) and \(\phi_{2}\) are LTL\({}_{f}\) formulas. An LTL\({}_{f}\) formula is composed of variables in \(P\) and logic operations, as we call a _specification_. Each formula can be satisfied by a finite sequence of truth valuations of variables in \(P\), as we call a trajectory. If a trajectory \(T\) satisfies the specification \(\Phi\), we denote \(T\models\Phi\). **Foundation Model.** Foundation models are large-scale machine learning models that are trained on a vast amount of data and can be directly applied or fine-tuned to a wide range of downstream tasks. _Large language models_ (LLMs) such as BERT [11], CodeX [12], and GPT-series [13, 14] are text foundation models. LLMs are capable of downstream language processing tasks like text generation (next-word prediction), question-answering, text classification and summarization, machine translation, etc. 
Beyond text capability, several multimodal foundation models can understand and process both visual and textual inputs. CLIP Radford et al. (2021) measures the similarity or consistency between the input texts and images. Object detection models such as Yolo Redmon et al. (2016), Grounded-Segment-Anything (Grounded-SAM) Liu et al. (2023); Kirillov et al. (2023), ViLD Gu et al. (2022), GLIP Li et al. (2022) and R-CNN Ren et al. (2017) can detect the existence and positions of objects described in the textual inputs from a given image. In the later parts, we refer to these models as _vision-language models_ (VLMs).

## Methodology

We develop a method to search for events of interest in video. For each video clip, the method takes in a set of text-based event descriptions and outputs a score indicating the probability of the description being satisfied in the video. The method has two components: The first is an algorithm to map text-based event descriptions into LTL\({}_{f}\) specifications. The second component is an algorithm to construct a probabilistic automaton encoding the video information. Then, the method verifies the automaton against the specifications to obtain a probability that the automaton satisfies the LTL\({}_{f}\) specifications. By applying this method to a set of videos, we can efficiently search for events of interest in these videos by finding all the videos whose automata satisfy the specifications.

### Text-Based Description to LTL\({}_{f}\) Specification

We develop an algorithm to map text-based descriptions of events of interest to LTL\({}_{f}\) specifications. The algorithm first extracts a set of atomic propositions by querying the LLM to extract _noun phrases_ from the texts:

```
Extract noun phrases from the following rules: A list of rules.
1. noun phrase 1
2. noun phrase 2
...
```

Note that a noun phrase is a group of words that functions as a single unit within a sentence and centers around a noun. We consider these noun phrases as atomic propositions. Then, the algorithm queries the LLM again to transform the textual rules to a set of LTL\({}_{f}\) specifications with respect to the atomic propositions:

```
Define the following rules in temporal logic with atomic propositions noun phrase 1, noun phrase 2, ...: A list of rules.
1. Temporal logic formula 1
2. Temporal logic formula 2
...
```

Now we have the sequence of frames \(\mathcal{F}\), a set of atomic propositions \(P\), and LTL\({}_{f}\) specifications \(\Phi\).

### Video to Probabilistic Automaton

We develop an algorithm to extract information from videos and construct probabilistic automata encoding that information. The algorithm starts from a provided video and a set of textual rules regarding the video. We extract frames from the video at regular intervals, where each frame is an image. We denote the number of frames extracted in each second of the video as _frame frequency_, with a unit of _frames per second_.

#### Calibrating Confidence and Accuracy

We use a VLM, e.g., an open-domain object detection model, to evaluate each atomic proposition at each frame. In particular, we use a VLM that can take text-based propositions and the frame as inputs and return a _confidence score_ for each proposition. A confidence score is a softmax score that indicates how certain the model is about its prediction. These scores are between 0 and 1, where a value close to 1 indicates high confidence and a value close to 0 means low confidence.
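As one simplified stand-in for such a VLM, CLIP's text-image similarity can be used to score each proposition against a frame. The sketch below uses the Hugging Face `transformers` interface and softmax-normalizes across propositions; the method as described instead relies on per-proposition detection confidences from an open-vocabulary detector such as Grounded-SAM, so this is only an illustration.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def proposition_scores(frame, propositions):
    """Score each text proposition against a single video frame (a PIL image)."""
    inputs = processor(text=propositions, images=frame, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image       # shape: (1, num_propositions)
    scores = logits.softmax(dim=-1).squeeze(0)
    return dict(zip(propositions, scores.tolist()))

# hypothetical usage on one extracted frame (the file name is illustrative only)
# scores = proposition_scores(Image.open("frame_0001.png"), ["a pedestrian", "a fire truck"])
```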
The model detects whether the object or scene described by the atomic proposition appears in the image with a certain confidence. We evaluate the proposition as true only if the object or scene is detected. However, the VLM can make detection errors. We consider a detection result to be _correct_ only if the VLM detected an object that actually exists in the image. Therefore, we utilize the confidence scores returned by the model and estimate a mapping between confidence scores and the percentage of correct detections over all the detection results, which we call the _classification accuracy_.

**Definition 1**.: Let \(N\) be the number of detection results whose confidence scores returned by the VLM fall into a particular range \([c_{1},c_{2})\), and let \(C\) be the number of correct detection results (the detected object actually appears in the image) with confidence scores in \([c_{1},c_{2})\). We define the classification accuracy \(A_{C}\) of the confidence interval \([c_{1},c_{2})\) as
\[A_{C}=\frac{C}{N}.\]

We assume that a VLM performs consistently on data outside its training dataset. Under this assumption, we consider the classification accuracy at each confidence interval on a validation dataset as the probability of the detection result being correct on realistic data. Therefore, we apply the model to an image classification task on the validation dataset to estimate a confidence-accuracy mapping and assume it also applies to realistic data. We apply the VLM on a validation dataset independent of the video to obtain the classification accuracies at each confidence interval. Hence, we get a set of confidence-accuracy pairs. Then, we use a logistic function
\[f(x)=\frac{1}{1+\exp(-k\cdot(x-x_{0}))} \tag{1}\]
to estimate a mapping function \(\mathcal{M}:\mathcal{C}\mapsto\mathcal{A}\) that maps confidence scores to accuracies. We use these accuracies in constructing probabilistic automata.

#### Constructing Probabilistic Automaton

We now have a VLM \(M_{V}:\mathcal{F}\times P\mapsto C\) that takes in a frame and a proposition and returns a confidence score, a sequence of frames \(\mathcal{F}\), atomic propositions \(P\), and a mapping function \(\mathcal{M}\). We use them to construct a probabilistic automaton and verify it against the LTL\({}_{f}\) specifications \(\Phi\). We start by creating an initial state \(q_{0}\) whose label is _none_, meaning that the initial state's label will not be counted into the trajectory during verification. Then, we process the sequence of frames in order to construct a probabilistic automaton. We process each frame in four steps: We first send the frame and all atomic propositions to the VLM and obtain a list of confidence scores associated with the propositions. In the second step, we map the confidence scores into accuracies through the function \(\mathcal{M}\). In the third step, we create \(2^{|P|}\) (\(|P|\) is the number of atomic propositions) new automaton states, each corresponding to a conjunction of propositions in \(2^{P}\). The corresponding conjunction of propositions is the label for the state. As an example, if the atomic proposition set is \(P=\{p_{1},p_{2}\}\), we build four states with labels \(p_{1}\wedge p_{2}\), \(\neg p_{1}\wedge p_{2}\), \(p_{1}\wedge\neg p_{2}\), and \(\neg p_{1}\wedge\neg p_{2}\), respectively. In the fourth step, we compute a probability score for each newly created state at frame \(\mathcal{F}_{j}\).
For a new state \(q_{j,k}\), the probability is the product of the probabilities for all the propositions
\[\mathbb{P}_{j,k}=\prod_{p_{i}\in P}\mathcal{M}(M_{V}(\mathcal{F}_{j},p_{i})), \tag{2}\]
where \(\mathcal{F}_{j}\) is the current frame. The probability of the negation of a proposition (\(\neg p_{i}\)) is \(1-\mathcal{M}(M_{V}(\mathcal{F}_{j},p_{i}))\). For every new state \(q_{j,k}\) whose probability \(\mathbb{P}_{j,k}>0\), we construct transitions from all the states from the previous frame \(\mathcal{F}_{j-1}\) to \(q_{j,k}\) with probability \(\mathbb{P}_{j,k}\). After constructing the transitions, we remove all the new states that do not have incoming transitions. If \(\mathcal{F}_{j}\) is the first frame (\(j=1\)), we add transitions from the initial state \(q_{0}\) to every new state. We repeat the four steps for all the frames in \(\mathcal{F}\) and add the states created from the last frame to the set of acceptance states. Hence, we complete the automaton construction. We present the complete procedure in Algorithm 1. However, the resulting automaton will consist of \(|\mathcal{F}|\times 2^{|P|}\) (\(|\mathcal{F}|\) is the number of frames) states, which could lead to high computational complexity. We set two thresholds \(t_{T}\) and \(t_{F}\) and modify the mapping function to
\[\mathcal{M}^{\prime}(c)=\begin{cases}1\text{ if }c\geq t_{T}\\ \mathcal{M}(c)\text{ if }t_{F}<c<t_{T}\\ 0\text{ if }c\leq t_{F},\end{cases} \tag{3}\]
where \(c\in C\) is the confidence score. We use \(\mathcal{M}^{\prime}\) as the mapping function for Algorithm 1 instead of \(\mathcal{M}\). By doing so, we can eliminate a proportion of states due to zero-probability incoming transitions.
```
1:procedure Frames2Automata(Vision-language model \(M_{V}\), Frames \(\mathcal{F}\), Output propositions \(P\), Mapping function \(\mathcal{M}\), True threshold \(t_{T}\), False threshold \(t_{F}\))
2:  \(\Sigma,L,Q,F,\lambda,\delta\) = \(A\), \(2^{P}\), [\(q_{0}\)], [], [\((q_{0}:none)\)], []
3:  prev = [\(q_{0}\)]
4:  for \(\mathcal{F}_{j}\) in \(\mathcal{F}\) do
5:    probability = dictionary()
6:    for \(p_{i}\) in \(P\) do
7:      \(c=M_{V}(\mathcal{F}_{j},p_{i})\)
8:      probability[\(p_{i}\)] = \(\mathcal{M}(c)\)
9:      probability[\(\neg p_{i}\)] = \(1-\mathcal{M}(c)\)
10:    end for
11:    current = []
12:    for \(conj_{k}\) in conjunctions of \(2^{P}\) do
13:      \(\mathbb{P}_{j,k}=\prod_{p_{i}\in conj_{k}}\) probability[\(p_{i}\)]
14:      if \(\mathbb{P}_{j,k}>0\) then
15:        \(Q\).append(\(q_{j,k}\))
16:        current.append(\(q_{j,k}\))
17:        \(\lambda\).append(\((q_{j,k}:conj_{k})\))
18:        \(F\).append(\(q_{j,k}\)) if \(j=|\mathcal{F}|\)
19:        for \(q_{j-1}\) in prev do
20:          \(\delta\).append(\((q_{j-1},q_{j,k},\mathbb{P}_{j,k})\))
21:        end for
22:      end if
23:    end for
24:    prev = current
25:  end for
26:  return \(Q,I,F,\Sigma,L,\delta,\lambda\)
27:end procedure
```
**Algorithm 1** Automaton Construction from Video Frames

### Verification and Video Search

After we construct probabilistic automata for all the videos and convert event descriptions into LTL\({}_{f}\) specifications, we verify each automaton against the specification. If an automaton satisfies all the specifications with probability above a certain threshold (typically 0.5), we add the video corresponding to this automaton to our search result. By doing so, we can efficiently obtain all the videos containing the provided events of interest with probabilistic guarantees.

## Empirical Demonstration

We empirically demonstrate the video search method on multiple proof-of-concept examples.
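To summarize the construction just described, here is a compact Python sketch of the per-frame step of Algorithm 1 with the thresholded mapping \(\mathcal{M}^{\prime}\) of Eq. (3) and the state probabilities of Eq. (2). The helper `vlm_confidence` stands in for the VLM call \(M_{V}\) and is an assumption, not a real API; the default thresholds are the values used in the experiments below.
```python
# Sketch of one per-frame step of Algorithm 1 (illustration only).
from itertools import product

def thresholded_mapping(mapping, c, t_true=0.64, t_false=0.38):
    """Eq. (3): clamp confident detections to 0/1 to prune automaton states."""
    if c >= t_true:
        return 1.0
    if c <= t_false:
        return 0.0
    return mapping(c)

def frame_state_probabilities(frame, propositions, mapping, vlm_confidence):
    """Return {state label (tuple of signed propositions): probability} for one frame."""
    acc = {p: thresholded_mapping(mapping, vlm_confidence(frame, p)) for p in propositions}
    states = {}
    for signs in product([True, False], repeat=len(propositions)):
        label = tuple(p if s else f"not {p}" for p, s in zip(propositions, signs))
        prob = 1.0
        for p, s in zip(propositions, signs):
            prob *= acc[p] if s else (1.0 - acc[p])
        if prob > 0:                 # states with zero probability are dropped
            states[label] = prob
    return states
```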
For each video clip, we verify it against the provided specifications and add it to the search result if the verification probability is above 50 percent. Then, we provide quantitative analysis on a privacy-annotated video dataset and on a state-of-the-art autonomous driving dataset. Both datasets contain ground-truth annotations which we can use to evaluate the performance of our video search results.

### Confidence-Accuracy Mapping: Grounded-SAM

We choose an open-domain object detection model--Grounded-SAM [11, 12]--as the VLM for the experiments and estimate the confidence-accuracy mapping on a validation dataset, ImageNet [10]. We send each image and the complete list of labels from the validation dataset to the Grounded-SAM to obtain a confidence score with a label for each image. If the object detected by the Grounded-SAM is identical to the image's label, then we consider this case a correct prediction. Figure 2 shows the confidence-accuracy mapping. We estimate this mapping using a logistic function
\[\mathcal{M}(x)=\frac{1}{1+\exp(-50\cdot(x-0.56))}, \tag{4}\]
which will be used for all the experiments in the later sections. According to Figure 2, the classification accuracy is consistently equal to one when the confidence is greater than 0.64 and consistently equal to zero when the confidence is less than 0.38. For the purpose of simplifying the constructed automaton, we set a true threshold \(t_{T}=0.64\) and a false threshold \(t_{F}=0.38\).

### Proof-of-Concept Demonstrations

We first demonstrate the proposed method through two proof-of-concept examples, where we search through recorded videos that satisfy manually generated rules.

#### Single-Rule Verification on College Introductory Videos

We start with an example that searches for videos that satisfy a single privacy rule: never show faces in the video. We follow the procedure of processing textual rules to LTL\({}_{f}\) specifications and obtain the specification \(\Phi\), where
\[\Phi=\square\,\neg\text{faces}.\]
During this procedure, we query GPT-4 (OpenAI 2023) to transform textual rules into LTL\({}_{f}\) specifications. We randomly select college introductory videos from YouTube and break each video into frames with a frequency of one frame per second. Next, we apply the Grounded-SAM object detection model to detect "faces" in each frame and present the detection results in Figure 3. Then, we follow Algorithm 1 to construct an automaton from the frames of each video. Figure 4 shows one sample automaton constructed from the frames of a college introductory video in Figure 3. We use a probabilistic model checker implemented by Stormpy (Junges and Volk 2021) to compute the probability that \(\Phi\) is satisfied. The model checker returns a probability of \(0.7\%\) that the video satisfies \(\Phi\). This means the video in Figure 3 will not be added to the search result, since its probability of satisfying \(\Phi\) is below 0.5. We repeat the verification procedure for all other college introductory videos and add videos whose probability of satisfying \(\Phi\) is greater than 0.5 to the search result. We present more college introductory videos in Appendix C.1.

#### Multiple-Rule Verification on Traffic Recordings

We have so far demonstrated the algorithm's capability in single-rule verification. We now search for traffic recording clips that satisfy multiple rules.
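For reference, the confidence-accuracy calibration described above (Eq. (1), instantiated as Eq. (4) for Grounded-SAM) can be estimated along the following lines. This is a minimal sketch assuming per-detection confidence scores and correctness flags from the validation set; the binning and starting values are illustrative choices.
```python
# Sketch: bin validation detections by confidence, compute the accuracy per bin,
# and fit the logistic function of Eq. (1) with scipy (illustration only).
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, k, x0):
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

def fit_confidence_accuracy(confidences, correct, n_bins=20):
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)      # 1 if the detection was correct, else 0
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    centers, accuracies = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences >= lo) & (confidences < hi)
        if mask.sum() > 0:                          # A_C = C / N within [lo, hi)
            centers.append((lo + hi) / 2)
            accuracies.append(correct[mask].mean())
    (k, x0), _ = curve_fit(logistic, centers, accuracies, p0=[50.0, 0.5])
    return lambda c: logistic(c, k, x0)             # the mapping M: confidence -> accuracy

# Example: mapping = fit_confidence_accuracy(scores, is_correct); mapping(0.7)
```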
We send the following prompt to GPT-4 to obtain the LTL\({}_{f}\) specifications regarding the traffic rules:
```
Given the propositions: pedestrian, stop sign, red light, green light, accelerate, and brake.
Transform the following rules into temporal logic formulas:
1. Yield to pedestrians.
2. Slow down after seeing the stop sign or red light.
3. Accelerate some time after the light turns from red to green.
```
The specifications are \(\Phi_{1}=\square\,(\text{pedestrian}\rightarrow\text{brake})\), \(\Phi_{2}=\square\,((\text{stop sign}\vee\text{red light})\rightarrow\operatorname{\mathcal{O}}\text{brake})\), and \(\Phi_{3}=\square\,((\text{red light}\wedge\operatorname{\mathcal{O}}\text{green light})\rightarrow\operatorname{\mathcal{O}}(\lozenge\,\text{accelerate}))\). We collect a set of driving recordings and extract one frame per second from the recordings. Then, we manually label each frame with driving operations 1-accelerate, 2-brake, or 0-no operation and construct a small-scale _Driving Control Dataset_. We present some sample data from the dataset in Appendix B. We apply the method to find recording clips that satisfy our selected traffic rules. We have the set of symbols \(L=\{\) accelerate, brake, no operation, pedestrian, stop sign, red light, green light \(\}\), where the first three symbols are labels from the dataset, and the others are evaluated through the VLM. An example of the frames from the dataset is presented in Figure 5, and the corresponding constructed automaton is presented in Figure 6. In this example, the probabilities of the three specifications being satisfied are 100%, 73%, and 100%, respectively. Since this recording clip satisfies all three specifications with probabilities greater than 0.5, we add this recording clip to the search result. We can apply this procedure to search for video clips from a long video, as presented in Appendix B.

Figure 2: Confidence score returned by the Grounded-SAM versus its classification accuracy (blue line). The estimated mapping function (green dotted line) is \(\mathcal{M}(x)=1/(1+\exp(-50\cdot(x-0.56)))\). The orange line is the benchmark mapping, which is an identity function \(\overline{\mathcal{M}}(x)=x\).

Figure 3: Object detection results on the frames from college introductory videos.

Figure 4: Automaton corresponding to the frames of the video in Figure 3. ‘T’ and ‘F’ in each state indicate the state’s label, either “faces = True” or “faces = False.”

Figure 5: Sample frames from the Driving Control Dataset with the object detection results. We present the label under each frame.

### Quantitative Analysis over Realistic Datasets

**Search over Privacy-Sensitive Videos.** We apply the proposed method on a realistic video dataset HMDB-51 [20] to verify the videos against several privacy rules and find all the videos that violate any of the privacy rules. We evaluate the proposed method in the metrics defined in Definition 2.
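As an aside, the three traffic specifications above can be checked deterministically on a fully labeled frame sequence, which makes their LTL\({}_{f}\) semantics concrete. The following small illustration treats each frame as the set of symbols that hold at that step:
```python
# Deterministic illustration of the three traffic specifications (probabilities ignored).
def always(frames, cond):                      # "always" over a finite trajectory
    return all(cond(frames, t) for t in range(len(frames)))

def eventually_from(frames, t, symbol):        # "sometimes symbol", starting at step t
    return any(symbol in f for f in frames[t:])

def phi1(frames):                              # always (pedestrian -> brake)
    return always(frames, lambda fs, t: "pedestrian" not in fs[t] or "brake" in fs[t])

def phi2(frames):                              # always ((stop sign or red light) -> next brake)
    return always(frames, lambda fs, t:
                  not ({"stop sign", "red light"} & fs[t])
                  or (t + 1 < len(fs) and "brake" in fs[t + 1]))

def phi3(frames):                              # always ((red light and next green light) -> next eventually accelerate)
    return always(frames, lambda fs, t:
                  not ("red light" in fs[t] and t + 1 < len(fs) and "green light" in fs[t + 1])
                  or eventually_from(fs, t + 1, "accelerate"))

clip = [{"red light", "brake"}, {"green light", "brake"}, {"accelerate"}]
print(phi1(clip), phi2(clip), phi3(clip))      # True True True
```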
**Definition 2**.: Let \(\Phi\) be an LTL\({}_{f}\) specification, \(N\) be the number of videos whose verification probabilities returned by the model checker fall into an interval \([c_{1},c_{2})\), \(T_{P}\) be the number of videos whose verification probabilities are in \([c_{1},c_{2})\) that actually satisfy \(\Phi\), \(T_{P}^{*}\) be the number of videos whose verification probabilities are in \([c_{1},1]\) that satisfy \(\Phi\), and \(A_{P}\) be the total number of videos in the dataset that satisfy \(\Phi\). We define the precision \(P_{c}\) and recall \(R_{c}\) as
\[P_{c}=\frac{T_{P}}{N}\text{ and }R_{c}=\frac{T_{P}^{*}}{A_{P}}.\]

In the experiments, we equally divide the probabilities into 20 intervals: \([0,0.05),[0.05,0.1),\ldots,[0.9,0.95),[0.95,1]\). In this setting, we obtain privacy annotations of the HMDB-51 dataset from PA-HMDB51 [20], which include characters' genders, races, etc., from the videos. We use these annotations as the ground truth to compute the accuracies. As for the first step, we take some of the annotations from PA-HMDB51 as the propositions and build a set of privacy rules by querying GPT-4:
```
Given the propositions: female, male, face, nude, black skin, white skin.
Transform the following rules into temporal logic formulas:
1. Never reveal gender.
2. Don't show the faces of nude people.
3. Never reveal races.
```
We then transform these rules into \(\mathrm{LTL}_{f}\) specifications:
\[\begin{array}{l}\Phi_{1}^{P}=\square(\neg\text{male }\land\neg\text{female}),\\ \Phi_{2}^{P}=\square\text{ (nude }\to\neg\text{ face}),\\ \Phi_{3}^{P}=\square(\neg\text{black skin }\land\neg\text{white skin}).\end{array}\]
We extract all the frames from videos and apply the Grounded-SAM to detect whether the objects described in the propositions appear in each frame. We obtain the confidence scores from the Grounded-SAM and follow Algorithm 1 to construct a probabilistic automaton for each video. The privacy annotations from the dataset are static scenes. Hence, we only consider the detection results from the VLM when constructing the automaton. Next, we use a probabilistic model checker to compute the probability that each video satisfies the \(\mathrm{LTL}_{f}\) specifications. By doing so, we can efficiently find all videos that violate any of the specifications, i.e., have verification probability below 50%. We evaluate our verification results (probabilities) using the ground truth privacy annotations from the dataset. We only select the following annotations to avoid ambiguity: race-black, race-white, gender-female, gender-male, gender-coexist, and nudity-semi-nudity. We evaluate the verification results over the metrics in Definition 2. For each interval, we compute the precision and recall, as presented in Figure 8.

\begin{table}
\begin{tabular}{||c|c|c|c||} \hline State & Label & State & Label \\ \hline 1,3,7.2 & accelerate & 2 & brake, stop sign \\ 4.2 & no-op & 4.1 & no-op, red light \\ 5.1,6 & brake, red light & 5.2 & brake \\ 7.1, 8 & accelerate, green & & \\ \hline \end{tabular}
\end{table}
Table 1: State labels of the automaton in Figure 6.

Figure 6: Automaton constructed from the frames in Figure 5. The labels for each state are in Table 1.

Figure 7: Object detection results on the HMDB-51 dataset.

Figure 8: The top and bottom figures show the verification precision and recall of verification results whose probabilities fall into each particular range.
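The per-interval precision and recall of Definition 2 can be computed along the following lines; a minimal sketch assuming each video comes with its verification probability and a ground-truth flag for whether it satisfies \(\Phi\):
```python
# Sketch of the per-interval precision/recall in Definition 2 (binning illustrative).
import numpy as np

def interval_precision_recall(results, n_intervals=20):
    """results: list of (verification probability, ground-truth satisfaction) pairs."""
    probs = np.array([p for p, _ in results])
    truth = np.array([t for _, t in results], dtype=bool)
    total_satisfying = truth.sum()                     # A_P
    edges = np.linspace(0.0, 1.0, n_intervals + 1)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (probs >= lo) & (probs < hi) if hi < 1.0 else (probs >= lo) & (probs <= 1.0)
        above = probs >= lo                            # probabilities in [lo, 1]
        n = in_bin.sum()
        precision = (truth & in_bin).sum() / n if n else float("nan")                    # T_P / N
        recall = (truth & above).sum() / total_satisfying if total_satisfying else float("nan")  # T_P* / A_P
        rows.append((lo, hi, precision, recall))
    return rows

# Example: interval_precision_recall([(0.93, True), (0.41, False), (0.72, True)])
```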
If we use 0.5 as the threshold to determine whether we should add the video to the search result, the precision is approximately 90%, and the recall is over 80%, indicating the reliability of our verification results. Search over Autonomous Driving Videos.We use the proposed method to search videos that satisfy the provided specifications within the NuScenes Dataset (Caesar et al., 2020). The NuScenes Dataset consists of 93,000 images with annotations of objects. We group 10 images and consider them as frames for one video clip. Then, we want to search for the video clips that satisfy our manually generated specifications: \[\begin{array}{l}\Phi_{1}^{T}=\square\text{ bicycle }\to\text{human},\\ \Phi_{2}^{T}=\square\text{ truck }\to(\lozenge\text{ car}),\\ \Phi_{3}^{T}=\lozenge\text{ car }\land(\lozenge\text{ human }).\end{array}\] From the specifications, we extract a set of propositions { bicycle, human, truck, car}. We apply the Grounded-SAM to detect objects described by these propositions and follow Algorithm 1 to construct a probabilistic automaton for each 10-frame video clip. Then, we use the probabilistic model checker to compute the probabilities of each video clip satisfying each specification. If a video satisfies all three specifications with a probability above 50%, we extract it as one of the search results. Since the dataset contains annotations of objects described by the propositions, we take these annotations to evaluate our verification results. We present both the object detection results and the annotations provided by the dataset in Figure 9. We evaluate the probabilistic verification results over the metrics in Definition 2. We indicate the reliability of the method in Figure 10. The precision of verification results whose probabilities are greater than 0.5 is around 95%, and the recall is above 80%. Although there exist small gaps between the empirical results and the ideal case, these gaps can be filled by the future development of VLMs. ## Conclusions We design a method for efficient video search, which consists of an algorithm that maps text-based event descriptions into \(\mathrm{LTL}_{f}\) formulas and an algorithm for constructing a probabilistic automaton encoding the video information. We calibrate the confidence and accuracy of the vision and language foundation model and use the calibrated confidence as probabilities in the probabilistic automaton. Hence, we can apply formal verification to compute the probability that the automaton of each video satisfies the \(\mathrm{LTL}_{f}\) formulas. By doing so, we can efficiently find all the videos that satisfy the event description with formal guarantees, i.e., probabilities above a certain threshold. Due to the limitation of the selected foundation model, our method only works for static event descriptions. As a future direction, we can incorporate action prediction models for searching dynamic events. Figure 10: The top and bottom figures show the verification precision and recall of verification results whose probabilities fall into each range. Figure 9: Examples of the object detection results on images from the NuScenes Dataset. The top row shows the object detection results by the Grounded-SAM, and the bottom row shows the annotated segments provided by the dataset.
2309.09420
Discovery and inference of a causal network with hidden confounding
This article proposes a novel causal discovery and inference method called GrIVET for a Gaussian directed acyclic graph with unmeasured confounders. GrIVET consists of an order-based causal discovery method and a likelihood-based inferential procedure. For causal discovery, we generalize the existing peeling algorithm to estimate the ancestral relations and candidate instruments in the presence of hidden confounders. Based on this, we propose a new procedure for instrumental variable estimation of each direct effect by separating it from any mediation effects. For inference, we develop a new likelihood ratio test of multiple causal effects that is able to account for the unmeasured confounders. Theoretically, we prove that the proposed method has desirable guarantees, including robustness to invalid instruments and uncertain interventions, estimation consistency, low-order polynomial time complexity, and validity of asymptotic inference. Numerically, GrIVET performs well and compares favorably against state-of-the-art competitors. Furthermore, we demonstrate the utility and effectiveness of the proposed method through an application inferring regulatory pathways from Alzheimer's disease gene expression data.
Li Chen, Chunlin Li, Xiaotong Shen, Wei Pan
2023-09-18T01:42:06Z
http://arxiv.org/abs/2309.09420v1
# Discovery and inference of a causal network with hidden confounding+ ###### Abstract This article proposes a novel causal discovery and inference method called GrIVET for a Gaussian directed acyclic graph with unmeasured confounders. GrIVET consists of an order-based causal discovery method and a likelihood-based inferential procedure. For causal discovery, we generalize the existing peeling algorithm to estimate the ancestral relations and candidate instruments in the presence of hidden confounders. Based on this, we propose a new procedure for instrumental variable estimation of each direct effect by separating it from any mediation effects. For inference, we develop a new likelihood ratio test of multiple causal effects that is able to account for the unmeasured confounders. Theoretically, we prove that the proposed method has desirable guarantees, including robustness to invalid instruments and uncertain interventions, estimation consistency, low-order polynomial time complexity, and validity of asymptotic inference. Numerically, GrIVET performs well and compares favorably against state-of-the-art competitors. Furthermore, we demonstrate the utility and effectiveness of the proposed method through an application inferring regulatory pathways from Alzheimer's disease gene expression data. Keywords: Causal discovery, Gaussian directed acyclic graph, Invalid instrumental variables, Uncertain interventions, Simultaneous inference, Gene regulatory network. ## 1 Introduction Understanding causal relations is part of the foundation of intelligence. A directed acyclic graph (DAG) is often used to describe the causal relations among multiple interacting units (Pearl, 2009). Unlike classical causal inference tasks where the DAG is determined a priori, causal discovery aims to learn a graphical representation from data. It is useful for forming data-driven conjectures about the underlying mechanism of a complex system, including gene networks (Sachs et al., 2005), functional brain networks (Liu et al., 2017), manufacturing pipelines (Kertel et al., 2022), and dynamical systems (Li et al., 2020). In such a situation, randomized experiments are usually unethical or infeasible, and unmeasured confounders commonly arise in practice. The presence of latent confounders can bias the causal effect estimation and even distort causal directions, making causal discovery challenging. To treat latent confounders, we use additive interventions as instrumental variables (IVs), which are well-developed in conventional causal inference (Angrist et al., 1996) yet are less explored in causal discovery of a large-scale network. In this article, we focus on a Gaussian DAG model with hidden confounders and develop methods that integrate the discovery and inference of causal relations within the framework of uncertain additive interventions (the targets of interventions are unknown). Causal discovery has been extensively studied (Zheng et al., 2018; Aragam et al., 2019; Gu et al., 2019; Lee and Li, 2022; Zhao et al., 2022; Li et al., 2023); see Drton and Maathuis (2017); Heinze-Deml et al. (2018); Glymour et al. (2019); Vowels et al. (2021) for comprehensive reviews. 
For observational data (without external interventions), some methods are able to treat hidden confounding by either (a) producing less informative discoveries, like a partial ancestral graph (Colombo et al., 2012) rather than a DAG, or (b) employing a certain deconfounding strategy (Frot et al., 2019; Shah et al., 2020) based on the pervasive confounding assumption. However, the former may not reveal essential information, such as causal directions, while the latter can be inconsistent in low-dimensional situations and may not necessarily outperform the naive regression (Grimmer et al., 2020). Thus, external interventions are useful to provide more information about causal relations while relaxing the requirements on latent confounding. As an example of external (additive) interventions, IVs have been well developed in conventional causal inference to tackle unmeasured confounding; see Lousdal (2018) for a survey. In a classical bivariate setting where the causal direction is known, an IV is required to influence the response variable only through the cause variable, which is often fragile in practice (Murray, 2006). For instance, genetic variants like single nucleotide polymorphisms (SNPs) are used as IVs in Mendelian randomization (MR) analysis to discover putative causal genes of complex traits, where the IV conditions are commonly violated due to the (horizontal) pleiotropy. Remedying these invalid IVs has been the subject of recent work in causal inference (Kang et al., 2016; Guo et al., 2018; Windmeijer et al., 2019; Burgess et al., 2020). The discussion of IV estimation in graphical modeling, however, remains limited. The methods of Oates et al. (2016); Chen et al. (2018) estimate the graph given valid IVs, while the work of Li et al. (2023) propose the peeling algorithm to construct the DAG in the case of uncertain interventions and invalid IVs. None of these methods permit latent confounding. A recent work (Xue and Pan, 2020) discusses causal discovery of a bivariate mixed effect graph where confounders and invalid IVs are allowed, but it remains unclear how to extend it to a large-scale causal network. Moreover, despite the progress in causal discovery, inference about the discovered re lations is often regarded as a separate task and has received less attention in the literature. Notable exceptions include recent advances in graphical modeling (Jankova and van de Geer, 2018; Li et al., 2020; Shi et al., 2023; Wang et al., 2023) and mediation analysis (Chakraborty et al., 2018; Shi and Li, 2021; Li et al., 2022); however, these methods cannot account for latent confounders. Indeed, due to unmeasured confounding, the probability distribution of observed variables is no longer locally Markovian with respect to the DAG (Pearl, 2009), rendering these approaches inappropriate. Consequently, there is a pressing need for new inference methodologies. This article contributes to the following aspects. * For modeling, we establish the identifiability conditions for a Gaussian DAG with latent confounders utilizing additive interventions. To our knowledge, this result is the first of its kind. Importantly, the conditions allow the interventions to have unknown and multiple targets, which is suitable for multivariate causal analysis (Murray, 2006). * For methodology, we develop a novel method named the Graphical Instrumental Variable Estimation and Testing (GrIVET), integrating order-based causal discovery and likelihood-based inference. 
For causal discovery, we estimate the ancestral relations and candidate IVs with a modified peeling algorithm to treat unmeasured confounding. On this basis, we propose a sequential procedure to estimate each direct effect using IVs, where a working response regression is used to separate the direct effect from the mediation effects. Regarding inference, we develop a new likelihood ratio test of multiple causal effects to account for unmeasured confounders. * For theory, we show that GrIVET enjoys desired guarantees. In particular, it consistently estimates the DAG structure and causal effects even when some interventions do not meet the IV criteria. As for computation, only \(O((p+|\mathcal{E}^{+}|)\times\log(s)\times(q^{3}+nq^{2}))\) operations are required almost surely, where \(p\) and \(q\) are the numbers of primary and intervention variables, \(s\) is sparsity, \(|\mathcal{E}^{+}|\) is the size of the ancestral relation set, and \(n\) is the sample size. Moreover, under the null hypothesis, we establish the convergence of the likelihood ratio statistic to the null distribution in high-dimensional situations, ensuring the validity of asymptotic inference. * The simulation studies and an application to the Alzheimer's Disease Neuroimaging Initiative dataset demonstrate the utility and effectiveness of the proposed methods. The implementation of GrIVET is available at [https://github.com/chunlinli/grivet](https://github.com/chunlinli/grivet). The rest of the article is structured as follows. Section 2 introduces a linear structural equation model with hidden confounders and establishes its identifiability. Section 3 presents a novel order-based method for causal discovery and effect estimation. Section 4 develops a likelihood ratio test for simultaneous inference of causal effects. Section 5 provides theoretical justification of the proposed method. Section 6 performs simulation studies, followed by an application to infer gene pathways with gene expression and SNP data. Finally, Section 7 concludes the article. The Appendix contains supporting lemmas, while the Supplementary Materials include illustrative examples, technical proofs, and additional simulations. ## 2 Causal graphical model with confounders ### Structural equations with confounders We consider a structural equation model with \(p\) primary variables \(\mathbf{Y}=(Y_{1},\ldots,Y_{p})^{\top}\) and \(q\) intervention variables \(\mathbf{X}=(X_{1},\ldots,X_{q})^{\top}\), \[\mathbf{Y}=\mathbf{U}^{\top}\mathbf{Y}+\mathbf{W}^{\top}\mathbf{X}+\mathbf{\varepsilon},\quad \mathbf{\varepsilon}\sim N(\mathbf{0},\mathbf{\Sigma}),\quad\text{Cov}(\mathbf{\varepsilon },\mathbf{X})=\mathbf{0}, \tag{1}\] where \(\mathbf{U}_{p\times p}\) is a matrix describing the causal influences among \(\mathbf{Y}\), \(\mathbf{W}_{q\times p}\) is a matrix representing the interventional effects of \(\mathbf{X}\) on \(\mathbf{Y}\), and \(\mathbf{\varepsilon}\) is a vector of possibly correlated errors. Specifically, * The parameter matrix \(\mathbf{U}\), which is of primary interest, has a causal interpretation in that \(\mathrm{U}_{kj}\neq 0\) indicates that \(Y_{k}\) is a cause of \(Y_{j}\), denoted by \(Y_{k}\to Y_{j}\). Thus, \(\mathbf{U}\) represents a directed graph among primary variables. In what follows, we will focus on a directed acyclic graph (DAG), where no directed cycle is permissible and \(\mathbf{U}\) is subject to the acyclicity constraint (Zheng et al., 2018; Yuan et al., 2019). 
* The intervention variables \(\mathbf{X}\) and errors \(\mathbf{\varepsilon}\) are uncorrelated by reparameterization. As a result, \(\mathbf{W}\) is associational instead of causal. Here, \(\mathrm{W}_{lj}\neq 0\) indicates that \(X_{l}\) intervenes on \(Y_{j}\), denoted by \(X_{l}\to Y_{j}\). As \(\mathbf{X}\) represents external interventions, no directed edge from a primary variable to an intervention variable is allowed. * A non-diagonal \(\mathbf{\Sigma}\) indicates the presence of unmeasured confounders. For instance, \(\mathbf{\varepsilon}=\mathbf{\Phi}^{\top}\mathbf{\eta}+\mathbf{e}\) can be (not uniquely) written as a sum of correlated components \(\mathbf{\Phi}^{\top}\mathbf{\eta}\) and independent components \(\mathbf{e}\) so that \(\mathbf{\Sigma}=\mathbf{\Phi}^{\top}\mathbf{\Phi}+\text{Diag}(\sigma_{1}^{2},\ldots,\sigma_ {p}^{2})\), where \(\mathbf{\Phi}_{r\times p}\) is the matrix of confounding effects, \(\mathbf{\eta}\sim N(\mathbf{0},\mathbf{I}_{r\times r})\) represents \(r\) independent confounding sources, and \(\mathbf{e}\sim N(\mathbf{0},\text{Diag}(\sigma_{1}^{2},\ldots,\sigma_{p}^{2}))\) represents \(p\) independent errors. Whenever \(\Sigma_{jk}\neq 0\) for some distinct \((j,k)\), we have \(\Sigma_{jk}=\sum_{m=1}^{r}\Phi_{mj}\Phi_{mk}\neq 0\), implying that some confounding variable \(\eta_{m}\) influences both \(Y_{j}\) and \(Y_{k}\). As such, \((\mathbf{U},\mathbf{W})\) together represents a directed graph of \(p\) primary variables and \(q\) intervention variables, denoted as \(\mathcal{G}=(\mathbf{X},\mathbf{Y};\mathcal{E},\mathcal{I})\), where \(\mathcal{E}=\{(k,j):\mathrm{U}_{kj}\neq 0\}\) is the set of primary variable edges and \(\mathcal{I}=\{(l,j):\mathrm{W}_{lj}\neq 0\}\) is the set of intervention edges. In \(\mathcal{G}\), (a) if \(Y_{k}\to Y_{j}\), then \(Y_{k}\) is a parent of \(Y_{j}\), and \(Y_{j}\) is a child of \(Y_{k}\), (b) if \(Y_{k}\to\cdots\to Y_{j}\) (a directed path from \(Y_{k}\) to \(Y_{j}\)), then \(Y_{k}\) is an ancestor of \(Y_{j}\), and \(Y_{j}\) is a descendant of \(Y_{k}\), and (c) if \(Y_{k}\to\cdots\to Y_{m}\to\cdots\to Y_{j}\), then \(Y_{m}\) is a mediator of \(Y_{k}\) and \(Y_{j}\). In what follows, for a graph \(\mathcal{G}\), denote the parent set of \(Y_{j}\) as \(\text{{pa}}_{\mathcal{G}}(j)=\{k:Y_{k}\to Y_{j}\}\), the ancestor set of \(Y_{j}\) as \(\text{{an}}_{\mathcal{G}}(j)=\{k:Y_{k}\to\cdots\to Y_{j}\}\), and the intervention set of \(Y_{j}\) as \(\text{{in}}_{\mathcal{G}}(j)=\{l:X_{l}\to Y_{j}\}\). For \((k,j)\) such that \(Y_{k}\to\cdots\to Y_{j}\), denote the mediator set as \(\text{{me}}_{\mathcal{G}}(k,j)=\{m:Y_{k}\to\cdots\to Y_{m}\to\cdots\to Y_{j}\}\). ### Identifiability and instrumental variables The causal parameter matrix \(\mathbf{U}\) is generally non-identifiable1 without further conditions on the Gaussian errors \(\boldsymbol{\varepsilon}\) or the interventions \(\boldsymbol{X}\). Without invoking external interventions (\(\mathbf{W}\equiv\mathbf{0}\)), \(\mathbf{U}\) can be identified under a certain error-scale assumption (Peters and Buhlmann, 2014; Ghoshal and Honorio, 2018; Rajendran et al., 2021), which is sensitive to variable scaling such as the common practice of standardizing variables (Reisach et al., 2021). To overcome this limitation, interventions are introduced to identify the causal parameters. 
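To make the role of hidden confounding in model (1) concrete, here is a small simulation sketch; the dimensions, graph, and parameter values are illustrative only. With a non-diagonal \(\mathbf{\Sigma}\), a naive regression among primary variables mixes mediated effects with confounding bias:
```python
# Illustrative simulation of model (1): Y = U^T Y + W^T X + eps with correlated errors,
# equivalently Y = (I - U^T)^{-1} (W^T X + eps). Values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
p, q, n = 3, 3, 500

U = np.zeros((p, p)); U[0, 1] = 0.8; U[1, 2] = 0.5      # Y1 -> Y2 -> Y3 (acyclic)
W = np.eye(q)                                            # X_l intervenes only on Y_l (valid IVs)
Phi = np.array([[0.6, 0.0, 0.6]])                        # one hidden confounder of Y1 and Y3
Sigma = Phi.T @ Phi + np.diag([1.0, 1.0, 1.0])           # confounded error covariance

X = rng.normal(size=(n, q))
eps = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
A = np.linalg.inv(np.eye(p) - U.T)                       # (I - U^T)^{-1}
Y = (A @ (W.T @ X.T + eps.T)).T                          # n x p data matrix

# The marginal regression of Y3 on Y1 mixes the mediated effect (0.8 * 0.5)
# with bias from the hidden confounder linking Y1 and Y3.
print(np.polyfit(Y[:, 0], Y[:, 2], 1)[0])
```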
With suitable interventions, \(\mathbf{U}\) is identifiable if no confounder is present in the model (\(\boldsymbol{\Sigma}\) is diagonal) (Oates et al., 2016; Chen et al., 2018; Li et al., 2023a). In addition, it is worth mentioning that \(\mathbf{U}\) can be estimated without intervention if the errors \(\boldsymbol{\varepsilon}\) are non-Gaussian (Shimizu et al., 2006; Zhao et al., 2022); however, such methods are not applicable in the case of unmeasured confounding. Footnote 1: The causal parameter \(\mathbf{U}\) is said to be identifiable if for any \((\mathbf{U},\mathbf{W},\boldsymbol{\Sigma})\) and \((\mathbf{U}^{\prime},\mathbf{W}^{\prime},\boldsymbol{\Sigma}^{\prime})\), we have \(\mathbb{P}_{\mathbf{U},\mathbf{W},\boldsymbol{\Sigma}}=\mathbb{P}_{\mathbf{U}^ {\prime},\mathbf{W}^{\prime},\boldsymbol{\Sigma}^{\prime}}\) implies \(\mathbf{U}=\mathbf{U}^{\prime}\). Otherwise, it is said to be non-identifiable. This subsection establishes the identifiability of (1) in the presence of unmeasured confounders using uncertain additive interventions (the targets of interventions are unknown) as IVs. To proceed, we introduce the notion of IV for our purpose. **Definition 1**.: An intervention variable \(X_{l}\) is said to be a valid IV of \(Y_{k}\) in \(\mathcal{G}\) if **(IV1)**\(X_{l}\) intervenes on \(Y_{k}\), namely \(\mathrm{W}_{lk}\neq 0\), and **(IV2)**\(X_{l}\) does not intervene on any other primary variable \(Y_{k^{\prime}}\), namely \(\mathrm{W}_{lk^{\prime}}=0\) for \(k^{\prime}\neq k\). Otherwise, \(X_{l}\) is called an invalid IV. Denote the valid IV set of \(Y_{k}\) as \(\textsc{iv}_{\mathcal{G}}(k)=\{l:X_{l}\to Y_{k},X_{l}\not \to Y_{k^{\prime}},k^{\prime}\neq k\}\). **Remark 1**.: Consider a bivariate case where we are interested in the potential causal effect \(Y_{1}\to Y_{2}\). In causal inference literature (Angrist et al., 1996; Kang et al., 2016), a valid IV \(X\) of \(Y_{1}\) is required to satisfy that (a) \(X\) is related to the \(Y_{1}\), referred to as relevance, (b) \(X\) has no directed edge to \(Y_{2}\), called exclusion, and (c) \(X\) is not related to unmeasured confounders, called unconfoundedness. In (1), (IV1) is indeed the relevance property, (IV2) generalizes the exclusion property for causal discovery, and the requirement \(\mathrm{Cov}(\boldsymbol{\varepsilon},\boldsymbol{X})=\mathbf{0}\) corresponds to the unconfoundedness. To identify \(\mathbf{U}\), two challenges emerge as the confounders arise. First, determining causal directions in the graph becomes more challenging. In (1), because of hidden confounding, the distribution \(\mathbb{P}(\boldsymbol{Y}\mid\boldsymbol{X})\) does not admit the causal Markov property (Pearl, 2009) according to \(\mathcal{G}\), that is, \(Y_{j}\) is not independent of its non-descendants given \((\boldsymbol{Y}_{\textsc{pa}_{\mathcal{G}}(j)},\boldsymbol{X})\). As a result, the existing methods based on this property can learn wrong causal directions due to misspecification. To identify causal directions, we formalize the concept of unmediated parents to highlight the causal relations that are critical in identification. **Definition 2**.: A primary variable \(Y_{k}\) is an unmediated parent of \(Y_{j}\) in \(\mathcal{G}\) if \(Y_{k}\to Y_{j}\) and there is no other directed path from \(Y_{k}\) to \(Y_{j}\). In other words, \(Y_{k}\) is an unmediated parent of \(Y_{j}\) if no mediator is between \(Y_{k}\) and \(Y_{j}\). Another challenge comes from uncertain interventions and invalid IVs. 
Assigning valid IVs for each primary variable can be difficult when the targets of interventions are unknown. Thus, it may be effective to construct a set of candidate IVs (including invalid IVs) for each primary variable, on which we estimate the causal parameters \(\mathbf{U}\). To this end, we define \(p\) candidate IV sets, one for each primary variable. **Definition 3**.: An intervention variable \(X_{l}\) is said to be a candidate IV of \(Y_{k}\) in \(\mathcal{G}\) if **(IV1')**\(X_{l}\) intervenes on \(Y_{k}\), and **(IV2')**\(X_{l}\) does not intervene on any non-descendant of \(Y_{k}\). Denote the candidate IV set of \(Y_{k}\) by \(\text{ca}_{\mathcal{G}}(k)=\{l:X_{l}\to Y_{k},X_{l}\to Y_{j}\text{ only if }k\in\text{an}_{\mathcal{G}}(j)\}\). The candidate IVs of \(Y_{k}\) include all valid IVs of \(Y_{k}\), but not vice versa. A candidate IV of \(Y_{k}\) may be invalid, as it could intervene on descendants of \(Y_{k}\). **Theorem 1** (Identifiability).: _Suppose_ 1. \(\operatorname{Cov}(\boldsymbol{X})\) _is positive definite._ 2. \(\operatorname{Cov}(Y_{j},X_{l}\mid\boldsymbol{X}_{\{1,\ldots,q\}\setminus\{l \}})\neq 0\) _whenever_ \(X_{l}\) _intervenes on an unmediated parent of_ \(Y_{j}\)_._ 3. (Majority rule)__\(|\text{iv}_{\mathcal{G}}(k)|>|\text{ca}_{\mathcal{G}}(k)|/2\)_;_ \(k=1,\ldots,p\)_._ _Then \((\mathbf{U},\mathbf{W},\boldsymbol{\Sigma})\) in (1) are identifiable in that if \((\mathbf{U},\mathbf{W},\boldsymbol{\Sigma})\) and \((\mathbf{U}^{\prime},\mathbf{W}^{\prime},\boldsymbol{\Sigma}^{\prime})\) encode the same probability distribution, then \((\mathbf{U},\mathbf{W},\boldsymbol{\Sigma})=(\mathbf{U}^{\prime},\mathbf{W}^{ \prime},\boldsymbol{\Sigma}^{\prime})\)._ To our knowledge, Theorem 1 is a new result for Gaussian DAG with hidden confounding, establishing the identifiability of all parameters in (1). In fact, if the causal parameter \(\mathbf{U}\) is identifiable, then so are parameters \(\mathbf{W},\boldsymbol{\Sigma}\). Regarding the conditions, (A1) states that \(\operatorname{Cov}(\boldsymbol{X})\) has full rank, which is common in the IV literature (Kang et al., 2016; Chen et al., 2018). Note that (A1) permits discrete IV variables such as SNPs in data analysis. (A2) requires the interventional effects through unmediated parents not to cancel out when an invalid IV has multiple targets. (A3) requires valid IVs to dominate invalid ones so that the causal effect can be identified in the presence of latent confounders. Such a condition has been used in the causal inference literature (Kang et al., 2016; Windmeijer et al., 2019). As shown in Supplementary Materials Section 1, when (A3) fails, (1) can be non-identifiable. By comparison, (A1)-(A2) together with (A4) are used for model identification in the absence of unmeasured confounding (Li et al., 2023a). 1. Each \(Y_{k}\) is intervened by at least one valid IV. Noting that (A4) is implied by (A3), treating hidden confounding demands stronger conditions in view of Theorem 1. ## 3 Causal discovery This section proposes a novel IV method to learn a DAG with unmeasured confounders. First, we introduce the ancestral relation graph (ARG), which, together with the candidate IV sets in Section 2.2, constitutes a basis for the proposed method. 
**Definition 4** (Ancestral relation graph).: For a DAG \(\mathcal{G}=(\mathbf{X},\mathbf{Y};\mathcal{E},\mathcal{I})\), its ancestral relation graph is defined as \(\mathcal{G}^{+}=(\mathbf{X},\mathbf{Y};\mathcal{E}^{+},\mathcal{I}^{+})\), where \[\mathcal{E}^{+}=\Big{\{}(k,j):k\in\textsc{an}_{\mathcal{G}}(j)\Big{\}},\qquad \mathcal{I}^{+}=\Big{\{}(l,j):l\in\bigcup_{k\in\textsc{an}_{\mathcal{G}}(j) \cup\{j\}}\textsc{in}_{\mathcal{G}}(k)\Big{\}}.\] Here, \(\mathcal{G}^{+}\) is a super-DAG of \(\mathcal{G}\) in that \(\mathcal{E}^{+}\supseteq\mathcal{E}\) is the set of ancestral relations, \(\mathcal{I}^{+}\supseteq\mathcal{I}\) is a superset of interventional relations, and \(\mathcal{G}^{+}\) is acyclic. Note that \(\mathcal{E}^{+}\) defines a partial order for the primary variables \(\mathbf{Y}\) in that \(Y_{k}\prec_{\mathcal{G}}Y_{j}\) whenever \((k,j)\in\mathcal{E}^{+}\). Without confounding, \(\mathbf{U}\) can be consistently estimated via direct regressions according to the known \(\mathcal{G}^{+}\)(Shojaie and Michailidis, 2010), where \(\mathcal{G}^{+}\) can be recovered by the peeling algorithm (Li et al., 2023a). However, this approach no longer applies in the presence of hidden confounders. To address this obstacle, Sections 3.1-3.2 modify the peeling algorithm to construct the ARG \(\mathcal{G}^{+}\) and the candidate IV sets \(\{\textsc{ca}_{\mathcal{G}}(k)\}_{1\leq k\leq p}\), and then Sections 3.3-3.4 develop a method to estimate \(\mathbf{U}\) assuming the ARG and candidate IVs are known. ### Identification of \(\mathcal{G}^{+}\) and candidate IVs In this subsection, we modify the peeling algorithm, originally designed for a model without unmeasured confounders (Li et al., 2023a), to uncover \(\mathcal{G}^{+}\) and \(\{\textsc{ca}_{\mathcal{G}}(k)\}_{1\leq k\leq p}\) in the presence of hidden confounders, of which the results can be subsequently used as the inputs for identification of \(\mathbf{U}\) in Section 3.3. The modified peeling algorithm essentially requires \(p\) regressions to identify the ARG and candidate IVs, which is suited for large-scale causal discovery. Moreover, the produced ARG and candidate IV sets enjoy desirable statistical properties; see Section 5. Let us begin with an observation that (1) can be rewritten as \[\mathbf{Y}=\mathbf{V}^{\top}\mathbf{X}+(\mathbf{I}-\mathbf{U}^{\top})^{-1}\mathbf{ \varepsilon}, \tag{2}\] where \(\mathbf{V}=\mathbf{W}(\mathbf{I}-\mathbf{U})^{-1}\) and \(\textsc{V}_{lj}=\sum_{k=1}^{p}\textsc{W}_{lk}(\textsc{I}_{kj}+\textsc{U}_{kj }+\cdots+(\mathbf{U}^{p-1})_{kj})\). Intuitively, \(\textsc{V}_{lj}\neq 0\) implies the dependence of \(Y_{j}\) on \(X_{l}\) through a directed path \(X_{l}\to Y_{k}\to\cdots\to Y_{j}\), and hence that \(X_{l}\) intervenes on \(Y_{j}\) itself (when \(k=j\)) or its ancestor \(Y_{k}\) (when \(k\neq j\)). In cases where \(X_{l}\) intervenes exclusively on one primary variable, the following proposition provides insights into the connection between \(\mathbf{V}\) and \(\mathcal{G}^{+}\). **Proposition 1**.: _Suppose Assumptions (A1), (A2), and (A4) are satisfied. There exists at least one intervention variable \(X_{l}\) such that \(\textsc{V}_{lk}\neq 0\) and \(\textsc{V}_{lk^{\prime}}=0\) for \(k^{\prime}\neq k\) if and only if \(Y_{k}\) is a leaf node (has no descendant). 
Moreover, such \(X_{l}\) is a valid IV of \(Y_{k}\) in \(\mathcal{G}\)._ Proposition 1 suggests that the leaves and their valid IVs in \(\mathcal{G}\) can be identified by \[\textsc{leaf}(\mathcal{G}) =\{k:\text{ for some }l,\textsc{V}_{lk}\neq 0\text{ and }\textsc{V}_{lk^{\prime}}=0\text{ for all }k^{\prime}\neq k\}\] \[=\{k:k=\operatorname*{arg\,max}_{j}|\textsc{V}_{lj}|\text{ for some }l=\operatorname*{arg\,min}_{\|\textbf{V}_{l,+}\|_{0}>0}\|\textbf{V}_{l,+}\|_{0}\},\] \[\textsc{iv}_{\mathcal{G}}(k) =\{l:\textsc{V}_{lk}\neq 0\text{ and }\textsc{V}_{lk^{\prime}}=0\text{ for all }k^{\prime}\neq k\} \tag{3}\] \[=\{l:l=\operatorname*{arg\,min}_{\|\textbf{V}_{l,+}\|_{0}>0}\| \textbf{V}_{l,+}\|_{0}\text{ and }k=\operatorname*{arg\,max}_{j}|\textsc{V}_{lj}|\},\quad k\in\textsc{ leaf}(\mathcal{G}).\] After the leaf nodes are learned, we can remove them to obtain a sub-DAG. If \(X_{l}\) is a valid IV of a non-leaf \(Y_{k}\) in \(\mathcal{G}\), its validity for \(Y_{k}\) is retained in the sub-DAG, implying (A4) continues to hold. Moreover, Assumptions (A1)-(A2) are naturally upheld in the sub-DAG. Hence, the requirements of Proposition 1 are satisfied in the sub-DAG, whose leaf variables and their valid IVs can be learned in the same fashion. As a result, we can successively identify and remove (i.e., peel) the leaf nodes from the DAG and sub-DAGs. This yields a topological order of primary variables but does not recover \(\mathcal{G}^{+}\). Next, we investigate how \(\mathbf{V}\) can be further used to recover \(\mathcal{G}^{+}\) with \(\{\textsc{ca}_{\mathcal{G}}(k)\}_{1\leq k\leq p}\). Subsequently, we use \(\mathcal{G}^{-}=(\mathbf{X}^{-},\mathbf{Y}^{-};\mathcal{E}^{-},\mathcal{I}^{-})\) to denote a generic sub-DAG produced by peeling, where \(\mathbf{Y}^{-}\) are the primary variables in \(\mathcal{G}^{-}\) and \(\mathbf{Y}\setminus\mathbf{Y}^{-}\) are peeled ones, \(\mathbf{X}^{-}\) are intervention variables on \(\mathbf{Y}^{-}\), \(\mathcal{E}^{-}\) is the set of causal relations among \(\mathbf{Y}^{-}\), and \(\mathcal{I}^{-}\) is the set of interventional relations between \(\mathbf{X}^{-}\) and \(\mathbf{Y}^{-}\). Then each variable in \(\mathbf{Y}^{-}\) is a non-descendant of each in \(\mathbf{Y}\setminus\mathbf{Y}^{-}\). Moreover, \(\textsc{leaf}(\mathcal{G}^{-})\) and \(\{\textsc{iv}_{\mathcal{G}^{-}}(k)\}_{k\in\textsc{leaf}(\mathcal{G}^{-})}\) are identified by (3). **Proposition 2**.: _Suppose Assumptions (A1), (A2), and (A4) are satisfied. Let \(Y_{k}\) be a leaf node in \(\mathcal{G}^{-}\) and \(Y_{j}\) be in \(\mathbf{Y}\setminus\mathbf{Y}^{-}\). Then the following statements are true._ 1. _If_ \(\textsc{V}_{lj}\neq 0\) _for all_ \(l\in\textsc{iv}_{\mathcal{G}^{-}}(k)\)_, we have_ \((k,j)\in\mathcal{E}^{+}\)_._ 2. _If_ \(Y_{k}\) _is an unmediated parent of_ \(Y_{j}\)_, then_ \(\textsc{V}_{lj}\neq 0\) _for all_ \(l\in\textsc{iv}_{\mathcal{G}^{-}}(k)\)_._ Proposition 2 outlines a method for identifying edges in \(\mathcal{G}^{+}\) from the leaf variables of \(\mathcal{G}^{-}\) to the peeled variables \(\mathbf{Y}\setminus\mathbf{Y}^{-}\) by \[\{(k,j):Y_{k}\in\textsc{leaf}(\mathcal{G}^{-}),\ Y_{j}\in\mathbf{Y}\setminus\mathbf{Y }^{-}\ \text{and}\ \textsc{V}_{lj}\neq 0\ \text{for all}\ l\in\textsc{iv}_{\mathcal{G}^{-}}(k)\}. \tag{4}\] Specifically, (A) shows that any identified edge must be present in \(\mathcal{G}^{+}\), so no extra edges are identified. Meanwhile, (B) shows that every directed edge from an unmediated parent must be correctly discovered. 
Importantly, the collection of all such edges suffices to recover all ancestral relationships, which guarantees that no edge in \(\mathcal{E}^{+}\) is overlooked. Upon the identification of \(\mathcal{G}^{+}\), the candidate IV sets can be learned by \[\textsc{ca}_{\mathcal{G}}(k)=\{l:(l,k)\in\mathcal{I}^{+}\ \text{and}\ (l,j)\in\mathcal{I}^{+},k\neq j\ \text{only if}\ (k,j)\in\mathcal{E}^{+}\},\quad 1\leq k\leq p. \tag{5}\] Consequently, Propositions 1-2 enable the recovery of \(\mathcal{G}^{+}\) and \(\{\textsc{ca}_{\mathcal{G}}(k)\}_{1\leq k\leq p}\). ### Finite-sample estimation of \(\mathcal{G}^{+}\) and candidate IVs This subsection implements the modified peeling algorithm delineated in Section 3.1 to estimate \(\mathcal{G}^{+}\) and \(\{\textsc{ca}_{\mathcal{G}}(k)\}_{1\leq k\leq p}\). To proceed, suppose data matrices \(\mathbf{Y}_{p\times n}=(\mathbf{Y}_{+,1},\ldots,\mathbf{Y}_{+,n})\) and \(\mathbf{X}_{q\times n}=(\mathbf{X}_{+,1},\ldots,\mathbf{X}_{+,n})\) are given, where \((\mathbf{Y}_{+,i},\mathbf{X}_{+,i})_{i=1}^{n}\) are sampled from (1) independently. We estimate \(\mathbf{V}\) by \(\widehat{\mathbf{V}}=(\widehat{\mathbf{V}}_{+,1},\ldots,\widehat{\mathbf{V}}_ {+,p})\) with sparse regressions \[\widehat{\mathbf{V}}_{+,j}=\operatorname*{arg\,min}_{\mathbf{\beta}}\ \sum_{i=1}^{n}( \textsc{Y}_{j,i}-\mathbf{\beta}^{\top}\mathbf{X}_{+,i})^{2}\quad\text{s.t.}\quad \|\mathbf{\beta}\|_{0}\leq\kappa^{\prime}_{j} \tag{6}\] where \(1\leq\kappa^{\prime}_{j}\leq q\) is tuned by BIC for \(1\leq j\leq p\) Moreover, the truncated Lasso penalty (TLP) (Shen et al., 2012) is used as the computational surrogate for \(\|\cdot\|_{0}\), where TLP is defined as \(\text{TLP}_{\tau}(\mathbf{\beta})=\sum_{j=1}^{r}\min(|\beta_{j}|/\tau,1)\) for \(\mathbf{\beta}=(\beta_{1},\ldots,\beta_{r})\), and \(\tau>0\) is a hyperparameter in TLP; see Supplementary Materials Section 2 for details. The modified peeling algorithm based on Section 3.1 is summarized in Algorithm 1.2 Footnote 2: In Algorithm 1 Step 7, the indices of \(\mathbf{V}\) are kept so that \(\text{V}_{lj}\) always represents the effect from \(X_{l}\) to \(Y_{j}\). 
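Before the formal pseudocode (Algorithm 1, next), here is a condensed Python sketch of one peeling pass based on (3). A plain Lasso stands in for the \(\ell_{0}\)/TLP-constrained regression (6); the sketch illustrates the idea and is not the GrIVET implementation:
```python
# Illustrative sketch only (Lasso replaces the L0/TLP regressions in (6)).
import numpy as np
from sklearn.linear_model import LassoCV

def estimate_V(Y, X):
    """Columnwise sparse regression of each Y_j on all of X, an estimate of V in (2)."""
    V = np.zeros((X.shape[1], Y.shape[1]))
    for j in range(Y.shape[1]):
        V[:, j] = LassoCV(cv=5).fit(X, Y[:, j]).coef_
    return V

def peel_leaves(V, remaining):
    """One peeling pass per (3): among the remaining primary variables, a row of V with
    minimal nonzero support points to a leaf, namely the column where |V_lj| is largest."""
    sub = V[:, remaining]
    support = (np.abs(sub) > 1e-8).sum(axis=1)
    rows = np.where(support > 0)[0]
    if rows.size == 0:
        return [], {}
    min_support = support[rows].min()
    leaves, valid_iv = set(), {}
    for l in rows:
        if support[l] == min_support:
            k = remaining[int(np.argmax(np.abs(sub[l, :])))]
            leaves.add(k)
            valid_iv.setdefault(k, []).append(l)
    return sorted(leaves), valid_iv

# Example: leaves, ivs = peel_leaves(estimate_V(Y, X), list(range(Y.shape[1])))
```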
``` Input: Data \(\mathbf{Y}_{p\times n}\) and \(\mathbf{X}_{q\times n}\); 1 Compute \(\widehat{\mathbf{V}}\) via (6); 2 Initialize \(\mathbf{V}\leftarrow\widehat{\mathbf{V}}\), \(\widehat{\mathcal{E}}^{+}\leftarrow\emptyset\), \(\widehat{\mathcal{I}}^{+}\leftarrow\{(l,k):\widehat{\text{V}}_{lk}\neq 0\}\); 3 Initialize \(\mathcal{G}^{-}\) by \(\mathbf{Y}^{-}\leftarrow\mathbf{Y}\), \(\mathbf{X}^{-}\leftarrow\mathbf{X}\), \(\mathcal{E}^{-}\leftarrow\widehat{\mathcal{E}}^{+}\), \(\mathcal{I}^{-}\leftarrow\widehat{\mathcal{I}}^{+}\); 4while\(\mathbf{Y}^{-}\) is not emptydo 5 Update leaf(\(\mathcal{G}^{-}\)) and \(\{\text{IV}_{\mathcal{G}^{-}}(k)\}_{k\in\text{\sc leaf}(\mathcal{G}^{-})}\) via (3); 6 Update \(\widehat{\mathcal{E}}^{+}\) by adding (4); 7 Update \(\mathcal{G}^{-}\) by removing leaf(\(\mathcal{G}^{-}\)) and \(\mathbf{V}\) by keeping the columns in \(\mathbf{Y}^{-}\); 8 9 end while 10 Update \(\widehat{\mathcal{E}}^{+}\leftarrow\{(k,j):Y_{k}\rightarrow\cdots\to Y _{j}\text{ in }\widehat{\mathcal{E}}^{+}\}\); 11 Update \(\widehat{\mathcal{I}}^{+}\leftarrow\{(l,j):(l,k)\in\widehat{\mathcal{I}}^{+}\text { and }(k,j)\in\widehat{\mathcal{E}}^{+}\}\); 12 Update \(\widehat{\text{CA}}_{\mathcal{G}}(k)\) by (5); 13return\(\widehat{\mathcal{E}}^{+}\), \(\widehat{\mathcal{I}}^{+}\), and \(\{\widehat{\text{CA}}_{\mathcal{G}}(k)\}_{1\leq k\leq p}\); ``` **Algorithm 1**Estimation of \(\mathcal{G}^{+}\) and \(\{\text{CA}_{\mathcal{G}}(k)\}_{1\leq k\leq p}\) ### Identification of \(\mathbf{U}\) In this subsection, we present a new method for identifying causal effects \(\mathbf{U}\), using the ARG \(\mathcal{G}^{+}\) and candidate IV sets \(\{\text{CA}_{\mathcal{G}}(k)\}_{1\leq k\leq p}\) as inputs. Note that \(\{\text{an}_{\mathcal{G}}(k)\}_{1\leq k\leq p}\) and \(\{\text{me}_{\mathcal{G}}(k,j)\}_{(k,j)\in\mathcal{E}^{+}}\) can be derived from \(\mathcal{G}^{+}\). Throughout this subsection, the subscript \(\mathcal{G}\) is dropped for brevity and \(\mathbf{\alpha},\mathbf{\beta},\mathbf{\gamma}\) denote nuisance parameters in regression. Moreover, we assume that \(\mathbf{\varepsilon}\) and \(\mathbf{X}\) are independent to simplify the derivation; see Lemmas 1-2 in the Appendix for the case with \(\mathbf{\varepsilon}\) and \(\mathbf{X}\) being uncorrelated. The case with all IVs being valid.We begin with a special case of (1) where all IVs are valid, that is, \(\text{ca}(k)=\text{iv}(k)\); \(k=1,\ldots,p\). To estimate \(\mathbf{U}\), note that \(\mathbf{U}\) is supported on \(\mathcal{E}^{+}\), namely \(\mathbf{U}=(\mathbf{U}_{\mathcal{E}^{+}},\mathbf{0})\). Here, we consider estimating \(\text{U}_{kj}\), as well as selecting nonzero \(\text{U}_{kj}\) for graph recovery, for each \((k,j)\in\mathcal{E}^{+}\), as described in Figure 1 (a). To pinpoint the difficulties and motivate our approach, we make the following observations. First, regression of \(Y_{j}\) on \(Y_{k}\) together with covariates \((\mathbf{Y}_{\text{an}(j)\setminus\{k\}},\mathbf{X})\) can bias the estimation due to confounder \(\eta\). Second, in hope of treating confounders one might replace with its surrogate \(\mathbb{E}(Y_{k}\mid\mathbf{Y}_{\textsc{an}(k)},\mathbf{X})\) to regress \(Y_{j}\) on \(\mathbb{E}(Y_{k}\mid\mathbf{Y}_{\textsc{an}(k)},\mathbf{X})\) with \((\mathbf{Y}_{\textsc{an}(j)\setminus\{k\}},\mathbf{X}_{\textsc{iv}(k)^{c}})\) being covariates. However, this is also problematic. 
For explanation, note that \(\textsc{an}(j)\setminus\{k\}\) can be partitioned into mediators \(\textsc{me}(k,j)\) and non-mediators
\[\textsc{nm}(k,j)=\textsc{an}(j)\setminus(\textsc{me}(k,j)\cup\{k\}).\]
In Figure 1 (a), \(\mathbf{X}_{\textsc{iv}(k)}\) can be associated with \(\eta\) given \(\mathbf{Y}_{\textsc{an}(j)\setminus\{k\}}=(\mathbf{Y}_{\textsc{me}(k,j)},\mathbf{Y}_{\textsc{nm}(k,j)})\), violating the unconfoundedness of IVs (Remark 1) and causing an estimation bias. This is because the mediators \(\mathbf{Y}_{\textsc{me}(k,j)}\) generate additional associations after conditioning on them; see the Appendix for technical discussion using the concept of d-separation (Pearl, 2009). Now, we propose a new method, which eliminates the impact of mediators \(\mathbf{Y}_{\textsc{me}(k,j)}\) by introducing the working response \(\overline{Y}_{j}=Y_{j}-\mathbf{U}_{\textsc{me}(k,j),j}^{\top}\mathbf{Y}_{\textsc{me}(k,j)}\), as depicted in Figure 1 (b). Of note, the definition of \(\overline{Y}_{j}\) depends on \((k,j)\), which is dropped for simplicity. As in Angrist et al. (1996), we have
\[\mathbb{E}\left(\overline{Y}_{j}\mid\mathbf{Y}_{\textsc{nm}(k,j)},\mathbf{X}\right) \tag{7}\]
\[\overset{\text{(i)}}{=}\mathrm{U}_{kj}\,\mathbb{E}\left(Y_{k}\mid\mathbf{Y}_{\textsc{nm}(k,j)},\mathbf{X}\right)+\sum_{k^{\prime}\in\textsc{nm}(k,j)}\mathrm{U}_{k^{\prime}j}Y_{k^{\prime}}+\sum_{l\notin\textsc{iv}(k)}\mathrm{W}_{lj}X_{l}+\mathbb{E}\left(\varepsilon_{j}\mid\mathbf{Y}_{\textsc{nm}(k,j)},\mathbf{X}\right)\]
\[\overset{\text{(ii)}}{=}\mathrm{U}_{kj}\widetilde{Y}_{k}+\mathbf{\gamma}^{\top}\mathbf{Z},\]
where \(\widetilde{Y}_{k}=\mathbb{E}(Y_{k}\mid\mathbf{Y}_{\textsc{nm}(k,j)},\mathbf{X})\), \(\mathbf{Z}=(\mathbf{Y}_{\textsc{nm}(k,j)},\mathbf{X}_{\textsc{ca}(k)^{c}})=(\mathbf{Y}_{\textsc{nm}(k,j)},\mathbf{X}_{\textsc{iv}(k)^{c}})\), equality (i) follows from (1), and equality (ii) holds because \(\mathbb{E}\left(\varepsilon_{j}\mid\mathbf{Y}_{\textsc{nm}(k,j)},\mathbf{X}\right)\) is a linear combination of \((\mathbf{Y}_{\textsc{nm}(k,j)},\mathbf{X}_{\textsc{iv}(k)^{c}})\) by Lemma 1 in the Appendix. Observe that \(\widetilde{Y}_{k}\) depends on \(\mathbf{X}_{\textsc{iv}(k)}\) while \(\mathbf{Z}\) does not. As a result, \(\mathrm{U}_{kj}\) is identified through the working response regression. This approach requires the knowledge of \(\mathbf{U}_{\textsc{me}(k,j),j}\) prior to identifying \(\mathrm{U}_{kj}\). Given \(\mathcal{G}^{+}\), we develop a sequential procedure to learn \(\mathbf{U}\). First, we identify \(\mathrm{U}_{kj}\) for each pair \((k,j)\) such that the longest path in \(\mathcal{G}^{+}\) between \(k\) and \(j\) is equal to \(d=1\). Then for \((k,j)\) such that the longest path in \(\mathcal{G}^{+}\) between \(k\) and \(j\) is \(d=2\), the effects of mediators \(\mathbf{U}_{\textsc{me}(k,j),j}\) are available. Thus, we can identify \(\mathrm{U}_{kj}\) in (7). Proceed similarly for \(d=3,4,5,\ldots\) until all pairs in \(\mathcal{E}^{+}\) have been identified.

Figure 1: Estimation of causal parameter \(\mathrm{U}_{kj}\). (a) Display of the relations among relevant variables. (b) Display of working response regression.

The case with invalid IVs. In general, \(\textsc{ca}(k)\supseteq\textsc{iv}(k)\) because of invalid IVs, where \(\textsc{ca}(k)\) is known but \(\textsc{iv}(k)\) is unknown. Similar to Kang et al.
(2016), we have \[\begin{array}{l}\mathbb{E}\left(\overline{Y}_{j}\mid\mathbf{Y}_{\textsc{nm}(k,j)},\mathbf{X}\right)\\ =\mathrm{U}_{kj}\,\mathbb{E}\left(Y_{k}\mid\mathbf{Y}_{\textsc{nm}(k,j)},\mathbf{X}\right)+\sum_{k^{\prime}\in\textsc{nm}(k,j)}\mathrm{U}_{k^{\prime}j}Y_{k^{\prime}}+\sum_{l\notin\textsc{iv}(k)}\mathrm{W}_{lj}X_{l}+\mathbb{E}\left(\varepsilon_{j}\mid\mathbf{Y}_{\textsc{nm}(k,j)},\mathbf{X}\right)\\ \stackrel{{\text{(iii)}}}{{=}}\mathrm{U}_{kj}\widetilde{Y}_{k}+\mathbf{\gamma}^{\top}\mathbf{Z}+\sum_{l\in\textsc{ca}(k)\setminus\textsc{iv}(k)}\beta_{l}X_{l},\end{array} \tag{8}\] where \(\widetilde{Y}_{k}=\mathbb{E}(Y_{k}\mid\mathbf{Y}_{\textsc{nm}(k,j)},\mathbf{X})\), \(\mathbf{Z}=(\mathbf{Y}_{\textsc{nm}(k,j)},\mathbf{X}_{\textsc{ca}(k)^{c}})\), equality (iii) holds by Lemma 1 in the Appendix, and \(\beta_{l}=\mathrm{W}_{lj}\neq 0\) indicates that \(X_{l}\) is an invalid IV for \(Y_{k}\). However, since \(\textsc{iv}(k)\) has not been identified and \(\widetilde{Y}_{k}\) depends on \(\mathbf{X}_{\textsc{ca}(k)}\), the representation of (iii) may not be unique. When the majority rule (A3) is satisfied by the DAG, the term (iii) admits the unique expression in (8), providing the identification of \(\mathrm{U}_{kj}\). This leads to a sparse regression in the infinite-sample (population) limit \[\min_{\mathrm{U}_{kj},\mathbf{\beta},\mathbf{\gamma}}\ \mathbb{E}\left(\overline{Y}_{j}-\mathrm{U}_{kj}\widetilde{Y}_{k}-\mathbf{\gamma}^{\top}\mathbf{Z}-\mathbf{\beta}^{\top}\mathbf{X}_{\textsc{ca}(k)}\right)^{2}\quad\text{s.t.}\quad\|\mathbf{\beta}\|_{0}\leq\kappa, \tag{9}\] where \(0\leq\kappa<|\textsc{ca}(k)|/2\) is an integer-valued hyperparameter controlling the sparsity of \(\mathbf{\beta}\).

### Finite-sample estimation of \(\mathbf{U}\)

Suppose \((\mathbf{Y}_{p\times n},\mathbf{X}_{q\times n})\) are given. To estimate \(\mathrm{U}_{kj}\), noting that \(\widetilde{Y}_{k}\) is linear in \((\mathbf{Y}_{\textsc{nm}(k,j)},\mathbf{X})\) by Lemma 1, we estimate \(\widetilde{\mathrm{Y}}_{k,i}\) by \(\widehat{\mathrm{Y}}_{k,i}=\widehat{\mathbf{\alpha}}_{1}^{\top}\mathbf{X}_{+,i}+\widehat{\mathbf{\alpha}}_{2}^{\top}\mathbf{Y}_{\textsc{nm}(k),i}\), where \((\widehat{\mathbf{\alpha}}_{1},\widehat{\mathbf{\alpha}}_{2})\) solves \[\min_{\mathbf{\alpha}_{1},\mathbf{\alpha}_{2}}\ \sum_{i=1}^{n}\left(\mathrm{Y}_{k,i}-\mathbf{\alpha}_{1}^{\top}\mathbf{X}_{+,i}-\mathbf{\alpha}_{2}^{\top}\mathbf{Y}_{\textsc{nm}(k),i}\right)^{2}\quad\text{s.t.}\quad\|\mathbf{\alpha}_{1}\|_{0}+\|\mathbf{\alpha}_{2}\|_{0}\leq\nu_{1}, \tag{10}\] with \(\nu_{1}\) being a tuning parameter.
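For concreteness, the two-stage estimation built around (9)-(10) can be prototyped as in the sketch below. This is a minimal illustration only: it substitutes a lasso-plus-refit heuristic for the \(\ell_{0}\) constraints (the paper's own implementation uses TLP, as discussed in Section 5), it omits the sparsity constraint on \(\boldsymbol{\gamma}\), and the index arguments (`nm_idx`, `me_idx`, `ca_idx`) and the tuning constant are illustrative names, not quantities specified in the paper.

```python
# Sketch only: lasso + OLS refit as a stand-in for the l0-constrained
# working-response regression; names and tuning constant are illustrative.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def estimate_Ukj(Y, X, k, j, nm_idx, me_idx, ca_idx, U_me_j, lam=0.05):
    """Y: n x p primary variables, X: n x q candidate IVs (rows are samples)."""
    # Stage 1 (cf. (10)): fitted surrogate for tilde{Y}_k = E(Y_k | Y_nm, X).
    Z1 = np.hstack([X, Y[:, nm_idx]])
    Yk_hat = LinearRegression().fit(Z1, Y[:, k]).predict(Z1)

    # Working response: bar{Y}_j = Y_j - U_{me(k,j),j}^T Y_{me(k,j)}.
    Ybar_j = Y[:, j] - Y[:, me_idx] @ U_me_j

    # Design (hat{Y}_k, X_{ca(k)}, Z) with Z = (Y_{nm(k,j)}, X_{ca(k)^c}).
    X_cac = np.delete(X, ca_idx, axis=1)
    D = np.hstack([Yk_hat[:, None], X[:, ca_idx], Y[:, nm_idx], X_cac])

    # Select invalid-IV coefficients beta with a lasso surrogate of ||beta||_0 <= kappa.
    sel = Lasso(alpha=lam, max_iter=100000).fit(D, Ybar_j)
    beta_support = np.flatnonzero(sel.coef_[1:1 + len(ca_idx)])

    # Refit without penalty; the first coefficient estimates U_kj.
    keep = np.r_[0, 1 + beta_support, np.arange(1 + len(ca_idx), D.shape[1])]
    refit = LinearRegression().fit(D[:, keep], Ybar_j)
    return refit.coef_[0]
```

In use, the pairs \((k,j)\) are processed in increasing order of the longest path length \(d\), as described above, so that the mediator effects \(\widehat{\mathbf{U}}_{\textsc{me}(k,j),j}\) needed for the working response are always available when a pair is reached.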
Let the final estimate \(\widehat{\mathrm{U}}_{kj}\) with \((\widehat{\mathbf{\beta}},\widehat{\mathbf{\gamma}})\) be the solution to the working response regression (provided that \(\widehat{\mathbf{U}}_{\mbox{\tiny\sc me}(k,j),j}\) are available) \[\begin{array}{c}\min_{\mathrm{U}_{kj},\mathbf{\beta},\mathbf{\gamma}} \ \sum_{i=1}^{n}\left(\left(\mathrm{Y}_{j,i}-\widehat{\mathbf{U}}_{\mbox{\tiny \sc me}(k,j),j}^{\top}\mathbf{Y}_{\mbox{\tiny\sc me}(k,j),i}\right)-\mathrm{U}_ {kj}\widehat{\mathrm{Y}}_{k,i}-\mathbf{\beta}^{\top}\mathbf{X}_{\mbox{\tiny\sc ca}(k),i }-\mathbf{\gamma}^{\top}\mathbf{Z}_{i}\right)^{2}\\ \mbox{s.t.}\quad\rho(\mathrm{U}_{kj})+\|\mathbf{\beta}\|_{0}\leq\kappa,\quad\|\mathbf{ \gamma}\|_{0}\leq\nu_{2},\end{array} \tag{11}\] where \(0\leq\kappa\leq|\mbox{\sc ca}(k)|/2\) and \(0\leq\nu_{2}\leq|\mbox{\sc nm}(k,j)|+|\mbox{\sc ca}(k)^{c}|\) are tuning parameters. Depending on the purpose, \(\rho(\cdot)=\mathrm{I}(\cdot\neq 0)\) for graph recovery and \(\rho(\cdot)=0\) for effect estimation without selection. In (10)-(11), \(\nu_{1},\nu_{2}\) are added to treat possible high-dimensional situations and the hyperparameters are tuned by BIC. Algorithm 2 summarizes the procedure. ## 4 Likelihood inference This section develops a likelihood ratio test for the presence of multiple directed edges. Let \(\mathcal{H}\subseteq\{(k,j):k\neq j,\ 1\leq k,j\leq p\}\) be a hypothesized edge set for primary variables \(\mathbf{Y}\), where \((k,j)\in\mathcal{H}\) specifies a (hypothesized) directed edge \(Y_{k}\to Y_{j}\) in (1). Now consider simultaneous testing of directed edges, \[H_{0}:\mathrm{U}_{kj}=0\text{ for all }(k,j)\in\mathcal{H}\quad\text{ versus }\quad H_{a}:\mathrm{U}_{kj}\neq 0\text{ for some }(k,j)\in\mathcal{H}. \tag{12}\] The null hypothesis \(H_{0}\) asserts that all hypothesized edges in \(\mathcal{H}\) are absent in the true graph \(\mathcal{G}\). Rejecting \(H_{0}\) indicates that at least one hypothesized edge in \(\mathcal{H}\) presents in \(\mathcal{G}\). The likelihood ratio.Given \(\mathcal{G}^{+}=(\mathbf{X},\mathbf{Y};\mathcal{E}^{+},\mathcal{I}^{+})\), let \(\mathbf{\theta}(\mathcal{G}^{+})=(\mathbf{U},\mathbf{W})\) encode the coefficient parameters in \(\mathcal{G}^{+}\), where \(\mathbf{U}=(\mathbf{U}_{\mathcal{E}^{+}},\mathbf{0})\) and \(\mathbf{W}=(\mathbf{W}_{\mathcal{I}^{+}},\mathbf{0})\). As such, the adjacency matrix \(\mathbf{U}\) automatically meets the acyclicity constraint. Given a random sample \(\left(\mathbf{Y}_{+,i},\mathbf{X}_{+,i}\right)_{i=1}^{n}\), the log-likelihood is written as (up to an additive constant) \[L(\mathbf{\theta}(\mathcal{G}^{+}),\mathbf{\Omega})=-\frac{1}{2}\sum_{i=1}^{n}\left\| \mathbf{\Omega}^{1/2}\left(\left(\mathbf{I}-\mathbf{U}^{\top}\right)\mathbf{Y}_{+,i}-\mathbf{W}^{\top}\mathbf{X}_{+,i}\right)\right\|_{2}^{2}+\frac{n}{2}\log \det(\mathbf{\Omega}), \tag{13}\] where \(\mathbf{\Omega}=\mathbf{\Sigma}^{-1}\) is the inverse of \(\mathbf{\Sigma}\) in (1). Then the maximum likelihood estimation (MLE) of (1) can be written as \[\max_{(\mathcal{G}^{+},\mathbf{\Omega})}\max_{\mathbf{\theta}(\mathcal{G}^{+})}L(\bm {\theta}(\mathcal{G}^{+}),\mathbf{\Omega}). 
\tag{14}\] In view of (14), to obtain a likelihood ratio statistic for (12) we need to compute the following quantities: (1) a consistent estimate \(\widehat{\mathcal{G}}^{+}\) of \(\mathcal{G}^{+}\), (2) a consistent estimate \(\widehat{\mathbf{\Omega}}\) of \(\mathbf{\Omega}\), and (3) two estimates, \(\widehat{\mathbf{\theta}}^{(0)}\) and \(\widehat{\mathbf{\theta}}^{(1)}\), of \(\mathbf{\theta}(\mathcal{G}^{+})\) under \(H_{0}\) and \(H_{a}\), respectively. This leads to the likelihood ratio defined as \[L(\widehat{\mathbf{\theta}}^{(1)},\widehat{\mathbf{\Omega}})-L(\widehat{\mathbf{\theta}}^ {(0)},\widehat{\mathbf{\Omega}}), \tag{15}\] where \(\mathcal{G}^{+}\) is estimated by Algorithm 1 and \(\mathbf{\Omega}\) is estimated from the residuals after fitting model (1) via Algorithm 2. Inference subject to acyclicity.In classical models, a likelihood ratio of form (15) has a nondegenerate and tractable limiting distribution, typically a chi-squared distribution with degrees of freedom \(|\mathcal{H}|\). However, the likelihood ratio for (12) may behave differently from classical ones since (15) may be degenerate or intractable, as to be explained. First, note that the maximum likelihood subject to a wrong ARG \(\widetilde{\mathcal{G}}^{+}\not\supseteq\mathcal{G}\) tends to be smaller than that subject to the correct \(\mathcal{G}^{+}\), that is, \[\max_{\widetilde{\mathcal{G}}^{+}\not\supseteq\mathcal{G}}\max_{\boldsymbol{ \theta}(\widetilde{\mathcal{G}}^{+}),\boldsymbol{\Omega}}L(\boldsymbol{ \theta}(\widetilde{\mathcal{G}}^{+}),\boldsymbol{\Omega})<\max_{\boldsymbol{ \theta}(\mathcal{G}^{+}),\boldsymbol{\Omega}}L(\boldsymbol{\theta}(\mathcal{G }^{+}),\boldsymbol{\Omega}),\] as \(n\to\infty\) under some regularity conditions for consistency. Thus, we assume \(\widehat{\mathcal{G}}^{+}=\mathcal{G}^{+}\) in this paragraph. Then \(\widehat{\boldsymbol{\theta}}^{(0)}\) is the MLE subject to \(\mathcal{G}^{+}\) and \(\mathbf{U}_{\mathcal{H}}=\boldsymbol{0}\), which is equal to the MLE subject to the graph \(\mathcal{G}^{+}_{0}=(\boldsymbol{X},\boldsymbol{Y};\mathcal{E}^{+}\setminus \mathcal{H},\mathcal{I}^{+})\). Meanwhile, to test whether any edge in \(\mathcal{H}\) exists, \(\widehat{\boldsymbol{\theta}}^{(1)}\) is the MLE subject to an augmented graph \(\mathcal{G}^{+}_{1}=(\boldsymbol{X},\boldsymbol{Y};\mathcal{E}^{+}\cup \mathcal{H},\mathcal{I}^{+})\) with hypothesized edges being added, namely, \(\widehat{\mathbf{U}}^{(1)}=(\widehat{\mathbf{U}}^{(1)}_{\mathcal{E}^{+}\cup \mathcal{H}},\boldsymbol{0})\) and \(\widehat{\mathbf{W}}^{(1)}=(\widehat{\mathbf{W}}^{(1)}_{\mathcal{I}^{+}}, \boldsymbol{0})\). Of note, since \(\mathcal{H}\) is pre-specified by the user, \(\mathcal{G}^{+}_{1}\) is not necessarily acyclic, and thus, not all edges in \(\mathcal{H}\) could present in \(\widehat{\mathbf{U}}^{(1)}\). Furthermore, if a hypothesized edge \((k,j)\) is present in \(\widehat{\mathbf{U}}^{(1)}\), then \(\{(k,j)\}\cup\mathcal{E}^{+}\) must have no directed cycle and (15) is strictly positive (nondegenerate). However, even if (15) does not degenerate to zero, its limiting distribution can be complicated when there exist multiple ways of augmenting \(\mathcal{G}^{+}\) with the edges in \(\mathcal{H}\) while maintaining the resulting graph as a DAG. Therefore, a regularity condition for \(\mathcal{H}\) is necessary to rule out intractable situations. 
On the ground of the foregoing discussion, we introduce the concepts of nondegeneracy and regularity to characterize the behavior of (15) as in Li et al. (2023a). **Definition 5** (Nondegeneracy and regularity with respect to \(\mathcal{G}^{+}\)).: 1. An edge \((k,j)\in\mathcal{H}\) is said to be nondegenerate with respect to an ancestral graph \(\mathcal{G}^{+}=(\boldsymbol{Y},\boldsymbol{X};\mathcal{E}^{+},\mathcal{I}^{+})\) if \(\{(k,j)\}\cup\mathcal{E}^{+}\) contains no directed cycle. Otherwise, \((k,j)\) is said to be degenerate. Let \(\mathcal{D}\subseteq\mathcal{H}\) be the set of all nondegenerate edges with respect to \(\mathcal{G}^{+}\). A null hypothesis \(H_{0}\) is said to be nondegenerate with respect to \(\mathcal{G}^{+}\) if \(\mathcal{D}\neq\emptyset\). Otherwise, \(H_{0}\) is said to be degenerate. 2. A null hypothesis \(H_{0}\) is said to be regular with respect to \(\mathcal{G}^{+}\) if \(\mathcal{D}\cup\mathcal{E}^{+}\) contains no directed cycle. Otherwise, \(H_{0}\) is called irregular. Suppose \(H_{0}\) is nondegenerate and regular. Then \(\widehat{\boldsymbol{\theta}}^{(0)}\) is the MLE subject to the graph \(\mathcal{G}^{+}_{0}=(\boldsymbol{X},\boldsymbol{Y};\mathcal{E}^{+}\setminus \mathcal{D},\mathcal{I}^{+})\) and \(\widehat{\boldsymbol{\theta}}^{(1)}\) is the MLE subject to the graph \(\mathcal{G}^{+}_{1}=(\boldsymbol{X},\boldsymbol{Y};\mathcal{E}^{+}\cup \mathcal{D},\mathcal{I}^{+})\). Now, we investigate the limiting distribution of (15) and derive an asymptotic test based on it. To this end, define the statistic \[T(\mathcal{D})=\begin{cases}2\left(L(\widehat{\boldsymbol{\theta}}^{(1)}, \widehat{\boldsymbol{\Omega}})-L(\widehat{\boldsymbol{\theta}}^{(0)},\widehat {\boldsymbol{\Omega}})\right)&\text{if }|\mathcal{D}|\text{ is fixed,}\\ \left(2\left(L(\widehat{\boldsymbol{\theta}}^{(1)},\widehat{\boldsymbol{\Omega }})-L(\widehat{\boldsymbol{\theta}}^{(0)},\widehat{\boldsymbol{\Omega}}) \right)-|\mathcal{D}|\right)/\sqrt{2|\mathcal{D}|}&\text{if }|\mathcal{D}|\to\infty.\end{cases} \tag{16}\] **Theorem 2** (Limiting distribution).: _Assume the null hypothesis \(H_{0}\) is nondegenerate and regular. Suppose \(\mathbb{P}(\widehat{\mathcal{G}}^{+}=\mathcal{G}^{+})\to 1\) as \(n\to\infty\). Then we have \(\mathbb{P}(\widehat{\mathcal{D}}=\mathcal{D})\to 1\). In addition, if \(\|\widehat{\boldsymbol{\Omega}}-\boldsymbol{\Omega}\|_{2}^{2}=O_{\mathbb{P}}(|S |\log(p\lor n)/n)\) where \(S=\{(k,j):\Omega_{kj}\neq 0\}\), then under \(H_{0}\),_ \[T(\widehat{\mathcal{D}})\stackrel{{ d}}{{\longrightarrow}}\begin{cases} \chi_{|\mathcal{D}|}^{2},&\text{if $|\mathcal{D}|$ is fixed and $|S|\log(p\lor n)/n\to 0$},\\ N(0,1),&\text{if $|\mathcal{D}|\to\infty$ and $|\mathcal{D}||S|\log(p\lor n)/n\to 0$}. \end{cases}\] On the basis of Theorem 2, we conduct inference by substituting \(|\mathcal{D}|\) by its estimate \(|\widehat{\mathcal{D}}|\) and proceed with the empirical rule: (1) use the chi-squared test when \(|\widehat{\mathcal{D}}|<50\), and (2) use the normal test when \(|\widehat{\mathcal{D}}|\geq 50\). Theorem 2 requires a good estimator \(\widehat{\boldsymbol{\Omega}}\) of \(\boldsymbol{\Omega}=\boldsymbol{\Sigma}^{-1}\) to account for the confounding effects, where \(\boldsymbol{\Sigma}=\operatorname{Cov}(\boldsymbol{\varepsilon})\). 
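To make the testing recipe explicit, the sketch below evaluates the log-likelihood (13) and the statistic (16) together with the empirical chi-squared/normal rule. It assumes that plug-in fits \((\widehat{\mathbf{U}}^{(0)},\widehat{\mathbf{W}}^{(0)})\) and \((\widehat{\mathbf{U}}^{(1)},\widehat{\mathbf{W}}^{(1)})\), a precision estimate \(\widehat{\boldsymbol{\Omega}}\), and the number of nondegenerate edges \(|\widehat{\mathcal{D}}|\) have already been obtained; the function names are illustrative.

```python
import numpy as np
from scipy.stats import chi2, norm

def log_lik(Y, X, U, W, Omega):
    """Gaussian log-likelihood (13), up to an additive constant.
    Y: n x p, X: n x q (rows are samples); U: p x p; W: q x p; Omega: p x p."""
    n = Y.shape[0]
    resid = Y @ (np.eye(U.shape[0]) - U) - X @ W   # i-th row: ((I - U^T) y_i - W^T x_i)^T
    quad = np.einsum('ij,jk,ik->', resid, Omega, resid)
    _, logdet = np.linalg.slogdet(Omega)
    return -0.5 * quad + 0.5 * n * logdet

def lr_pvalue(Y, X, fit0, fit1, Omega, d):
    """Likelihood-ratio p-value following (16); d = |D|, the number of
    nondegenerate hypothesized edges (p-value is one if d == 0; see Remark 2 below)."""
    if d == 0:
        return 1.0
    lr = 2.0 * (log_lik(Y, X, *fit1, Omega) - log_lik(Y, X, *fit0, Omega))
    if d < 50:                                     # chi-squared regime of Theorem 2
        return chi2.sf(lr, df=d)
    return norm.sf((lr - d) / np.sqrt(2.0 * d))    # normal regime for large d
```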
To estimate \(\boldsymbol{\Omega}\), let \(\widehat{\boldsymbol{\varepsilon}}_{+,i}=(\mathbf{I}-\widehat{\mathbf{U}})^{ \top}\mathbf{Y}_{+,i}-\widehat{\mathbf{W}}^{\top}\mathbf{X}_{+,i}\); \(i=1,\ldots,n\) be the estimated residuals after fitting (1) with Algorithm 2. Here we use the neighborhood selection method (Meinshausen and Buhlmann, 2006) with an additional refitting to obtain a positive definite estimate \(\widehat{\boldsymbol{\Omega}}\). In Supplementary Materials, we include the computational details and show that this estimator satisfies \(\|\widehat{\boldsymbol{\Omega}}-\boldsymbol{\Omega}\|_{F}^{2}=O_{\mathbb{P}}(|S |\log(p\lor n)/n)\) so that Theorem 2 applies. **Remark 2**.: In Theorem 2, we focus on nondegenerate and regular hypotheses. For a degenerate case, we define the p-value as one. For an irregular case where \(\mathcal{D}\cup\mathcal{E}^{+}\) contains a directed cycle, we decompose \(H_{0}\) into sub-hypotheses \(H_{0}^{(1)},\ldots,H_{0}^{(r)}\), each of which is regular. Then testing \(H_{0}\) is reduced to multiple testing for \(H_{0}^{(1)},\ldots,H_{0}^{(r)}\). Finally, we discuss two aspects of likelihood estimation and inference in the presence of unmeasured confounding. First, when \(\boldsymbol{\Sigma}\) is non-diagonal, the likelihood in (13) cannot be factorized according to \(\mathcal{G}\) (or \(\mathcal{G}^{+}\)). This implies that, unlike the case without latent confounders (Shojaie and Michailidis, 2010), the parameters of each equation in (1) cannot be estimated separately given \(\mathcal{G}^{+}\). Indeed, the likelihood estimation of \((\mathbf{U},\mathbf{W})\) in (1) requires a preliminary estimate of \(\boldsymbol{\Omega}\) to account for correlations arising from hidden confounding. Furthermore, compared to Li et al. (2023a), the likelihood ratio (15) is no longer a sum of likelihood ratios of equations associated with nondegenerate hypothesized edges, rendering inference more challenging in both computation and theory when hidden confounders are present. Computationally, the likelihood ratio (15) requires maximization of the full likelihood, which is costly for a large-scale graph. Theoretically, estimating \(\boldsymbol{\Omega}\) and \((\mathbf{U},\mathbf{W})\) in high-dimensional situations may suffer from the curse of dimensionality. Second, to mitigate the challenges in inference, we may conduct inference with respect to a sub-DAG to achieve dimensionality reduction. Specifically, let \(\mathcal{D}\) be the nondegenerate edges of \(H_{0}\). Given ARG \(\mathcal{G}^{+}\), we perform likelihood inference using a sub-DAG (of ARG) \(\mathcal{G}^{+}_{\text{sub}}=(\boldsymbol{X}_{\text{sub}},\boldsymbol{Y}_{ \text{sub}};\mathcal{E}^{+}_{\text{sub}},\mathcal{I}^{+}_{\text{sub}})\), where all edges specified in \(\mathcal{D}\) are among primary variables \(\boldsymbol{Y}_{\text{sub}}\), and \(\boldsymbol{Y}_{\text{sub}}\) are non-descendants of \(\boldsymbol{Y}\setminus\boldsymbol{Y}_{\text{sub}}\) in the graph \((\boldsymbol{X},\boldsymbol{Y};\mathcal{E}^{+}\cup\mathcal{D},\mathcal{I}^{+})\), \(\boldsymbol{X}_{\text{sub}}\) is the set of intervention variables of \(\boldsymbol{Y}_{\text{sub}}\), \(\mathcal{E}^{+}_{\text{sub}}\) is the set of ancestral relations among \(\boldsymbol{Y}_{\text{sub}}\), and \(\mathcal{I}^{+}_{\text{sub}}\) is the set of interventional relations between \(\boldsymbol{X}_{\text{sub}}\) and \(\boldsymbol{Y}_{\text{sub}}\) in ARG \(\mathcal{G}^{+}\). 
Then the test statistic (16) is computed within the sub-DAG \(\mathcal{G}^{+}_{\text{sub}}\), which reduces computation. Furthermore, Theorem 2 holds true when the estimator of the smaller precision matrix \(\mathbf{\Omega}_{\text{sub}}\) enjoys the desired convergence rate \(O_{\mathbb{P}}(\sqrt{|S_{\text{sub}}|\log(p_{\text{sub}}\lor n)/n})\) in operator norm, where the subscript \({}_{\text{sub}}\) denotes the quantities corresponding to the structural equations of \(\mathbf{Y}_{\text{sub}}\). ## 5 Theory In this section, we develop a theory to quantify the finite sample performance as well as the complexities of Algorithms 1-2 when TLP is used for computation. To proceed, we introduce some technical conditions for casual discovery consistency. For \((k,j)\in\mathcal{E}^{+}\), let \(\widetilde{\mathbf{\Sigma}}^{(k,j)}\) be the covariance matrix of \((\mathbb{E}(Y_{k}\mid\mathbf{Y}_{\text{\tiny NM}_{\mathcal{G}}(k,j)},\mathbf{X}),\mathbf{ Y}_{\text{\tiny NM}_{\mathcal{G}}(k,j)},\mathbf{X})\). Moreover, let \(s=\max_{(k,j)\in\mathcal{E}^{+}}(\kappa+\nu_{2},\nu_{1})\vee\max_{1\leq k\leq p }\|\mathbf{V}_{+,k}\|_{0}\) be the maximum sparsity-level in the estimation procedure, where \(\nu_{1},\nu_{2},\kappa\) depends on \((k,j)\) which is dropped for conciseness. Assume there exist constants \(c_{0},c_{1},c_{2},c_{3}>0\) such that 1. \(\min_{(k,j)\in\mathcal{E}^{+}}\min_{B:|B|\leq 2s}\min_{\mathbf{v}:\| \mathbf{v}\|_{2}=1,\|\mathbf{v}_{B^{c}}\|_{1}\leq 3\|\mathbf{v}_{B}\|_{1}+c_{ 0}s\sqrt{\log(p)/n}}\langle\mathbf{v},\widetilde{\mathbf{\Sigma}}^{(k,j)}\mathbf{ v}\rangle\geq c_{1}\). 2. \(\min_{\text{\tiny V}_{kj}\neq 0}|\text{\tiny V}_{kj}|\geq c_{2}\sqrt{\log(q \lor n)/n}\). 3. \(\min_{\text{\tiny U}_{kj}\neq 0}|\text{\tiny U}_{kj}|\geq c_{3}\sqrt{\log(p\lor n)/n}\). 4. \(\max_{1\leq k\leq p}\{|\text{\tiny AN}_{\mathcal{G}}(k)|,|\text{\tiny IN}_{ \mathcal{G}}(k)|,\|\mathbf{U}_{+,k}\|_{1}\}=O(1)\), and \(\max_{(k,j)\in\mathcal{E}^{+}}(\text{Diag}(\widetilde{\mathbf{\Sigma}}^{(k,j)}))= O(1)\). Condition (C1) is a restricted eigenvalue condition, which is common in high-dimensional estimation (Bickel et al., 2009) and can be viewed as a stronger version of (A1) in Theorem 1. (C2) and (C3) impose restrictions on the minimal signal strengths of \(\mathbf{V}\) and \(\mathbf{U}\) so that the ARG \(\mathcal{G}^{+}\) and DAG \(\mathcal{G}\) can be consistently recovered, respectively. They are similar to the beta-min condition (Meinshausen and Buhlmann, 2006) and the degree of separation condition (Shen et al., 2012) in the variable selection literature. **Theorem 3**.: _Suppose Assumptions (A1)-(A3) in Theorem 1 are satisfied and assume \(\mathbf{X}\) is sub-Gaussian with mean zero and parameter \(\varsigma^{2}\)._ 1. _(Parameter estimation) Suppose (C1), (C2), (C4) are met with sufficiently large_ \(c_{0},c_{1},c_{2}\)_. Suppose the tuning parameters are suitably chosen such that_ 1. _In Algorithm_ 1_,_ \(0.01c_{2}\sqrt{\log(q\lor n)/n}\leq\tau^{\prime}\leq 0.4\min_{\text{\tiny V}_{kj} \neq 0}|\text{\tiny V}_{kj}|\)_,_ \(\kappa^{\prime}_{j}=\|\mathbf{V}_{+,j}\|_{0}\) _for_ \(1\leq j\leq p\)_._ 2. 
_In Algorithm_ 2_,_ \(0.5c_{3}\sqrt{\log(p\lor n)/n}\leq\tau\)_,_ \(\nu_{1}=\lceil\operatorname{TLP}_{\tau}((\mathbf{\alpha}_{1},\mathbf{\alpha}_{2}))\rceil\)_,_ \(\nu_{2}=\lceil\operatorname{TLP}_{\tau}(\mathbf{\gamma})\rceil\)_, and_ \(\kappa=\lceil\operatorname{TLP}_{\tau}(\mathbf{\beta})\rceil\) _for any_ \((k,j)\in\mathcal{E}^{+}\)_._ _Then there exists constant \(C_{1}>0\) such that when \(n\) is sufficiently large_ \[|\widehat{\text{\tiny U}}_{kj}-\text{\tiny U}_{kj}|\leq C_{1}\sqrt{\log(p\lor n )/n},\] _almost surely under \(\mathbb{P}_{(\mathbf{U},\mathbf{W},\mathbf{\Sigma})}\). Moreover, Algorithms 1 and 2 respectively terminate in \(O(p\times\log(s)\times(q^{3}+nq^{2}))\) and \(O(|\mathcal{E}^{+}|\times\log(s)\times(q^{3}+nq^{2}))\) operations almost surely._ 2. _(Graph recovery) Additionally, if (C3) is satisfied with_ \(c_{3}>C_{1}>\tau\)_, then when_ \(n\) _is sufficiently large we have_ \(\widehat{\mathcal{G}}=\mathcal{G}\) _almost surely._ By Theorem 3, the proposed method achieves causal discovery consistency in terms of consistent parameter estimation and structure recovery. Moreover, Algorithms 1-2 enjoy low-order polynomial time complexity almost surely provided that the data are randomly sampled from (1). ## 6 Numerical examples ### Simulations This subsection investigates via simulations the operating characteristics of GrIVET, including the qualities of structure learning, parameter estimation, and statistical inference. To generate an observation \((\boldsymbol{Y},\boldsymbol{X})\), we first introduce hidden variables \(\boldsymbol{\eta}\sim N(\mathbf{0},\mathbf{I}_{r\times r})\) as unmeasured confounders. Then, we sample \(\boldsymbol{X}\) from \(N(\mathbf{0},\mathbf{I}_{q\times q})\) for continuous interventions or from \(\{-1,1\}^{q}\) with equal probability for discrete interventions. Given \(\boldsymbol{X}\) and \(\boldsymbol{\eta}\), we generate \(\boldsymbol{Y}\) according to \[\boldsymbol{Y}=\mathbf{U}^{\top}\boldsymbol{Y}+\mathbf{W}^{\top}\boldsymbol{X }+\boldsymbol{\Phi}^{\top}\boldsymbol{\eta}+\boldsymbol{e},\quad\boldsymbol{e} \sim N\left(\mathbf{0},\mathrm{Diag}\left(\sigma_{1}^{2},\ldots,\sigma_{p}^{2} \right)\right). \tag{17}\] We conduct simulations with the following settings. * **Hub graph.** Let \(p=101\), \(q=252\), and \(r=10\). For \(\mathbf{U}\), \((\mathrm{U}_{1,j})_{2\leq j\leq p}\) are independently sampled from \(\{-1,1\}\) with equal probability, while the rest are set to \(0\). This generates a sparse graph with the dense neighborhood of the first node. Let \(\mathbf{W}_{q\times p}=(\mathbf{I}_{p\times p},\mathbf{I}_{p\times p},\mathbf{ F}^{\top})^{\top}\) where the entries \((\mathrm{F}_{j,2j},\mathrm{F}_{j,2j+1})_{1\leq j\leq q-2p}\) are set to \(1\), while other entries of \(\mathbf{F}\) are zero. Then \(X_{j},X_{2j}\) are IVs of \(Y_{j}\) for \(j=1,\ldots,p\) and \(X_{2p+1},\ldots,X_{q}\) are invalid IVs with two intervention targets. For the confounders, \(\Phi_{1,1}\) and \((\Phi_{jk})_{10j-8\leq k\leq 10j+1}^{1\leq j\leq r}\) are sampled uniformly from \((-0.4,-0.6)\cup(0.4,0.6)\), while other entries of \(\boldsymbol{\Phi}\) are zero. We generate \((\sigma_{1},\ldots,\sigma_{p})\) uniformly from \((0.4,0.6)\). * **Random graph.** Let \(p=100\), \(q=250\), and \(r=10\). For \(\mathbf{U}\), the upper off-diagonals \((\mathrm{U}_{kj})_{k<j}\) are sampled independently from \(\{0,1\}\) according to Bernoulli\((1/10p)\) while other entries are zero. 
Set \(\mathbf{W}_{q\times p}=(\mathbf{I}_{p\times p},\mathbf{I}_{p\times p},\mathbf{ F}^{\top})^{\top}\) where \((\mathrm{F}_{j,2j-1},\mathrm{F}_{j,2j})_{1\leq j\leq 1-2p}\) are set to \(1\), while other entries of \(\mathbf{F}\) are zero. Then \(X_{j},X_{2j}\) are IVs of \(Y_{j}\) for \(j=1,\ldots,p\) and \(X_{2p+1},\ldots,X_{q}\) are invalid IVs with two intervention targets. For the confounders, \((\Phi_{jk})_{10j-9\leq k\leq 10j}^{1\leq j\leq r}\) are sampled uniformly from \((-0.4,-0.6)\cup(0.4,0.6)\), while other entries of \(\boldsymbol{\Phi}\) are zero. We generate \((\sigma_{1},\ldots,\sigma_{p})\) uniformly from \((0.4,0.6)\). Structure learning.After obtaining ancestral relations from Algorithm 1, we implement Algorithm 2 to confirm parental relations but with constraints also imposed on the parameter of interest. Four graph metrics are used for evaluation: the false discovery rate (FDR), the true positive rate (TPR), the Jaccard index (JI), and the structural Hamming distance (SHD). The results in Table 1 demonstrate the strong performance of GrIVET in structure learning. Note that a high TPR indicates GrIVET's capability to detect the true existing edges, while the FDR remains low, signifying the high specificity of GrIVET. In Supplementary Materials Section 3.3, we further compare GrIVET with RFCI (Colombo et al., 2012) and LRpS-GES (Frot et al., 2019) in terms of structural learning accuracy. GrIVET compares favorably against the competitors. Parameter estimation.We compare the proposed IV estimation method in Section 3.3 with the regression method without any adjustment for confounding (Li et al., 2023). To evaluate the quality of estimation, we consider three metrics, the average maximum absolute deviation, the mean absolute deviation, and the mean square deviation between true coefficients and estimates over 1000 runs. As demonstrated in Table 2, GrIVET enhances parameter estimation by accounting for latent confounding. As anticipated, GrIVET's estimation improves with increasing sample size \(n\), while the naive regression method (Li et al., 2023) remains inconsistent. Furthermore, GrIVET's advantages become more pronounced when stronger confounding effects are present, as evidenced by additional simulations in the Supplementary Materials. \begin{table} \begin{tabular}{l l c c c c c} \hline Graph & Intervention & \(n\) & FDR(\%) & TPR(\%) & SHD & JI(\%) \\ \hline Hub & Continuous & 500 & 0.000 & 100.000 & 0.000 & 100.000 \\ & & 400 & 0.000 & 99.998 & 0.002 & 99.998 \\ & & 300 & 0.000 & 99.998 & 0.002 & 99.998 \\ & Discrete & 500 & 0.000 & 99.999 & 0.001 & 99.999 \\ & & 400 & 0.000 & 99.998 & 0.002 & 99.998 \\ & & 300 & 0.000 & 99.999 & 0.001 & 99.999 \\ \hline Random & Continuous & 500 & 0.011 & 98.600 & 0.001 & 98.589 \\ & & 400 & 0.000 & 98.600 & 0.000 & 98.600 \\ & & 300 & 0.018 & 98.590 & 0.003 & 98.575 \\ & Discrete & 500 & 0.000 & 98.600 & 0.000 & 98.600 \\ & & 400 & 0.024 & 98.600 & 0.002 & 98.576 \\ & & 300 & 0.000 & 98.600 & 0.000 & 98.600 \\ \hline \end{tabular} \end{table} Table 1: False discovery rate (FDR), true positive rate (TPR), structural Hamming distance (SHD), and Jaccard index (JI) of GrIVET for causal discovery over 1000 simulation replications. To compute the metrics, let TP, RE, FP, and FN be the numbers of identified edges with correct directions, those with wrong directions, estimated edges not in the skeleton of the true graph, and missing edges compared to the true skeleton. 
Then \(\text{FDR}=(\text{RE}+\text{FP})/(\text{TP}+\text{RE}+\text{FP})\), \(\text{TPR}=\text{TP}/(\text{TP}+\text{FN})\), \(\text{SHD}=\text{FP}+\text{FN}+\text{RE}\), and \(\text{JI}=\text{TP}/(\text{TP}+\text{SHD})\). Inference.We now evaluate the empirical performance of the proposed tests in terms of size and power. For the empirical size, we calculate the percentage of times \(H_{0}\) is rejected out of 1000 simulations when \(H_{0}\) is true. For the power, we consider three alternative hypotheses \(H_{a}\), where all the edges in \(H_{0}\) exist. The empirical power of a test is the percentage of times \(H_{0}\) is rejected out of 1000 simulations when \(H_{a}\) is true. The adjacency matrix \(\mathbf{U}\) is modified according to the null and alternative hypotheses. * **Hub graph, fixed \(\mathcal{H}\).** For the size, consider \(\mathcal{H}=\{(2,7)\}\), \(\mathcal{H}=\{(2,7),(7,12),(12,17)\}\), and \(\mathcal{H}=\{(2,7),(7,12),(12,17),(17,22),(22,27)\}\). For the power, consider \(\mathcal{H}=\{(1,2)\}\), \(\mathcal{H}=\{(1,2),(1,12),(1,22),(1,32),(1,42)\}\). * **Random graph, fixed \(\mathcal{H}\).** We consider \(\mathcal{H}=\{(1,6)\}\), \(\mathcal{H}=\{(1,6),(6,11),(11,16)\}\), and \(\mathcal{H}=\{(1,6),(6,11),(11,16),(16,21),(21,26)\}\) for both size and power. * **Random graph, random \(\mathcal{H}\).** We also consider testing 50 randomly selected edges individually. Here, a random graph is generated so that 20 of these selected edges are present in the true DAG (i.e., \(H_{a}\) is valid). As a result, for every selected edge, \(H_{0}\) holds in roughly 600 repetitions and \(H_{a}\) holds in roughly 400 repetitions. As shown in Table 3 for fixed \(\mathcal{H}\), empirical sizes are close to the nominal \(\alpha=0.05\) under \(H_{0}\), and the proposed test enjoys desirable power under \(H_{a}\). Figure 2 presents similar results for testing random \(\mathcal{H}\). The Supplementary Materials display that the sampling distribution of the test statistic is close to the derived asymptotic distribution in Theorem 2. Additional simulation details and results are also available in Supplementary Materials. 
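For reference, the structure-learning metrics reported in Table 1 can be computed from the true and estimated edge patterns as in the following sketch, which follows the definitions of TP, RE, FP, and FN given in the caption of Table 1; the function and variable names are illustrative.

```python
import numpy as np

def graph_metrics(U_true, U_hat):
    """FDR, TPR, SHD, and JI, following the caption of Table 1.
    U_true, U_hat: p x p adjacency patterns; a nonzero (k, j) entry encodes Y_k -> Y_j."""
    E_true = set(zip(*np.nonzero(U_true)))
    E_hat = set(zip(*np.nonzero(U_hat)))
    skel_true = {frozenset(e) for e in E_true}
    skel_hat = {frozenset(e) for e in E_hat}

    tp = len(E_hat & E_true)                                # correct direction
    fp = sum(frozenset(e) not in skel_true for e in E_hat)  # outside the true skeleton
    re = len(E_hat) - tp - fp                               # right skeleton, wrong direction
    fn = len(skel_true - skel_hat)                          # missing from the true skeleton

    fdr = (re + fp) / max(tp + re + fp, 1)
    tpr = tp / max(tp + fn, 1)
    shd = fp + fn + re
    ji = tp / max(tp + shd, 1)
    return fdr, tpr, shd, ji
```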
\begin{table} \begin{tabular}{l l c c c} \hline Graph & Intervention & \(n\) & \multicolumn{2}{c}{GrIVET} & Direct regression (Li et al., 2023a) \\ & & & (Max AD, Mean AD, Mean SqD) & (Max AD, Mean AD, Mean SqD) \\ \hline Hub & Continuous & 500 & (0.06107, 0.01808, 0.00052) & (0.12817, 0.02448, 0.00142) \\ & & 400 & (0.06863, 0.02037, 0.00066) & (0.13196, 0.02637, 0.00156) \\ & & 300 & (0.07922, 0.02347, 0.00087) & (0.13395, 0.02873, 0.00170) \\ & Discrete & 500 & (0.06119, 0.01803, 0.00051) & (0.12770, 0.02434, 0.00141) \\ & & 400 & (0.06932, 0.02030, 0.00065) & (0.13041, 0.02621, 0.00153) \\ & & 300 & (0.08046, 0.02355, 0.00088) & (0.13334, 0.02867, 0.00169) \\ \hline Random & Continuous & 500 & (0.02836, 0.01445, 0.00034) & (0.04254, 0.01791, 0.00076) \\ & & 400 & (0.03245, 0.01660, 0.00045) & (0.04390, 0.01899, 0.00079) \\ & & 300 & (0.03760, 0.01939, 0.00060) & (0.04709, 0.02150, 0.00091) \\ & Discrete & 500 & (0.02910, 0.01505, 0.00037) & (0.04287, 0.01808, 0.00075) \\ & & 400 & (0.03272, 0.01686, 0.00046) & (0.04432, 0.01962, 0.00081) \\ & & 300 & (0.03619, 0.01879, 0.00057) & (0.04756, 0.02146, 0.00094) \\ \hline \end{tabular} \end{table} Table 2: Parameter estimation: the average of largest absolute difference (Max AD), the average absolute differences (Mean AD), and the average squared differences (Mean SqD) between the estimated parameters and the true parameters for two competing methods over 1000 simulation replications. ### ADNI data analysis In this subsection, GrIVET is applied to analyze the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset (available at [https://adni.loni.usc.edu](https://adni.loni.usc.edu)). The goal is to infer gene pathways related to Alzheimer's Disease (AD) in order to elucidate the gene-gene interactions in AD/cognitive impairment patients and healthy individuals, respectively. Dataset.The dataset comprises gene expression levels adjusted for five covariates: gender, handedness, education level, age, and intracranial volume. For data analysis, we select genes with at least one SNP at a marginal significance level below \(10^{-14}\), resulting in \(p=21\) genes as primary variables. For these genes, we further extract their marginally most correlated \begin{table} \begin{tabular}{l c c c c} \hline Graph & Intervention & \(n\) & Size (\(|\mathcal{D}|=1,3,5\)) & Power (\(|\mathcal{D}|=1,3,5\)) \\ \hline Hub & Continuous & 500 & (0.028,0.026,0.029) & (1.000,1.000,1.000) \\ & & 400 & (0.043,0.038,0.035) & (1.000,1.000,1.000) \\ & & 300 & (0.037,0.030,0.034) & (1.000,1.000,1.000) \\ & Discrete & 500 & (0.036,0.040,0.027) & (1.000,1.000,1.000) \\ & & 400 & (0.051,0.040,0.040) & (1.000,1.000,1.000) \\ & & 300 & (0.052,0.041,0.035) & (1.000,1.000,1.000) \\ \hline Random & Continuous & 500 & (0.038,0.037,0.026) & (1.000,1.000,1.000) \\ & & 400 & (0.033,0.031,0.028) & (1.000,1.000,1.000) \\ & & 300 & (0.033,0.025,0.030) & (1.000,1.000,1.000) \\ & Discrete & 500 & (0.040,0.029,0.027) & (1.000,1.000,1.000) \\ & & 400 & (0.042,0.034,0.040) & (1.000,1.000,1.000) \\ & & 300 & (0.029,0.033,0.034) & (1.000,1.000,1.000) \\ \hline \end{tabular} \end{table} Table 3: Empirical size for GrIVET at nominal level \(\alpha=0.05\), respectively for \(|\mathcal{D}|=1\), \(|\mathcal{D}|=3\) and \(|\mathcal{D}|=5\), over 1000 simulation replications. Figure 2: The boxplots of the empirical rejection probabilities for testing randomly selected edges. The nominal level is \(\alpha=0.05\). 
two SNPs, yielding \(q=42\) SNPs as unspecified intervention variables for subsequent data analysis. All gene expression levels are normalized. The dataset initially categorizes individuals into four groups: Alzheimer's Disease (AD), Early Mild Cognitive Impairment (EMCI), Late Mild Cognitive Impairment (LMCI), and Cognitive Normal (CN). For our analysis, we treat 247 CN individuals as controls and the remaining 462 individuals as cases (AD-MCI). We then use the gene expressions and the SNPs to infer gene pathways for the 462 AD-MCI and 247 CN control cases, respectively. Hypotheses.We focus on statistical inferences related to genes APP and CASP3 (Julia and Goate, 2017; Su et al., 2001). As in Figure 3, for each edge \((k,j)\), we consider testing \(H_{0}:\mathrm{U}_{kj}=0\) versus \(H_{a}:\mathrm{U}_{kj}\neq 0\). Results.Figure 3 displays the p-values and significant results under the level \(\alpha=0.05\) after the Holm-Bonferroni adjustment for \(2\times 7=14\) tests. The tests exhibit strong evidence for the presence of \(\{\mathrm{LRP1}\rightarrow\mathrm{CASP3},\;\mathrm{APP}\rightarrow\mathrm{ APOE}\}\) in the AD-MCI group, but no evidence in the CN group. Meanwhile, this result suggests the presence of connections \(\{\mathrm{CAPN1}\rightarrow\mathrm{CASP3},\;\mathrm{ATP5F1}\rightarrow \mathrm{CASP3}\}\) in the CN group but not so in the AD-MCI group. In both groups, we identify directed connection \(\mathrm{APP}\rightarrow\mathrm{APBB1}\). Figure 4 shows the residual correlation matrices for both groups, suggesting the existence of unmeasured confounding. The Supplementary Materials include normal Q-Q plots of residuals, demonstrating that the normality assumption is approximately satisfied for both groups. Figure 3: Display of the genes associated with proposed tests. (a) and (b): Solid/dashed arrows indicate significant/insignificant edges at \(\alpha=0.05\) after adjustment for multiplicity by the Bonferroni-Holm correction. Some of our discoveries agree with the existing findings. Specifically, our result indicates the presence of connection \(\text{APP}\rightarrow\text{APOE}\) for the AD-MCI group, but not for the CN group, which seems consistent with the knowledge that APP and APOE are functionally linked in brain cholesterol metabolism (Liu et al., 2017) and the contributions of APOE to the pathophysiology of AD (Bu, 2009). The connection LRP1 \(\rightarrow\) CASP3 also differs in AD-MCI and CN groups, which may serve to support the conclusion that activated CASP3 may be a factor in functional decline and may have an important role in neuronal cell death and plaque formation in AD brain (Su et al., 2001) given the finding that both APOE and its receptor LRP1 are present in amyloid plaques (Poirier, 1996). Moreover, the connection CAPN1 \(\rightarrow\) CDK5R1 discovered in both groups can be found in the AlzNet database (interaction ID 24614). ## 7 Discussion This article proposes a novel instrumental variable procedure that integrates causal discovery and inference for a Gaussian directed acyclic graph with hidden confounders. One future research direction is to develop methodologies for analyzing discrete/mixed-type (primary variable) data. Additionally, the present work uses individual-level data from a single study for causal discovery and inference. In many real applications, due to privacy concerns and ownership restrictions, the data are only available in the form of summary statistics (e.g., GWAS summary data) or in other privatized forms. 
Extending GrIVET to leverage these data is an important topic. Furthermore, multisource/decentralized data are ubiquitous, raising new challenges in communication, privacy, and handling of corrupted data. It would be promising to employ modern machine learning techniques, such as federated learning (Xiong et al., 2021; Gao et al., 2021), to address these challenges and fully unleash the potential of large-scale causal discovery and inference. Finally, we discuss two limitations of the present work. Figure 4: Display of residual correlation matrices for AD-MCI and CN groups. * GrIVET necessitates the availability of valid IVs for each primary variable due to the hardness of causal identification in the presence of hidden confounding. In genetic research, there is an ample supply of genetic variants (e.g., SNPs) serving as IVs. Nonetheless, obtaining valid IVs can be challenging in certain applications. It is thus crucial to investigate the potential for causal discovery even when faced with an insufficient number of IVs. * For inference, Theorem 2 requires that \(\mathbb{P}(\widehat{\mathcal{G}}^{+}=\mathcal{G}^{+})\to 1\), which is guaranteed by Condition (C2) in Theorem 3. Fulfilling this requirement can be challenging; in such cases, one might turn to the post-selection inference framework (Berk et al., 2013) by concentrating on the parameters within the selected model. However, the test results should be meticulously interpreted, as these parameters cease to be causal or structural (Berk et al., 2013) unless \(\mathbb{P}(\widehat{\mathcal{G}}^{+}=\mathcal{G}^{+})\to 1\). In essence, (C2) enables the causal meaning of the tested parameters to be carried over to finite-sample inference. Exploring ways to lift the signal strength condition while preserving the causal interpretation for statistical inference after DAG structure learning (Wang et al., 2023) is an important research topic. ## Appendix A Appendix Definition of d-separation (Pearl, 2009).Consider a DAG \(\mathcal{G}\) with node variables \((Z_{1},\ldots,Z_{d})^{\top}\). Nodes \(Z_{k}\) and \(Z_{j}\) are adjacent if \(Z_{k}\to Z_{j}\) or \(Z_{k}\gets Z_{j}\). An undirected path between \(Z_{k}\) and \(Z_{j}\) in \(\mathcal{G}\) is a sequence of distinct nodes \((Z_{k},\ldots,Z_{j})\) such that all pairs of successive nodes in the sequence are adjacent. A non-endpoint node \(Z_{m}\) on an undirected path \((Z_{k},\ldots,Z_{m-1},Z_{m},Z_{m+1},\ldots,Z_{j})\) is called a collider if \(Z_{m-1}\to Z_{m}\gets Z_{m+1}\). Otherwise, it is called a non-collider. Let \(A\subseteq\{1,\ldots,d\}\), where \(A\) does not contain \(k\) and \(j\). Then \(\mathbf{Z}_{A}\) is said to block an undirected path \((Z_{k},\ldots,Z_{j})\) if at least one of the following holds: (1) the undirected path contains a non-collider that is in \(\mathbf{Z}_{A}\), or (2) the undirected path contains a collider that is not in \(\mathbf{Z}_{A}\) and has no descendant in \(\mathbf{Z}_{A}\). A node \(Z_{k}\) is d-separated from \(Z_{j}\) given \(\mathbf{Z}_{A}\) if \(\mathbf{Z}_{A}\) block every undirected path between \(Z_{k}\) and \(Z_{j}\); \(k\neq j\). Additional discussion of Figure 1 (a).Let \((k,j)\in\mathcal{E}^{+}\) and suppose all IVs are valid. We explain why \(\mathbf{X}_{\textsc{Ca}(k)}\) may not be valid IVs after conditioning on \(\mathbf{Y}_{\textsc{an}(j)\setminus\{k\}}\), as mentioned in Section 3.3. Let \(l\in\textsc{ca}(k)\) and \(m\in\textsc{me}(k,j)\) such that \(Y_{k}\) is an unmediated parent of \(Y_{m}\). 
Note that in Figure 1 (a) of the main text, whenever \(\eta\to Y_{m}\), then \(\mathbf{Y}_{\textsc{an}(j)\setminus\{k\}}\) does not d-separate \(\mathbf{X}_{\textsc{ca}(k)}\) and \(\eta\), since \(Y_{m}\) is a collider in the undirected path \((X_{l},Y_{k},Y_{m},\eta,Y_{j})\). As a result, \(\mathbf{X}_{\textsc{Ca}(k)}\) and \(\eta\) can be associated conditioned on \(\mathbf{Y}_{\textsc{an}(j)\setminus\{k\}}\). Additional discussion on identification of U.We have the following result. **Lemma 1**.: _In (1), assume \(\mathbf{X}\) and \(\mathbf{\varepsilon}\) are independent._ 1. \(\mathbb{E}(Y_{k}\mid\mathbf{Y}_{\textsc{nm}(k,j)},\mathbf{X})\) _is a linear combination of_ \((\mathbf{Y}_{\textsc{nm}(k,j)},\mathbf{X})\) _._ * \(\mathbb{E}(\varepsilon_{j}\mid\boldsymbol{Y}_{\textsc{nm}(k,j)},\boldsymbol{X})\) _is a linear combination of_ \((\boldsymbol{Y}_{\textsc{nm}(k,j)},\boldsymbol{X}_{\textsc{ca}(k)^{c}})\)_._ Proof.: Here, (A) follows directly from (1). For (B), we have \[\mathbb{E}(\varepsilon_{j}\mid\boldsymbol{Y}_{\textsc{nm}(k,j)},\boldsymbol{X })=\mathbb{E}(\varepsilon_{j}\mid\boldsymbol{\varepsilon}_{\textsc{nm}(k,j)}, \boldsymbol{X})=\mathbb{E}(\varepsilon_{j}\mid\boldsymbol{\varepsilon}_{ \textsc{nm}(k,j)})=\boldsymbol{\pi}^{\top}\boldsymbol{\varepsilon}_{\textsc{ nm}(k,j)},\] where the last equality is due to the normality of \(\boldsymbol{\varepsilon}\). Finally, in (1), we immediately have \(\boldsymbol{\varepsilon}_{\textsc{nm}(k,j)}\) is linear in \((\boldsymbol{Y}_{\textsc{nm}(k,j)},\boldsymbol{X}_{\textsc{ca}(k)^{c}})\). Now, we show that \(\operatorname{Cov}(\boldsymbol{\varepsilon},\boldsymbol{X})=\boldsymbol{0}\) is sufficient to derive the identification results in Section 3.3. Given random variables \(\zeta\) and \(\boldsymbol{\xi}\), let \(\mathbb{L}(\zeta\mid\boldsymbol{\xi})\) be the best linear approximation of \(\zeta\) using \(\boldsymbol{\xi}\), namely \(\mathbb{L}(\zeta\mid\boldsymbol{\xi})=\widetilde{\boldsymbol{\omega}}^{\top} \boldsymbol{\xi}\) where \[\widetilde{\boldsymbol{\omega}}=\operatorname*{arg\,min}_{\boldsymbol{\omega}} \ \mathbb{E}(\zeta-\boldsymbol{\omega}^{\top}\boldsymbol{\xi})^{2}.\] For random variables \(\zeta\), \(\zeta^{\prime}\), and \(\boldsymbol{\xi}\), we have that (a) \(\mathbb{L}(\zeta+\zeta^{\prime}\mid\boldsymbol{\xi})=\mathbb{L}(\zeta\mid \boldsymbol{\xi})+\mathbb{L}(\zeta^{\prime}\mid\boldsymbol{\xi})\), (b) \(\mathbb{L}(c\zeta\mid\boldsymbol{\xi})=c\mathbb{L}(\zeta\mid\boldsymbol{\xi})\) for \(c\in\mathbb{R}\), (c) \(\mathbb{L}(\zeta\mid\boldsymbol{\xi})=0\) if \(\operatorname{Cov}(\zeta,\boldsymbol{\xi})=\boldsymbol{0}\), (d) \(\mathbb{L}(\zeta\mid\boldsymbol{\xi})=\zeta\) if \(\zeta\in\operatorname{Span}(\boldsymbol{\xi})\), and (e) \(\mathbb{L}(\zeta\mid\boldsymbol{\xi})=\mathbb{L}(\zeta\mid\boldsymbol{A} \boldsymbol{\xi})\) for invertible \(\mathbf{A}\). Thus, \(\mathbb{L}(\cdot\mid\star)\) mimics \(\mathbb{E}(\cdot\mid\star)\), and Lemma 2 holds. The proof is similar to that of Lemma 1. **Lemma 2**.: _In (1), Lemma 1 holds with \(\mathbb{E}(\cdot\mid\star)\) being replaced by \(\mathbb{L}(\cdot\mid\star)\)._ As a result, if \(\boldsymbol{X}\) and \(\boldsymbol{\varepsilon}\) are uncorrelated as in (1), the derivation in Section 3.3 holds with \(\mathbb{E}(\cdot\mid\star)\) being replaced by \(\mathbb{L}(\cdot\mid\star)\).
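As a small numerical illustration of the best linear approximation \(\mathbb{L}(\zeta\mid\boldsymbol{\xi})\) used in Lemma 2 (a sketch with illustrative names; the population minimizer is the usual least-squares coefficient, replaced here by its empirical analogue):

```python
import numpy as np

def best_linear_coef(zeta, xi):
    """Empirical analogue of the coefficient in L(zeta | xi):
    argmin_w E(zeta - w^T xi)^2, computed by least squares on centered samples.
    zeta: (n,) responses; xi: n x d covariates."""
    xi_c = xi - xi.mean(axis=0)
    zeta_c = zeta - zeta.mean()
    w, *_ = np.linalg.lstsq(xi_c, zeta_c, rcond=None)
    return w

# Property (c): if Cov(zeta, xi) = 0, the coefficient is (approximately) zero.
rng = np.random.default_rng(0)
xi = rng.normal(size=(100000, 3))
zeta = rng.normal(size=100000)          # independent of xi
print(np.round(best_linear_coef(zeta, xi), 3))
```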
2309.06707
Return to Lacan: an approach to digital twin mind with free energy principle
The free energy principle (FEP) is a burgeoning theory in theoretical neuroscience that provides a universal law for modelling living systems of any scale. Expecting a digital twin mind from this first principle, we propose a macro-level interpretation that bridges neuroscience and psychoanalysis through the lens of computational Lacanian psychoanalysis. In this article, we claim three fundamental parallels between FEP and Lacanian psychoanalysis, and suggest an FEP approach to formalizing Lacan's theory. Sharing the non-linear temporal structure that combines prediction and retrospection (logical time), both theories focus on the epistemological question of how systems represent themselves and the external world, and the elements that fail to be represented (lacks and free energy) significantly influence the systems' subsequent states. Additionally, the fundamental hypothesis of FEP, that the precise state of the environment is always concealed, accounts for object petit a, the core concept in Lacan's theory. With a neuropsychoanalytic mapping from the three orders (the Real, the Symbolic, and the Imaginary, RSI) onto brain regions, we propose a brain-wide FEP model for a minimal definition of the Lacanian mind: the composite state of RSI perturbed by desire running over logical time. The FEP-RSI model involves three FEP units connected by their respective free energies, with a natural compliance with logical time, mimicking the core dynamics of the Lacanian mind. The biological plausibility of the current model is considered from the perspective of cognitive neuroscience. In conclusion, the FEP-RSI model encapsulates a unified framework for digital twin modeling at the macro level.
Lingyu Li, Chunbo Li
2023-09-13T04:12:53Z
http://arxiv.org/abs/2309.06707v2
# An active inference model of Lacanian psychoanalysis

###### Abstract

There has been a growing interest in exploring behavior, brain, and mind through the lens of complex systems theory. However, a unified and computational model that comprehensively encapsulates the properties of the human mind remains elusive. To address this gap, we propose a recurrent generative model drawing upon Lacanian psychoanalysis and active inference. We conceptualize the mechanism of desire as partial generalized synchronization, and then apply the model to suicidal dynamics to illustrate the theoretical and practical implications of our approach. This work on computational psychoanalysis reveals its potential in unraveling complex mental phenomena.

active inference, free energy principle, Lacan, psychoanalysis

Lingyu Li, Chunbo Li

## Introduction

Over the centuries, researchers have devoted great effort to understanding mental phenomena. The emergence of the complex systems approach has ushered in a new paradigm, offering a fresh lens through which to understand the complex and systematic nature of mental states [(1)]. This perspective treats mental processes as dynamic interplays influenced by both internal and external variables [(2)]. These insights have profound implications for psychopathology, including the early detection of warning signs, the prediction of symptom shifts, and transdiagnostic understandings of perception, behavior, thoughts, and memory [(3-5)]. However, a unified and quantified model that comprehensively encapsulates the properties of the human mind remains elusive. The challenge lies in defining the locus of the mind's existence and its evolutionary drivers, and in the inherent lack of a cohesive theory. As such, there is a growing demand for top-down, theory-laden models in the current field [6]. To address this gap, we propose an example of a generative model that draws inspiration from both Lacanian psychoanalysis and active inference. By melding these two theoretical frameworks, we aim to offer a comprehensive model that captures the dynamic essence of the human mind. After introducing key aspects of Lacanian theory, such as the three registers (Imaginary, Symbolic, Real), logical time, and desire, we point out the intimacy between these concepts and the principles of active inference. We then construct a recurrent generative model, allowing us to simulate and understand the complex unfolding of mental states. Furthermore, we delve into the concept of desire as partial generalized synchronization between individuals, offering a unique perspective on human interactions and interpersonal dynamics. To illustrate the practical implications of computational psychoanalysis, we apply our model to suicidal dynamics and gain some computational insights into the underlying mechanisms and potential prevention strategies. These explorations underline the potential of computational psychoanalysis in unraveling complex mental phenomena.

## Introduction to Lacanian psychoanalysis and Active Inference

### Lacanian psychoanalysis

Lacan is renowned for his sophisticated theory of psychoanalysis, deeply rooted in philosophy and linguistics. An exhaustive overview of Lacan's theory is beyond the scope of this article; we will focus on three pivotal concepts: the three registers, logical time, and desire. At the core of Lacanian psychoanalysis are the three registers: the Imaginary, the Symbolic, and the Real.
These registers serve as a fundamental framework for understanding where the human subject resides within their mental states. According to Lacan, an individual's mental state is a blend of these three domains, which influence the subject simultaneously and interdependently, akin to a Borromean ring. The Imaginary is an internal representation of the external world. The dynamic interplay between the external world and the internal self leads to primary self-identification and self-knowledge, though inherently distorted due to its "imaginary" nature. Consequently, the identification within the Imaginary is essentially a misidentification, forming the basis for psychosis [7]. The Symbolic domain encompasses language. Put succinctly, since the "unconscious is structured like a language," the Symbolic operates akin to hermeneutics, addressing issues of meaning generation, self-interpretation, and intersubjectivity. Hence the Symbolic is the linguistic representation of the subject's situation. However, it is important to note that any representation is inherently incomplete, leaving aspects unrepresented. The Real collects the unrepresented, thus existing as a realm of "impossibility", a missing reality. Derived from the Real, _repetition_ -- manifested as an incessant attempt to represent the unrepresented -- becomes another fundamental concept of Lacanian psychoanalysis [8]. Logical time introduces the notion that human experiences cannot be neatly confined within a unidirectional and chronological understanding of time. From the perspective of logical time, "the past anticipates a future within which it can retroactively find a place". In other words, the past bestows significance upon forthcoming events in an anticipatory manner, and the future imbues the past with retroactive meaning. Along logical time, meaningful relationships between events emerge, transforming time into a mechanism that generates significance. This concept underscores the anticipatory nature of the human mind and emphasizes the importance of retroactive reconstruction [9]. And what drives the ongoing evolution of the three registers within logical time? Repetition, as mentioned, serves as the driving force. For Lacan, repetition represents the return of something that remains identical -- _object petit a_, a concept rooted in the Real, leading to the endless metonymic course of desire. The orientation of the _drive_ is the running of desire towards object petit a. The history of subjectivity unfurls in the repetition of desire's trajectory: the unrepresented \(\rightarrow\) object petit a \(\rightarrow\) desire \(\rightarrow\) object of desire \(\rightarrow\) failure of representation \(\rightarrow\) the unrepresented. That is, desire can never be satisfied, and the endless cycles contribute to the so-called _fantasy_ that needs to be traversed in psychoanalytic practice. Ultimately, a rudimentary definition of the human mind emerges from the Lacanian perspective: the composite state of the three registers, perturbed by desire running over logical time.

### Active Inference

Active inference, a burgeoning theory in neuroscience, aims to provide a universal principle for living systems at any scale, such as neural activity, perception, and individual and collective behavior [(10)].
The precise state of the environment is often concealed (referred to as the _hidden state_), so the system must deduce the hidden state (termed _inference_) by employing an internal model founded on existing environmental knowledge, known as _priors_. Following observations, the system evaluates its inference, and discrepancies (_surprises_) are fed back to the system to formulate conclusions (_posteriors_) and refine the internal model. An alternative way to fulfil the original inference is to actively change the environment. Therefore, both perception and action serve the unified purpose of improving the internal model, i.e., minimizing surprise [(11)]. The concept of planning, or decision-making, then enters active inference: from a range of alternative policies, the system tends to adopt the policy that is anticipated to optimize its internal model most effectively. The central tenet of this optimization process is the minimization of _free energy_, a principle known as the _free energy principle_. Originally a thermodynamic concept, free energy quantifies the energy available in a system to alter its properties. In essence, minimal free energy implies a state of equilibrium within the system. In the context of active inference, free energy, encompassing both variational and expected free energy, gauges the "energy" required to modify one's internal model. A detailed and hands-on tutorial on active inference is available elsewhere [12]; here we focus only on variational and expected free energy to uncover the intimacy between Lacanian psychoanalysis and active inference.

Based on the prior over the hidden state of the environment \(P(s)\) and the likelihood of the corresponding observation \(P(o|s)\), an approximate posterior \(Q(s)\) is calculated. The actual observation serves as evidence for this inference, denoted as \(P(o)\). The variational free energy \(F\) is expressed as:

\[F[Q,o]=D_{KL}[Q(s)\,||\,P(s|o)]-\ln P(o)\]

When the model evidence is strong, the belief with the lowest variational free energy can perform nearly exact inference on the hidden state, because the Kullback-Leibler divergence between the approximate posterior and the true posterior is then minimal. Conversely, when the evidence is relatively weak, exact inference can be difficult. Another way to minimize the variational free energy is then to obtain evidence through action, shifting the problem from perception to action planning. For every conceivable policy \(\pi\), the anticipated state of the system, denoted \(\bar{s}\), is deduced from the transition probabilities \(P(\bar{s}|s,\pi)\). The expected free energy \(G\) of this policy is then calculated as:

\[G(\pi)=D_{KL}[Q(\bar{s}|\pi)\,||\,P(\bar{s}|C)]+\mathbb{E}_{Q(\bar{s}|\pi)}\big[H[P(o|\bar{s})]\big]\]

The expected free energy \(G\) involves two terms: risk (the divergence between expected and preferred states) and expected ambiguity (the Shannon entropy of expected outcomes). Therefore, the policy with the lowest expected free energy balances risk and ambiguity, providing a solution to the exploration-exploitation dilemma [12]. In essence, expected free energy is a "belief that one will minimize free energy in the future" [11]. This replicates the mechanism of logical time, as Lacan described: "the past anticipates a future within which it can retroactively find a place" [9].
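To make these two quantities concrete, here is a minimal numerical sketch for a discrete hidden state (a toy with made-up arrays, not the model used later in this paper): it evaluates \(F\) for a given approximate posterior and observation, and \(G\) for a single policy's predicted state distribution.

```python
import numpy as np

def variational_free_energy(q_s, prior_s, lik, o_idx):
    """F[Q, o] = E_Q[ln Q(s) - ln P(o, s)] = D_KL[Q(s) || P(s|o)] - ln P(o)."""
    joint = lik[o_idx] * prior_s                      # P(o, s) for the observed outcome
    return float(np.sum(q_s * (np.log(q_s) - np.log(joint))))

def expected_free_energy(q_s_pred, pref_s, lik):
    """G(pi) = D_KL[Q(s_bar|pi) || P(s_bar|C)] + E_Q[H[P(o|s_bar)]] (risk + ambiguity)."""
    risk = float(np.sum(q_s_pred * np.log(q_s_pred / pref_s)))
    h_o_given_s = -np.sum(lik * np.log(lik), axis=0)  # outcome entropy for each state
    return risk + float(np.sum(q_s_pred * h_o_given_s))

# Toy example: two hidden states, two outcomes.
prior_s = np.array([0.5, 0.5])
lik = np.array([[0.9, 0.2],                           # rows: outcomes o, columns: states s
                [0.1, 0.8]])
q_s = np.array([0.8, 0.2])                            # approximate posterior after seeing o = 0
print(variational_free_energy(q_s, prior_s, lik, o_idx=0))
print(expected_free_energy(np.array([0.6, 0.4]), np.array([0.9, 0.1]), lik))
```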
Additionally, both Lacanian psychoanalysis and active inference are concerned with how the external world is represented by internal models, and in both frameworks the unrepresented, or surprise, plays a significant role in the ensuing dynamics. The intimacy between these two theoretical frameworks lays the foundation for our integrative work; in the next section, we propose a generative model for further exploration.

### Generative model of Lacanian psychoanalysis

Inspired by previous research in Lacanian neuropsychoanalysis, we begin with an initial endeavor to roughly map the functions of the three registers onto the brain (Figure 1(a)), setting the stage for our formal generative model. Our intention is not to achieve a precise anatomical or functional mapping, but rather to establish an intuitive framework for defining the system of current interest. The Real is situated within the upper brainstem and diencephalic system, as these areas play a fundamental role in affective experiences, consciousness, and the primary needs of the body such as sustenance, sexuality, and homeostasis [(13)]. The Imaginary, on the other hand, corresponds to the parietal and occipital lobes, given their involvement in motor control, visual perception, and the representation of self (body image). The Symbolic domain is allocated to the prefrontal and parietal lobes, responsible for language processing, generating meaning, and conducting thought experiments based on anticipation and retroaction [(14)].

**Figure 1.** Illustration of the recurrent generative model entailing principles of Lacanian psychoanalysis and active inference. (a) An intuitive mapping of the three registers onto brain regions. R: the Real; S: the Symbolic; I: the Imaginary. (b) Flowchart of our recurrent generative model with three interconnected basic units operating in discrete time. (c) A closer look at a basic unit within the generative model. (d) Simulation of the dynamics of the three registers when the Symbolic register is perturbed.

Subsequently, we put forth a recurrent generative model comprising three basic units operating in discrete time, as depicted in Figure 1(b). Zooming into the basic unit (Figure 1(c)), at time step \(\tau\) the unit infers the hidden state based on observations and the corresponding likelihood. It then evaluates the expected free energy for each alternative policy \(\pi\), so as to realize specific preferences. As previously mentioned, the system inclines towards the policy with the lowest expected free energy. Upon adopting a policy \(\pi\), the divergence between preferences and outcomes is calculated. This divergence element, serving as a retrospective assessment of policy effects, propagates globally to all three units (each with distinct weights). It contributes to the updating of the actual (variational) free energy -- an entity we may term "_residual free energy_" to emphasize its post-hoc rationale. Beliefs with minimal residual free energy serve as the current posteriors, and as priors for the subsequent time step. Expected and residual free energy together enable the explicit implementation of logical time within our model. To examine the model, we simulate the 15-timestep dynamics of the three registers when the Symbolic register is perturbed, utilizing Python 3.9 and pymdp 0.0.7.1 [(15)]. We set the initial state (i.e., the prior) of the Symbolic register at 0, with a preference value of 4. Concurrently, we maintain consistent priors and preferences for the other two registers.
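As a concrete, stripped-down companion to this setup, the following plain-NumPy toy implements a single register in the spirit of the basic unit of Figure 1(c) (it is not the authors' pymdp implementation): a nine-position hidden state starting at 0, a preference peaked at position 4, and at every step exact state inference followed by selection of the shift policy with the lowest expected free energy. The sensing and transition matrices are made up for illustration, and the divergence broadcast coupling the three registers is omitted.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

K = 9                                        # discrete positions of one register
A = 0.9 * np.eye(K) + 0.1 / K                # likelihood P(o | s): mostly accurate sensing
A /= A.sum(axis=0, keepdims=True)
moves = [-1, 0, 1]                           # policies: shift the register down / stay / up
B = []
for u in moves:                              # transitions P(s' | s, u)
    Bu = np.zeros((K, K))
    for s in range(K):
        Bu[np.clip(s + u, 0, K - 1), s] = 1.0
    B.append(Bu)
C = softmax(-(np.arange(K) - 4.0) ** 2)      # preference distribution peaked at position 4

true_s, prior = 0, np.eye(K)[0]              # the register starts at position 0
rng = np.random.default_rng(0)
for t in range(15):
    o = rng.choice(K, p=A[:, true_s])                      # observe
    qs = A[o, :] * prior
    qs /= qs.sum()                                         # exact state inference
    G = []
    for Bu in B:                                           # expected free energy per policy
        qs_next = Bu @ qs
        risk = np.sum(qs_next * (np.log(qs_next + 1e-16) - np.log(C)))
        ambiguity = np.sum(qs_next * (-(A * np.log(A)).sum(axis=0)))
        G.append(risk + ambiguity)
    u = int(np.argmin(G))                                  # act to minimize expected free energy
    true_s = int(np.clip(true_s + moves[u], 0, K - 1))
    prior = B[u] @ qs                                      # posterior predictive becomes next prior
    print(t, true_s, np.round(G, 2))
```

One way to couple three such units in the spirit of Figure 1(b) would be to add a weighted combination of the three preference-outcome divergences to each unit's prior (or preference) at the next step; the paper leaves the exact form of this broadcast unspecified.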
As depicted in Figure 1.d, when the Symbolic register aligns with its preferred position, the other two registers exhibit synchronous fluctuations, representing the interconnectivity between the three registers. This interconnectivity stems from the brain-wide propagation of the divergence term and residual free energy. Consequently, this recurrent generative model not only embodies basic Lacanian ideas, including the three interdependent registers and logical time, but also reflects the close relationship between these two concepts. Numerous studies have harnessed active inference to elucidate the principles underlying various brain functions, such as emotion, interoception, consciousness, explanation, communication, culture, self-consciousness, body image, and sensorimotor function [16-24]. Our generative model could offer a holistic and top-down perspective on these functions by treating them as different modalities within the three registers. ### Desire as Partial Generalized Synchronization To capture the dynamics of the human mind within real-world contexts, communication must not be overlooked. In the field of active inference, communication is investigated with a paradigm involving two subjects with similar internal models [20, 25, 26]. To realize effective communication, the two agents need to infer each other's internal models and predict each other's behaviors, culminating in synchronization - a dynamic process referred to as _synchronization of chaos_ or _generalized synchronization_. In this section, we aim to integrate the concept of desire into our computational psychoanalysis model. Lacan frames the essence of desire as metaphor, in a linguistic fashion [23]. That is, envisioning two subjects as signifiers within a signifying chain, the desire of the subject is to seize the signified of an object [27]. Through the lens of complexity theory, the desire of one subject for another manifests as a tendency toward generalized synchronization of their Symbolic registers. Since our model consists of three registers, the process of desire, i.e., synchronization of the Symbolic registers, entails _partial generalized synchronization_ (Figure 2.**a**). This partial synchronization enables the two synchronized subjects to maintain their respective chaotic behaviors [28]. **Figure 2. Desire as partial generalized synchronization between multiple individuals. (a) Two subjects take the other's hidden state of the Symbolic register as their own preferences, which guarantees a partial generalized synchronization. When simulating such dynamics, we found that under identical conditions the subsequent dynamics could be highly variable, and we display two of them, as illustrated in (b) and (c).** To simulate such conditions, we design two subjects (A & B) with a shared internal model but differing initial states. At each time step, subjects A and B infer the state of each other's Symbolic register, treating these inferred states as their own preferences. The dynamics of these two subjects' partial synchronization are illustrated in Figure 2.**b** and **c**, spanning 15 timesteps. Despite our uncomplicated generative model, the subsequent dynamics exhibit significant diversity under identical conditions. In Figure 2.**b**, the two subjects achieve rapid synchronization, whereas in Figure 2.**c**, the synchronization remains partial. This phenomenon may reflect the multifaceted and random nature of human interaction and communication. Interestingly, as Figure 2 illustrates, when two subjects share the same internal model, a partial synchronization of their Symbolic registers can indirectly lead to the synchronization of the other two registers (a rough sketch of this two-agent coupling is given below). 
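As an illustration of the coupling just described, the sketch below extends the single-unit example to two pymdp agents whose preference vectors are rewritten at every step from the partner's currently inferred Symbolic state. It is a schematic reading of the simulation behind Figure 2, not its exact configuration: `make_unit` stands in for whatever generative matrices the full model uses, the inferred state distribution is reused directly as a scaled log-preference over outcomes, and it is assumed that an agent's preferences `C` can simply be reassigned between steps.

```python
import numpy as np
from pymdp import utils
from pymdp.agent import Agent

def make_unit():
    """Build one simplified Symbolic-register unit (same toy matrices as in the sketch above)."""
    A, B, C, D = utils.obj_array(1), utils.obj_array(1), utils.obj_array(1), utils.obj_array(1)
    A[0] = np.array([[0.9, 0.1], [0.1, 0.9]])
    B[0] = np.zeros((2, 2, 2))
    B[0][:, :, 0] = np.eye(2)
    B[0][:, :, 1] = np.array([[0., 1.], [1., 0.]])
    C[0] = np.zeros(2)                               # preferences start flat; they are coupled below
    D[0] = np.array([0.5, 0.5])
    return Agent(A=A, B=B, C=C, D=D)

subject_a, subject_b = make_unit(), make_unit()
obs_a, obs_b = [0], [1]                              # shared internal model, differing initial observations

for t in range(15):
    qs_a = subject_a.infer_states(obs_a)             # each subject infers its own Symbolic state
    qs_b = subject_b.infer_states(obs_b)

    # Desire as coupling: each subject adopts the partner's inferred state as its own
    # scaled log-preference -- the drive toward partial synchronization.
    c_a, c_b = utils.obj_array(1), utils.obj_array(1)
    c_a[0] = 4.0 * qs_b[0]
    c_b[0] = 4.0 * qs_a[0]
    subject_a.C, subject_b.C = c_a, c_b              # assumed: preferences may be overwritten in place

    subject_a.infer_policies(); act_a = subject_a.sample_action()
    subject_b.infer_policies(); act_b = subject_b.sample_action()
    # In the full simulation, actions would feed back into each subject's next observation.
```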
This indirect synchronization of all three registers aligns with numerous studies indicating that romantic partners often exhibit shared autonomic physiology and emotional regulation across various timeframes [29-31]. One plausible interpretation is that the internal models of romantic partners tend to align over time due to the effects of long-term synchronization. This insight may open a novel avenue for exploring interpersonal dynamics. This section sheds light on the vital role of communication in understanding the dynamics of the human mind. Integrating the concept of desire into computational psychoanalysis, we unveil desire as a mechanism driving partial synchronization between individuals, thereby deepening our comprehension of human connections and behaviors. ### Computational insights into suicidal dynamics We have proposed a recurrent generative model based on Lacanian psychoanalysis and active inference, and simulated its dynamics under controlled conditions. However, reality is far more tumultuous, and the human mind stands as an enigmatic puzzle without a readily available comprehensive solution. Despite this complexity, we can use this model as a helpful tool for comprehending the complex dynamics of mental phenomena. Taking suicidal thoughts and attempts as an example, we apply the model in this section to understand suicidal dynamics. Zizek suggests there are different modalities of suicide in the three registers -- suicide as an act bearing specific messages, as total closure of the Real (direct identification of subject and object), and as deprivation of symbolic identity [32]. A qualitative study analyzing the suicide notes of 12 individuals further substantiates these modalities, including altered perception of self and other, ambivalence of emotions, taking suicide as punishment, the urge to escape from the symbolic network, and so on [33]. These accounts emphasize that suicide is a comprehensive disorder of the three registers, and we can understand its dynamics via our generative model. As mentioned, desire courses along the Symbolic register, manifesting as a tendency towards partial synchronization. But under conditions where desire fails, the synchronization is thwarted. Consequently, the subject in the Symbolic register grapples with expected free energy that cannot be eliminated by any policy and residual free energy that keeps accumulating. Suicide, by doing nothing in the future, cancels out expected free energy and halts the accumulation, and hence becomes the final policy -- described by Lacan as the _passage to the act_, in the sense of an exit from the Symbolic. On the other hand, the other two registers have to cope with the residual free energy caused by the Symbolic register, fostering the other two modalities of suicide. Corresponding functions such as self-representation, emotion, and interoception become altered. Finally, this could give rise to an actual suicide attempt. Consequently, within the three-dimensional state space defined by the three registers, suicide assumes the guise of a state in which the system relinquishes structural stability. This precarious state could potentially lead to "suicidal points," characterized by catastrophic bifurcations. Perturbations that push the system towards these critical junctures might serve as risk factors for suicide. To counteract this, interventions must be designed to redirect trajectories away from these suicidal points. 
Strategies such as social support, psychotherapeutic interventions, and appropriate medications hold promise in this regard. Moreover, the notion that expected and residual free energy introduce logical time raises a compelling parallel -- symbolic suicide has the capacity to remove the subject from logical time, analogous to physical suicide's cessation of physical time. This echoes Hendin's perspective on suicide as an act to stop time [34]. Further exploration may yield more insights into the phenomenology of suicidal crises. In conclusion, the example of suicidal dynamics illustrates the theoretical and practical value of computational psychoanalysis, and additional research is warranted to validate and expand upon these initial findings. ## Concluding remarks Inspired by prior research in Lacanian (neuro)psychoanalysis and active inference, this work offers a theoretical foundation and computational evidence for the integration of the two fields, and provides an innovative perspective through which the complex dynamics of the human mind can be dissected using the lens of computational psychoanalysis. Under the framework of active inference, we propose a recurrent generative model in which key Lacanian concepts such as the three registers and logical time find computational form. Central to our study is the novel conceptualization of desire as a mechanism of partial synchronization between individuals, shedding light on intricate patterns in human connections. Finally, by exploring the dynamics of suicide with this model, we add a layer of depth and relevance to the current work, demonstrating its potential clinical implications. Eric Kandel, recipient of the 2000 Nobel Prize, believes that psychoanalysis is "the most coherent and intellectually satisfying view of the human mind", and will regain its energy through joining with cognitive neuroscience [(35)]. Our work may help to realize this prophecy by offering a top-down avenue for this fusion through the realm of computational psychoanalysis. We acknowledge the simplicity of the proposed model. There are many modalities, multiple hierarchies, changeable parameters, and distinct time scales for each register. On the other hand, this model captures only several basic concepts of Lacanian psychoanalysis, which is essentially a theory of high complexity. Further studies might complete this skeleton via model calibration, descriptive interpretation, simulation of psychiatric disorders, application to real-world datasets, and so on. In summary, this study presents a convergence that blends theoretical underpinnings with computational revelations. This research holds promise for a comprehensive understanding of the human mind and for inspiring further exploration into the depths of cognition and behavior.
2310.00215
Implicit collaboration with a drawing machine through dance movements
In this demonstration, we exhibit the initial results of an ongoing body of exploratory work, investigating the potential for creative machines to communicate and collaborate with people through movement as a form of implicit interaction. The paper describes a Wizard-of-Oz demo, where a hidden wizard controls an AxiDraw drawing robot while a participant collaborates with it to draw a custom postcard. This demonstration aims to gather perspectives from the computational fabrication community regarding how practitioners of fabrication with machines experience interacting with a mixed-initiative collaborative machine.
Itay Grinberg, Alexandra Bremers, Louisa Pancoast, Wendy Ju
2023-09-30T01:34:03Z
http://arxiv.org/abs/2310.00215v1
# Implicit collaboration with a drawing machine through dance movements ###### Abstract. In this demonstration, we exhibit the initial results of an ongoing body of exploratory work, investigating the potential for creative machines to communicate and collaborate with people through movement as a form of implicit interaction (Bremers et al., 2018). The paper describes a Wizard-of-Oz demo, where a hidden wizard controls an AxiDraw drawing robot while a participant collaborates with it to draw a custom postcard. This demonstration aims to gather perspectives from the computational fabrication community regarding how practitioners of fabrication with machines experience interacting with a mixed-initiative collaborative machine. human-robot interaction, communication, collaboration ... of freedom, is placed on the table with two cameras aimed at the work surface and at the user. Figure 1 shows four examples of the resulting postcards on the robot's right side. The Wizard will improvise using a set of pre-developed possible movements--however, the interaction consists of two clear stages that are described in Table 1, categorized as "Welcoming", and Table 2, categorized as "Collaborative Drawing". Afterwards, participants can take their postcard home. ## 3. Demo Requirements The demo will be set up to run on a table. We require electricity and WiFi access, permission to use live cameras, and a chair for demo participants. The wizard will connect to the robot remotely by controlling the robot's computer via SSH. The wizard's "eyes" will be two cameras - one of them will show the participant, and the second one will show the workspace of the robot. ## 4. The Demo Design This demonstration is intended to highlight the collaborative design method used in the design of the robot's actions. One key element of our design approach is forming our interdisciplinary research team. The authors of this demo consist of two interaction design researchers, one mechanical engineer, and one dancer. The design process occurred during in-person meetings over the course of four months. ## 5. Acknowledgements The authors would like to thank Cooper Murr, Tobias Weinberg, Avital Dell'Ariccia, Evil Mad Scientist, Antti Oulasvirta and Francois Guimbretiere for their earlier suggestions, which fed into this work, and the Jacobs Technion-Cornell Institute for funding this work.
2309.14303
Dataset Diffusion: Diffusion-based Synthetic Dataset Generation for Pixel-Level Semantic Segmentation
Preparing training data for deep vision models is a labor-intensive task. To address this, generative models have emerged as an effective solution for generating synthetic data. While current generative models produce image-level category labels, we propose a novel method for generating pixel-level semantic segmentation labels using the text-to-image generative model Stable Diffusion (SD). By utilizing the text prompts, cross-attention, and self-attention of SD, we introduce three new techniques: class-prompt appending, class-prompt cross-attention, and self-attention exponentiation. These techniques enable us to generate segmentation maps corresponding to synthetic images. These maps serve as pseudo-labels for training semantic segmenters, eliminating the need for labor-intensive pixel-wise annotation. To account for the imperfections in our pseudo-labels, we incorporate uncertainty regions into the segmentation, allowing us to disregard loss from those regions. We conduct evaluations on two datasets, PASCAL VOC and MSCOCO, and our approach significantly outperforms concurrent work. Our benchmarks and code will be released at https://github.com/VinAIResearch/Dataset-Diffusion
Quang Nguyen, Truong Vu, Anh Tran, Khoi Nguyen
2023-09-25T17:19:26Z
http://arxiv.org/abs/2309.14303v4
Dataset Diffusion: Diffusion-based Synthetic Dataset Generation for Pixel-Level Semantic Segmentation ###### Abstract Preparing training data for deep vision models is a labor-intensive task. To address this, generative models have emerged as an effective solution for generating synthetic data. While current generative models produce image-level category labels, we propose a novel method for generating pixel-level semantic segmentation labels using the text-to-image generative model Stable Diffusion (SD). By utilizing the text prompts, cross-attention, and self-attention of SD, we introduce three new techniques: _class-prompt appending_, _class-prompt cross-attention_, and _self-attention exponentiation_. These techniques enable us to generate segmentation maps corresponding to synthetic images. These maps serve as pseudo-labels for training semantic segmenters, eliminating the need for labor-intensive pixel-wise annotation. To account for the imperfections in our pseudo-labels, we incorporate uncertainty regions into the segmentation, allowing us to disregard loss from those regions. We conduct evaluations on two datasets, PASCAL VOC and MSCOCO, and our approach significantly outperforms concurrent work. Our benchmarks and code will be released at [https://github.com/VinAIResearch/Dataset-Diffusion](https://github.com/VinAIResearch/Dataset-Diffusion). ## 1 Introduction Semantic segmentation is a fundamental task in computer vision. Its objective is to assign semantic labels to each pixel in an image, making it crucial for applications such as autonomous driving, scene comprehension, and object recognition. However, one of the primary challenges in semantic segmentation is the high cost associated with manual annotation. Annotating large-scale datasets with pixel-level labels is labor-intensive, time-consuming, and requires substantial human effort. To address this challenge, an alternative strategy involves leveraging generative models to synthesize datasets with pixel-level labels. Past research efforts have utilized Generative Adversarial Networks (GANs) to effectively generate synthetic datasets for semantic segmentation, thereby mitigating the reliance on manual annotation [1; 2; 3]. However, GAN models primarily concentrate on object-centric images and have yet to capture the intricate complexities present in real-world scenes. On the other hand, text-to-image diffusion models have emerged as a promising technique for generating highly realistic images from textual descriptions [4; 5; 6; 7]. These models possess unique characteristics that make them well-suited for the generation of semantic segmentation datasets. Firstly, the text prompts used as input to these models can serve as valuable guidance since they explicitly specify the objects to be generated. Secondly, the application of cross and self-attention maps in the image generation process endows these models with informative spatial cues, enabling precise extraction of object positions within the generated images. By leveraging these characteristics of text-to-image diffusion models, the concurrent works DiffuMask [8] and DiffusionSeg [9] effectively generate pairs of synthetic images and corresponding segmentation masks. DiffuMask achieves this by utilizing straightforward text prompts, such as "a photo of a [class name] [background description]", to generate image and segmentation mask pairs. 
Meanwhile, DiffusionSeg focuses on creating synthetic datasets that address the challenge of object discovery, which involves identifying salient objects within an image. While these approaches successfully produce images paired with their corresponding segmentation masks, they are currently limited to generating a single object segmentation mask per image. In this paper, we present Dataset Diffusion, a novel framework for synthesizing high-quality semantic segmentation datasets, as shown in Fig. 1. Our approach focuses on generating realistic images depicting scenes with multiple objects, along with precise segmentation masks. We introduce two techniques: _class-prompt appending_, which encourages diverse object classes in the generated images, and _class-prompt cross-attention_, enabling more precise attention to each object within the scene. We also introduce _self-attention exponentiation_, a simple refinement method using self-attention maps to enhance segmentation quality. Finally, we employ the generated data to train a semantic segmenter using uncertainty-aware segmentation loss and self-training. To evaluate the quality of the synthesized datasets, we introduce two benchmark datasets: synth-VOC and synth-COCO. These benchmarks utilize two well-established semantic segmentation datasets, namely PASCAL VOC [10] and COCO [11], to standardize the text prompt inputs and ground-truth segmentation evaluation. On the synth-VOC benchmark, Dataset Diffusion achieves an impressive mIoU of \(64.8\), outperforming DiffuMask [8] by a substantial margin. On the synth-COCO benchmark, the DeepLabV3 model trained on our synthesized dataset achieves noteworthy results of \(34.2\) in mIoU compared to the model trained on real images with full supervision. In summary, the contributions of our work are as follows: * We present a framework that effectively employs a state-of-the-art text-to-image diffusion model to generate synthetic datasets with pixel-level annotations. * We introduce a simple and effective text prompt design that facilitates the generation of complex and realistic images, closely resembling real-world scenes. * We propose a straightforward method that utilizes self and cross-attention maps to achieve highly accurate segmentation, thereby improving the quality and reliability of the synthesized datasets. * We introduce synth-VOC and synth-COCO benchmarks for evaluating the performance of semantic segmentation dataset synthesis. In the following, Sec. 2 reviews prior work, Sec. 3 describes our proposed framework, and Sec. 4 presents our experimental results. Finally, Sec. 5 concludes with some remarks and discussions. ## 2 Related Work **Semantic segmentation** is a critical computer vision task that involves classifying each pixel in an image to a specific class label. Popular semantic segmentation approaches include the fully convolutional network (FCN) [12] and its successors, such as DeepLab [13], DeepLabV2 [14], DeepLabv3 [15], DeepLabv3+ [16], UNet [17], SegNet [18], PSPNet [19], and HRNet [20]. Recently, Figure 1: Overview of our Dataset Diffusion for synthetic dataset generation. (**Left**) Given the target classes, our framework generates high-fidelity images with their corresponding pixel-level semantic segmentations. These segmentations serve as pseudo-labels for training a semantic segmenter. (**Right**) The trained semantic segmenter is able to predict the semantic segmentation of a test image. 
transformer-based approaches like SETR [21], Segmenter [22], SegFormer [23], and Mask2Former [24] have gained attention for their superior performance over convolution-based approaches. In our framework, we focus on generating synthetic datasets that can be used with any semantic segmenter, so we use DeepLabv3 and Mask2Former as they are commonly used. **Text-to-image diffusion models** have revolutionized image generation research, moving beyond simple class-conditioned to more complex text-conditioned image generation. Examples include GLIDE [25], Imagen [6], Stable Diffusion (SD) [5], Dall-E [4], eDiff-I [7], and Muse [26]. These models can generate images with multiple objects interacting with each other, more closely resembling real-world images rather than the single object-centric images generated by prior generative models. Our Dataset Diffusion marks a milestone in synthetic dataset generation literature, moving from image-level annotation to pixel-level annotation. We utilize Stable Diffusion [5] in our framework, as it is the only open-sourced pretrained text-to-image diffusion model available at the time of writing. **Diffusion models for segmentation.** Diffusion models have proven effective for semantic, instance, and panoptic segmentation tasks. These models either use input images to condition the mask-denoising process [27; 28; 29; 30; 31; 32; 33], or employ pretrained diffusion models as feature extractors [34; 35; 36; 37]. However, they still require ground-truth (GT) segmentation for training. In contrast, our framework utilizes only a pretrained SD to generate semantic segmentation without GT labels. **Generative Adversarial Networks (GANs) for synthetic segmentation datasets.** GANs have been employed in the generation of synthetic segmentation datasets, as demonstrated in previous works such as [38; 1; 3; 1]. However, these approaches primarily focus on object-centric images, where a single mask is segmented for the salient object or specific parts of common objects like faces, cars, or horses, as exemplified in [2]. In contrast, our framework is designed to generate semantic segmentations for more complex images, where multiple objects interact with each other at the scene level. Furthermore, while some techniques [38; 39] support foreground/background subtraction, and others [3; 1] still require human annotations, our objective is to generate semantic segmentations for multiple object classes in each image without the need for human involvement. **Diffusion models for synthetic data generation** have been used to improve the performance of image classification [40; 41], domain adaptation for classification [42; 43], and zero/few-shot learning [44; 45; 46; 47]. However, these methods produce only image-level annotations as augmentation datasets. In contrast, our framework produces pixel-level annotations, which is considerably more challenging. Recently, there have been concurrent works [8; 9] that utilize Stable Diffusion (SD) for generating object segmentation without any annotations. However, they focus on segmenting a single object in an image rather than multiple objects. Their text-prompt inputs to SD are simple, usually "a photo of a [class name]". The semantic segmenter trained on these annotations can segment multiple objects to some extent. Our framework, on the other hand, employs more complex text prompts where multiple objects can coexist and interact, making it more suitable for the semantic segmentation task in real-world images. 
## 3 Dataset Diffusion **Problem setting:** Our objective is to generate a synthetic dataset \(\mathcal{D}=\left(I_{i},S_{i}\right)_{i=1}^{N}\), consisting of high-fidelity images \(\mathcal{I}\) and pixel-level semantic masks \(\mathcal{S}\). These images and masks capture both the semantic and location information of the target classes \(\mathcal{C}=\left\{c_{1},c_{2},...,c_{K}\right\}\), where \(K\) represents the number of classes. The purpose of constructing this dataset is to train a semantic segmenter \(\Phi\) without relying on human annotation. In our approach, we follow a three-step process. Firstly, we prepare relevant text prompts \(\mathcal{P}\) containing the target classes (Sec. 3.1). Secondly, using Stable Diffusion (SD) as our model, we generate images \(\mathcal{I}_{i}\in\mathbb{R}^{H\times W\times 3}\) and their corresponding semantic segmentations \(\mathcal{S}_{i}\in\left\{0,\dots,K\right\}^{H\times W}\), where \(0\) represents the background class (Sec. 3.2). These images and segmentations form the synthetic dataset \(\mathcal{D}\). Lastly, we train a semantic segmenter \(\Phi\) on \(\mathcal{D}\) and evaluate its performance on the test set of standard semantic segmentation datasets (Sec. 3.3). It is worth noting that our approach primarily focuses on segmenting common objects in everyday scenes, where the SD model excels, rather than specialized domains like medical or aerial images. The overall framework is depicted in Fig. 2. ### Preparing Text Prompts for Stable Diffusion To prepare prompts containing a given list of classes for SD, one option is to utilize a large language model (LLM) such as ChatGPT [48] to generate the sentences, similar to the method described in [9]. This approach can be valuable in real-world applications. However, for evaluating the quality of the synthetic dataset, we need to rely on standard datasets for semantic segmentation like PASCAL VOC [10] or COCO [11] to create standardized benchmarks. In this regard, we propose using the provided or generated captions of the training images in these datasets as the text prompts for SD. This is solely for the purpose of standard benchmarking where the text prompts are fixed, and we do not utilize real images or image-label associations in our synthetic dataset generation. We call these new benchmarks as synth-VOC and synth-COCO. When using the COCO dataset, we can rely on the provided captions to describe the training images. However, in the case of the PASCAL VOC dataset, which lacks captions, we employ a state-of-the-art image captioner like BLIP [49] to generate captions for each image. However, we encountered several issues with the provided or generated captions. Firstly, the text prompts may not use the exact terms as the target class names \(\mathcal{C}\) provided in the dataset. For instance, terms like "man" and "woman" may be used instead of "person", or "bike" instead of "bicycle", resulting in a mismatch with the target classes. Secondly, many captions do not contain all the classes that are actually present in the images (as illustrated in Fig. 3). This leads to a shortage of text prompts for certain classes, affecting the generation process for those particular classes. To address the issues, we propose a method that leverages the class labels provided by the datasets. 
We append the provided (or generated) captions \(\mathcal{P}_{i}\) with the class labels, creating new text prompts \(\mathcal{P^{\prime}}_{i}\) that explicitly incorporate all the target classes \(\mathcal{C}_{i}=[c_{1};\ldots;c_{M}]\), where \(M\) is the number of classes in image \(i\). This is achieved through the text appending operation or _class-prompt appending_ technique: \(\mathcal{P^{\prime}}_{i}=[\mathcal{P}_{i};\mathcal{C}_{i}]\). For example, in the case of the left image in Fig. 3, the final text prompt would be "a photograph of a kitchen inside a house; bottle microwave sink refrigerator". This ensures that the new text prompts encompass all the target classes, addressing the issue of mismatched or missing class names in the captions. ### Generating Segmentation from Self and Cross-attention Maps We build our segmentation generator on Stable Diffusion (SD) by leveraging its self and cross-attention layers. Given a text prompt \(\mathcal{P^{\prime}}\) first encoded by a text encoder into a text embedding \(e\in\mathbb{R}^{\Lambda\times d_{e}}\) with text length \(\Lambda\) and number of dimensions \(d_{e}\), SD seeks to output the final latent state \(z_{0}\in\mathbb{R}^{H\times W\times d_{z}}\), where \(H,W,d_{z}\) are the height, width, and number of channels of \(z_{0}\), reflecting the content encoded in \(e\) from the initial latent state \(z_{T}\sim\mathcal{N}(0,I)\) after \(T\) denoising steps. Figure 2: **Three stages of Dataset Diffusion**. In the first stage, the target classes are provided, and text prompts are generated using language models such as ChatGPT [48]. Real captions (for COCO) or image-based captions (for VOC) can also be used for prompt generation to ensure standard evaluation. The text prompts are then augmented with the target class labels to avoid missing objects. In the second stage, given the augmented text prompt, a frozen Stable Diffusion [5] is employed to generate an image and its self- and cross-attention maps. The cross-attention map for each target class is refined using the self-attention map to match the object’s shape. Finally, the generated images and corresponding semantic segmentations are used to train a semantic segmenter with uncertainty-aware loss and the self-training technique. At each denoising step \(t\), a UNet architecture with \(L\) layers of self and cross-attention is used to transform \(z_{t}\) to \(z_{t-1}\). In particular, at layer \(l\) and time step \(t\), the self-attention layer captures the pairwise similarity between positions within a latent state \(z_{t}^{l}\) in order to enhance the local feature with the global context in \(z_{t}^{l+1}\). In the meantime, the cross-attention layer models the relationship between each position of the latent state \(z_{t}^{l}\) and each token of the text embedding \(e\) so that \(z_{t}^{l+1}\) can express more of the content encoded in \(e\). Formally, the self-attention map \(\mathcal{A}_{S}^{l,t}\in[0,1]^{HW\times HW}\) and cross-attention map \(\mathcal{A}_{C}^{l,t}\in[0,1]^{HW\times\Lambda}\) at layer \(l\) and time step \(t\) are computed as follows: \[\mathcal{A}_{S}^{l,t}=\text{Softmax}\left(\frac{Q_{z}K_{z}^{\top}}{\sqrt{d_{l} }}\right),\qquad\qquad\mathcal{A}_{C}^{l,t}=\text{Softmax}\left(\frac{Q_{z}K_ {e}^{\top}}{\sqrt{d_{l}}}\right), \tag{1}\] where \(Q_{z},K_{z},K_{e}\) are the query of \(z\), key of \(z\), and key of \(e\), respectively, obtained by linear projections and taken as inputs to the attention mechanisms, and \(d_{l}\) is the number of features at layer \(l\). 
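As a concrete reading of Eq. (1), the snippet below computes the per-layer self- and cross-attention maps from a flattened latent and the text embedding with plain PyTorch. The projection matrices and tensor sizes are illustrative placeholders; in practice these maps are read out of the frozen SD UNet's attention blocks rather than recomputed from scratch.

```python
import torch
import torch.nn.functional as F

def attention_maps(z, e, w_q, w_kz, w_ke):
    """Return A_S (HW x HW) and A_C (HW x Lambda) for one layer, as in Eq. (1).

    z:   (HW, d_z)      flattened latent state at this layer
    e:   (Lambda, d_e)  text embedding
    w_*: projection matrices mapping queries/keys to a common dimension d_l
    """
    q_z = z @ w_q                     # queries from the latent
    k_z = z @ w_kz                    # keys from the latent (self-attention)
    k_e = e @ w_ke                    # keys from the text   (cross-attention)
    d_l = q_z.shape[-1]
    a_self = F.softmax(q_z @ k_z.T / d_l ** 0.5, dim=-1)    # (HW, HW)
    a_cross = F.softmax(q_z @ k_e.T / d_l ** 0.5, dim=-1)   # (HW, Lambda)
    return a_self, a_cross

# Toy sizes: a 16x16 latent (HW = 256), 8 text tokens, projection dimension 64.
HW, Lam, d_z, d_e, d_l = 256, 8, 320, 1024, 64
z, e = torch.randn(HW, d_z), torch.randn(Lam, d_e)
a_self, a_cross = attention_maps(z, e, torch.randn(d_z, d_l), torch.randn(d_z, d_l), torch.randn(d_e, d_l))
print(a_self.shape, a_cross.shape)    # torch.Size([256, 256]) torch.Size([256, 8])
```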
Since we only want to obtain the cross-attention map of the class labels \(C_{i}\) of image \(i\) for semantic segmentation, we introduce _class-prompt cross-attention_, which is similar to the cross-attention in Eq. (1) but produced by only taking the softmax over the class name part \(C_{i}\) rather than the entire text prompt \(\mathcal{P^{\prime}}_{i}\). In practice, we form a new text prompt \(\hat{\mathcal{P}}_{i}=C_{i}\) just for the purpose of extracting the cross-attention maps while the original text prompt \(\mathcal{P^{\prime}}_{i}\) for generating images remains unchanged. After this, we obtain \(\mathcal{A}_{C}^{l,t}\in[0,1]^{HW\times M}\), where \(M\) is the number of classes in the image. With the observation that using different ranges of timesteps only affects the final result marginally (provided in Supp.), we average these cross- and self-attention maps over layers and timesteps: \[\mathcal{A}_{S}=\frac{1}{L\times T}\sum_{l=1}^{L}\sum_{t=0}^{T}\mathcal{A}_{S} ^{l,t},\qquad\quad\mathcal{A}_{C}=\frac{1}{L\times T}\sum_{l=1}^{L}\sum_{t=0} ^{T}\mathcal{A}_{C}^{l,t}. \tag{2}\] Although the cross-attention maps \(\mathcal{A}_{C}\) already exhibit the location of the target classes in the image, they are still coarse-grained and noisy, as illustrated in Fig. 4. Thus, we propose to use the self-attention map \(\mathcal{A}_{S}\) (as illustrated in Fig. 6 - Left) to enhance \(\mathcal{A}_{C}\) for a more precise object location. This is because the self-attention maps, capturing the pairwise correlations among positions within the latent \(z_{t}\), can help propagate the initial cross-attention maps to highly similar positions, e.g., non-salient parts of the object, thereby enhancing their quality. Therefore, we propose _self-attention exponentiation_, where the self-attention map \(\mathcal{A}_{S}\) is raised to the power \(\tau\) before being multiplied with the cross-attention map \(\mathcal{A}_{C}\): \[\mathcal{A}_{C}^{\star}=(\mathcal{A}_{S})^{\tau}\cdot\mathcal{A}_{C},\qquad \qquad\mathcal{A}_{C}^{\star}\in[0,1]^{HW\times M}. \tag{3}\] Figure 4: Given a text prompt “A bike is parked in a room; bicycle”, we obtain the generated image, the cross-attention map, the cross-attention map enhanced by the self-attention with \(\tau=\{1,2,4\}\) as described in Eq. (3), and the mask with uncertainty values (white region) by Eq. (4) and Eq. (5). Figure 3: Common issues of using provided (or generated) captions. Red classes are often missing from the captions, resulting in a lack of text prompts for those classes. Blue classes may have different terms used in the captions, causing a discrepancy between the target class names and the text prompts. Next, we aim to identify two matrices: \(\mathcal{V}\in[0,1]^{H\times W}\) representing the objectness value at each location (the higher the objectness, the more likely that location contains an object), and \(\mathcal{S}\in\{1,\dots,M\}^{H\times W}\) indicating which object among the class labels \(C_{i}\) each location could belong to. To obtain those, we perform the pixel-wise \(\operatorname*{arg\,max}\) and \(\max\) operators (over the category \(M\) dimension): \[\mathcal{S}=\operatorname*{arg\,max}_{m}\mathcal{A}_{C}^{*,m},\qquad\mathcal{V}=\max_{m}\mathcal{A}_{C}^{*,m}. \tag{4}\] At a location \(x\) in the map \(\mathcal{V}\), if its value is less than a threshold, one can set its label to the background class \(0\). However, we find that using a fixed threshold does not work for all images. 
Instead, we use a lower threshold \(\alpha\) for certain background decisions and a higher threshold \(\beta\) for certain foreground decisions. Any value that falls inside the range \((\alpha,\beta)\) expresses an uncertain mask prediction with value \(U=255\). That is, the final mask \(\bar{\mathcal{S}}\) is illustrated in the last image of Fig. 4 and calculated as: \[\bar{\mathcal{S}}_{x}=\begin{cases}0&\text{if }\mathcal{V}_{x}\leq\alpha,\\ U&\text{if }\alpha<\mathcal{V}_{x}<\beta,\\ \mathcal{S}_{x}&\text{otherwise}.\end{cases} \tag{5}\] ### Training Semantic Segmenter on Generated Segmentation Given the synthetic images \(\mathcal{I}\) and semantic segmentation masks \(\bar{\mathcal{S}}\), we train a semantic segmenter \(\Phi\) with an uncertainty-aware cross-entropy loss. Specifically, for pixels marked as uncertain, we ignore the loss from those pixels: \(\mathcal{L}=\sum_{x}\mathds{1}(\bar{\mathcal{S}}_{x}\neq U)\mathcal{L}_{\text {CE}}(\hat{\mathcal{S}}_{x},\bar{\mathcal{S}}_{x})\), where \(\mathds{1}\) is the indicator function, \(\mathcal{L}_{\text{CE}}\) is the cross-entropy loss, and \(\hat{\mathcal{S}}=\Phi(\mathcal{I})\) is the predicted segmentation from the generated image \(\mathcal{I}\). We further enhance the segmentation mask \(\bar{\mathcal{S}}\) by the self-training technique [50]. That is, after being trained with \(\bar{\mathcal{S}}\), the segmenter \(\Phi\) makes its own prediction on \(\mathcal{I}\) as pseudo labels \(\mathcal{S}^{*}\) without the uncertainty value \(U\). The final semantic segmenter \(\Phi^{*}\) is then the segmenter \(\Phi\) trained again on \(\mathcal{S}^{*}\). ## 4 Experiments **Datasets:** We evaluate our Dataset Diffusion on two datasets: PASCAL VOC 2012 [10] and COCO 2017 [11]. The PASCAL VOC 2012 dataset has 20 object classes and 1 background class. For standard semantic segmentation evaluation, this dataset is usually augmented with the SBD dataset [51] to have a total of \(12,046\) training, \(1,449\) validation, and \(1,456\) test images. We additionally augment the training images with captions generated from BLIP [49]. The COCO 2017 dataset contains 80 object classes and 1 background class with \(118,288\) training and \(5K\) validation images, along with provided captions for each image. It is worth noting that we only use the image-level class annotation to form the text prompts as described in Sec. 3.1. We introduce the set of our prepared text prompts along with the validation set of each dataset as synth-VOC and synth-COCO - the two benchmarks for evaluation of semantic segmentation dataset synthesis. To create a balanced synthetic dataset among classes, we generate \(2k\) images per object class for PASCAL VOC, resulting in a total of \(40k\) image-mask pairs, and about \(1k\) images per object class for COCO, resulting in a total of \(80k\) image-mask pairs. If the number of text prompts associated with a certain class is insufficient, we use more random seeds to generate more images. **Evaluation metric:** We evaluate the performance of Dataset Diffusion using the mean Intersection over Union (mIoU) metric. The mIoU (%) score measures the overlap between the predicted segmentation masks and the ground-truth masks for each class and takes the average across all classes. **Implementation details:** We build our framework on the PyTorch deep learning framework [52] and Stable Diffusion [5] version 2.1-base with \(T=100\) timesteps. We construct the masks using the optimal values for \(\tau\), \(\alpha\), and \(\beta\), which are defined in Sec. 6.2. 
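To make the mask construction of Eqs. (2)-(5) and the uncertainty-aware loss concrete, the following sketch shows one plausible tensor-level implementation in PyTorch. The shapes, the per-class rescaling of the refined map, and the toy inputs are assumptions made for illustration; the attention maps are taken to be already averaged over layers and timesteps as in Eq. (2).

```python
import torch
import torch.nn.functional as F

def build_mask(a_self, a_cross, tau=4, alpha=0.5, beta=0.6, ignore_val=255):
    """Refine cross-attention with self-attention (Eq. 3) and threshold it (Eqs. 4-5).

    a_self:  (HW, HW) averaged self-attention map
    a_cross: (HW, M)  averaged cross-attention map over the M class tokens
    Returns a (HW,) mask with labels in {0..M} and `ignore_val` at uncertain pixels.
    """
    a_refined = torch.matrix_power(a_self, tau) @ a_cross              # Eq. (3): (A_S)^tau . A_C
    a_refined = a_refined / a_refined.max(dim=0, keepdim=True).values  # rescale to [0, 1] per class (assumed)

    objectness, labels = a_refined.max(dim=1)                          # Eq. (4): V and S
    mask = labels + 1                                                  # reserve 0 for the background class
    mask[objectness <= alpha] = 0                                      # confident background, Eq. (5)
    mask[(objectness > alpha) & (objectness < beta)] = ignore_val      # uncertain region, U = 255
    return mask

def uncertainty_aware_loss(logits, target, ignore_val=255):
    """Cross-entropy that skips uncertain pixels, i.e. the 1[S_bar != U] weighting of the loss."""
    return F.cross_entropy(logits, target, ignore_index=ignore_val)

# Toy usage with random attention maps (HW = 32*32 latent positions, M = 3 classes).
HW, M = 32 * 32, 3
a_self = torch.rand(HW, HW); a_self = a_self / a_self.sum(dim=-1, keepdim=True)
a_cross = torch.rand(HW, M); a_cross = a_cross / a_cross.sum(dim=-1, keepdim=True)
mask = build_mask(a_self, a_cross).reshape(32, 32)

logits = torch.randn(1, M + 1, 32, 32, requires_grad=True)             # segmenter output over M+1 classes
loss = uncertainty_aware_loss(logits, mask.unsqueeze(0))
loss.backward()
```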
Regarding the semantic segmenter, we employ the DeepLabV3 [15] and Mask2Former [24] segmenters implemented in the MMSegmentation framework [53]. We use the AdamW optimizer with a learning rate of \(1e^{-4}\) and weight decay of \(1e^{-4}\). For other hyper-parameters, we follow the standard settings in MMSegmentation. ### Main Results **Quantitative results:** Tab. 1 compares the results of DeepLabV3 [15] and Mask2Former [24] trained on the real training set, a synthetic dataset of DiffuMask [8], and the synthetic dataset of Dataset Diffusion. On VOC, our approach yields satisfactory results of \(64.8\) mIoU when compared to the real training set of \(79.9\) mIoU. Further, ours outperforms DiffuMask by a large margin of 4.2 mIoU using the same ResNet50 backbone. The detailed IoU of each class is reported in the Supp. Also, Dataset Diffusion achieves a promising result of \(34.2\) mIoU compared to \(54.9\) mIoU of the real COCO training set. These results demonstrate the effectiveness of Dataset Diffusion, although the gaps with the real datasets are still substantial, i.e., \(15\) mIoU in VOC and \(20\) mIoU in COCO. This is due to the fact that the image content of COCO is more complex than that of VOC, reducing the ability of Stable Diffusion to produce images with the same level of complexity. We discuss this further in Sec. 5. **Qualitative results** on the validation set of VOC are shown in Fig. 5. In Fig. 5(a), the synthetic images and their corresponding masks are utilized for training the semantic segmenter. The first two rows (1, 2) serve as excellent examples of successful segmentation, while the last two rows (3, 4) demonstrate failure cases. In certain instances, the self-training technique proves effective in rectifying mis-segmented objects (as seen in rows 2 and 3). However, it can also adversely impact the original masks when dealing with objects of small size (as observed in row 4). In Fig. 5(b), our predicted segmentation results on the validation set of VOC exhibit varying outcomes. The first three rows exhibit satisfactory results, with the predicted masks closely aligning with the ground truth. Conversely, the last three rows illustrate failure cases resulting from multiple small objects (row 4) and the presence of intertwined objects (rows 5 and 6). ### Ablation Study We conduct all ablation study experiments on the text prompts described in Sec. 3.1. Additionally, we report the results with 20k images using the initial mask generated by Dataset Diffusion, without using the self-training technique or test-time augmentation unless indicated in each experiment. **Effect of text prompt selection**. Tab. 2 compares different text prompt selection methods. Our _class-prompt appending_ technique outperforms the text prompts using captions or class labels only. Specifically, the _class-prompt appending_ technique increases the performance by \(11.2\) and \(4.6\) mIoU over the "caption-only" and "class-label-only" text prompts, respectively. _Class-prompt appending_ also outperforms the simple text prompts by \(7.3\) mIoU. These results indicate that our text prompt selection method can help SD generate datasets with both diversity and accurate attention. **Effects of different components** of stage 2 and stage 3 in Fig. 2 on the overall performance are summarized in Tab. 3. 
Using only cross-attention results in a low performance of \(44.8\) mIoU as the \begin{table} \begin{tabular}{l l l c c c} \hline \hline \multirow{2}{*}{**Segmenter**} & \multirow{2}{*}{**Backbone**} & \multicolumn{2}{c}{**VOC dataset**} & \multicolumn{2}{c}{**COCO dataset**} \\ \cline{3-6} & & **Training set** & **Val** & **Test** & **Training set** & **Val** \\ \hline DeepLabV3 & ResNet50 & VOC’s training & 77.4 & 75.2 & \multirow{2}{*}{COCO’s training} & 48.9 \\ DeepLabV3 & ResNet101 & (\(11.5k\) images) & 79.9 & 79.8 & \multirow{2}{*}{54.9} \\ Mask2Former & ResNet50 & & 77.3 & 77.2 & \multirow{2}{*}{57.8} \\ \hline Mask2Former & ResNet50 & DiffuMask [8] & & & & \\ & (\(60k\) images) & & & & \\ \hline DeepLabV3 & ResNet50 & & 61.6 & 59.0 & \multirow{2}{*}{Dataset Diffusion} & 32.4 \\ DeepLabV3 & ResNet101 & (\(40k\) images) & 64.8 & 64.6 & (\(80k\) images) & 34.2 \\ Mask2Former & ResNet50 & & 60.2 & 60.5 & 31.0 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison in mIoU between training DeepLabV3 [15] and Mask2Former [24] on the real training set, the synthetic dataset of DiffuMask [8], and the synthetic dataset of Dataset Diffusion. \begin{table} \begin{tabular}{l l c} \hline \hline **Method** & **Example** & **mIoU (\%)** \\ \hline 1: Simple text prompts & a photo of an aeroplane & 54.7 \\ 2: Captions only & a large white airplane sitting on top of a boat & 50.8 \\ 3: Class labels only & aeroplane boat & 57.4 \\ 4: Simple text prompts + class labels & a photo of an aeroplane; aeroplane boat & 57.6 \\ 5: Caption + class labels & a large white plane sitting on top of a boat; aeroplane boat & **62.0** \\ \hline \hline \end{tabular} \end{table} Table 2: Performance of different text prompt selections. Red: class names, blue: similar terms. cross-attention map is coarse and inaccurate (as illustrated in Fig. 4). Using self-attention refinement boosts the performance significantly to \(61.0\) mIoU. Also, using other techniques like uncertainty-aware loss, self-training, and test time augmentation help improve performance incrementally. **Effect of different feature scales** used for aggregating self-attention and cross-attention maps is shown in Tab.4. As can be seen, for the cross-attention map, choosing too small and too large feature scales both hurt the performance since the former lacks details while the latter focuses on fine details instead of object shape. For the self-attention map, using the scale of 32 gives slightly better results. **Hyper-parameters selection for mask generation (Sec. 3.2).** We conduct sensitivity analysis on \(\tau\), \(\alpha\), and \(\beta\) to determine the optimal values in Tab. 5. Tab. (a)a shows the results of choosing \(\tau\) (with fixed \(\alpha=0.5,\beta=0.6\)) with the best result with \(\tau=4\). A too-large value of \(\tau=5\) decreases the performance as the refined cross-attention map tends to spread out the whole image rather than the object only. Additionally, Tab. (b)b exhibits the analysis on the \((\alpha,\beta)\) range given the fixed \(\tau=4\), the range of \((0.5-0.6)\) achieves the best performance of \(62.0\) mIoU. Figure 5: **(a)** Row 1 (R1) and R2 are successful cases, while R3 and R4 demonstrate failures. Self-training helps correct mis-segmented objects in some cases (R2 and R3) but can harm the original mask for small objects (R4). **(b)** R1 to R3 show accurate results, closely matching the GT. R4 to R6 reveal failure cases due to numerous small objects (R4) and intertwined objects (R5 and R6). 
\begin{table} \begin{tabular}{c c c c c c} \hline \hline **Cross-attention** & **Self-attention** & **Uncertainty** & **Self-training** & **TTA** & **mIoU (\%)** \\ \hline ✓ & & & & & 44.8 \\ ✓ & ✓ & & & & 61.0 \\ ✓ & ✓ & ✓ & & & 62.0 \\ ✓ & ✓ & ✓ & ✓ & & 62.7 \\ ✓ & ✓ & ✓ & ✓ & ✓ & **64.3** \\ \hline \hline \end{tabular} \end{table} Table 3: Impact of cross-attention, self-attention, uncertainty, self-training, and test time augmentation (TTA) (refer to Sec. 3.2, Sec. 3.3). TTA includes multi-scale and input flipping at test time. ## 5 Discussion and Conclusion **Limitations:** While our method is effective for generating synthetic datasets, there are some limitations to consider. Our primary reliance on Stable Diffusion [5] for image generation can result in difficulties with producing complex scenes. _First_, when given a text prompt that involves three or more objects, the diffusion model may only produce an image depicting two or three objects as exemplified in Fig. 6 - Right. However, there is ongoing research to improve the quality of the diffusion model and to incorporate stronger guidance, such as layout or box conditions, which shows promise in addressing this issue. _Second_, it is worth noting that in some cases, our Dataset Diffusion may not produce high-quality segmentation masks when objects are closely intertwined, as seen in Fig.4(a) with the example of a man riding a horse. _Third_, the bias in the LAION-5B dataset, on which Stable Diffusion was trained, may be transferred to the generated dataset. This is the current limitation of Stable Diffusion as it was trained on a large-scale uncurated dataset like LAION-5B. However, there are several studies addressing the bias problem in generative models [54, 55, 56] focusing on enhancing fairness and reducing biases in generative models. We believe that these studies and future work on the topic of fairness in GenAI will help to mitigate the bias in the generated images. **Conclusion:** We have presented our novel framework - Dataset Diffusion - which enables the generation of synthetic semantic segmentation datasets. By leveraging Stable Diffusion, Dataset Diffusion can produce high-quality semantic segmentation and visually realistic images from specified object classes. Throughout our experiments, we have demonstrated the superiority of Dataset Diffusion over the concurrent method, DiffuMask, achieving an impressive mIoU of \(64.8\) in VOC and \(34.2\) in COCO. This remarkable advancement paves the way for future research endeavors focused on the creation of large-scale datasets with precise annotations using generative models. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{3}{c}{**Self-attention**} \\ \cline{2-7} **Cross-attention** & 32 & 64 & & & & \\ \hline 8 & 39.7 & 38.1 & & & & \\ 16 & **62.0** & 59.6 & & & & \\ 32 & 52.8 & 50.9 & & & & \\ 64 & 35.4 & 31.5 & & & & \\ 16, 32 & 59.7 & 57.3 & & & & \\ 16, 32, 64 & 59.1 & 57.2 & & & & \\ \hline \hline \end{tabular} \end{table} Table 4: Study on different feature scales Figure 6: **Left: Correlation maps at some positions with others, extracted from a self-attention map. Right: Failure cases of SD when generating images with multiple objects. Red: classes are missed.**
2309.04676
Flexible and Robust Counterfactual Explanations with Minimal Satisfiable Perturbations
Counterfactual explanations (CFEs) exemplify how to minimally modify a feature vector to achieve a different prediction for an instance. CFEs can enhance informational fairness and trustworthiness, and provide suggestions for users who receive adverse predictions. However, recent research has shown that multiple CFEs can be offered for the same instance or instances with slight differences. Multiple CFEs provide flexible choices and cover diverse desiderata for user selection. However, individual fairness and model reliability will be damaged if unstable CFEs with different costs are returned. Existing methods fail to exploit flexibility and address the concerns of non-robustness simultaneously. To address these issues, we propose a conceptually simple yet effective solution named Counterfactual Explanations with Minimal Satisfiable Perturbations (CEMSP). Specifically, CEMSP constrains changing values of abnormal features with the help of their semantically meaningful normal ranges. For efficiency, we model the problem as a Boolean satisfiability problem to modify as few features as possible. Additionally, CEMSP is a general framework and can easily accommodate more practical requirements, e.g., causality and actionability. Compared to existing methods, we conduct comprehensive experiments on both synthetic and real-world datasets to demonstrate that our method provides more robust explanations while preserving flexibility.
Yongjie Wang, Hangwei Qian, Yongjie Liu, Wei Guo, Chunyan Miao
2023-09-09T04:05:56Z
http://arxiv.org/abs/2309.04676v1
# Flexible and Robust Counterfactual Explanations with Minimal Satisfiable Perturbations ###### Abstract. Counterfactual explanations (CFEs) exemplify how to minimally modify a feature vector to achieve a different prediction for an instance. CFEs can enhance informational fairness and trustworthiness, and provide suggestions for users who receive adverse predictions. However, recent research has shown that multiple CFEs can be offered for the same instance or instances with slight differences. Multiple CFEs provide flexible choices and cover diverse desiderata for user selection. However, individual fairness and model reliability will be damaged if unstable CFEs with different costs are returned. Existing methods fail to exploit flexibility and address the concerns of non-robustness simultaneously. To address these issues, we propose a conceptually simple yet effective solution named _Counterfactual Explanations with Minimal Satisfiable Perturbations (CEMSP)_. Specifically, CEMSP constrains changing values of abnormal features with the help of their semantically meaningful normal ranges. For efficiency, we model the problem as a Boolean satisfiability problem to modify as few features as possible. Additionally, CEMSP is a general framework and can easily accommodate more practical requirements, e.g., causality and actionability. Compared to existing methods, we conduct comprehensive experiments on both synthetic and real-world datasets to demonstrate that our method provides more robust explanations while preserving flexibility. Counterfactual explanations, multiplicity, normal ranges, flexibility, robustness
Users with the same feature values or seemingly inconsequential differences may receive inconsistent CFEs (e.g., two different diverse sets), as the CFE method itself does not store historical CFEs and does not guarantee optimal solutions either. Such inconsistency inevitably raises fairness issues (Bartos et al., 2015; Krizhevsky et al., 2016) and undermines users' trust (Krizhevsky et al., 2016) in CFEs. For example, two financially similar individuals are rejected when they apply for a loan. Yet, the CFEs for the two people are quite different: one needs to update the salary slightly while the other is required to get a higher education degree and a better job. Another negative example is when users make some efforts towards previous CFEs but receive a significantly different CFE, rendering their previous efforts futile. Therefore, it is crucial to take advantage of the flexibility of multiple CFEs and maintain consistency for users having the same feature values or slight differences. Recent research (Bartos et al., 2015; Krizhevsky et al., 2016; Krizhevsky et al., 2016; Krizhevsky et al., 2016) on robustness mainly studies CFEs with consistent predictions under slight model updates (by restricting CFEs to preserve causal constraints, or follow the data distribution, etc.), rather than generating CFEs with consistent feature values. Therefore, these studies fail to address fairness concerns and ignore the freedom of user selection. Generally, the models to be explained are highly complex and non-convex, e.g., DNN models. 
Even constrained by consistent prediction, heuristic search strategies (Krizhevsky et al., 2016; Krizhevsky et al., 2016; Krizhevsky et al., 2016) can still converge to different non-optimal solutions due to the huge search space. Meanwhile, these works do not explicitly exploit flexibility to meet user preferences. As the number of possible CFEs can be huge, existing methods on flexibility or robustness can be viewed as different selection strategies over a solution pool (without optimizing diversity and robustness in advance), i.e., selecting CFEs that are diverse (Krizhevsky et al., 2016), follow the data distribution (Krizhevsky et al., 2016), or satisfy causal constraints (Krizhevsky et al., 2016). Motivated by this, we aim to design a novel method that obtains a diverse and robust set of CFEs simultaneously.

To overcome the above limitations, we propose to incorporate task priors (normal ranges, a.k.a. reference intervals) to stabilize valid search regions, while ensuring that counterfactual explanations (CFEs) are diverse enough to meet various user requirements. It should be noted that robustness measures the differences between two sets of CFEs in different trials, while diversity measures the inherent discrepancy within a set of CFEs. Normal ranges in our approach commonly exist in broad domains and are easy to obtain from prior knowledge. For example, the normal range of heart rate is between 60 and 100 beats per minute; for Ph.D. admissions, the IELTS score should be greater than 6.5 and the minimal GPA is 3.5. We assume that the undesired prediction results from certain features lying outside of their normal ranges and, thus, we attempt to move abnormal features into normal ranges to generate CFEs with the desired prediction. Specifically, we replace an abnormal feature with the closest endpoint of its normal range. As the endpoints are stationary, CFEs after feature replacement tend to have the same feature values in different trials for the same/similar input. In practice, it may be unnecessary to move all abnormal features into normal ranges for the desired prediction. Therefore, we aim to select minimal subsets of abnormal features to replace, where each subset corresponds to a CFE. CFEs determined by all minimal subsets are diverse, as an arbitrary minimal subset is not contained by another subset.

As mentioned earlier, the problem of finding CFEs boils down to selecting minimal subsets of abnormal features to replace, which can be formulated as either the Maximally Satisfiable Subsets (MSS) or Minimal Unsatisfiable Subsets (MUS) problem (Krizhevsky et al., 2016; Krizhevsky et al., 2016). However, finding all minimal sets for satisfiable CFEs is an NP-Complete problem, as an exponential number of subsets has to be checked. To enhance efficiency, we convert the enumeration of minimal subsets to the Boolean satisfiability problem (SAT), which finds satisfiable Boolean assignments over a series of Boolean logic formulas and can be solved with efficient modern solvers. As for commonly mentioned constraints (e.g., actionability, correlation, and causality), we can conveniently write them as Boolean logic formulas, which can be conjoined with the existing clauses in conjunctive normal form (CNF). Therefore, our framework is flexible enough to provide feasible counterfactual recommendations. The main contributions of this paper are summarized as follows.
* We reformulate the counterfactual explanation problem to satisfy both flexibility and robustness by replacing a minimal subset of abnormal features with the closest endpoints of their normal ranges.
* We convert this problem into checking the satisfiability of a set of Boolean logic formulas under a Boolean assignment, which can be solved by modern SAT solvers efficiently. In addition, common constraints can be easily incorporated into the current Boolean logic formulas and solved together.
* We conduct intensive experiments on both synthetic and real-world datasets to demonstrate that our approach produces more consistent and diverse CFEs than state-of-the-art methods.

## 2. Related Work

Counterfactual explanations (Krizhevsky et al., 2016) refer to perturbed instances with the minimum cost that result in a different prediction from a pre-trained model given an input instance. These explanations provide ways to comprehend the model's prediction logic and offer advice to users receiving adverse predictions. Most existing algorithms focus on modeling practical requirements and user preferences with proper constraints. Typical constraints include actionability (Krizhevsky et al., 2016), which freezes immutable features such as race, gender, etc.; plausibility (Krizhevsky et al., 2016; Krizhevsky et al., 2016), which requires CFEs to follow the data distribution; diversity (Krizhevsky et al., 2016; Krizhevsky et al., 2016), which generates a diverse set of explanations at a time; sparsity (Krizhevsky et al., 2016), which favors changing fewer features; and causality (Krizhevsky et al., 2016; Krizhevsky et al., 2016), which restricts CFEs to meet specific causal relations. However, recent studies (Krizhevsky et al., 2016; Krizhevsky et al., 2016; Krizhevsky et al., 2016; Krizhevsky et al., 2016; Krizhevsky et al., 2016) have revealed that there often exist multiple CFEs with equivalent performance but different feature values for an input or seemingly identical inputs. Next, we review research that takes advantage of counterfactual multiplicity and addresses concerns regarding non-robustness.

Multiple CFEs provide users with more flexibility to prioritize their preferences without compromising the validity and proximity of CFEs. When a single CFE is inadequate to meet users' requirements, employing a diverse set of CFEs is an effective and straightforward strategy to overcome this limitation. For example, Wachter et al. (Wachter et al., 2016) generate a diverse set by running multiple times with different initializations; Russell (Russell, 2017) prohibits the transition to previous CFEs in each run; while Mothilal et al. (Mothilal et al., 2016) add a DPP (Determinantal Point Processes) term to ensure that the CFEs are far apart from each other. In addition, multiple CFEs also enable researchers to develop Human-Computer Interaction (HCI) tools for interactively satisfying user requirements (Hernandez et al., 2017; Kern et al., 2018). However, such a diverse set can be inconsistent for two inputs with no or slight differences. In our paper, we aim to generate a consistent and diverse set of CFEs for an input or nearly identical inputs, to enhance the robustness and reliability of CFEs.

The non-robustness issue of CFEs has garnered significant attention recently. As introduced in (Sack et al., 2017; Kern et al., 2018), even a slight perturbation to the input can result in drastically different CFEs. To verify this phenomenon for neural network models, Slack et al.
(Slack et al., 2017) train an adversarial model that is sensitive to trivial input changes. Some relevant works (Hernandez et al., 2017; Kern et al., 2018; Kern et al., 2018; Kern et al., 2018) propose to generate CFEs that yield consistent predictions when the model is retrained. For example, (Kern et al., 2018) proves that adhering to the data manifold ensures stable predictions for CFEs; (Sack et al., 2017) incorporates adversarial training to produce robust models for generating explanations; Black et al. (Black et al., 2018) state that closeness to the data manifold is insufficient to indicate counterfactual stability, and they propose Stable Neighbor Search (SNS) to find an explanation with a lower model Lipschitz constant and higher confidence. However, constraining CFEs to have consistent predictions does not necessarily ensure CFEs with the same or similar feature values, and still fails to address the unfairness issue. Moreover, these robustness methods lack the flexibility to meet user requirements, while our work considers both flexibility and robustness simultaneously.

## 3. Preliminary

### Counterfactual Explanations

Let us consider a pretrained model \(f:\mathcal{X}\rightarrow\mathcal{Y}\), where \(\mathcal{X}\subseteq\mathbb{R}^{d}\) denotes the feature space and \(\mathcal{Y}\) is the prediction space. For simplicity, let \(\mathcal{Y}=\{0,1\}\), where \(0/1\) denotes the unfavorable/favorable prediction, respectively. Given an input instance \(\mathbf{x}\in\mathcal{X}\), which is predicted to be the unfavorable outcome (\(f(\mathbf{x})=0\)), a counterfactual explanation (CFE) \(\mathbf{c}\) is a data point that leads to a favorable prediction, i.e., \(f(\mathbf{c})=1\), with minimal perturbations of \(\mathbf{x}\). Formally, a counterfactual explanation method \(g:f\times\mathcal{X}\rightarrow\mathcal{X}\) can be mathematically defined as follows:

\[\operatorname*{arg\,min}_{\mathbf{c}}\ \operatorname{cost}(\mathbf{x},\mathbf{c})\quad s.t.\quad f(\mathbf{c})=1 \tag{1}\]

where \(\operatorname{cost}(\cdot,\cdot):\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{R}^{+}\) is a distance or cost metric that quantifies the efforts required in order to change from an input \(\mathbf{x}\) to its CFE \(\mathbf{c}\). In practice, commonly used cost functions include \(L_{1}/\text{MAD}\) (Sack et al., 2017; Kern et al., 2018), the total log-percentile shift (Sack et al., 2017), and the \(L_{2}\) norm on a latent space (Kern et al., 2018). To optimize Eqn. (1), the problem can be further transformed into the Lagrangian form (Kern et al., 2018), as shown below:

\[\mathcal{L}(\mathbf{c},\lambda)=\operatorname{cost}(\mathbf{x},\mathbf{c})+\lambda\ell(f(\mathbf{c}),1) \tag{2}\]

where \(\ell(\cdot,\cdot)\) is a differentiable function measuring the gap between \(f(\mathbf{c})\) and the favorable prediction \(1\), and \(\lambda\) is a positive trade-off factor. By optimizing the above objective, a CFE method \(g(f,\mathbf{x})\) returns a single CFE or a set of CFEs for an input \(\mathbf{x}\). The definition in Eqn. (1) captures the most basic form of counterfactual explanations. Usually, additional constraints are required to ensure that the produced CFEs are useful and actionable for specific applications (Kern et al., 2018).
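For concreteness, the plain gradient-descent search implied by Eqn. (2) can be sketched on a toy differentiable model as follows. This is our own illustration (the logistic model, the \(L_{1}\) cost, and all names are assumptions rather than the procedure of any specific method discussed here); note how the result depends on a random initial point, which is one source of the non-robustness analyzed in the next subsections.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def plain_cfe(x, w, b, lam=1.0, lr=0.05, n_steps=500, seed=0):
    """Minimize the Lagrangian of Eqn. (2) for a toy logistic model
    f(c) = sigmoid(w.c + b), with cost(x, c) = ||c - x||_1 and validity
    loss -log f(c) pushing the prediction towards the favorable class."""
    rng = np.random.default_rng(seed)
    c = x + 0.1 * rng.standard_normal(x.shape)   # random initial point
    for _ in range(n_steps):
        p = sigmoid(w @ c + b)
        grad_valid = -(1.0 - p) * w              # gradient of -log f(c)
        grad_cost = np.sign(c - x)               # subgradient of the L1 cost
        c = c - lr * (grad_cost + lam * grad_valid)
    return c

# Toy usage: an instance predicted as the unfavorable class is pushed across
# the decision boundary; different seeds generally yield different CFEs.
w, b = np.array([1.0, 2.0, -1.0]), -0.5
x = np.array([-1.0, -0.5, 0.5])
c = plain_cfe(x, w, b)
print(sigmoid(w @ x + b), "->", sigmoid(w @ c + b))
```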
### Robustness of Counterfactual Explanations

Motivated by the formalization in (Brandt et al., 2018), we formally define the robustness of CFEs in more general cases that include slight input perturbations and model changes. However, before presenting technical details, one critical question that needs to be answered is "Do we want CFEs to remain consistent after a series of slight changes, or should they vary to reflect such changes?". The answer depends on practical scenarios. In certain applications, one may expect CFEs to be sensitive to such tiny changes. For example, in the study of the effects of climate change on sea turtles (Brock et al., 2018), one may expect CFEs to be sensitive to temperature changes. In this paper, we assume that such trivial changes are either irrelevant or less important to the generation of CFEs. As such, we aim to produce CFEs that are robust to trivial changes, such as inputs perturbed with random noise, or model retraining on new data from the same distribution.

Let \(\hat{\mathbf{x}}\) represent a slightly perturbed sample that is close to \(\mathbf{x}\), meaning \(\hat{\mathbf{x}}\sim p(\mathbf{x})\), where \(p(\mathbf{x})\) is the density estimation of perturbed samples that yield the same prediction as the input \(\mathbf{x}\). Similarly, let \(\hat{f}\in\mathcal{F}\) denote a retrained model belonging to the class \(\mathcal{F}\), which consists of potential models that perform equally well as the original one.

Definition 1 (Robustness of Counterfactual Explanations).: _Given a function \(d(\cdot,\cdot)\) computing the distance between two sets of CFEs, we quantify the robustness of the explanations \(g(f,\mathbf{x})\) by assessing the expected distance between the current set of CFEs and a new set of CFEs after potential input perturbations or model changes,_

\[\operatorname*{\mathbb{E}}_{\begin{subarray}{c}\hat{\mathbf{x}}\sim p(\mathbf{x}),\\ \hat{f}\in\mathcal{F}\end{subarray}}\left[d\big(g(f,\mathbf{x}),g(\hat{f},\hat{\mathbf{x}})\big)\right] \tag{3}\]

A lower value indicates higher robustness. By minimizing the above expectation, we can generate robust CFEs. However, in real life, \(p(\mathbf{x})\) and \(\mathcal{F}\) are typically unknown. Intuitively, they can be determined based on the specific changes that users desire to be robust against. For instance, one can consider adding Gaussian noise to the input, masking certain features, or retraining the models on data from the same distribution, to decide \(p(\mathbf{x})\) and \(\mathcal{F}\).

### Causes of Non-robustness

Here, we explain the root causes of non-robustness. The total loss in Eqn. (2) is usually non-convex due to the non-convex decision surface of probabilistic models and other constraints. As shown in Figure 1, multiple local minima can be found, but current methods often select a single CFE or \(k\) CFEs from them. Such selected CFEs can be different in each trial. Next, we discuss several influential factors that result in non-robust CFEs.

* Input perturbations. Input instances can be perturbed by adding some noise or masking random features. Due to the local sensitivity of large models, such trivial perturbations can significantly influence model predictions, leading to different counterfactual explanations (CFEs) (Slack et al., 2017).
* Model updates. The predictive model \(f\) in Eqn. (1) is typically retrained periodically in deployment. The updated model \(f^{\prime}\) may exhibit slightly different behavior compared to the previous model and thus may have a great impact on the cost of the desired prediction.
* Random factors. Heuristic search methods \(g(f,\mathbf{x})\) for Eq.
(2) often involve random factors, e.g., random initial points in gradient descent (Koren and Hinton, 1999), random samples in Growing Sphere (Kri

Proof.: Let us denote the inner loop term \(\sum_{l=1}^{d}\mathds{1}_{[\mathbf{c}_{i}^{l}\neq\mathbf{c}_{j}^{l}]}\) as \(D(\mathbf{c}_{i},\mathbf{c}_{j})\) for brevity. We aim to prove that for any two arbitrary CFEs \(\mathbf{c}_{i}\) and \(\mathbf{c}_{j}\), the pairwise distance \(D(\mathbf{c}_{i},\mathbf{c}_{j})\) is always greater than or equal to \(2\). To demonstrate this, we assume \(0\leq D(\mathbf{c}_{i},\mathbf{c}_{j})<2\) and derive a contradiction in each of the two possible cases.

Case 1 (\(D(\mathbf{c}_{i},\mathbf{c}_{j})=0\)): This case implies \(\mathbf{c}_{i}=\mathbf{c}_{j}\), which contradicts the fact that the two CFEs from the solution set are distinct.

Case 2 (\(D(\mathbf{c}_{i},\mathbf{c}_{j})=1\)): In this case, there is only one feature difference. Let \(\mathcal{A}_{i}\) and \(\mathcal{A}_{j}\) denote the indices of abnormal features of the two solutions. Then, \(\mathcal{A}_{i}\subseteq\mathcal{A}_{j}\) or \(\mathcal{A}_{j}\subseteq\mathcal{A}_{i}\) must hold, which contradicts the minimality of the subsets returned in our problem definition: one CFE should be excluded because it costs more than the other.

The above proof by contradiction shows that \(D(\mathbf{c}_{i},\mathbf{c}_{j})\geq 2\) holds. Summing up all \(\frac{k(k-1)}{2}\) pairwise distances \(D(\mathbf{c}_{i},\mathbf{c}_{j})\), we obtain the lower bound \(\frac{2}{d}\).

**Robustness Analysis:** Let \(\mathbf{z}=\mathbf{c}-\mathbf{x}\) represent the recommended actions for a user. In our method, \(\mathbf{z}\) consistently applies to a slightly perturbed instance \(\hat{\mathbf{x}}\), except in the following two situations: (1) \(f(\hat{\mathbf{x}}+\mathbf{z})\) is no longer valid, which occurs when slight perturbations have a negative impact on the desired prediction. For example, normal features may be turned into abnormal ones. We then need more effort than \(\mathbf{z}\) to achieve the desired prediction. (2) Changing fewer abnormal features is sufficient to achieve the desired prediction, indicating that slight perturbations are beneficial. In this case, \(\mathbf{z}\) is omitted as there exist more cost-efficient solutions. As both the continuity of the model and the perturbation strategy can influence \(\mathbf{z}\), we leave the determination of the maximal bound of perturbation, up to which our method remains robust, for future work.

### Problem Solving

The brute-force method that evaluates all possible subsets is exponentially complex with respect to the number of abnormal features. Next, we propose a technique named Counterfactual Explanations with Minimal Satisfiable Perturbations (CEMSP) to speed up the search process. Our method starts with finding the binary vectors \(\mathbf{m}\) that satisfy the desired prediction after feature replacement. This can be converted into the Boolean satisfiability problem that checks whether there exists a Boolean value assignment on \(d_{0}\) variables (features in abnormal ranges) such that the conjunction of Boolean formulas evaluates to \(True\).
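For concreteness, the feature-replacement operator \(r(\mathbf{x},\mathcal{A})\) and the corresponding satisfiability check can be sketched as follows. This is a minimal illustration of our own (function names and the `predict_proba`-style interface are assumptions): normal ranges are given as (low, high) pairs, and an abnormal feature selected by the subset is moved to the closest endpoint of its range.

```python
import numpy as np

def abnormal_indices(x, normal_ranges):
    """The index set N_0 of features lying outside their normal ranges."""
    return [i for i, (low, high) in enumerate(normal_ranges)
            if x[i] < low or x[i] > high]

def replace(x, subset, normal_ranges):
    """r(x, subset): move each abnormal feature in `subset` to the closest
    endpoint of its normal range; all other features are left unchanged."""
    c = np.array(x, dtype=float)
    for i in subset:
        low, high = normal_ranges[i]
        c[i] = min(max(c[i], low), high)
    return c

def satisfies(model, x, subset, normal_ranges, delta=0.5):
    """Check whether replacing the features in `subset` yields the desired
    (favorable) prediction with confidence at least delta."""
    c = replace(x, subset, normal_ranges)
    return model.predict_proba(c.reshape(1, -1))[0, 1] >= delta
```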
For better efficiency, we introduce the following proposition from domain knowledge.

Proposition 1 (Monotonicity of \(f(r(\mathbf{x},\cdot))\)).: _The function \(f(r(\mathbf{x},\cdot))\) is monotone, that is, \(f(r(\mathbf{x},\mathcal{A}))\leq f(r(\mathbf{x},\mathcal{B}))\) holds for all \(\mathcal{A}\subseteq\mathcal{B}\subseteq\mathcal{N}_{0}\)._

This proposition aligns with common sense in practical applications. It is important to note that the undesired prediction arises from specific abnormal features according to our assumption. Intuitively, moving an abnormal feature into the normal range should never decrease the desired probability. Additionally, we assume that the predictive model \(f\) has learned the relationship between feature normal ranges and the predicted classes. Based on the monotonicity of the function \(f(r(\mathbf{x},\cdot))\), we derive the following two theorems for any \(\mathcal{A}\subseteq\mathcal{B}\subseteq\mathcal{N}_{0}\).

Theorem 2 ().: _If \(f(r(\mathbf{x},\mathcal{A}))\) can achieve the desired target, then \(f(r(\mathbf{x},\mathcal{B}))\) can also achieve the desired target for any superset \(\mathcal{B}\) of \(\mathcal{A}\)._

Theorem 3 ().: _If \(f(r(\mathbf{x},\mathcal{B}))\) cannot achieve the desired target, then \(f(r(\mathbf{x},\mathcal{A}))\) cannot satisfy the desired target either for any subset \(\mathcal{A}\) of \(\mathcal{B}\)._

Proof.: We first prove Theorem 2. If \(f(r(\mathbf{x},\mathcal{A}))\geq\delta\), we can deduce that \(f(r(\mathbf{x},\mathcal{B}))\geq\delta\), as \(f(r(\mathbf{x},\mathcal{B}))\geq f(r(\mathbf{x},\mathcal{A}))\) holds for \(\mathcal{A}\subseteq\mathcal{B}\), where \(\delta\) is the confidence threshold of the desired prediction. Theorem 3 can be proved similarly.

Theorem 2 illustrates that if we can achieve the desired prediction by replacing the abnormal features in \(\mathcal{A}\), there is no need to change more abnormal features. The theorem thus tends to produce sparser results at a lower cost. Theorem 3 demonstrates that if we cannot achieve the desired prediction by changing the abnormal features in \(\mathcal{B}\), there is no need to check the satisfiability of any subset of \(\mathcal{B}\). To prune as many subsets as possible at a time with these two theorems, we need to find the minimal satisfiable subset (MSS) and the maximal unsatisfiable subset (MUS), shown as boxes filled with green/red background in Figure 2.

Figure 2. The figure shows all subsets of a toy example with 4 abnormal features. The bitvectors denote the binary vector \(\mathbf{m}\). Boxes with red/green borders represent the unsatisfiable/satisfiable subsets, respectively. The minimal subsets for CFEs are filled with green background and the maximal unsatisfiable subsets are filled with red background.

Next, we introduce two algorithms to achieve this: Grow(\(\cdot\)) in Algorithm 1 and Shrink(\(\cdot\)) in Algorithm 2. The Grow(\(\cdot\)) algorithm starts with an arbitrary unsatisfiable subset \(\mathcal{A}\) and iteratively attempts to change other abnormal features until a maximal unsatisfiable subset is found. The Shrink(\(\cdot\)) algorithm starts with an arbitrary satisfiable subset and iteratively attempts to remove features until a minimal satisfiable subset is found. Note that the Grow(\(\cdot\)) and Shrink(\(\cdot\)) algorithms serve as two plugins in our method, which can be replaced by any more advanced algorithm with the same purpose.

Further, we introduce how to solve the problem under the Boolean satisfiability framework (Bach et al., 2016; Goyal et al., 2016). In particular, any subset can be converted to a satisfiable Boolean assignment under a set of propositional logic formulas in conjunctive normal form (CNF), i.e., \(\mathcal{A}\implies\mathbf{m}:\) CNF \(=True\). For example, for the subset \(\mathcal{A}=\{1,2\}\) in Figure 2, we can write the CNF as the following equation, and \([1,1,0,0]\) is the only solution.
\[\text{CNF}=\mathbf{m}^{1}\wedge\mathbf{m}^{2}\wedge\neg\mathbf{m}^{3}\wedge\neg\mathbf{m}^{4} \tag{8}\]

By employing this approach, the explicit materialization of all subsets can be avoided, thereby mitigating the exponential space complexity. The crux lies in devising the appropriate propositional logic formulas. Our complete algorithm is shown in Algorithm 3. Initially, in line 1, we merely forbid changes on normal features, and any possible binary assignment on abnormal features can satisfy the CNF. getMask(CNF) uses a SAT solver to return a solution satisfying the CNF in line 3. In our paper, we adopt the Z3 package1. In line 7, we convert the binary vector \(\mathbf{m}\) to the indices of the corresponding subset. Next, we check whether replacing the features in this subset can achieve the desired prediction. If the desired prediction is not satisfied, we call the Grow(\(\cdot\)) algorithm to find the maximal unsatisfiable subset and then prune all subsets of it. The prune operation is achieved by the following propositional Boolean formula, which is conjoined with the existing CNF.

Footnote 1: [https://github.com/Z3Prover/z3](https://github.com/Z3Prover/z3)

\[\text{PruneSubSet}(\hat{\mathcal{A}})=\vee_{i\in\mathcal{N}_{0}\setminus\hat{\mathcal{A}}}\mathbf{m}^{i} \tag{9}\]

Similarly, if we find a subset satisfying the desired prediction, we call the Shrink(\(\cdot\)) function to return a minimal satisfiable subset, which induces a CFE with minimal perturbations of features. Then, we prune all supersets of it with the following logic formula,

\[\text{PruneSuperSet}(\mathcal{A}^{*})=\vee_{i\in\mathcal{A}^{*}}\neg\mathbf{m}^{i} \tag{10}\]

If no solution satisfies the CNF in the current iteration, we stop our algorithm in lines 4 and 5.

```
0: An input \(\mathbf{x}\), a pretrained model \(f\).
0: All minimal subsets \(\mathcal{A}^{*}\) for CFEs.
1: \(\text{CNF}=\wedge_{i\in\mathcal{N}_{1}}\neg\mathbf{m}^{i}\)
2: while True do
3:    \(\mathbf{m}\leftarrow\text{getMask(CNF)}\)
4:    if not \(\mathbf{m}\) then  \(\triangleright\) No assignment \(\mathbf{m}\) returned.
5:        Break
6:    end if
7:    \(\mathcal{A}\leftarrow\{i\in\mathcal{N}_{0}:\mathbf{m}^{i}=1\}\)
8:    if \(f(r(\mathbf{x},\mathcal{A}))==0\) then
9:        \(\hat{\mathcal{A}}\leftarrow\text{Grow}(\mathcal{A})\)
10:       \(\text{CNF}\leftarrow\text{CNF}\wedge\text{PruneSubSet}(\hat{\mathcal{A}})\)  \(\triangleright\) Prune any subset of \(\hat{\mathcal{A}}\)
11:   else
12:       \(\mathcal{A}^{*}\leftarrow\text{Shrink}(\mathcal{A})\)
13:       yield \(\mathcal{A}^{*}\)
14:       \(\text{CNF}\leftarrow\text{CNF}\wedge\text{PruneSuperSet}(\mathcal{A}^{*})\)  \(\triangleright\) Prune any superset of \(\mathcal{A}^{*}\)
15:   end if
16: end while
```

**Algorithm 3** CEMSP

**Correctness.** In our algorithm, each subset is either evaluated to be an MSS/MUS or pruned by an MSS/MUS. Hence, our algorithm returns all minimal CFEs, providing the same solutions as the brute-force search.

**Space Complexity.** The space complexity depends on how many minimal CFEs are returned. The worst case is \(O(\binom{d_{0}}{\lfloor d_{0}/2\rfloor})\), which corresponds to the case in which feature replacement on any subset of abnormal features of size \(\lfloor d_{0}/2\rfloor\) is satisfiable, while no smaller subset is.

**Time Complexity.** The runtime of our method primarily depends on two parts: (1) a solver (line 3) that takes a set of constraints and returns a mask \(\mathbf{m}\). The solver is considerably faster than calling the pretrained model.
(2) evaluating the prediction on a subset. Compared with brute-force search, which calls the deep model \(2^{d_{0}}\) times to check all \(2^{d_{0}}\) possible sets, our method reduces the number of calls to the model by pruning certain subsets and supersets. Therefore, the empirical running time decreases.

### Compatibility with Other Constraints

A major theme of recent research is to model various constraints in CFE generation. Here, we show how to write these constraints as propositional logic formulas that can be conjoined with the CNF in line 1 of our Algorithm 3.

**Immutable features.** Considering that some features are immutable (e.g., race, birthplace), CFEs should avoid perturbations of these features. To achieve this, we add the following Boolean logic formula for a set of immutable features \(\mathcal{I}\),

\[\text{Actionability}(\mathcal{I})=\wedge_{i\in\mathcal{I}}\neg\mathbf{m}^{i} \tag{11}\]

Accordingly, these features should be ignored in the Grow(\(\cdot\)) algorithm when it searches for the maximal unsatisfiable set. Alternatively, we can directly treat immutable features as normal features to avoid any changes to them.

**Conditional immutable features.** These features must change in one direction, e.g., education degree. We can examine whether moving a feature value into its normal range follows the valid direction. If it violates the valid direction, we treat this feature as an immutable feature; otherwise, we put no restriction on it.

**Causality.** In practice, changing one feature may cause a change in other features. Such causal relations among features are generally captured by a structural causal model (SCM) (Zhou et al., 2017), that is, a triplet \(\mathcal{M}=\langle U,V,F\rangle\), where \(U\) are exogenous features, \(V\) are endogenous features, and \(F:U\to V\) is a set of functions that describe how endogenous features are quantitatively affected by exogenous features. To adapt causality to our method, we only keep those causal relations in which normal exogenous features lead to normal endogenous features, as our method merely considers discrete feature changes (from abnormal to normal). For example, suppose feature \(\mathbf{x}^{1}\) is an exogenous feature that affects two endogenous features \(\mathbf{x}^{2}\) and \(\mathbf{x}^{3}\), and \(\mathbf{x}^{2}\) and \(\mathbf{x}^{3}\) become normal as a consequence of their normal ancestor feature \(\mathbf{x}^{1}\). In this example, we can add the following two material conditionals that restrict the feature changes of CFEs to follow the causal relations.

\[\text{Causality}=(\neg\mathbf{m}^{1}\vee\mathbf{m}^{2})\wedge(\neg\mathbf{m}^{1}\vee\mathbf{m}^{3}) \tag{12}\]

At the same time, the Grow(\(\cdot\)) and Shrink(\(\cdot\)) algorithms should be updated to satisfy such causal relations when they attempt to add/remove a feature. This can be easily implemented by storing these causal relations in an inverted index where an entry is an exogenous feature and the inverted list contains all its endogenous features.
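To make the solver mechanics concrete, the following sketch shows how the mask variables, the pruning clauses of Eqns. (9)-(10), and an actionability clause in the spirit of Eqn. (11) can be posed to the Z3 solver used in Algorithm 3. It is our own minimal illustration (variable and function names are assumptions), not the released implementation.

```python
from z3 import Bool, Solver, Or, Not, sat, is_true

d0 = 4                                    # toy example with 4 abnormal features
m = [Bool(f"m{i}") for i in range(d0)]    # m[i] is True iff feature i is replaced
solver = Solver()                         # added clauses are implicitly conjoined

# Actionability in the spirit of Eqn. (11): forbid replacing feature 3.
solver.add(Not(m[3]))

def get_mask():
    """Return one Boolean assignment satisfying the current clauses, or None."""
    if solver.check() != sat:
        return None
    model = solver.model()
    return [is_true(model.evaluate(v, model_completion=True)) for v in m]

def prune_subsets(A_hat):
    """Eqn. (9): exclude every subset of a maximal unsatisfiable subset A_hat."""
    solver.add(Or([m[i] for i in range(d0) if i not in A_hat]))

def prune_supersets(A_star):
    """Eqn. (10): exclude every superset of a minimal satisfiable subset A_star."""
    solver.add(Or([Not(m[i]) for i in A_star]))

# One iteration of the enumeration loop (the prediction check is omitted here).
mask = get_mask()
if mask is not None:
    print("candidate subset:", [i for i, bit in enumerate(mask) if bit])
```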
**Correlation.** Correlation can be regarded as a bidirectional causal relation. For example, if features \(\mathbf{x}^{1}\) and \(\mathbf{x}^{2}\) are correlated and should be in the normal range simultaneously, we can write the correlation between \(\mathbf{x}^{1}\) and \(\mathbf{x}^{2}\) as

\[\text{Correlation}=(\neg\mathbf{m}^{1}\vee\mathbf{m}^{2})\wedge(\neg\mathbf{m}^{2}\vee\mathbf{m}^{1}) \tag{13}\]

The great advantage of our framework is that it allows us to insert these constraints gradually and flexibly, as complete relation graphs (e.g., a full causal graph) are often difficult to derive in the beginning.

## 5. Experiments

In this section, we undertake a quantitative comparison between our proposed method CEMSP and state-of-the-art approaches. Additionally, we demonstrate empirical examples of counterfactual explanations that effectively integrate practical constraints. The source code is available at the GitHub repository2.

Footnote 2: [https://github.com/wangyongliu-etu/CEMSP](https://github.com/wangyongliu-etu/CEMSP)

_Datasets._ We conducted a comprehensive series of experiments involving a synthetic dataset and two real-world UCI medical datasets. Notably, the medical datasets encompass diagnostic features with well-defined and clinically significant normal ranges.

* **Synthetic Dataset** is a binary-class dataset consisting of \(20,000\) samples with \(4\) features. Each feature is sampled from the normal distribution independently. Regarding label balance, the binary label \(y\) is assigned a value of \(1\) when the following condition is satisfied; otherwise, \(y\) is set to \(0\): \[(\mathbf{x}^{1}>0.5)\vee(\mathbf{x}^{2}>0.4\wedge\mathbf{x}^{3}>0)\vee(\mathbf{x}^{2}>0.4\wedge\mathbf{x}^{3}>0.5)\] We set the lower bounds of the normal ranges of the four features to \([0.55,0.45,0.05,0.55]\) for a higher-confidence prediction.
* **UCI HCV Dataset** (Kang et al., 2017). This dataset contains \(615\) instances. Following (Bang et al., 2018), we convert the \(5\) diagnosis categories into binary classes. After label conversion, the dataset consists of \(75\) individuals diagnosed with HCV and \(540\) individuals labeled as healthy. Next, we remove "Age" and "Sex" and keep the other \(10\) medical features with normal ranges. We adopt the tight normal ranges from laboratory tests in (Kang et al., 2018), as certain normal ranges depend on "Sex" and we remove the "Sex" attribute in preprocessing.
* **UCI Thyroid Dataset** (Kang et al., 2019). The raw dataset contains \(3,772\) instances, where each instance is described by \(15\) features and labeled as either hypothyroid or normal. We retain the most discriminative features "FTI", "TSH", "T3", "TT4", which have meaningful normal ranges, and remove the other features. Subsequently, we drop rows with missing values. The final dataset consists of \(223\) patients and \(2530\) healthy users. The normal ranges of "TSH", "T3", "TT4" are from laboratory tests in (Kang et al., 2019). As we could not find a published normal range that matches the "FTI" values in this dataset, we simply choose the \(1\)-sigma interval of "FTI" of the normal group.

_Evaluation Metrics._ To comprehensively compare CFEs across various approaches, we employ the following evaluation metrics.

* **Inconsistency**.
We propose to adopt a modified Hausdorff distance (Kang et al., 2018; Krizhevsky et al., 2017) to measure the inconsistency between two sets of CFEs \(\mathcal{C}\) and \(\mathcal{C}^{\prime}\), \[H(\mathcal{C},\mathcal{C}^{\prime})=\max(h_{mod}(\mathcal{C},\mathcal{C}^{\prime}),h_{mod}(\mathcal{C}^{\prime},\mathcal{C}))\] (14) where \(h_{mod}(\mathcal{C},\mathcal{C}^{\prime})=\frac{1}{|\mathcal{C}|}\sum_{\mathbf{c}\in\mathcal{C}}\min_{\mathbf{c}^{\prime}\in\mathcal{C}^{\prime}}||\mathbf{c}-\mathbf{c}^{\prime}||_{2}\); a lower \(H\) is better.
* **Average Percentile Shift (APS)** (Kang et al., 2019) measures the relative cost of the perturbations of CFEs, \[\text{APS}(\mathbf{x},\mathcal{C})=\frac{1}{d\,|\mathcal{C}|}\sum_{\mathbf{c}\in\mathcal{C}}\sum_{i=1}^{d}|Q^{i}(\mathbf{c}^{i})-Q^{i}(\mathbf{x}^{i})|\] (15) where \(Q^{i}(\cdot)\) denotes the percentile of the \(i\)-th feature value relative to all values of the feature in the whole data set. A lower score is favored.
* **Sparsity**. It measures the percentage of features that remain unchanged, and we prefer higher sparsity, \[\text{Sparsity}(\mathbf{x},\mathcal{C})=\frac{1}{d\,|\mathcal{C}|}\sum_{\mathbf{c}\in\mathcal{C}}\sum_{i=1}^{d}\mathds{1}_{[\mathbf{c}^{i}=\mathbf{x}^{i}]}.\] (16)
* **Diversity**. We consider two diversity metrics, named Diversity, which is introduced in (Kang et al., 2019), and count-diversity (C-Diversity for short), which is defined in Eqn (7) in Section 4.1, to measure the discrepancy within the returned solutions. \[\text{Diversity}=\frac{2}{k(k-1)}\sum_{i=1}^{k-1}\sum_{j=i+1}^{k}\text{dist}(\mathbf{c}_{i},\mathbf{c}_{j})\] (17) where \(\text{dist}(\cdot,\cdot)\) represents the \(L_{1}/\text{MAD}\) distance and \(k\) is the number of CFEs.

_Baselines_. We compare our method with the following baseline methods.

* **GrowingSphere (GS)** [(20)]. This algorithm searches for CFEs among random samples in a spherical neighborhood of the input. The radius of the sphere grows until a CFE is found. It applies post-processing to the returned CFEs to make the solutions sparser.
* **PlainCF** [(44)]. It minimizes the objective in Eqn. (2) with gradient descent. We run this algorithm from a random initial point and stop when an iteration threshold is reached or the loss difference is below a specified threshold.
* **CFProto** [(40)]. It adds a prototype term to restrict CFEs to resemble the prototype of the desired class. In our experiment, we set the prototype as the closest endpoints of the normal ranges of the abnormal features, that is, \(r(\mathbf{x},\mathcal{N}_{0})\).
* **DiCE** [(25)]. Compared with PlainCF, it considers a diversity constraint that is modeled by a \(dpp(\cdot)\) term over a set of CFEs.
* **SNS** [(4)]. It finds a CFE with higher confidence and a lower Lipschitz constant in the neighborhood of a given CFE, to produce consistent predictions under model updates.

_Experiment configurations_. We first randomly split the datasets into train/test sets at the ratio of \(7:3\) and normalize all features by a standard scaler on the two UCI datasets (no feature normalization on the synthetic dataset). Then, we train a 3-layer multilayer perceptron (MLP) model \(f\) with the Adam optimizer. The test accuracies are 99%, 96%, and 98% on the three datasets, respectively. As we intend to convert unhealthy patients to healthy ones, we produce CFEs for all correctly classified patients in the test sets of the two UCI datasets. To save time, we only produce CFEs for 100 random true-negative samples in the synthetic dataset.
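For reference, the inconsistency score of Eqn. (14) can be computed directly from two sets of CFEs. The following NumPy sketch is our own illustration of the metric (not taken from the released code):

```python
import numpy as np

def h_mod(C, C_prime):
    """Directed term of the modified Hausdorff distance: for each c in C, take
    the Euclidean distance to its closest element of C_prime, then average."""
    C, C_prime = np.asarray(C, float), np.asarray(C_prime, float)
    dists = np.linalg.norm(C[:, None, :] - C_prime[None, :, :], axis=-1)
    return dists.min(axis=1).mean()

def inconsistency(C, C_prime):
    """Eqn. (14): H(C, C') = max(h_mod(C, C'), h_mod(C', C)); lower is better."""
    return max(h_mod(C, C_prime), h_mod(C_prime, C))

# Example: two sets of CFEs obtained in two trials for the same input.
C1 = [[0.60, 0.40, 0.10, 0.20], [0.60, 0.50, 0.00, 0.20]]
C2 = [[0.60, 0.40, 0.10, 0.20], [0.70, 0.50, 0.00, 0.20]]
print(inconsistency(C1, C2))
```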
Our method produces a set of CFEs of varied size \(k\), while GS, PlainCF, CFProto, and SNS generate one at a time. For fair comparisons, we run GS, PlainCF, and CFProto \(k\) times to generate the same number of CFEs as ours. For DiCE, we directly set the size of the diverse set to be the same as ours. We evaluate all CFE methods \(g(f,\mathbf{x})\) under the following two kinds of slight updates to measure algorithmic robustness.

* Inputs are fixed, and we produce two sets of CFEs from two models that are trained on the same dataset with different initializations.
* The model is fixed, and we produce two sets of CFEs from an input \(\mathbf{x}\) and its perturbed instance \(\mathbf{x}^{\prime}\), where \(\mathbf{x}^{\prime}=\mathbf{x}+\alpha\) and \(\alpha\) is random noise sampled from a Gaussian distribution \(\mathcal{N}(0,\sigma)\). In our experiments, \(\sigma\in\{0.0001,0.001,0.01,0.1\}\).

### Quantitative Evaluation

We first report the quantitative comparison of sparsity, APS, Diversity, and C-Diversity in Figure 3, where the \(x\)-axis denotes a counterfactual explanation generation method and the \(y\)-axis represents the average score of each metric over all evaluated instances.

Figure 3. Evaluation of sparsity, APS, Diversity, and C-Diversity over three datasets. The \(\uparrow\downarrow\) means the higher/lower score is better. Diversity and C-Diversity of the Thyroid dataset are missing as our method CEMSP only produces a single counterfactual explanation.

We can see that our method achieves sparsity competitive with GS. GS achieves sparsity through post-processing techniques, whereas our method CEMSP focuses on making minimal modifications to subsets of abnormal features. In contrast, the other methods do not explicitly optimize for sparsity and consequently fall behind in this aspect. For the APS, CEMSP is slightly better. This result is grounded in two key considerations: firstly, we substitute an abnormal feature with the closest endpoint of its normal range and, secondly, we aim to change the minimal number of abnormal features. It is worth noting that although PlainCF minimizes the \(L_{1}/\mathit{MAD}\) distance in its objective, this does not equate to minimizing the APS, since APS is a density-aware metric among the population. Our method achieves at least \(\frac{2}{4}\) C-Diversity as expected. In contrast to sparsity, C-Diversity sums up the fraction of features that are different between any two CFEs. Therefore, a method with a higher sparsity is often associated with a lower C-Diversity. As a result, our CEMSP appears to have a less competitive C-Diversity than methods that simultaneously change many features. However, our CEMSP has a competitive Diversity score as defined in (Han et al., 2017).

Figure 4. Evaluation of inconsistency score under model retraining.

Figure 5. Evaluation of inconsistency score under input perturbations. The \(x\)-axis represents the standard deviation \(\sigma\) of the added Gaussian noise.

Figures 4 and 5 report inconsistency scores under model retraining and input perturbations. Our CEMSP exhibits superior performance compared to the other baseline methods. Specifically, GS, PlainCF, and DiCE yield the poorest results when consistency restrictions are not enforced. CFProto, which incorporates a prototype term, achieves a better inconsistency score than PlainCF by directing all CFEs towards the prototype.
As discussed earlier, our findings demonstrate that SNS does not perform well in generating CFEs with consistent feature values, despite producing CFEs that yield consistent model predictions. In summary, our CEMSP outperforms the baseline methods across the aforementioned metrics, establishing its overall superiority.

### Use-Case Evaluation

Next, we use the use-case evaluation in Figure 6 to present the compatibility of our method with practical constraints. The input instance is a patient in the HCV dataset who has the undesired prediction. Without any constraint, our method generates 4 CFEs, as illustrated in the table located at the bottom left. By introducing additional constraints, our model can effortlessly generate new CFEs that meet the desired criteria. For example, if we want to keep the original value of \(BIL\), we can easily incorporate the CNF \((\neg\mathbf{m}^{5})\), leading to CFEs that solely modify the remaining features. Furthermore, domain knowledge reveals that \(ALT\) and \(AST\) are correlated (Shen et al., 2017). Consequently, we can incorporate correlation constraints that limit simultaneous changes to both features. This can be achieved by including the CNF \((\neg\mathbf{m}^{3}\wedge\mathbf{m}^{4})\vee(\mathbf{m}^{3}\wedge\neg\mathbf{m}^{4})\) in our method, effectively enforcing the desired correlation constraint. Although this use-case evaluation may not yet be fully applicable to real-life scenarios, it offers valuable insights and demonstrates the potential to accommodate more practical considerations.

## 6. Conclusion

A lack of robustness in counterfactual explanations can undermine both individual fairness and model reliability. In this work, we present a novel framework to generate robust and diverse counterfactual explanations (CFEs). Our work leverages feature normal ranges from domain knowledge and generates CFEs that replace the minimal number of abnormal features with the closest endpoints of their normal ranges. We convert this problem into the Boolean satisfiability problem and solve it with modern SAT solvers. Experiments on both synthetic and real-life datasets demonstrate that our generated CFEs are more consistent than those of the baselines while preserving flexibility for user preferences.

## 7. Limitations and Future Work

While our work offers the potential to address the non-robustness issue through the utilization of domain knowledge, such as normal ranges in healthcare and finance, some limitations hinder its applicability in broader contexts. Firstly, the scalability of the proposed method has not been fully assessed. SAT solvers exhibit exponential complexity in the worst-case scenario. When dealing with a substantial number of features, the time required to find a binary mask with a SAT solver may surpass that of a forward pass of the DNN model. This concern can be addressed through empirical comparisons of the execution time of the SAT solver and of DNN model invocations. Secondly, our approach is not directly applicable to scenarios where a portion of the normal ranges is unknown. It might be necessary to incorporate additional information to determine the appropriate replacement values for these features. Thirdly, our study is established on binary classification tasks. However, the direct adaptation of our method to multi-class classification or regression tasks remains challenging. Normal ranges are typically contingent upon the target prediction.
In the context of multi-class classification or regression, the target predictions can become intricate, rendering the normal ranges unobtainable. In future work, our ultimate goal is to investigate robust and flexible counterfactual explanations in more general situations without any assumption about normal ranges. In addition, we intend to develop a sustainable system that offers users actionable recommendations and gathers valuable feedback to support continual enhancement.

###### Acknowledgements.

This research is supported, in part, by Alibaba Group through the Alibaba Innovative Research (AIR) Program and the Alibaba-NTU Singapore Joint Research Institute (JRI), Nanyang Technological University, Singapore. This research is also supported, in part, by the National Research Foundation, Prime Minister's Office, Singapore under its NRF Investigatorship Programme (NRFI Award No. NRF-NRFI05-2019-0002). Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not reflect the views of the National Research Foundation, Singapore.

Figure 6. Use-case evaluation of a patient in the HCV dataset. The top table presents the original feature values of the patient, while the shaded features represent the altered features of CFEs and their closest endpoints. The tables below display CFEs before/after incorporating constraints. To incorporate actionability and correlation, we introduced the expressions \(\neg m^{5}\) and \((\neg\mathbf{m}^{3}\wedge\mathbf{m}^{4})\vee(\mathbf{m}^{3}\wedge\neg\mathbf{m}^{4})\), respectively.
2309.12014
Singular Control in a Cash Management Model with Ambiguity
We consider a singular control model of cash reserve management, driven by a diffusion under ambiguity. The manager is assumed to have maxmin preferences over a set of priors characterized by $\kappa$-ignorance. A verification theorem is established to determine the firm's cost function and the optimal cash policy; the latter taking the form of a control barrier policy. In a model driven by arithmetic Brownian motion, we numerically show that an increase in ambiguity leads to higher expected costs under the worst-case prior and a narrower inaction region. The latter effect can be used to provide an ambiguity-driven explanation for observed cash management behavior.
Arnon Archankul, Giorgio Ferrari, Tobias Hellmann, Jacco J. J. Thijssen
2023-09-21T12:32:41Z
http://arxiv.org/abs/2309.12014v1
# Singular Control in a Cash Management Model with Ambiguity

###### Abstract

We consider a singular control model of cash reserve management, driven by a diffusion under ambiguity. The manager is assumed to have maxmin preferences over a set of priors characterized by \(\kappa\)-ignorance. A verification theorem is established to determine the firm's cost function and the optimal cash policy; the latter taking the form of a control barrier policy. In a model driven by arithmetic Brownian motion, we numerically show that an increase in ambiguity leads to higher expected costs under the worst-case prior and a narrower inaction region. The latter effect can be used to provide an ambiguity-driven explanation for observed cash management behavior.

_Keywords:_ Singular control, Ambiguity, Inventory models

## 1 Introduction

An important question in corporate finance is that of optimal cash management. On the one hand, firms require cash to finance the firm as a going concern. On the other hand, shareholders require dividend payouts as a reward for providing capital. The seminal contribution by Jeanblanc-Picque and Shiryaev (1995) uses a stochastic storage model _a la_ Harrison and Taksar (1978) to find the optimal size of a firm's cash hoard in the face of stochastically evolving net cash flows. In this paper, we are interested in optimal cash management under ambiguity, i.e., a situation where the manager is not able to reduce the uncertainty over future net cash flows into a single probability measure. We are interested in the interplay between traditional concerns over risk (as measured by, e.g., confidence intervals provided by a given probability measure) and ambiguity (as measured by the "size" of the set of probability measures considered by the manager) under the assumption that the manager is ambiguity averse.

Our motivation for including ambiguity in a model of optimal cash holding is the finding from the recent literature that decision makers' beliefs influence corporate cash holdings. In particular, Deshmukh et al. (2021) show that "relative to rational CEOs, optimistic CEOs hold 24% less cash." In a standard Bayesian setting this can only be explained if different CEOs use different probability measures. This then almost inevitably leads to one CEO being "more rational" than another. From a theoretical perspective this is unsatisfactory. For example, in Deshmukh et al. (2021) there is a "true" probability measure which then gets distorted by non-rational optimistic or pessimistic CEOs. We propose to use a framework in which all managers may use the same _reference prior_ over future cash flows, but they may differ in their level of _ambiguity_ over the true probability measure, e.g., due to differing levels of experience in their industry.

The distinction between uncertainty resulting from randomness governed by a distribution ("known unknowns") and uncertainty over the correct distribution ("unknown unknowns") goes back to Knight (1921). In his seminal work he refers to the former as _risk_ and the latter as _uncertainty_ or _ambiguity_. The effect of ambiguity on decision making has been studied extensively, most famously by Ellsberg (1961). The overwhelming conclusion of the experimental literature is that decision makers are _ambiguity averse_. In the classical Ellsberg experiment, a DM has to place bets on one of two urns, both with 100 red or blue balls. For the first urn it is known that half the balls are red. For the second urn no such information is available.
Since most people are observed to choose bets on the first urn over bets on the second urn, Savage's "sure thing principle" is being violated. Note that the Ellsberg paradox is not really a paradox, because it does not result from a cognitive bias or irrationality. Rather, observed behaviour is driven by a lack of information. It is perfectly possible for DMs to make consistent decisions under ambiguity. This has been shown by Gilboa and Schmeidler (1989), who incorporate an ambiguity aversion axiom into the subjective expected utility framework. They then show that a rational decision maker acts _as if_ she maximizes expected utility under the worst-case prior within a (subjectively chosen) set of priors. This approach has been successfully used in many applications in economics, finance, and OR.1

Footnote 1: See, for example, Nishimura and Ozaki (2007); Trojanowska and Kort (2010); Thijssen (2011); Cheng and Riedel (2013); Hellmann and Thijssen (2018) for applications to investment timing decisions and timing games. The works of Lin and Riedel (2014); Jin and Yu Zhou (2015); Fouque et al. (2016) apply ambiguity to portfolio management. For the broader theory of ambiguity in volatility and interest rates in asset pricing, we refer to Epstein and Ji (2013) and Lin and Riedel (2021), respectively.

Our contribution is to apply the maxmin multiple prior model to a singular control model of optimal storage inventory, with an application to a firm's cash management. On a regular basis, firms are faced with operational costs (e.g. rent, capital stock, labour wages, etc.) that have to be settled promptly with reserved cash. The fact that this cash generates no (or low) return means that holding it results in an opportunity loss, which can be interpreted as a holding cost, since the cash could potentially be used for income-generating activities, such as investments or paying out dividends. Therefore, excessive cash holding is undesirable. On the other hand, a shortage of cash reserves results in a delay of cost settlement, which often incurs a penalty fee. Therefore, the firm has an incentive to inject some amount of cash into the system. This could, for example, be done by selling some assets or issuing bonds. These two circumstances create a trade-off that suggests the existence of a target level of cash.

In a model where cash adjustments are costly, we show that there exists an optimal _control band policy_, where the firm keeps its cash hoard between an upper and a lower bound. While this is no different from a standard model under risk, ambiguity does bring some new aspects to the comparative statics of the optimal policy. For example, as in the standard model without ambiguity, the higher the risk, the higher the long-term discounted cost of cash. Ambiguity amplifies this effect, even though an increase in the degree of ambiguity leads the manager to exert control _earlier_. This is in contrast to the risk-only model, where an increase in risk leads the manager to exert control _later_. The reason for this result is that a more ambiguous DM expects the cash level to increase (when positive) or decrease (when negative) more rapidly (in expectation) than a less ambiguous DM. Since holding costs are increasing in the absolute value of the cash hoard, a more ambiguous DM will, thus, exert control sooner. This can provide an explanation for the empirically observed behaviour in Deshmukh et al. (2021) that "more optimistic" CEOs have bigger cash hoards.
In our model, this behaviour is not due to irrationality, but an aspect of the uncertain environment that the manager faces.

The cash reserve problem was first addressed in the literature by Baumol (1952) and Tobin (1956), who studied the cash balance problem under the assumption that demand is deterministic, which is far from realistic. The stochastic treatment was later established under a discrete-time (Markov chain) framework by, e.g., Eppen and Fama (1969). A more general approach for storage systems in continuous time, in particular with demand driven by Brownian motion, has been developed over the past decades. Bather (1966), Vial (1972), Constantinides (1976), Harrison (1978), Harrison and Taksar (1983) and many others are among the notable authors. For an overview of the related papers, we refer the reader to Harrison (2013).

One of the first papers to axiomatize ambiguity is Gilboa and Schmeidler (1989). They model ambiguity as a set of priors, among which the DM (subjectively) evaluates decisions using the prior that minimises the DM's expected utility. Under an axiom of ambiguity aversion, the prior that is chosen is called the _worst-case prior_, which captures the intuition that an ambiguity-averse DM is cautious about their beliefs and heavily weighs the possibility of undesirable consequences of their decision.
The worst-case priors at each of these barriers are opposite and this leads, in turn, to the existence of a threshold somewhere in the inaction region (endogenously determined) that separates two regions where different measures constitute the worst-case prior. The most closely contribution to our work is Chakraborty et al. (2021) in which a one-side singular control of a firm's dividend payout policy is considered under ambiguity. They assume, in addition to the classical singular control, that there is a penalty cost associated with a change of measure, which is determined by the Kullback-Leibler divergence. The use of Kullback-Leibler divergence as a model for multiple priors is well-established in the literature on robust control; see, e.g., Hansen et al. (2006); Hansen and Sargent (2010, 2011); Hansen and Miao (2018, 2022); Ferrari et al. (2022) and references therein. The more behavioral approach that motivates \(\kappa\)-ignorance is, in fact, closely related to the robust control approach. In both cases, the solution to the control problem takes the form of a control band policy. However, it is important to note that in a robust control framework, the DM _chooses_ a probability measure, while acknowledging the possibility of mis-specification. As a result, the DM assigns a higher weight to the cost function using an uncertainty equivalent expectation. In our work, on the other hand, the worst-case prior _follows_ from the chosen control policy. The structure of this paper is as follows: In Section 2 we construct a general formulation for singular control of the Brownian cash reserve under ambiguity. We provide a verification theorem for the optimal control band policy and the existence of the ambiguity trigger in Section 3. In Section 4 we provide a simplification of the verification theorem for the case where the present value of the (uncontrolled) expected holding costs is affine in the current value of the cash holdings. This includes, e.g., the case where the uncontrolled cash process follows an arithmetic Brownian motion, or a mean-reverting Ornstein-Uhlenbeck process. A numerical illustration for the arithmetic Brownian motion case is given in Section 5. ## 2 Simple Cash-Management Model with Drift Ambiguity Let \(E\subseteq\mathds{R}\) be a connected state space endowed with the Euclidean topology and such that \(0\in E\). Given \((\Omega,\mathscr{F})\) a measurable space on which we define, for all \(x\in E\), a probability measure \(\mathsf{P}_{x}\) with associated expectation operator \(\mathsf{E}_{x}\). On \((\Omega,\mathscr{F},\mathsf{P}_{x})\), we assume that \(\alpha:E\to\mathds{R}\) and \(\sigma:E\to\mathds{R}\) are continuously differentiable functions such that \[|\alpha(x)|+|\sigma(x)|\leq C(1+|x|)\ \ \text{for all}\ x\in E \tag{1}\] for some \(C\in\mathds{R}\). Then a time-homogeneous diffusion, \(X\triangleq\left(X_{t}\right)_{t\geq 0}\), taking values in \(E\), is the unique strong solution to the stochastic differential equation (SDE), \[\mathrm{d}X_{t}=\alpha(X_{t})\mathrm{d}t+\sigma(X_{t})\mathrm{d}B_{t},\quad X _{0}=x,\quad\mathsf{P}_{x}-\text{a.s.}, \tag{2}\] where \(B\triangleq\left(B_{t}\right)_{t\geq 0}\) is a standard Brownian motion. Dynamic revelation of information is modeled by the natural filtration \(\mathbf{F}=\left(\mathscr{F}_{t}\right)_{t\geq 0}\) generated by \(X\). We assume that the end points of \(E\) are \(\mathsf{P}_{x}\)-a.s. unattainable. 
A _control policy_ is a pair of processes \((L,U)\), where \(L\) and \(U\) are adapted, non-decreasing, and non-negative. These processes are associated with increases and decreases, respectively, of \(X\) at times at which control is exerted. With the policy \((L,U)\) we associate the _controlled process_ \(X^{L,U}\) and we say that a control policy \((L,U)\) is _feasible_ if for all \(x\in E\), there exists a unique \(X^{L,U}\) that strongly solves \[\mathrm{d}X_{t}^{L,U}=\alpha(X_{t}^{L,U})\mathrm{d}t+\sigma(X_{t}^{L,U})\mathrm{d}B_{t}+\mathrm{d}L_{t}-\mathrm{d}U_{t},\quad X_{0}=x,\quad\mathsf{P}_{x}-\text{a.s.}, \tag{3}\] and if there exist \(A>0\) and \(B<0\), such that \[\mathsf{P}_{x}\left(\sup_{t\geq 0}X_{t}^{L,U}<A,\inf_{t\geq 0}X_{t}^{L,U}>B\right)=1. \tag{4}\] The set of feasible control policies is denoted by \(\mathscr{D}\), while we denote by \(X^{0}\) the uncontrolled process; that is, \(X^{0}\triangleq X^{0,0}\). The instantaneous holding costs are given by an almost everywhere differentiable function \(c:\mathds{R}\to\mathds{R}_{+}\). For simplicity we will assume that \[c(x)=\begin{cases}\hat{c}|x|&\text{if }x\geq 0\\ \check{c}|x|&\text{if }x<0,\end{cases} \tag{5}\] for some \(\hat{c},\check{c}>0\). The instantaneous and proportional costs of lower and upper control are denoted by \(\ell>0\) and \(u>0\), respectively. Our results can easily be extended to more general convex holding costs with \(c(0)=0\), albeit at the cost of more cumbersome notation. In a cash management setting, one could think of 0 as the _target level_ of cash. When \(x>0\), the firm has excess cash while if \(x<0\) the firm needs to access cash on the markets. When the cash reserves get too low the firm may need to issue new equity, which incurs costs \(\ell\), whereas when \(x\) gets too large, the firm may wish to pay out dividends, which incurs a cost \(u\). The decision-maker (DM) discounts costs at the constant rate \(\rho>0\). We, furthermore, assume that \[\mathsf{E}_{x}\left[\int_{0}^{\infty}e^{-\rho t}|X_{t}^{0}|\mathrm{d}t\right]<\infty,\;\;\;x\in E.\] A typical process that satisfies all the assumptions made so far is the arithmetic Brownian motion, defined on the state space \(E=\mathds{R}\), being the strong solution of the SDE \[\mathrm{d}X_{t}^{0}=\alpha\mathrm{d}t+\sigma\mathrm{d}B_{t}, \tag{6}\] with constant drift \(\alpha\in\mathds{R}\) and standard deviation \(\sigma>0\). For this specification the uncontrolled cash process is \[X_{t}^{0}=x+\alpha t+\sigma B_{t},\] whereas for any feasible control policy \((L,U)\in\mathscr{D}\), the controlled cash process satisfies \[X_{t}^{L,U}=x+\alpha t+\sigma B_{t}+L_{t}-U_{t}.\] Another process that can be used is the mean-reverting Ornstein-Uhlenbeck process \[\mathrm{d}X_{t}^{0}=-\eta X_{t}^{0}\mathrm{d}t+\sigma\mathrm{d}B_{t},\] where \(\eta>0\) is the speed of mean-reversion. In this case \[X_{t}^{0}=xe^{-\eta t}+\sigma\int_{0}^{t}e^{-\eta(t-s)}\mathrm{d}B_{s}.\] It is assumed that the DM faces _ambiguity_ about the measure \(\mathsf{P}_{x}\) and, consequently, considers a set of priors \(\mathscr{P}^{\Theta}\). Each of these priors is constructed from the reference measure \(\mathsf{P}_{x}\) by means of a _density generator_ \(\theta\in\Theta\). A process \(\theta=\left(\theta_{t}\right)_{t\geq 0}\) is a density generator if the process \(\left(M_{t}^{\theta}\right)_{t\geq 0}\), with \[\frac{\mathrm{d}M_{t}^{\theta}}{M_{t}^{\theta}}=-\theta_{t}\mathrm{d}B_{t},\quad M_{0}^{\theta}=1, \tag{7}\] is a \(\mathsf{P}_{x}\)-martingale.
Such a process \(\theta\) generates a new measure \(\mathsf{P}_{x}^{\theta}\) on \((\Omega,\mathscr{F}^{B})\) via the Radon-Nikodym derivative \(\mathrm{d}\mathsf{P}_{x}^{\theta}/\mathrm{d}\mathsf{P}_{x}|_{\mathscr{F}_{T}^ {B}}=M_{T}^{\theta}\) for any \(T>0\). Here, \(\mathscr{F}^{B}\triangleq\mathscr{F}_{\infty}^{B}\), where \(\mathsf{F}^{B}\triangleq\left(\mathscr{F}_{t}^{B}\right)_{t\geq 0}\) is the (uncompleted) filtration generated by \(B\). Indeed, if \(\theta\in\Theta\), then it follows from Girsanov's theorem (see, Corollary 5.2 in Chapter 3.5 of Karatzas and Shreve, 1991) that under the measure \(\mathsf{P}_{x}^{\theta}\) the process \(B^{\theta}\triangleq\left(B_{t}^{\theta}\right)_{t\geq 0}\), defined by \[B_{t}^{\theta}\triangleq B_{t}+\int_{0}^{t}\theta_{s}\mathrm{d}s,\] is a Brownian motion on \((\Omega,\mathscr{F}^{B},\mathbf{F}^{B},\mathsf{P}_{x}^{\theta})\) and that, under \(\mathsf{P}_{x}^{\theta}\), the process \(X^{L,U}\) is the unique strong solution to the SDE \[\mathrm{d}X_{t}^{L,U}=\left(\alpha(X_{t}^{L,U})-\sigma(X_{t}^{L,U})\theta_{t} \right)\mathrm{d}t+\sigma(X_{t}^{L,U})\mathrm{d}B_{t}^{\theta}+\mathrm{d}L_{t }-\mathrm{d}U_{t},\quad\mathsf{P}_{x}^{\theta}(X_{0}^{L,U}=x)=1.\] In the remainder we restrict attention to so-called \(\kappa\)_-ignorance_, i.e. we only use density generators \(\theta\) for which \(\theta_{t}\in[-\kappa,+\kappa]\) for all \(t\geq 0\) and some \(\kappa\geq 0\). Note that \(\mathscr{P}^{\Theta}=\{\mathsf{P}_{x}\}\) if \(\kappa=0\). To model _ambiguity aversion_, it is assumed that the DM uses maxmin utility _a la_Gilboa and Schmeidler (1989). That is, the _worst-case cost function_ associated with the feasible policy \((L,U)\in\mathscr{D}\) is given by \(J^{L,U}:E\rightarrow\mathds{R}\), where \[J^{L,U}(x)\triangleq\sup_{\theta\in\Theta}\mathsf{E}_{x}^{\theta}\left[\int_{0 }^{\infty}e^{-\rho t}\left(c(X_{t}^{L,U})\mathrm{d}t+\ell\mathrm{d}L_{t}+u \mathrm{d}U_{t}\right)\right]. \tag{8}\] The DM's objective is to find the feasible policy that minimizes the worst-case expected costs over the set of priors \(\mathscr{P}^{\Theta}\). The firm's _minimal cost function_ is \[J^{*}(x)\triangleq\inf_{(L,U)\in\mathscr{D}}J^{L,U}(x). \tag{9}\] From Chen and Epstein (2002, Theorem 2.1) it follows that there exists an _upper-rim generator_\(\theta^{*}\in\Theta\) so that \[J^{L,U}(x)=\mathsf{E}_{x}^{\theta^{*}}\left[\int_{0}^{\infty}e^{-\rho t}\left\{ c\left(X_{t}^{L,U}\right)\mathrm{d}t+\ell\mathrm{d}L_{t}+u\mathrm{d}U_{t} \right\}\right]. \tag{10}\] Furthermore, from Chen and Epstein (2002, Section 3.3) it follows that under \(\kappa\)-ignorance it holds that \(\theta_{t}^{*}\in\{-\kappa,\kappa\}\) for all \(t\geq 0\). Finally, in many cases the optimal policy consists of exerting control only when the process \(X\) exits an interval \((\underline{x},\overline{x})\). Therefore, with each pair \((\underline{x},\overline{x})\in E\times E\), \(\underline{x}<\overline{x}\), we associate the _control band policy_\((L,U)\in\mathscr{D}\) for which \(\underline{x}\) is an (upward) reflecting barrier for \(L\) and \(\overline{x}\) is a (downward) reflecting barrier for \(U\). For such policies it holds that 1. \(X_{t}^{L,U}\in[\underline{x},\overline{x}]\), \(\mathsf{P}_{x}\)-a.s. for all \(t\geq 0\), and 2. \(\int_{0}^{\infty}1_{(\underline{x},\overline{x})}(X_{t}^{L,U})\mathrm{d}(L_{ t}+U_{t})=0\), \(\mathsf{P}_{x}\)-a.s. Following Tanaka (1979), our assumptions on \(X\) are sufficient to guarantee the existence of control band policies. 
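For intuition, the following minimal Python sketch simulates a controlled arithmetic Brownian motion under a constant density generator and a control band policy. The barrier values, the Euler discretisation, and the choice of a constant \(\theta\) are illustrative assumptions of ours and are not part of the model specification above.

```python
import numpy as np

# Illustrative Euler-scheme simulation of a controlled ABM under a distorted
# prior. Under a constant density generator theta in [-kappa, kappa] the drift
# alpha becomes alpha - sigma * theta (Girsanov); dL pushes the process up to
# x_lo and dU pushes it down to x_hi. All numerical values are hypothetical.
def simulate_band(x0=0.0, alpha=0.0, sigma=5.0, kappa=0.5, theta=0.5,
                  x_lo=-10.0, x_hi=10.0, T=10.0, n=10_000, seed=0):
    assert abs(theta) <= kappa, "kappa-ignorance restricts theta to [-kappa, kappa]"
    rng = np.random.default_rng(seed)
    dt = T / n
    x, L, U = x0, 0.0, 0.0
    path = np.empty(n + 1)
    path[0] = x0
    for i in range(n):
        y = x + (alpha - sigma * theta) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        dL, dU = max(x_lo - y, 0.0), max(y - x_hi, 0.0)  # reflection at the barriers
        x = y + dL - dU
        L, U = L + dL, U + dU
        path[i + 1] = x
    return path, L, U

path, L, U = simulate_band()
print(f"cumulative lower control L = {L:.2f}, upper control U = {U:.2f}")
```

The simulation only illustrates the objects defined above (controlled process, cumulative controls); it is not the solution method used in the remainder of the paper.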
## 3 A General Verification Theorem Let \(\mathscr{L}\) denote the characteristic operator on \(C^{2}(E)\) of the killed process \(\big{(}e^{-\rho t}X_{t}\big{)}_{t\geq 0}\) under \(\mathsf{P}_{x}\), i.e. \[\mathscr{L}\varphi(x)=\frac{1}{2}\sigma^{2}(x)\varphi^{\prime\prime}(x)+\alpha(x)\varphi^{\prime}(x)-\rho\varphi(x). \tag{11}\] On \(C^{1}(E)\) we also define the density generator \[\theta^{\varphi}(x)\triangleq\begin{cases}-\kappa&\text{if }\varphi^{\prime}(x)\geq 0\\ +\kappa&\text{if }\varphi^{\prime}(x)<0.\end{cases}\] We obtain the following verification theorem. **Theorem 1**.: _Suppose there exists a pair \((\underline{x},\overline{x})\in E\times E\), \(\underline{x}<0<\overline{x}\), and a non-negative, convex, and \(C^{2}\)-function \(\varphi\) on \((\underline{x},\overline{x})\) such that_ 1. \(\theta^{\varphi}(x)\sigma(x)\varphi^{\prime}(x)-\mathscr{L}\varphi(x)=c(x)\) on \((\underline{x},\overline{x})\), 2. \(\varphi^{\prime}(\underline{x}+)=-\ell\), \(\varphi^{\prime}(\overline{x}-)=u\), 3. \(\varphi^{\prime\prime}(\underline{x}+)=\varphi^{\prime\prime}(\overline{x}-)=0\), 4. \(\check{c}|x-\underline{x}|\geq\ell\big{[}\rho|\underline{x}-x|-\big{(}\alpha(x)-\alpha(\underline{x})\big{)}-\kappa\big{(}\sigma(x)-\sigma(\underline{x})\big{)}\big{]}\), for all \(x<\underline{x}\), 5. \(\hat{c}|x-\overline{x}|\geq u\big{[}\rho|\overline{x}-x|-\big{(}\alpha(\overline{x})-\alpha(x)\big{)}+\kappa\big{(}\sigma(\overline{x})-\sigma(x)\big{)}\big{]}\), for all \(x>\overline{x}\), and 6. \(\lim_{T\to\infty}e^{-\rho T}\mathsf{E}_{x}^{\theta^{\varphi}}\left[\varphi\big{(}X_{T}^{L,U}\big{)}\right]=0\), for all \((L,U)\in\mathscr{D}\). _Then the optimal policy \((L^{*},U^{*})\) is the control band policy associated with \((\underline{x},\overline{x})\) and the minimal cost function is_ \[J^{*}(x)=\begin{cases}\ell|\underline{x}-x|+\varphi(\underline{x}+)&\text{if }x\leq\underline{x}\\ \varphi(x)&\text{if }\underline{x}<x<\overline{x}\\ u|x-\overline{x}|+\varphi(\overline{x}-)&\text{if }x\geq\overline{x}\end{cases}.\] **Remark 1**.: _Conditions 4 and 5 guarantee existence of a feasible policy under ambiguity. For the case of an uncontrolled arithmetic Brownian motion,_ \[\mathrm{d}X^{0}=\alpha\mathrm{d}t+\sigma\mathrm{d}B_{t},\] _these conditions reduce to_ \[\check{c}\geq\rho\ell,\quad\text{and}\quad\hat{c}\geq\rho u.\] _That is, the discounted perpetual holding costs of positive (negative) cash balances should exceed the control costs of reducing (increasing) the cash balance. As another example, for the case of an uncontrolled mean-reverting Ornstein-Uhlenbeck process,_ \[\mathrm{d}X^{0}=\eta(\tilde{x}-X_{t}^{0})\mathrm{d}t+\sigma\mathrm{d}B_{t},\] _where \(\tilde{x}\) is the long-run mean and \(\eta\) is the speed of mean reversion, conditions 4 and 5 reduce to_ \[\check{c}\geq(\rho+\eta)\ell,\quad\text{and}\quad\hat{c}\geq(\rho+\eta)u.\] **Proof.** Let \(\varphi\) and \(\underline{x}<0<\overline{x}\) satisfy Conditions 1-6. Extend \(\varphi\) to \(E\), in a twice-continuously differentiable way, as follows: \[\varphi(x)=\begin{cases}\ell|\underline{x}-x|+\varphi(\underline{x}+)&\text{if }x\leq\underline{x}\\ \varphi(x)&\text{if }\underline{x}<x<\overline{x}\\ u|x-\overline{x}|+\varphi(\overline{x}-)&\text{if }x\geq\overline{x}\end{cases}.\] Let \((L^{*},U^{*})\) be the control band policy associated with \((\underline{x},\overline{x})\). The proof proceeds in several steps. First we prove that \(J^{L^{*},U^{*}}=\varphi\).
Then we show that for any other feasible policy \((L,U)\) it holds that \(J^{L,U}\geq J^{L^{*},U^{*}}\), so that \(J^{*}=J^{L^{*},U^{*}}\). Note that \[\theta^{\varphi}(x)=\arg\min_{\theta\in[-\kappa,+\kappa]}\bigl{(}\theta \sigma(x)\text{sign}(\varphi^{\prime}(x))\bigr{)}, \tag{12}\] so that the worst-case prior is generated by \[\theta_{t}^{*}(\omega)=\theta^{\varphi}\big{(}X_{t}(\omega)\big{)}. \tag{13}\] **1.** Fix \(T>0\), \(x\in E\), \(\theta\in\Theta\) and set \(\theta_{t}^{\varphi}\triangleq\theta^{\varphi}(X_{t}^{L^{*},U^{*}})\). From Ito's lemma it then follows that \[\mathsf{E}_{x}^{\theta}\left[e^{-\rho T}\varphi\big{(}X_{T}^{L^{* },U^{*}}\big{)}\right]= \varphi(x)+\mathsf{E}_{x}^{\theta}\left[\int_{0}^{T}e^{-\rho s} \varphi^{\prime}\big{(}X_{s}^{L^{*},U^{*}}\big{)}\mathrm{d}\left(L_{s}^{*}+U_{ s}^{*}\right)\right]\] \[+\mathsf{E}_{x}^{\theta}\left[\int_{0}^{T}e^{-\rho s}\left\{ \mathscr{L}\varphi\big{(}X_{s}^{L^{*},U^{*}}\big{)}-\theta_{s}\sigma\big{(}X_ {s}^{L^{*},U^{*}}\big{)}\varphi^{\prime}\big{(}X_{s}^{L^{*},U^{*}}\big{)} \right\}\mathrm{d}s\right]\] \[\leq \varphi(x)+\mathsf{E}_{x}^{\theta}\left[\int_{0}^{T}e^{-\rho s} \varphi^{\prime}\big{(}X_{s}^{L^{*},U^{*}}\big{)}\mathrm{d}\left(L_{s}^{*}+U_{ s}^{*}\right)\right]\] \[+\mathsf{E}_{x}^{\theta}\left[\int_{0}^{T}e^{-\rho s}\left\{ \mathscr{L}\varphi\big{(}X_{s}^{L^{*},U^{*}}\big{)}-\theta_{s}^{\varphi}\sigma \big{(}X_{s}^{L^{*},U^{*}}\big{)}\varphi^{\prime}\big{(}X_{s}^{L^{*},U^{*}} \big{)}\right\}\mathrm{d}s\right]\] \[= \varphi(x)-\mathsf{E}_{x}^{\theta}\left[\int_{0}^{T}e^{-\rho s} c\left(X_{s}^{L^{*},U^{*}}\right)\mathrm{d}s+\int_{0}^{T}e^{-\rho s}\left(u \mathrm{d}U_{s}^{*}+\ell\mathrm{d}L_{s}^{*}\right)\right],\] where the inequality follows from (12) and (13), and the final equality follows from conditions 1 and 2. Sending \(T\to\infty\) and exploiting the non-negativity of \(\varphi\), \(c\), \(u\), and \(\ell\), we find that \[\varphi(x)\geq\mathsf{E}_{x}^{\theta}\left[\int_{0}^{\infty}e^{-\rho s}c \left(X_{s}^{L^{*},U^{*}}\right)\mathrm{d}s+\int_{0}^{\infty}e^{-\rho s}\left( u\mathrm{d}U_{s}^{*}+\ell\mathrm{d}L_{s}^{*}\right)\right].\] Since \(\theta\in\Theta\) was chosen arbitrarily, this implies that \[\varphi(x)\geq\inf_{(L,U)\in\mathscr{D}}\sup_{\theta\in\Theta}\mathsf{E}_{x}^ {\theta}\left[\int_{0}^{\infty}e^{-\rho s}c\left(X_{s}^{L,U}\right)\mathrm{d}s +\int_{0}^{\infty}e^{-\rho s}\left(u\mathrm{d}U_{s}+\ell\mathrm{d}L_{s}\right) \right].\] **2.** Next, note that Conditions 4 and 5 ensure that \[\theta^{\varphi}(\underline{x})\sigma(x)\varphi^{\prime}(x)-\mathscr{L} \varphi(x)\leq c(x),\quad\text{on }E. \tag{14}\] On \((\underline{x},\overline{x})\) this holds by construction. 
To see that it holds for \(x\leq\underline{x}\), note that condition 4 implies that \[c(x)\geq \rho\ell|\underline{x}-x|+c(\underline{x})-\ell\big{\{}\big{(} \alpha(x)-\alpha(\underline{x})\big{)}-\kappa\big{(}\sigma(x)-\sigma(\underline {x})\big{)}\big{\}}\] \[= \rho\ell|\underline{x}-x|+\ell\theta^{\varphi}(\underline{x}) \sigma(\underline{x})-\mathscr{L}\varphi(\underline{x})-\ell\big{\{}\big{(} \alpha(x)-\alpha(\underline{x})\big{)}-\kappa\big{(}\sigma(x)-\sigma( \underline{x})\big{)}\big{\}}\] \[= \rho\ell|\underline{x}-x|+\ell\big{(}\theta^{\varphi}(\underline{x })\sigma(x)-\alpha(x)\big{)}+\rho\varphi(\underline{x})\] \[= \ell\theta^{\varphi}(\underline{x})\sigma(x)+\rho\varphi(x)-\ell \alpha(x)\] \[= \theta^{\varphi}(\underline{x})\sigma(x)\varphi^{\prime}(x)- \mathscr{L}\varphi(x).\] Similarly, Condition 5 ensures that the results holds for \(x\geq\overline{x}\). Then, from convexity it follows that \[-\ell\leq\varphi^{\prime}(x)\leq u,\quad\text{on }E. \tag{15}\] **3.** Let \((\bar{L},\bar{U})\) be a feasible control policy. Fix \(T>0\). An application of Ito's lemma now gives that \[\mathsf{E}_{x}^{\theta^{\varphi}}\Big{[}\int_{0}^{T}e^{-\rho s} \Big{\{}c(X_{s}^{\bar{L},\bar{U}})\mathrm{d}s+u\mathrm{d}\bar{U}_{s}+\ell \mathrm{d}\bar{L}_{s}\Big{\}}\Big{]}\] \[\qquad-\int_{0}^{T}e^{-\rho s}\varphi^{\prime}\big{(}X_{s}^{\bar{ L},\bar{U}}\big{)}\big{(}\mathrm{d}\bar{L}_{s}-\mathrm{d}\bar{U}_{s}\big{)}\Big{]}\] \[= \varphi(x)-\mathsf{E}_{x}^{\theta^{\varphi}}\left[e^{-\rho T} \varphi\big{(}X_{s}^{\bar{L},\bar{U}}\big{)}\right].\] Therefore, \[\varphi(x)\leq\mathsf{E}_{x}^{\theta^{\varphi}}\Big{[}\int_{0}^{ T}e^{-\rho s}\Big{\{}c(X_{s}^{\bar{L},\bar{U}})\mathrm{d}s+u\mathrm{d}\bar{U}_{s}+ \ell\mathrm{d}\bar{L}_{s}\Big{\}}\Big{]}\] \[\qquad+\mathsf{E}_{x}^{\theta^{\varphi}}\left[e^{-\rho T} \varphi\big{(}X_{s}^{\bar{L},\bar{U}}\big{)}\right].\] Sending \(T\to\infty\) and by exploiting Condition 6, the monotone convergence theorem gives \[\varphi(x)\leq\mathsf{E}_{x}^{\theta^{\varphi}}\Big{[}\int_{0}^{ \infty}e^{-\rho s}\Big{\{}c(X_{s}^{\bar{L},\bar{U}})\mathrm{d}s+u\mathrm{d} \bar{U}_{s}+\ell\mathrm{d}\bar{L}_{s}\Big{\}}\Big{]}.\] By arbitrariness of \((\bar{L},\bar{U})\), it then follows that \[\varphi(x)\leq\sup_{\theta\in\Theta}\inf_{(L,U)\in\mathscr{D}} \mathsf{E}_{x}^{\theta}\Big{[}\int_{0}^{\infty}e^{-\rho s}\Big{\{}c(X_{s}^{L, U})\mathrm{d}s+u\mathrm{d}U_{s}+\ell\mathrm{d}L_{s}\Big{\}}\Big{]}.\] **4.** Combining the results from Steps 1 and 3 gives that \[\varphi(x) =\inf_{L,U\mathscr{D}}\sup_{\theta\in\Theta}\mathsf{E}_{x}^{ \theta}\left[\int_{0}^{\infty}e^{-\rho s}\Big{\{}c(X_{s}^{L,U})\mathrm{d}s+u \mathrm{d}U_{s}+\ell\mathrm{d}L_{s}\Big{\}}\right]\] \[=\] and that \((\theta^{\varphi},(L^{*},U^{*}))\) realise a saddle-point. ## 4 Affine Perpetual Holding Costs Under some additional assumptions, it is often possible to write down conditions that are easier to check under which a solution to the problem can be found. In order to pursue this program, we first derive an expression for the perpetual holding costs of the _uncontrolled_ process. First we let \(\hat{\varphi}_{\pm\kappa}\) and \(\check{\varphi}_{\pm\kappa}\) denote the increasing and decreasing solutions to the ordinary differential equation (ODE) \[\mathscr{L}\varphi(x)-\theta^{\pm\kappa}(x)\sigma(x)\varphi^{\prime}(x)=0, \quad\text{on }E,\] respectively. Here \(\theta^{\pm\kappa}\) is the density generator \(\theta^{\pm\kappa}(x)=\pm\kappa\), all \(x\in E\). 
The measure generated by \(\theta^{\pm\kappa}\) is denoted by \(\mathsf{P}^{\pm\kappa}\). We normalize \(\hat{\varphi}_{\pm\kappa}(0)=\check{\varphi}_{\pm\kappa}(0)=1\), and denote \[f_{\pm\kappa}(x)\triangleq\mathsf{E}_{x}^{\theta^{\pm\kappa}}\left[\int_{0}^{\infty}e^{-\rho t}X_{t}^{0}\mathrm{d}t\right],\] where we assume that \(f_{\pm\kappa}\) is affine in \(x\). We summarize our assumptions on \(X^{0}\) for future reference. **Assumption 1**.: The process \(X^{0}\) is such that 1. the present value of its expected evolution is affine in its current state, i.e. \[f_{\pm\kappa}(x)=\mathsf{E}_{x}^{\theta^{\pm\kappa}}\left[\int_{0}^{\infty}e^{-\rho t}X_{t}^{0}\mathrm{d}t\right]=ax+b,\quad(a\neq 0), \tag{16}\] 2. the increasing and decreasing solutions, \(\hat{\varphi}\) and \(\check{\varphi}\), to \(\mathscr{L}\varphi(x)-\theta^{\pm\kappa}(x)\sigma(x)\varphi^{\prime}(x)=0\) are convex (see Alvarez, 2003 for sufficient conditions) and such that \[\hat{\varphi}(0)=\check{\varphi}(0)=1, \tag{17}\] and 3. the holding costs \(\check{c}\) and \(\hat{c}\) are such that \[\ell\leq\check{c}f_{-\kappa}^{\prime}(x),\quad\text{and}\quad u\leq\hat{c}f_{+\kappa}^{\prime}(x). \tag{18}\] For example, if \(X^{0}\) follows an ABM, then \[f_{\pm\kappa}(x)=\frac{x}{\rho}+\frac{\alpha\pm\kappa\sigma}{\rho^{2}}.\] Moreover, \[\hat{\varphi}_{\pm\kappa}(x)=e^{\beta_{\pm\kappa}x},\quad\text{and}\quad\check{\varphi}_{\pm\kappa}(x)=e^{\gamma_{\pm\kappa}x},\] where \(\beta_{\pm\kappa}>0\) and \(\gamma_{\pm\kappa}<0\) are the roots of the quadratic equation \[\mathscr{Q}_{\pm\kappa}(\chi)\equiv\frac{1}{2}\sigma^{2}\chi^{2}+(\alpha\pm\kappa\sigma)\chi-\rho=0.\] Condition (18) now reduces to \[\ell\leq\check{c}/\rho,\quad\text{and}\quad u\leq\hat{c}/\rho.\] If, on the other hand, \(X^{0}\) follows the mean-reverting process \[\mathrm{d}X_{t}^{0}=-\eta X_{t}^{0}\mathrm{d}t+\sigma\mathrm{d}B_{t},\quad(\eta>0),\] under \(\mathsf{P}\), then under \(\mathsf{P}^{\pm\kappa}\) it holds that \[\mathrm{d}X_{t}^{0}=(-\eta X_{t}^{0}\pm\kappa\sigma)\mathrm{d}t+\sigma\mathrm{d}B_{t}^{\pm\kappa},\] where \(B^{\pm\kappa}\) is a \(\mathsf{P}^{\pm\kappa}\)-Brownian motion. This process can be seen as an Ornstein-Uhlenbeck process with long-run mean \(\tilde{x}_{\pm\kappa}\), i.e. \[\mathrm{d}X_{t}^{0}=\eta(\tilde{x}_{\pm\kappa}-X_{t}^{0})\mathrm{d}t+\sigma\mathrm{d}B_{t}^{\pm\kappa},\quad\text{where}\quad\tilde{x}_{\pm\kappa}=\pm\kappa\sigma/\eta.\] Therefore, \[f_{\pm\kappa}(x)=\frac{x-\tilde{x}_{\pm\kappa}}{\rho+\eta}+\frac{\tilde{x}_{\pm\kappa}}{\rho}.\] Condition (18) now reduces to \[\ell\leq\check{c}/(\rho+\eta),\quad\text{and}\quad u\leq\hat{c}/(\rho+\eta).\] The perpetual holding costs of the uncontrolled process can be found using the Feynman-Kac formula in the standard way: \[R_{\pm\kappa}(x)\triangleq\mathsf{E}_{x}^{\theta^{\pm\kappa}}\left[\int_{0}^{\infty}e^{-\rho t}c(X_{t}^{0})\mathrm{d}t\right]=\begin{cases}-\check{c}f_{\pm\kappa}(x)+\hat{E}_{\pm\kappa}\hat{\varphi}_{\pm\kappa}(x)&\text{if }x<0\\ +\hat{c}f_{\pm\kappa}(x)+\check{E}_{\pm\kappa}\check{\varphi}_{\pm\kappa}(x)&\text{if }x\geq 0.\end{cases}\] Here, \(\hat{E}_{\pm\kappa}\) and \(\check{E}_{\pm\kappa}\) are constants that are determined by "value-matching" and "smooth-pasting" conditions at 0, i.e., \[R_{\pm\kappa}(0-)=R_{\pm\kappa}(0+),\quad\text{and}\quad R_{\pm\kappa}^{\prime}(0-)=R_{\pm\kappa}^{\prime}(0+),\] respectively.
This gives \[\hat{E}_{\pm\kappa}=(\hat{c}+\check{c})\frac{f_{\pm\kappa}^{\prime}(0)-f_{\pm\kappa}(0)\check{\varphi}_{\pm\kappa}^{\prime}(0)}{\hat{\varphi}_{\pm\kappa}^{\prime}(0)-\check{\varphi}_{\pm\kappa}^{\prime}(0)},\quad\text{and}\quad\check{E}_{\pm\kappa}=(\hat{c}+\check{c})\frac{f_{\pm\kappa}^{\prime}(0)-f_{\pm\kappa}(0)\hat{\varphi}_{\pm\kappa}^{\prime}(0)}{\hat{\varphi}_{\pm\kappa}^{\prime}(0)-\check{\varphi}_{\pm\kappa}^{\prime}(0)},\] so that \[R_{\pm\kappa}(x)=\begin{cases}-\check{c}f_{\pm\kappa}(x)+(\hat{c}+\check{c})\frac{f_{\pm\kappa}^{\prime}(0)-f_{\pm\kappa}(0)\check{\varphi}_{\pm\kappa}^{\prime}(0)}{\hat{\varphi}_{\pm\kappa}^{\prime}(0)-\check{\varphi}_{\pm\kappa}^{\prime}(0)}\hat{\varphi}_{\pm\kappa}(x)&\text{if }x<0\\ +\hat{c}f_{\pm\kappa}(x)+(\hat{c}+\check{c})\frac{f_{\pm\kappa}^{\prime}(0)-f_{\pm\kappa}(0)\hat{\varphi}_{\pm\kappa}^{\prime}(0)}{\hat{\varphi}_{\pm\kappa}^{\prime}(0)-\check{\varphi}_{\pm\kappa}^{\prime}(0)}\check{\varphi}_{\pm\kappa}(x)&\text{if }x\geq 0.\end{cases}\] Without ambiguity (\(\kappa=0\)), in order to construct the function \(\varphi\) of Theorem 1, one would now find constants \(A\) and \(B\), and control barriers \(\underline{x}\) and \(\overline{x}\) such that the following value-matching and smooth-pasting conditions hold: \[R^{\prime}_{0}(\underline{x})+A\hat{\varphi}^{\prime}_{0}(\underline{x})+B\check{\varphi}^{\prime}_{0}(\underline{x})=-\ell\] \[R^{\prime}_{0}(\overline{x})+A\hat{\varphi}^{\prime}_{0}(\overline{x})+B\check{\varphi}^{\prime}_{0}(\overline{x})=u\] \[R^{\prime\prime}_{0}(\underline{x})+A\hat{\varphi}^{\prime\prime}_{0}(\underline{x})+B\check{\varphi}^{\prime\prime}_{0}(\underline{x})=0\] \[R^{\prime\prime}_{0}(\overline{x})+A\hat{\varphi}^{\prime\prime}_{0}(\overline{x})+B\check{\varphi}^{\prime\prime}_{0}(\overline{x})=0.\] One then proceeds by showing that the resulting function, \[\varphi(x)=R_{0}(x)+A\hat{\varphi}_{0}(x)+B\check{\varphi}_{0}(x),\quad\text{on }(\underline{x},\overline{x}),\] and the constants \(\underline{x}<0<\overline{x}\) satisfy the conditions of the verification Theorem 1. Under ambiguity (\(\kappa>0\)) matters are a bit more complicated. Intuitively speaking, the main issue is that the "worst-case drift" is different at \(\underline{x}\) and \(\overline{x}\). In particular, at the lower control bound the worst case drift is \(\alpha(\underline{x})-\kappa\sigma(\underline{x})\), because the worst that can happen is that the cash hoard depletes even more and, thus, increases the control costs. Similarly, at the upper control bound the worst case drift is \(\alpha(\overline{x})+\kappa\sigma(\overline{x})\), because the worst that can happen is that the cash hoard increases even more and, thus, increases the control costs. So, at \(\underline{x}\) and \(\overline{x}\) we need to work with functions \(R\), \(\hat{\varphi}\), and \(\check{\varphi}\) under \(-\kappa\) and \(+\kappa\), respectively.
That is, we will look for constants \(A\), \(B\), \(C\), and \(D\), as well as control bounds \(\underline{x}\) and \(\overline{x}\) such that the following value-matching and smooth-pasting conditions hold: \[R^{\prime}_{-\kappa}(\underline{x})+A\hat{\varphi}^{\prime}_{-\kappa}(\underline{x})+B\check{\varphi}^{\prime}_{-\kappa}(\underline{x})=-\ell \tag{19}\] \[R^{\prime\prime}_{-\kappa}(\underline{x})+A\hat{\varphi}^{\prime\prime}_{-\kappa}(\underline{x})+B\check{\varphi}^{\prime\prime}_{-\kappa}(\underline{x})=0 \tag{20}\] \[R^{\prime}_{+\kappa}(\overline{x})+C\hat{\varphi}^{\prime}_{+\kappa}(\overline{x})+D\check{\varphi}^{\prime}_{+\kappa}(\overline{x})=u \tag{21}\] \[R^{\prime\prime}_{+\kappa}(\overline{x})+C\hat{\varphi}^{\prime\prime}_{+\kappa}(\overline{x})+D\check{\varphi}^{\prime\prime}_{+\kappa}(\overline{x})=0. \tag{22}\] Now, of course, we have too few equations to determine all the constants. The "missing" constraints come from the fact that there is a point \(x^{*}\) where the worst-case drift changes. This is the point where the firm's cost function changes from being decreasing to increasing. At this point we also impose a value-matching and smooth-pasting condition, i.e., we find \(x^{*}\in(\underline{x},\overline{x})\) such that \[R^{\prime}_{-\kappa}(x^{*}-)+A\hat{\varphi}^{\prime}_{-\kappa}(x^{*}-)+B\check{\varphi}^{\prime}_{-\kappa}(x^{*}-)=0 \tag{23}\] \[R^{\prime}_{+\kappa}(x^{*}+)+C\hat{\varphi}^{\prime}_{+\kappa}(x^{*}+)+D\check{\varphi}^{\prime}_{+\kappa}(x^{*}+)=0 \tag{24}\] \[R^{\prime\prime}_{-\kappa}(x^{*}-)+A\hat{\varphi}^{\prime\prime}_{-\kappa}(x^{*}-)+B\check{\varphi}^{\prime\prime}_{-\kappa}(x^{*}-)=R^{\prime\prime}_{+\kappa}(x^{*}+)+C\hat{\varphi}^{\prime\prime}_{+\kappa}(x^{*}+)+D\check{\varphi}^{\prime\prime}_{+\kappa}(x^{*}+). \tag{25}\] We show below that if this system of 7 equations in 7 unknowns has a solution, then a function \(\varphi\) can be constructed on \((\underline{x},\overline{x})\) so that the conditions of verification Theorem 1 are satisfied. A similar approach has also been used by Cheng and Riedel (2013) to price a straddle option under ambiguity and by Hellmann and Thijssen (2018) to analyse preemptive investment behavior in a duopoly under ambiguity. **Theorem 2**.: _Suppose that the system of equations (19)-(25) admits a solution \((A,B,C,D,\underline{x},\overline{x},x^{*})\) with \(\underline{x}<x^{*}<\overline{x}\). Then the optimal policy \((L^{*},U^{*})\) is the control band policy associated with \((\underline{x},\overline{x})\) and the firm's cost function is_ \[J^{*}(x)=\begin{cases}\ell|\underline{x}-x|+R_{-\kappa}(\underline{x}+)+A\hat{\varphi}_{-\kappa}(\underline{x}+)+B\check{\varphi}_{-\kappa}(\underline{x}+)&\text{ if }x\leq\underline{x}\\ R_{-\kappa}(x)+A\hat{\varphi}_{-\kappa}(x)+B\check{\varphi}_{-\kappa}(x)&\text{ if }\underline{x}<x<x^{*}\\ R_{+\kappa}(x)+C\hat{\varphi}_{+\kappa}(x)+D\check{\varphi}_{+\kappa}(x)&\text{ if }x^{*}\leq x<\overline{x}\\ u|x-\overline{x}|+R_{+\kappa}(\overline{x}-)+C\hat{\varphi}_{+\kappa}(\overline{x}-)+D\check{\varphi}_{+\kappa}(\overline{x}-)&\text{ if }x\geq\overline{x}.\end{cases} \tag{26}\] **Proof.** First note that the constants \(A\) and \(B\) depend on \(\underline{x}\), whereas the constants \(C\) and \(D\) depend on \(\overline{x}\). In what follows we will make this dependence explicit by writing, say, the constant \(A\), as a mapping \(\underline{x}\mapsto A(\underline{x})\).
In fact, for given \(\underline{x}\) and \(\overline{x}\), the systems of linear equations \[\begin{bmatrix}\hat{\varphi}^{\prime}_{-\kappa}(\underline{x}+)&\check{\varphi}^{\prime}_{-\kappa}(\underline{x}+)\\ \hat{\varphi}^{\prime\prime}_{-\kappa}(\underline{x}+)&\check{\varphi}^{\prime\prime}_{-\kappa}(\underline{x}+)\end{bmatrix}\begin{bmatrix}A(\underline{x})\\ B(\underline{x})\end{bmatrix}=\begin{bmatrix}-\ell-R^{\prime}_{-\kappa}(\underline{x})\\ -R^{\prime\prime}_{-\kappa}(\underline{x})\end{bmatrix}, \tag{27}\] and \[\begin{bmatrix}\hat{\varphi}^{\prime}_{+\kappa}(\overline{x}-)&\check{\varphi}^{\prime}_{+\kappa}(\overline{x}-)\\ \hat{\varphi}^{\prime\prime}_{+\kappa}(\overline{x}-)&\check{\varphi}^{\prime\prime}_{+\kappa}(\overline{x}-)\end{bmatrix}\begin{bmatrix}C(\overline{x})\\ D(\overline{x})\end{bmatrix}=\begin{bmatrix}u-R^{\prime}_{+\kappa}(\overline{x})\\ -R^{\prime\prime}_{+\kappa}(\overline{x})\end{bmatrix}, \tag{28}\] have unique solutions: \[A(\underline{x})=\frac{\check{\varphi}^{\prime\prime}_{-\kappa}(\underline{x}+)\big{(}-\ell-R^{\prime}_{-\kappa}(\underline{x})\big{)}+\check{\varphi}^{\prime}_{-\kappa}(\underline{x}+)R^{\prime\prime}_{-\kappa}(\underline{x})}{\check{\varphi}^{\prime\prime}_{-\kappa}(\underline{x}+)\hat{\varphi}^{\prime}_{-\kappa}(\underline{x}+)-\check{\varphi}^{\prime}_{-\kappa}(\underline{x}+)\hat{\varphi}^{\prime\prime}_{-\kappa}(\underline{x}+)}, \tag{29}\] \[B(\underline{x})=\frac{-\hat{\varphi}^{\prime\prime}_{-\kappa}(\underline{x}+)\big{(}-\ell-R^{\prime}_{-\kappa}(\underline{x})\big{)}-\hat{\varphi}^{\prime}_{-\kappa}(\underline{x}+)R^{\prime\prime}_{-\kappa}(\underline{x})}{\check{\varphi}^{\prime\prime}_{-\kappa}(\underline{x}+)\hat{\varphi}^{\prime}_{-\kappa}(\underline{x}+)-\check{\varphi}^{\prime}_{-\kappa}(\underline{x}+)\hat{\varphi}^{\prime\prime}_{-\kappa}(\underline{x}+)}, \tag{30}\] \[C(\overline{x})=\frac{\check{\varphi}^{\prime\prime}_{+\kappa}(\overline{x}-)\big{(}u-R^{\prime}_{+\kappa}(\overline{x})\big{)}+\check{\varphi}^{\prime}_{+\kappa}(\overline{x}-)R^{\prime\prime}_{+\kappa}(\overline{x})}{\check{\varphi}^{\prime\prime}_{+\kappa}(\overline{x}-)\hat{\varphi}^{\prime}_{+\kappa}(\overline{x}-)-\check{\varphi}^{\prime}_{+\kappa}(\overline{x}-)\hat{\varphi}^{\prime\prime}_{+\kappa}(\overline{x}-)}, \tag{31}\] \[D(\overline{x})=\frac{-\hat{\varphi}^{\prime\prime}_{+\kappa}(\overline{x}-)\big{(}u-R^{\prime}_{+\kappa}(\overline{x})\big{)}-\hat{\varphi}^{\prime}_{+\kappa}(\overline{x}-)R^{\prime\prime}_{+\kappa}(\overline{x})}{\check{\varphi}^{\prime\prime}_{+\kappa}(\overline{x}-)\hat{\varphi}^{\prime}_{+\kappa}(\overline{x}-)-\check{\varphi}^{\prime}_{+\kappa}(\overline{x}-)\hat{\varphi}^{\prime\prime}_{+\kappa}(\overline{x}-)}. \tag{32}\] Define the function \(\varphi\) on \((\underline{x},\overline{x})\) as follows: \[\varphi(x)=\begin{cases}R_{-\kappa}(x)+A(\underline{x})\hat{\varphi}_{-\kappa}(x)+B(\underline{x})\check{\varphi}_{-\kappa}(x)&\text{if }\underline{x}<x<x^{*}\\ R_{+\kappa}(x)+C(\overline{x})\hat{\varphi}_{+\kappa}(x)+D(\overline{x})\check{\varphi}_{+\kappa}(x)&\text{if }x^{*}\leq x<\overline{x}.\end{cases} \tag{33}\] We establish that \(\varphi\) is convex on \((\underline{x},\overline{x})\). First, suppose that \(x\in(\underline{x},x^{*})\) and that \(x^{*}<0\).
It then holds that \[\varphi^{\prime\prime}(x) =R^{\prime\prime}_{-\kappa}(x)+A(\underline{x})\hat{\varphi}^{ \prime\prime}_{-\kappa}(x)+B(\underline{x})\hat{\varphi}^{\prime\prime}_{- \kappa}(x)\] \[=R^{\prime\prime}_{-\kappa}(x)+\frac{\hat{\varphi}^{\prime\prime}_ {-\kappa}(\underline{x}+)(-\ell-R^{\prime}_{-\kappa}(\underline{x}))+\hat{ \varphi}^{\prime}_{-\kappa}(\underline{x}+)R^{\prime\prime}_{-\kappa}( \underline{x})}{\hat{\varphi}^{\prime\prime}_{-\kappa}(\underline{x}+)\hat{ \varphi}^{\prime}_{-\kappa}(\underline{x}+)-\hat{\varphi}^{\prime}_{-\kappa}( \underline{x}+)\hat{\varphi}^{\prime\prime}_{-\kappa}(\underline{x}+)}\hat{ \varphi}^{\prime\prime}_{-\kappa}(x)\] \[\qquad+\frac{-\hat{\varphi}^{\prime\prime}_{-\kappa}(\underline{ x}+)(-\ell-R^{\prime}_{-\kappa}(\underline{x}))-\hat{\varphi}^{\prime}_{-\kappa}( \underline{x}+)R^{\prime\prime}_{-\kappa}(\underline{x})}{\hat{\varphi}^{ \prime\prime}_{-\kappa}(\underline{x}+)\hat{\varphi}^{\prime}_{-\kappa}( \underline{x}+)-\hat{\varphi}^{\prime}_{-\kappa}(\underline{x}+)\hat{ \varphi}^{\prime\prime}_{-\kappa}(x)}\hat{\varphi}^{\prime\prime}_{-\kappa}(x)\] \[=\tilde{c}f^{\prime\prime}_{-\kappa}(x)+\hat{E}_{-\kappa}\hat{ \varphi}^{\prime\prime}_{-\kappa}(x)\] \[\qquad+\frac{\hat{\varphi}^{\prime\prime}_{-\kappa}(\underline{x} +)(-\ell-\tilde{c}f^{\prime}_{-\kappa}(x)(\underline{x})-\hat{E}_{-\kappa} \hat{\varphi}^{\prime}_{-\kappa}(\underline{x}+))+\hat{\varphi}^{\prime}_{- \kappa}(\underline{x}+)(\tilde{c}f^{\prime\prime}_{-\kappa}(x)(\underline{x} )+\hat{E}_{-\kappa}\hat{\varphi}^{\prime\prime}_{-\kappa}(\underline{x}+))} \hat{\varphi}^{\prime\prime}_{-\kappa}(x)\] \[\qquad-\frac{\hat{\varphi}^{\prime\prime}_{-\kappa}(\underline{ x}+)(-\ell-f^{\prime}_{-\kappa}(\underline{x})-\hat{E}_{-\kappa}\hat{\varphi}^{ \prime}_{-\kappa}(\underline{x}+))+\hat{\varphi}^{\prime}_{-\kappa}( \underline{x}+)(\tilde{c}f^{\prime\prime}_{-\kappa}(x)(\underline{x})+\hat{ E}_{-\kappa}\hat{\varphi}^{\prime\prime}_{-\kappa}(\underline{x}+))}{\hat{\varphi}^{ \prime\prime}_{-\kappa}(\underline{x}+)\hat{\varphi}^{\prime}_{-\kappa}( \underline{x}+)-\hat{\varphi}^{\prime}_{-\kappa}(\underline{x}+)}\hat{ \varphi}^{\prime\prime}_{-\kappa}(x)\] \[=\left[\frac{\check{\varphi}^{\prime\prime}_{-\kappa}(\underline{ x}+)(-\ell-\tilde{c}f^{\prime}_{-\kappa}(x)(\underline{x}))}{\check{\varphi}^{ \prime\prime}_{-\kappa}(\underline{x}+)\hat{\varphi}^{\prime}_{-\kappa}( \underline{x}+)-\hat{\varphi}^{\prime}_{-\kappa}(\underline{x}+)\hat{ \varphi}^{\prime\prime}_{-\kappa}(\underline{x}+)}\right]\hat{\varphi}^{ \prime\prime}_{-\kappa}(x)\] \[\qquad-\left[\frac{\hat{\varphi}^{\prime\prime}_{-\kappa}( \underline{x}+)(-\ell-\tilde{c}f^{\prime}_{-\kappa}(x)(\underline{x}))}{\hat{ \varphi}^{\prime\prime}_{-\kappa}(\underline{x}+)\hat{\varphi}^{\prime}_{- \kappa}(\underline{x}+)-\hat{\varphi}^{\prime}_{-\kappa}(\underline{x}+)\hat{ \varphi}^{\prime\prime}_{-\kappa}(\underline{x}+)}\right]\hat{\varphi}^{ \prime\prime}_{-\kappa}(x)\] \[=\left[\frac{(-\ell-\tilde{c}f^{\prime}_{-\kappa}(x)^{\prime}( \underline{x}))}{\check{\varphi}^{\prime\prime}_{-\kappa}(\underline{x}+) \hat{\varphi}^{\prime}_{-\kappa}(\underline{x}+)-\hat{\varphi}^{\prime}_{- \kappa}(\underline{x}+)\hat{\varphi}^{\prime\prime}_{-\kappa}(\underline{x}+)} \right](\check{\varphi}^{\prime\prime}_{-\kappa}(\underline{x}+)\hat{ \varphi}^{\prime\prime}_{-\kappa}(x)-\hat{\varphi}^{\prime\prime}_{-\kappa}( \underline{x}+)\check{\varphi}^{\prime\prime}_{-\kappa}(x))\] \[\geq 0.\] Here, the last inequality holds due to (i) 
assumption (18), (ii) the fact that \(\check{\varphi}^{\prime\prime}_{-\kappa}(\underline{x}+)\hat{\varphi}^{\prime}_ {-\kappa}(\underline{x}+)-\check{\varphi}^{\prime}_{-\kappa}(\underline{x}+) \hat{\varphi}^{\prime\prime}_{-\kappa}(\underline{x}+)>0\), and (iii) non-negativity of the term \(\check{\varphi}^{\prime\prime}_{-\kappa}(\underline{x}+)\hat{\varphi}^{ \prime\prime}_{-\kappa}(x)-\hat{\varphi}^{\prime\prime}_{-\kappa}(\underline{x}+) \check{\varphi}^{\prime\prime}_{-\kappa}(x)\) on \((\underline{x},x^{*})\). This last part can be seen as follows; the term is zero at \(\underline{x}+\), and is increasing in \(x\) on \((\underline{x},x^{*})\). When \(x^{*}\geq 0\), the same result holds on \((\underline{x},0)\), so in that case we only need to prove that \(\varphi\) is convex on \([0,x^{*})\). Unlike the previous proof that the sign of \(A(\underline{x})\) does not affect the convexity of \(\varphi\), it does matter in this case. Since \(B(\underline{x})\) is always non-positive, we thus separate the proof into two cases: \(A(\underline{x})\geq 0\) and \(A(\underline{x})<0\). In what follows, we use the fact that on \([0,x^{*})\) it holds that \(R_{-\kappa}(x)=\hat{c}f_{-\kappa}(x)+\tilde{E}\check{\varphi}_{-\kappa}(x)\). **Case 1**: when \(A(\underline{x})\geq 0\), it holds that \[\varphi^{\prime\prime}(x) =R^{\prime\prime}_{-\kappa}(x)+A(\underline{x})\hat{\varphi}^{ \prime\prime}_{-\kappa}(x)+B(\underline{x})\tilde{\varphi}^{\prime\prime}_{- \kappa}(x)\] \[=\hat{c}f^{\prime\prime}_{-\kappa}(x)+\tilde{E}_{-\kappa}\tilde{ \varphi}^{\prime\prime}_{-\kappa}(x)+A(\underline{x})\hat{\varphi}^{\prime \prime}_{-\kappa}(0)\frac{\hat{\varphi}^{\prime\prime}_{-\kappa}(x)}{\hat{ \varphi}^{\prime\prime}_{-\kappa}(0)}+B(\underline{x})\tilde{\varphi}^{\prime \prime}_{-\kappa}(0)\frac{\hat{\varphi}^{\prime\prime}_{-\kappa}(x)}{\hat{ \varphi}^{\prime\prime}_{-\kappa}(0)}\] \[\geq\tilde{E}_{-\kappa}\tilde{\varphi}^{\prime\prime}_{-\kappa}(0 )\frac{\hat{\varphi}^{\prime\prime}_{-\kappa}(x)}{\hat{\varphi}^{\prime \prime}_{-\kappa}(0)}+A(\underline{x})\hat{\varphi}^{\prime\prime}_{-\kappa}(0 )\frac{\hat{\varphi}^{\prime\prime}_{-\kappa}(x)}{\hat{\varphi}^{\prime\prime }_{-\kappa}(0)}+B(\underline{x})\tilde{\varphi}^{\prime\prime}_{-\kappa}(0) \frac{\hat{\varphi}^{\prime\prime}_{-\kappa}(x)}{\hat{\varphi}^{\prime\prime }_{-\kappa}(0)}\] \[=\hat{E}_{-\kappa}\hat{\varphi}^{\prime\prime}_{-\kappa}(0)\frac{ \hat{\varphi}^{\prime\prime}_{-\kappa}(x)}{\hat{\varphi}^{\prime\prime}_{- \kappa}(0)}\] \[\qquad+\frac{\hat{\varphi}^{\prime\prime}_{-\kappa}(\underline{x} )(\ell-\tilde{c}f^{\prime}_{-\kappa}(\underline{x})-\hat{E}_{-\kappa}\hat{ \varphi}^{\prime}_{-\kappa}(\underline{x}+))+\hat{\varphi}^{\prime}_{-\kappa}( \underline{x}+)\hat{E}_{-\kappa}\hat{\varphi}^{\prime\prime}_{-\kappa}( \underline{x}+)}{\hat{\varphi}^{\prime\prime}_{-\kappa}(\underline{x}+)\hat{ \varphi}^{\prime\prime}_{-\kappa}(\underline{x}+)}\hat{\varphi}^{\prime\prime }_{-\kappa}(0)\frac{\hat{\varphi}^{\prime\prime}_{-\kappa}(x)}{\hat{\varphi}^ {\prime\prime}_{-\kappa}(0)}\] \[\qquad-\frac{\hat{\varphi}^{\prime\prime}_{-\kappa}(\underline{x} )(\ell-\tilde{c}f^{\prime}_{-\kappa}(\underline{x})-\hat{E}_{-\kappa}\hat{ \varphi}^{\prime}_{-\kappa}(\underline{x}+))+\hat{\varphi}^{\prime}_{-\kappa}( \underline{x}+)\hat{E}_{-\kappa}\hat{\varphi}^{\prime\prime}_{-\kappa}( \underline{x}+)}{\hat{\varphi}^{\prime\prime}_{-\kappa}(\underline{x}+)\hat{ \varphi}^{\prime\prime}_{-\kappa}(\underline{x}+)}\hat{\varphi}^{\prime\prime 
}_{-\kappa}(0)\frac{\hat{\varphi}^{\prime\prime}_{-\kappa}(x)}{\hat{\varphi}^ {\prime\prime}_{-\kappa}(0)}\] \[=\hat{E}_{-\kappa}\hat{\varphi}^{\prime\prime}_{-\kappa}(0)\frac {\hat{\varphi}^{\prime\prime}_{-\kappa}(x)}{\hat{\varphi}^{\prime\prime}_{- \kappa}(0)}\] \[\qquad+\left[\frac{\hat{\varphi}^{\prime\prime}_{-\kappa}( \underline{x}+)\hat{\varphi}^{\prime\prime}_{-\kappa}(0)(\ell-\tilde{c}f^{ \prime}_{-\kappa}(\underline{x}))}{\hat{\varphi}^{\prime\prime}_{-\kappa}( \underline{x}+)\hat{\varphi}^{\prime}_{-\kappa}(\underline{x}+)-\hat{\varphi} ^{\prime}_{-\kappa}(\underline{x}+)\hat{\varphi}^{\prime\prime}_{-\kappa}( \underline{x}+)}-\hat{E}_{-\kappa}\hat{\varphi}^{\prime\prime}_{-\kappa}(0) \right]\frac{\hat{\varphi}^{\prime\prime}_{-\kappa}(x)}{\hat{\varphi}^{\prime \prime}_{-\kappa}(0)}\] \[\qquad-\left[\frac{\hat{\varphi}^{\prime\prime}_{-\kappa}( \underline{x}+)\hat{\varphi}^{\prime\prime}_{-\kappa}(0)(\ell-\tilde{c}f^{ \prime}_{-\kappa}(\underline{x}))}{\hat{\varphi}^{\prime\prime}_{-\kappa}( \underline{x}+)\hat{\varphi}^{\prime}_{-\kappa}(\underline{x}+)-\hat{\varphi} ^{\prime}_{-\kappa}(\underline{x}+)\hat{\varphi}^{\prime\prime}_{-\kappa}( \underline{x}+)}\right]\frac{\hat{\varphi}^{\prime\prime}_{-\kappa}(x)}{\hat{ \varphi}^{\prime\prime}_{-\kappa}(0)}\] \[=\left[\frac{\hat{\varphi}^{\prime\prime}_{-\kappa}(\underline{x} +)\hat{\varphi}^{\prime\prime}_{-\kappa}(0)-\hat{\varphi}^{\prime\prime}_{- \kappa}(\underline{x}+)\hat{\varphi}^{\prime\prime}_{-\kappa}(\underline{x}+) \hat{\varphi}^{\prime\prime}_{-\kappa}(\underline{x}+)}{\hat{\varphi}^{\prime \prime}_{-\kappa}(\underline{x}+)\hat{\varphi}^{\prime}_{-\kappa}( \underline{x}+)-\hat{\varphi}^{\prime}_{-\kappa}(\underline{x}+)\hat{\varphi} ^{\prime\prime}_{-\kappa}(\underline{x}+)}(\ell-\tilde{c}f^{\prime}_{-\kappa}( \underline{x}))\right]\frac{\hat{\varphi}^{\prime\prime}_{-\kappa}(x)}{\hat{ \varphi}^{\prime}_{-\kappa}(\underline{x})}\] \[\geq 0.\] The first inequality holds since \(\frac{\hat{\varphi}^{\prime\prime}_{-\kappa}(x)}{\hat{\varphi}^{\prime\prime}_{- \kappa}(0)}\geq\frac{\hat{\varphi}^{\prime\prime}_{-\kappa}(x)}{\hat{\varphi}^{ \prime\prime}_{-\kappa}(0)}\) for all \(x\geq 0\). The last equality obtains from the the fact that \(f^{\prime}_{-\kappa}\) is constant, together with (18), i.e. \(\tilde{E}_{-\kappa}\hat{\varphi}^{\prime\prime}_{-\kappa}(0)=\check{E}_{- \kappa}\hat{\varphi}^{\prime\prime}_{-\kappa}(0)\). Condition (18) and the fact that \(\tilde{\varphi}^{\prime\prime}_{-\kappa}(\underline{x}+)\hat{\varphi}^{\prime \prime}_{-\kappa}(0)-\hat{\varphi}^{\prime\prime}_{-\kappa}(\underline{x}+)\hat{ \varphi}^{\prime\prime}_{-\kappa}(0)\geq 0\) give the last inequality. 
**Case 2**: when \(A(\underline{x})<0\), it holds that \[\varphi^{\prime\prime}(x) =R^{\prime\prime}_{-\kappa}(x)+A(\underline{x})\hat{\varphi}^{ \prime\prime}_{-\kappa}(x)+B(\underline{x})\check{\varphi}^{\prime\prime}_{- \kappa}(x)\] \[=\hat{c}f^{\prime\prime}_{-\kappa}(x)+\tilde{E}_{-\kappa}\check{ \varphi}^{\prime\prime}_{-\kappa}(x)+A(\underline{x})\check{\varphi}^{\prime \prime}_{-\kappa}(x^{*})\frac{\check{\varphi}^{\prime\prime}_{-\kappa}(x)}{ \check{\varphi}^{\prime\prime}_{-\kappa}(x^{*})}+B(\underline{x})\check{ \varphi}^{\prime\prime}_{-\kappa}(x^{*})\frac{\check{\varphi}^{\prime\prime}_{ -\kappa}(x)}{\check{\varphi}^{\prime\prime}_{-\kappa}(x^{*})}\] \[\geq\tilde{E}_{-\kappa}\check{\varphi}^{\prime\prime}_{-\kappa}(x^ {*})\frac{\check{\varphi}^{\prime\prime}_{-\kappa}(x)}{\check{\varphi}^{ \prime\prime}_{-\kappa}(x^{*})}+A(\underline{x})\check{\varphi}^{\prime\prime }_{-\kappa}(x^{*})\frac{\check{\varphi}^{\prime\prime}_{-\kappa}(x)}{\check{ \varphi}^{\prime\prime}_{-\kappa}(x^{*})}+B(\underline{x})\check{\varphi}^{ \prime\prime}_{-\kappa}(x^{*})\frac{\check{\varphi}^{\prime\prime}_{-\kappa}(x )}{\check{\varphi}^{\prime\prime}_{-\kappa}(x^{*})}\] \[=\left(\tilde{E}_{-\kappa}\check{\varphi}^{\prime\prime}_{-\kappa} (x^{*})+A(\underline{x})\hat{\varphi}^{\prime\prime}_{-\kappa}(x^{*})+B( \overline{x})\check{\varphi}^{\prime\prime}_{-\kappa}(x^{*})\right)\frac{ \check{\varphi}^{\prime\prime}_{-\kappa}(x)}{\check{\varphi}^{\prime\prime}_{- \kappa}(x^{*})}\] \[=\left(\tilde{E}_{+\kappa}\check{\varphi}^{\prime\prime}_{+\kappa} (x^{*})+C(\overline{x})\hat{\varphi}^{\prime\prime}_{+\kappa}(x^{*})+D( \overline{x})\check{\varphi}^{\prime\prime}_{+\kappa}(x^{*})\right)\frac{ \check{\varphi}^{\prime\prime}_{-\kappa}(x)}{\check{\varphi}^{\prime\prime}_{ -\kappa}(x^{*})}\] \[=\left[\tilde{E}_{+\kappa}\check{\varphi}^{\prime\prime}_{+\kappa }(x^{*})+\frac{\check{\varphi}^{\prime\prime}_{+\kappa}(\overline{x}-)(u-\hat{ c}f^{\prime}_{+\kappa}(\overline{x})-\tilde{E}_{+\kappa}\check{\varphi}^{\prime}_{+ \kappa}(\overline{x}))+\check{\varphi}^{\prime}_{+\kappa}(\overline{x}-) \check{E}_{+\kappa}\check{\varphi}^{\prime\prime}_{+\kappa}(\overline{x})}{ \check{\varphi}^{\prime\prime}_{+\kappa}(\overline{x}-)\hat{\varphi}^{\prime \prime}_{+\kappa}(\overline{x}-)-\check{\varphi}^{\prime}_{+\kappa}(\overline{ x}-)\hat{\varphi}^{\prime\prime}_{+\kappa}(\overline{x}-)}\hat{\varphi}^{\prime\prime}_{+ \kappa}(x^{*})\right]\] \[\qquad-\frac{\check{\varphi}^{\prime\prime}_{+\kappa}(\overline{ x}-)(u-\hat{c}f^{\prime}_{+\kappa}(\overline{x})-\tilde{E}_{+\kappa}\check{\varphi}^{ \prime}_{+\kappa}(\overline{x}))+\hat{\varphi}^{\prime}_{+\kappa}(\overline{ x}-)\check{\varphi}^{\prime}_{+\kappa}(\overline{x}-)\check{\varphi}^{ \prime}_{+\kappa}(\overline{x}-)}{\check{\varphi}^{\prime\prime}_{+\kappa}( \overline{x}-)}\check{\varphi}^{\prime\prime}_{+\kappa}(\overline{x}-)}\check{ \varphi}^{\prime\prime}_{+\kappa}(x^{*})\right]\frac{\check{\varphi}^{\prime \prime}_{-\kappa}(x)}{\check{\varphi}^{\prime\prime}_{-\kappa}(x^{*})}\] \[=\left[\tilde{E}_{+\kappa}\check{\varphi}^{\prime\prime}_{+\kappa }(x^{*})+\frac{\check{\varphi}^{\prime\prime}_{+\kappa}(\overline{x}-)(u-\hat{ c}f^{\prime}_{+\kappa}(\overline{x}))}{\check{\varphi}^{\prime\prime}_{+\kappa}(\overline{x}-) \check{\varphi}^{\prime}_{+\kappa}(\overline{x}-)-\check{\varphi}^{\prime}_{+ \kappa}(\overline{x}-)\check{\varphi}^{\prime\prime}_{+\kappa}(\overline{x}-) }\hat{\varphi}^{\prime\prime}_{+\kappa}(\overline{x}-)}\hat{\varphi}^{\prime 
\prime}_{+\kappa}(x^{*})\right]\] \[\qquad-\left(\frac{\check{\varphi}^{\prime\prime}_{+\kappa}( \overline{x}-)(u-\hat{c}f^{\prime}_{+\kappa}(\overline{x}))}{\check{\varphi}^{ \prime\prime}_{+\kappa}(\overline{x}-)\check{\varphi}^{\prime}_{+\kappa}( \overline{x}-)-\check{\varphi}^{\prime}_{+\kappa}(\overline{x}-)\check{ \varphi}^{\prime\prime}_{+\kappa}(\overline{x}-)}+\tilde{E}_{+\kappa}\right) \check{\varphi}^{\prime\prime}_{+\kappa}(x^{*})\right]\frac{\check{\varphi}^{ \prime\prime}_{-\kappa}(x)}{\check{\varphi}^{\prime\prime}_{-\kappa}(x^{*})}\] \[\geq 0.\] Here, the first inequality follows from the fact that \(\frac{\check{\varphi}^{\prime\prime}_{-\kappa}(x)}{\check{\varphi}^{\prime \prime}_{-\kappa}(x^{*})}\leq\frac{\check{\varphi}^{\prime\prime}_{-\kappa}(x)}{ \check{\varphi}^{\prime\prime}_{-\kappa}(x^{*})}\) for all \(x\geq 0\). The fourth equality follows from (25). From \(\check{\varphi}^{\prime\prime}_{+\kappa}(\overline{x}-)\check{\varphi}^{\prime \prime}_{+\kappa}(x^{*})-\hat{\varphi}^{\prime\prime}_{+\kappa}(\overline{x}-) \check{\varphi}^{\prime\prime}_{+\kappa}(x^{*})\geq 0\), together with (18), we obtain that \(\varphi^{\prime\prime}\) is non-negative in this case too. Hence, we conclude that \(\varphi\) is convex on \((\underline{x},x^{*})\). By a similar argument, it can be shown that \(\varphi\) is convex on \([x^{*},\overline{x})\). This establishes convexity of \(\varphi\) on \((\underline{x},\overline{x})\). It, therefore, holds that \(\varphi\) is decreasing on \((\underline{x},x^{*})\) and increasing on \((x^{*},\overline{x})\). Direct verification then shows that Condition 1 of Proposition 1 is satisfied. The second condition is satisfied by assumption. The third condition is satisfied by the choice of \(A(\underline{x}),B(\underline{x}),C(\overline{x}),D(\overline{x})\), as given by eqs. (27) and (28), respectively. Conditions 4 and 5 are satisfied due to (18) and transversality (condition 6) is also trivially satisfied given requirement (4). Hence, all assumptions of Proposition 1 are satisfied and the conclusion follows. ## 5 Illustration with Arithmetic Brownian Motion Suppose that the uncontrolled cash inventory \(X^{0}\) follows, under \(\mathsf{P}_{x}\), the ABM \[X^{0}_{t}=x+\alpha t+\sigma B_{t}, \tag{34}\] so that \[\hat{\varphi}_{\pm\kappa}(x)=e^{\beta_{\pm\kappa}x},\quad\text{and}\quad\check {\varphi}_{\pm\kappa}(x)=e^{\gamma_{\pm\kappa}x}, \tag{35}\] where \(\beta_{\pm\kappa}>0\) and \(\gamma_{\pm\kappa}<0\) are the positive and negative roots, respectively, of the quadratic equation \[\mathscr{Q}_{\pm\kappa}(\chi)\equiv\frac{1}{2}\sigma^{2}\chi^{2}+\left(\alpha \pm\kappa\sigma\right)\chi-\rho=0. \tag{36}\] Recall that the holding costs of cash are given by \[c(x)=\hat{c}x\cdot 1_{[0,\infty)}(x)-\check{c}x\cdot 1_{(-\infty,0)}(x), \tag{37}\] for some \(\hat{c},\check{c}>0\). As mentioned before, if \(\underline{x}<0<\overline{x}\), then Conditions 4 and 5 in Theorem 1 reduce to \[\check{c}\geq\rho\ell,\quad\text{and}\quad\hat{c}\geq\rho u,\] i.e. the control costs must not exceed the expected discounted uncontrolled holding costs; otherwise, it would never be optimal to exercise control. 
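As a quick numerical illustration of the quantities just introduced, the short Python sketch below computes the roots \(\beta_{\pm\kappa}\) and \(\gamma_{\pm\kappa}\) of \(\mathscr{Q}_{\pm\kappa}\) and checks the reduced feasibility conditions; the parameter values are our own illustrative choices, made in the spirit of the numerical example that follows.

```python
import numpy as np

# Roots of Q_{+/-kappa}(chi) = (1/2) sigma^2 chi^2 + (alpha +/- kappa*sigma) chi - rho = 0
# and a check of the reduced feasibility conditions c_check >= rho*ell and c_hat >= rho*u.
# Parameter values are illustrative.
rho, alpha, sigma, kappa = 0.1, 0.0, 5.0, 0.5
ell, u, c_check, c_hat = 4.0, 2.0, 1.0, 1.0

def roots(mu):
    disc = np.sqrt(mu**2 + 2.0 * rho * sigma**2)
    return (-mu + disc) / sigma**2, (-mu - disc) / sigma**2  # (beta > 0, gamma < 0)

beta_m, gamma_m = roots(alpha - kappa * sigma)   # "-kappa" quantities
beta_p, gamma_p = roots(alpha + kappa * sigma)   # "+kappa" quantities
print("beta_-, gamma_- =", beta_m, gamma_m)
print("beta_+, gamma_+ =", beta_p, gamma_p)
print("feasibility (c_check >= rho*ell and c_hat >= rho*u):",
      c_check >= rho * ell and c_hat >= rho * u)
```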
The expected discounted uncontrolled holding costs in this case are given by \[R_{\pm\kappa}(x)=\begin{cases}-\frac{\check{c}}{\rho}\left[x+\frac{\alpha\pm\kappa\sigma}{\rho}\right]+\hat{E}_{\pm\kappa}e^{\beta_{\pm\kappa}x}&\text{if }x<0\\ +\frac{\hat{c}}{\rho}\left[x+\frac{\alpha\pm\kappa\sigma}{\rho}\right]+\check{E}_{\pm\kappa}e^{\gamma_{\pm\kappa}x}&\text{if }x\geq 0\end{cases},\] where \[\hat{E}_{\pm\kappa}=\frac{\hat{c}+\check{c}}{\rho^{2}}\frac{\rho-\gamma_{\pm\kappa}(\alpha\pm\sigma\kappa)}{\beta_{\pm\kappa}-\gamma_{\pm\kappa}},\quad\text{and}\quad\check{E}_{\pm\kappa}=\frac{\hat{c}+\check{c}}{\rho^{2}}\frac{\rho-\beta_{\pm\kappa}(\alpha\pm\sigma\kappa)}{\beta_{\pm\kappa}-\gamma_{\pm\kappa}}.\] Since \[\mathscr{Q}_{\pm\kappa}(\beta_{\pm\kappa})=\mathscr{Q}_{\pm\kappa}(\gamma_{\pm\kappa})=0,\] the constants \(\hat{E}_{\pm\kappa}\) and \(\check{E}_{\pm\kappa}\) can be written as \[\hat{E}_{\pm\kappa}=\frac{\hat{c}+\check{c}}{2\rho^{2}}\frac{\sigma^{2}\gamma_{\pm\kappa}^{2}}{\beta_{\pm\kappa}-\gamma_{\pm\kappa}}>0,\quad\text{and}\quad\check{E}_{\pm\kappa}=\frac{\hat{c}+\check{c}}{2\rho^{2}}\frac{\sigma^{2}\beta_{\pm\kappa}^{2}}{\beta_{\pm\kappa}-\gamma_{\pm\kappa}}>0,\] respectively. That is, the expected discounted holding costs of the uncontrolled cash inventory equal \[R_{\pm\kappa}(x)=\begin{cases}\frac{-\check{c}}{\rho}\left[x+\frac{\alpha\pm\kappa\sigma}{\rho}\right]+\frac{(\hat{c}+\check{c})\sigma^{2}\gamma_{\pm\kappa}^{2}}{2\rho^{2}(\beta_{\pm\kappa}-\gamma_{\pm\kappa})}e^{\beta_{\pm\kappa}x}&\text{if }x<0\\ \frac{\hat{c}}{\rho}\left[x+\frac{\alpha\pm\kappa\sigma}{\rho}\right]+\frac{(\hat{c}+\check{c})\sigma^{2}\beta_{\pm\kappa}^{2}}{2\rho^{2}(\beta_{\pm\kappa}-\gamma_{\pm\kappa})}e^{\gamma_{\pm\kappa}x}&\text{if }x\geq 0\end{cases}. \tag{38}\] As a numerical example, we take \(\rho=0.1\), \(\alpha\in\{-2,-1,0,1,2\}\), \(\check{c},\hat{c}\in\{1,2,3,6\}\), \(\ell,u\in\{2,4,6\}\), \(\kappa\in\{0,0.5,1\}\), and \(\sigma\in[1,10]\). For \(\alpha=0\), \(\ell=4\), \(u=2\) and \(\check{c}=\hat{c}=1\) we obtain the cost function \(J^{L^{\star},U^{\star}}\), once for \(\kappa=0.5\) and varying \(\sigma\), and once for \(\sigma=5.4\) and varying \(\kappa\), as depicted in Figure 1. Not surprisingly, apart from the familiar result that more risk increases the firm's cost function, so does ambiguity. That is, a manager with maxmin utility assigns a higher expected cost to cash management. In the remainder of this section, we present numerical results for the optimal control barriers at different levels of risk, after which we study how ambiguity affects these results. #### Comparative statics of risk As is common in the literature, we identify _risk_ with the standard deviation of the cash process, \(\sigma\), while we identify _ambiguity_ with the set of density generators as parameterized by \(\kappa\). First we explore the comparative statics of \(\sigma\). As can be seen in Figure 2, the inaction region \((\underline{x},\overline{x})\) expands as \(\sigma\) increases. That is, an increase in risk normally leads to a delay in taking action.2 However, note that in the symmetric case (\(\alpha=0\)), the inaction region expands less as \(\kappa\) increases. This means that higher levels of ambiguity enforce earlier actions, which inevitably comes with higher costs. It highlights the pessimistic mindset of managers when they lack complete confidence in the drift of the cash flow process.
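The comparative statics reported below can, in principle, be reproduced by solving the system (19)-(25) numerically. The following Python sketch does so for the ABM specification; the reliance on scipy's fsolve, the initial guess, and the specific parameter values are our own assumptions, and convergence is not guaranteed for every parameter combination.

```python
import numpy as np
from scipy.optimize import fsolve

# Sketch of the barrier computation (19)-(25) for the ABM specification.
# Parameters follow the base case of this section; initial guess is assumed.
rho, alpha, sigma, kappa = 0.1, 0.0, 5.4, 0.5
ell, u, c_check, c_hat = 4.0, 2.0, 1.0, 1.0

def ingredients(mu):
    # Roots of (1/2) sigma^2 chi^2 + mu chi - rho = 0 and the constants of (38).
    disc = np.sqrt(mu**2 + 2.0 * rho * sigma**2)
    beta, gamma = (-mu + disc) / sigma**2, (-mu - disc) / sigma**2
    E_hat = (c_hat + c_check) * sigma**2 * gamma**2 / (2 * rho**2 * (beta - gamma))
    E_check = (c_hat + c_check) * sigma**2 * beta**2 / (2 * rho**2 * (beta - gamma))
    def R1(x):   # R'(x)
        return (-c_check / rho + E_hat * beta * np.exp(beta * x) if x < 0
                else c_hat / rho + E_check * gamma * np.exp(gamma * x))
    def R2(x):   # R''(x)
        return (E_hat * beta**2 * np.exp(beta * x) if x < 0
                else E_check * gamma**2 * np.exp(gamma * x))
    return beta, gamma, R1, R2

minus = ingredients(alpha - kappa * sigma)   # "-kappa" quantities (lower barrier)
plus = ingredients(alpha + kappa * sigma)    # "+kappa" quantities (upper barrier)

def residuals(z):
    x_lo, x_hi, x_star = z
    bm, gm, R1m, R2m = minus
    bp, gp, R1p, R2p = plus
    # (19)-(20): A, B from value matching and smooth pasting at x_lo
    A, B = np.linalg.solve([[bm * np.exp(bm * x_lo), gm * np.exp(gm * x_lo)],
                            [bm**2 * np.exp(bm * x_lo), gm**2 * np.exp(gm * x_lo)]],
                           [-ell - R1m(x_lo), -R2m(x_lo)])
    # (21)-(22): C, D from value matching and smooth pasting at x_hi
    C, D = np.linalg.solve([[bp * np.exp(bp * x_hi), gp * np.exp(gp * x_hi)],
                            [bp**2 * np.exp(bp * x_hi), gp**2 * np.exp(gp * x_hi)]],
                           [u - R1p(x_hi), -R2p(x_hi)])
    # (23)-(25): first-order conditions and second-derivative matching at x_star
    return [R1m(x_star) + A * bm * np.exp(bm * x_star) + B * gm * np.exp(gm * x_star),
            R1p(x_star) + C * bp * np.exp(bp * x_star) + D * gp * np.exp(gp * x_star),
            R2m(x_star) + A * bm**2 * np.exp(bm * x_star) + B * gm**2 * np.exp(gm * x_star)
            - R2p(x_star) - C * bp**2 * np.exp(bp * x_star) - D * gp**2 * np.exp(gp * x_star)]

sol, info, ier, msg = fsolve(residuals, x0=[-10.0, 10.0, 0.0], full_output=True)
print("converged:", ier == 1, " (x_lo, x_hi, x*) =", sol)
```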
Footnote 2: An increase in \(\sigma\) makes more extreme observations more likely, so it is possible that an increase in \(\sigma\) leads to a higher probability of early action, although this is not normally the case; see, e.g., Sarkar (2000). It is worth noting that these characteristics of risk and ambiguity not only persist in cases where the symmetry of the continuation region is violated, but also give rise to interesting features that are not commonly found in the standard literature on singular control. In the following numerical examples, we focus on the following asymmetries: (i) non-zero reference drift, (ii) inequality between the upper and lower control costs, and (iii) distinct holding costs for positive and negative cash balances. #### Comparative statics of drift First, we look at the effect of changing the drift \(\alpha\) under the reference prior. Figure 2(a) shows that for \(\alpha>0\) the upper control barrier approaches the target level while the lower control barrier decreases. The inaction region undergoes a symmetric translation with an increase in \(\alpha\). Conversely, Figure 2(b) shows the opposite effect when \(\alpha<0\). This phenomenon can be intuitively understood as follows. The positive (negative) drift tends to increase (decrease) the cash level, on average, faster. This leads to an increase in the holding cost for positive (negative) cash balances. As a result, it becomes more (less) attractive to exert control for positive cash hoards and less (more) attractive to exert control for negative cash hoards. To examine the impact of ambiguity on the control barriers, we select values of \(\alpha\) from the set \(\{-1.5,1.5\}\) and compare the results for different levels of \(\kappa\) from the set \(\{0.0,0.5,1.0\}\). For the case of \(\alpha=1.5\), we observe that as the ambiguity level increases, the inaction region shrinks as expected. Also note that the point where the worst-case drift changes, \(x^{*}\), shifts downwards, which shows that ambiguity amplifies the positive drift; see Figure 2(c). A similar effect is observed in the opposite direction for \(\alpha=-1.5\); see Figure 2(d). These findings suggest that a manager facing ambiguity should anticipate keeping their cash balance in the upper (lower) region for a longer time, incurring higher costs due to an increased likelihood of exercising the upper (lower) control. Meanwhile, the probability of having their cash hoard in the lower (higher) region decreases, resulting in, on average, a less expensive intervention.

Figure 3: Control barriers for varying \(\alpha\) and \(\kappa\) as a function of \(\sigma\), with \(\ell=u=2\), \(\rho=0.1\), \(\alpha=0\) and \(\check{c}=\hat{c}=1\).

#### Comparative statics of holding costs We now consider the influence of the proportional holding costs of negative and positive cash balances on the control barriers. To examine this, we set \(\check{c},\hat{c}\in\{1.0,2.0,3.0\}\) while all other parameters are as in the base case. Figures 3(a) and 3(b) show that the difference between \(\check{c}\) and \(\hat{c}\) plays a role similar to that of the drift, \(\alpha\), in determining the control barriers. Specifically, when \(\hat{c}\) is greater (smaller) than \(\check{c}\), the inaction region rotates in a clockwise (counter-clockwise) direction. This phenomenon can be explained by two factors. Firstly, the optimal control policy leads to the upper (lower) barrier moving closer to the target level when \(\check{c}<(>)\hat{c}\).
This prompts the DM to exercise control sooner in order to prevent an increase in the holding cost. Simultaneously, it decreases (increases) the lower (upper) barrier, allowing the cash hoard to remain longer in the lower (upper) inaction region and thereby reducing the cost of holding cash. Secondly, when \(\check{c}<(>)\hat{c}\) the cost function becomes more (less) convex where the cost function is increasing (decreasing). This creates an asymmetry between positive and negative cash balances and contrasts with the previous scenario where the "upper" and "lower" inaction regions were equivalent. These two factors together contribute to the observed rotation of the inaction region. When we fix \(\check{c}-\hat{c}\) and increase levels of ambiguity, we observe an amplification of the familiar "shrinking" of the control barriers, as depicted in the lower panel of Figure 4. An interesting interplay arises between the optimal control policy and maxmin utility. As \(\kappa\) increases, the inaction region with a narrower width (corresponding to a higher instantaneous holding cost) shrinks even further, leading to a counter-intuitive effect of ambiguity. Since the previous result implies that it is costlier to remain in a narrower inaction region, the additional shrinkage induced by the optimal control policy encourages the cash balance to stay in the other region for a longer duration to mitigate against the holding cost. This represents a trade-off between maxmin utility and the optimal control policy. Importantly, the optimal control policy not only determines the control barriers but also influences the point at which the worst-case scenario changes. #### Comparative statics of control costs We now examine the shape of control bands when there is an asymmetry in the instantaneous cost of control, i.e., when \(\ell\neq u\). As previously, all other parameters are taken as in the base case. When examining the impact of the difference between \(\ell\) and \(u\) on control barriers, we observe in the upper panel of Figure 5 that increasing either value expands the entire inaction region. This expansion not only pushes the corresponding control barrier further away from the target level but also affects the other barrier. The reason behind this is clear: when \(\ell<(>)u\), the cost of exercising the upper (lower) control becomes higher, prompting the manager to hold cash for a longer period rather than exercising control to minimize long-term cash holding costs. As the cost of controlling the cash balance at the opposite barrier decreases, the optimal control policy deviates from this barrier towards the target value, reducing the likelihood of the sample path reaching the more expensive side. This asymmetry is manifested in the shape of the value functions, which become more convex as the interval increases (decreases) when \(\ell<(>)u\). As \(\kappa\) increases, all the previously mentioned characteristics persist, including the familiar shrinking effect illustrated in the lower panel of Figure 5. Additionally, there is a rotation of the inaction region, which represents another consequence of ambiguity and its interaction with the optimal control policy.

Figure 5: Control barriers for varying \(u\), \(\ell\) and \(\kappa\) as a function of \(\sigma\), with \(\alpha=0\), \(\rho=0.1\) and \(\check{c}=\hat{c}=1\).

Since it is more costly to exercise the upper (lower) control, an ambiguous manager anticipates that when the cash
reserve is within the upper (lower) inaction region, it is more likely to approach the upper (lower) barrier, leading to even more expensive interventions in the long run. However, under the optimal control policy, the lower (upper) control barriers rotate in the same direction, enabling the manager to keep the cash balance in this region for a longer time, thereby mitigating the impact of ambiguity. #### Comparative statics of ambiguity Finally, we turn attention to the comparative statics of ambiguity, as measured by the parameter \(\kappa\). As usual, we take the other parameters as in the base case. As we have seen throughout, the control barriers move closer to the target as \(\kappa\) increases. That is, ambiguity accelerates control. We find that this effect holds both when we vary \(\alpha\) and when we vary \(\check{c}-\hat{c}\). The effect is non-monotonic when control costs \(\ell\) and \(u\) are varied, because there is a point beyond which ambiguity delays control. To illustrate these effects, let us first consider the typical cases when all parameters are fixed except \(\alpha\) and \(\check{c}-\hat{c}\). As we can see from Figures 6 and 7, the control barriers shrink as \(\kappa\) increases. We also see the familiar downward translation when \(\alpha\) and \(\hat{c}\) increase (with fixed \(\check{c}\)). This is because increasing \(\alpha\) elevates the upward drift of the controlled process (under the reference prior). A higher value of \(\hat{c}\) leads to higher holding costs of positive cash balances, so that the optimal control policy responds by translating the control barriers downward to minimise the cost function. This effect no longer holds when considering variations in the control costs. Revisiting the results from the previous subsection, when \(u>\ell\) and \(\kappa=0\), the cost of exercising the upper control is relatively higher. Consequently, the optimal control policy is to delay control at the upper barrier. To achieve this, control at the lower control barrier is also postponed in an effort to keep the controlled process away from the upper control region. However, this relationship no longer holds in the presence of ambiguity.

Figure 6: Control barriers with varying \(\alpha\) as a function of \(\kappa\).

Figure 7: Control barriers with varying \(\hat{c}\) as a function of \(\kappa\).

Figure 8: Control barriers with varying \(u\) as a function of \(\kappa\).

As shown in Figure 8, there exists a region where control of positive cash balances is activated earlier than without ambiguity. Furthermore, while an increase in ambiguity brings the upper barrier closer to the target level as expected, there is also a downward move of the lower control barrier. So, an ambiguity-averse manager is willing to pay a higher cost, because a higher level of ambiguity encourages more frequent exercise of the more expensive control. Meanwhile, the optimal control policy appears to mitigate against this extra cost by adjusting the lower barrier downward, with the resulting expectation that the controlled process will spend more time in the lower inaction region. This effect reduces the overall expected discounted cost of cash holdings. These opposing effects result in an unconventional shape of the control barriers, which not only amplifies the impact of ambiguity but also highlights the interplay between ambiguity and the optimal control policy.
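To complement the figures, the following Monte Carlo sketch estimates the expected discounted cost of a given control band when the worst-case drift switches at a threshold \(x^{*}\), in line with the characterisation above (drift \(\alpha-\kappa\sigma\) below \(x^{*}\) and \(\alpha+\kappa\sigma\) above it). The band, the threshold, and the discretisation are hypothetical placeholders of ours rather than values computed in the paper.

```python
import numpy as np

# Monte Carlo sketch: expected discounted cost of a control band policy when
# the worst-case drift switches at a threshold x_star (alpha - kappa*sigma
# below x_star, alpha + kappa*sigma above it). The band (x_lo, x_hi) and the
# threshold are hypothetical placeholders, not values computed in the text.
rho, alpha, sigma, kappa = 0.1, 0.0, 5.4, 0.5
ell, u, c_check, c_hat = 4.0, 2.0, 1.0, 1.0
x_lo, x_hi, x_star = -8.0, 6.0, -0.5

def worst_case_cost(x0=0.0, T=100.0, n=10_000, n_paths=50, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n
    disc = np.exp(-rho * dt * np.arange(n))
    total = 0.0
    for _ in range(n_paths):
        x, cost = x0, 0.0
        for i in range(n):
            drift = alpha - kappa * sigma if x < x_star else alpha + kappa * sigma
            y = x + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            dL, dU = max(x_lo - y, 0.0), max(y - x_hi, 0.0)
            x = y + dL - dU
            holding = c_check * abs(x) if x < 0 else c_hat * abs(x)
            cost += disc[i] * (holding * dt + ell * dL + u * dU)
        total += cost
    return total / n_paths

print("simulated worst-case discounted cost at x0 = 0:", worst_case_cost())
```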
## 6 Conclusion

In this paper, we revisit the classical problem of two-sided singular control in the optimal cash reserve problem where the net cash position evolves according to an Ito diffusion. We extend this model to allow for managerial ambiguity using the maxmin expected utility framework of \(\kappa\)-ignorance. We establish a verification theorem and use a numerical example to explore the effects of ambiguity on the optimal control of cash holdings. From a managerial perspective, the most important observation we make is that ambiguity increases the frequency with which control is exerted. This is due to the fact that under the worst-case prior the manager expects higher holding costs, which makes exerting control relatively cheaper. This results, on average, in a smaller cash inventory, which is the opposite effect to an increase in risk. If risk, as measured by the variance of the random walk that affects the net cash flow, increases, then the standard "option value of waiting" (cf., Dixit and Pindyck, 1994) increases, which implies that, typically, control is exerted later. This results in a larger cash inventory, on average. There are several avenues for future research. First, one of the assumptions of our model is that the manager does not learn. It is as if the manager is confronted with a new Ellsberg urn at every point in time. In many real-world situations it may be more realistic to assume that the manager is confronted with the _same_ Ellsberg urn at every point in time. This then opens up the possibility of managerial learning. Secondly, our model of \(\kappa\)-ambiguity describes a fairly extreme version of cautious behavior. A more realistic version of the model would allow the manager to average over multiple priors. That would naturally lead to the smooth ambiguity model of Klibanoff et al. (2005).

## Acknowledgments

Part of the research was conducted when Hellmann was at the Center for Mathematical Economics (IMW) at Bielefeld University, Germany. Thijssen gratefully acknowledges support from the Center for Mathematical Economics (IMW) and the Center for Interdisciplinary Research (ZiF) at Bielefeld University. Helpful comments were received from participants of the ZiF programme "Robust Finance: Strategic Power, Knightian Uncertainty, and the Foundations of Economic Policy Advice". The work of Giorgio Ferrari was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), Project ID: 317210226-SFB 1283.
2309.05568
On the Meromorphic Integrability of the Critical Systems for Optimal Sums of Eigenvalues
The popularity of estimating bounds for sums of eigenvalues started with P. Li and S. T. Yau's study of the P\'{o}lya conjecture. This subject has been extended to different types of differential operators. This paper explores the sums of the first $m$ eigenvalues of Sturm-Liouville operators from two aspects. Firstly, by the complete continuity of eigenvalues, we propose a family of critical systems consisting of nonlinear ordinary differential equations, indexed by the exponent $p\in(1,\infty)$ of the Lebesgue spaces concerned. There are profound relations between the solvability of these systems and the optimal lower or upper bounds for the sums of the first $m$ eigenvalues of Sturm-Liouville operators, which provides a novel idea to study the optimal bounds. Secondly, we investigate the integrability or solvability of the critical systems. With suitable selection of exponents $p$, the critical systems are equivalent to the polynomial Hamiltonian systems of $m$ degrees of freedom. Using the differential Galois theory, we perform a complete classification for meromorphic integrability of these polynomial critical systems. As a by-product of this classification, it gives a positive answer to the conjecture raised by Tian, Wei and Zhang [J. Math. Phys. 64, 092701 (2023)] on the critical systems for optimal eigenvalue gaps. The numerical simulations of the Poincar\'{e} cross sections show that the critical systems for sums of eigenvalues can exhibit complex dynamical phenomena, such as periodic trajectories, quasi-periodic trajectories and chaos.
Yuzhou Tian, Meirong Zhang
2023-09-11T15:54:24Z
http://arxiv.org/abs/2309.05568v1
# On the Meromorphic Integrability of the Critical Systems for Optimal Sums of Eigenvalues

###### Abstract

The popularity of estimating bounds for sums of eigenvalues started with P. Li and S. T. Yau's study of the Polya conjecture. This subject has been extended to different types of differential operators. This paper explores the sums of the first \(m\) eigenvalues of Sturm-Liouville operators from two aspects. Firstly, by the complete continuity of eigenvalues, we propose a family of critical systems consisting of nonlinear ordinary differential equations, indexed by the exponent \(p\in(1,\infty)\) of the Lebesgue spaces concerned. There are profound relations between the solvability of these systems and the optimal lower or upper bounds for the sums of the first \(m\) eigenvalues of Sturm-Liouville operators, which provides a novel idea to study the optimal bounds. Secondly, we investigate the integrability or solvability of the critical systems. With suitable selection of exponents \(p\), the critical systems are equivalent to the polynomial Hamiltonian systems of \(m\) degrees of freedom. Using the differential Galois theory, we perform a complete classification for meromorphic integrability of these polynomial critical systems. As a by-product of this classification, it gives a positive answer to the conjecture raised by Tian, Wei and Zhang [J. Math. Phys. 64, 092701 (2023)] on the critical systems for optimal eigenvalue gaps. The numerical simulations of the Poincare cross sections show that the critical systems for sums of eigenvalues can exhibit complex dynamical phenomena, such as periodic trajectories, quasi-periodic trajectories and chaos.

\({}^{a}\) Department of Mathematics, Jinan University, Guangzhou 510632, China

\({}^{b}\) Department of Mathematical Sciences, Tsinghua University, Beijing 100084, China

E-mail: [email protected] (Y. Tian)

E-mail: [email protected] (M. Zhang)

## 1 Introduction

In 1807, the pioneering work of solving the heat equation by Fourier planted the seed for the spectral theory of differential operators. Inspired by Fourier's work, Sturm and Liouville in 1837 systematically treated the spectra of second-order linear ordinary differential operators, commonly referred to as Sturm-Liouville operators. Afterwards, their work gradually evolved into a whole new branch of mathematics, namely Sturm-Liouville theory. In the 20th century, Weyl's famous work [47] together with the birth of quantum mechanics revolutionized this theory. Henceforth, the modern Sturm-Liouville theory not only provides a perfect medium for understanding quantum mechanics, but also greatly promotes the development of other areas of mathematics, such as harmonic analysis, differential geometry and operator algebras. Nowadays, Sturm-Liouville theory is still an active area of research in modern mathematical physics. Let \(\Omega=\left[0,1\right]\) be the unit interval.
Fix an exponent \(p\in\left(1,\infty\right)\). The \(L^{p}\) Lebesgue space on \(\Omega\) is denoted by \[\mathcal{L}^{p}:=L^{p}\left(\Omega,\mathbb{R}\right).\] For an integrable potential \(q\in\mathcal{L}^{p}\), we consider the following Dirichlet eigenvalue problem for the _Sturm-Liouville operator or one-dimensional Schrodinger operator_ \[\mathscr{D}_{q}\psi:=-\psi^{\prime\prime}+q\psi=\lambda\psi,\qquad x\in\Omega, \tag{1.1}\] \[\psi\big{|}_{\partial\Omega}=0.\] A number \(\lambda\) is an _eigenvalue_ of the system (1.1) if it has a nontrivial solution \(\psi\left(x\right)\), called an _eigenfunction_ associated with \(\lambda\). It is well-known that the eigenvalues of problem (1.1) can be written in the form of an increasing sequence \[\lambda_{1}(q)<\lambda_{2}(q)<\cdots<\lambda_{m}(q)<\cdots,\qquad\lambda_{m}(q)\to+\infty\text{ as }m\to\infty.\] Here we have regarded eigenvalues as nonlinear functionals of potentials \(q\in\mathcal{L}^{p}\). The sum of the first \(m\) eigenvalues is defined as \[\mathscr{E}_{m}\left(q\right):=\sum_{i=1}^{m}\lambda_{i}\left(q\right).\] In quantum theory, the eigenvalues have definite physical significance: they correspond to the energy levels of a particle within a potential \(q\). Thereby, \(\mathscr{E}_{m}\left(q\right)\) is the _total energy_ of \(m\) particles. In particular, the absorption energy for a particle to pass from the ground state to the first excited state is described by the _fundamental eigenvalue gap_ \(\lambda_{2}\left(q\right)-\lambda_{1}\left(q\right)\). Because of the above physical interpretations, estimates of the lower and upper bounds for eigenvalue problems, including gaps, ratios and sums, are central to a large part of Sturm-Liouville theory. For the lower bounds of the fundamental eigenvalue gaps, many fascinating results about different types of operators and boundary conditions have been contributed by many mathematicians up to the present day, see for example [2, 8, 25, 45, 5, 9, 19, 23, 4] and references therein. The estimates for the upper bounds can be found in [17, 9, 19, 23, 39, 4]. The problem of estimating eigenvalue ratios has also been extensively studied in [3, 41, 18]. In applied sciences and mathematics, it is important to understand the sums of eigenvalues. For example, in connection with quantum mechanics, elasticity theory, geometry and PDEs, it is natural to study the sum of the first \(m\) eigenvalues [13, 10]. Perhaps the most important motivation originates from the famous Polya conjecture [36] about the lower bound on the \(i\)-th eigenvalue of the Dirichlet Laplacian, which still remains open. In 1983, Li and Yau [30] gave a partial answer to this conjecture and improved Lieb's result [31]. In order to approach the Polya conjecture, their technique was to estimate the lower bound for the sum of the first \(m\) eigenvalues, commonly known as the _Berezin-Li-Yau bound or inequality_. The estimates for sums of eigenvalues of different types of operators have been widely investigated since Li and Yau. We refer the readers to [7, 29, 40, 10, 12, 11, 16], etc. But up to now, there is no general method to obtain the optimal lower or upper bound for the sum of eigenvalues of the Sturm-Liouville operator (1.1). The purpose of this work is to investigate the optimization problems on the sum of eigenvalues for (1.1).
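As a purely numerical illustration of the quantity \(\mathscr{E}_{m}(q)\) (not part of the original analysis), the following Python sketch approximates the first \(m\) Dirichlet eigenvalues of (1.1) by a standard second-order finite-difference discretization and sums them; the potential, the grid size and the helper name `dirichlet_eigen_sum` are arbitrary choices for demonstration.

```python
import numpy as np

def dirichlet_eigen_sum(q, m, n=1000):
    """Approximate the sum of the first m Dirichlet eigenvalues of
    -psi'' + q(x) psi = lambda psi on [0, 1] by second-order finite differences.
    q is a callable potential, n the number of interior grid points."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)                 # interior nodes
    main = 2.0 / h**2 + q(x)                       # diagonal of -d^2/dx^2 + q
    off = -np.ones(n - 1) / h**2                   # off-diagonal entries
    lam = np.linalg.eigvalsh(np.diag(main) + np.diag(off, 1) + np.diag(off, -1))
    return np.sort(lam)[:m].sum()

# Sanity check: for q = 0 the eigenvalues are (i*pi)^2, so the sum for m = 3
# should be close to 14*pi^2 ~ 138.17.
print(dirichlet_eigen_sum(lambda x: np.zeros_like(x), m=3))
print(dirichlet_eigen_sum(lambda x: 10.0 * np.sin(np.pi * x), m=3))
```

Such a routine can be used, for instance, to compare \(\mathscr{E}_{m}(q)\) across trial potentials of equal \(L^{p}\) norm before turning to the analytical optimization below.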
Let \[B_{p,r}=\{q\in\mathcal{L}^{p}:\parallel q\parallel_{p}\leq r\}\] be the (infinite-dimensional) ball of radius \(r\), centered at the origin, in the space \((\mathcal{L}^{p},\parallel\cdot\parallel_{p})\). We consider the following optimization problems \[\mathscr{E}_{m}^{-}:=\min_{q\in B_{p,r}}\mathscr{E}_{m}\left(q\right)\text{ and }\mathscr{E}_{m}^{+}:=\max_{q\in B_{p,r}}\mathscr{E}_{m}\left(q\right). \tag{1.2}\] Their solutions will provide the following estimates on the sum of the eigenvalues \[\mathscr{E}_{m}^{-}\leq\mathscr{E}_{m}\left(q\right)\leq\mathscr{E}_{m}^{+},\qquad\forall q\in\mathcal{L}^{p}. \tag{1.3}\] The lower bound \(\mathscr{E}_{m}^{-}\) is also called a _Berezin-Li-Yau type lower bound_. Remarkably, \(\mathscr{E}_{m}^{-}\) and \(\mathscr{E}_{m}^{+}\) are the optimal lower and upper bounds of \(\mathscr{E}_{m}\left(q\right)\) in a certain sense, respectively. Our first result provides a completely different approach to attain possible solutions to problems (1.2). As a consequence of the complete continuity of eigenvalues in potentials [34, 50, 54], one shows that the optimization problems (1.2) are attained by some optimizing potentials \(q^{\pm}\in B_{p,r}\). See Theorem 3.3. In order to determine \(q^{\pm}\) and \(\mathscr{E}_{m}^{\pm}\), we establish the next result.

**Theorem 1.1**.: _Let the exponent \(p\in(1,\infty)\), \(r\in(0,\infty)\) and \(m\in\mathbb{N}\) be given with \(m\geq 2\). Denote by \(p^{*}:=p/(p-1)\in(1,\infty)\) the conjugate exponent of \(p\). For problems (1.2), indicated by \(\varepsilon=-\) and \(+\) respectively, there are \(m\)-dimensional parameters \((\mu_{1},\ldots,\mu_{m})=(\mu_{1}^{\varepsilon},\ldots,\mu_{m}^{\varepsilon})\) and non-trivial solutions \((u_{1}(x),\ldots,u_{m}(x))=(u_{1}^{\varepsilon}(x),\ldots,u_{m}^{\varepsilon}(x))\) to the following system_ \[-u_{i}^{\prime\prime}+\varepsilon\left(\sum_{j=1}^{m}u_{j}^{2}\right)^{p^{*}-1}u_{i}=\mu_{i}u_{i},\quad i=1,\ldots,m, \tag{1.4}\] _such that_ (i) _the solutions \(u_{i}(x)\) satisfy the Dirichlet boundary condition for \(i=1,\ldots,m\)._ (ii) _the solutions \(u_{i}(x)\) satisfy_ \[\int_{\Omega}\left(\sum_{i=1}^{m}u_{i}^{2}(x)\right)^{p^{*}}\,\mathrm{d}x=r^{p}, \tag{1.5}\] _and the optimizing potentials \(q^{\varepsilon}(x)\) are determined by_ \[q^{\varepsilon}(x):=\varepsilon\left(\sum_{i=1}^{m}\left(u_{i}^{\varepsilon}(x)\right)^{2}\right)^{p^{*}-1},\qquad x\in\Omega. \tag{1.6}\] (iii) _the minimum and maximum of the sum of the first \(m\) eigenvalues are given by_ \[\mathscr{E}_{m}^{-}=\sum_{i=1}^{m}\mu_{i}^{-}\text{ and }\mathscr{E}_{m}^{+}=\sum_{i=1}^{m}\mu_{i}^{+}, \tag{1.7}\] _respectively._

System (1.4) is called in this paper _the critical system_, which is deduced by the direct application of the Lagrangian multiplier method to problems (1.2), as done in [46, 55, 51]. Compared with the derivations of the critical systems for the inverse spectral problems for elliptic operators or Sturm-Liouville operators by Ilyasov and Valeev [21, 43, 20], our derivation approach, which employs the complete continuity of eigenvalues \(\lambda_{m}\left(q\right)\) in \(q\in\mathcal{L}^{p}\) [34, 54], is much simpler. Let \(v_{i}=u_{i}^{\prime}\) for \(i=1,\ldots,m\).
Then critical system (1.4) is equivalent to a Hamiltonian system of \(m\) degrees of freedom \[u_{i}^{\prime}=v_{i}=\frac{\partial H}{\partial v_{i}},\quad v_{i}^{\prime}=-\mu_{i}u_{i}+\varepsilon\left(\sum_{j=1}^{m}u_{j}^{2}\right)^{p^{*}-1}u_{i}=-\frac{\partial H}{\partial u_{i}},\quad i=1,\ldots,m, \tag{1.8}\] with the Hamiltonian \[H=\frac{1}{2}\sum_{i=1}^{m}\left(v_{i}^{2}+\mu_{i}u_{i}^{2}\right)-\frac{\varepsilon}{2p^{*}}\left(\sum_{j=1}^{m}u_{j}^{2}\right)^{p^{*}}. \tag{1.9}\] Let \(\mathbf{u}=(u_{1},\ldots,u_{m})\) and \(\mathbf{v}=(v_{1},\ldots,v_{m})\). The non-constant function \(I=I\left(\mathbf{u},\mathbf{v}\right)\) is said to be a _first integral_ of the Hamiltonian system (1.8) if \(H\) and \(I\) are _in involution_, i.e. the Poisson bracket \[\left\{H,I\right\}=\sum_{i=1}^{m}\left(\frac{\partial H}{\partial v_{i}}\frac{\partial I}{\partial u_{i}}-\frac{\partial H}{\partial u_{i}}\frac{\partial I}{\partial v_{i}}\right)=0. \tag{1.10}\] The Hamiltonian function \(H\) itself is always a first integral due to the antisymmetry of the Poisson bracket. Denote the gradient of a function \(I\) by \(\nabla I\). The functions \(I_{i}\) for \(i=1,\ldots,l\) are _functionally independent_ on \(U\) if \[\operatorname{rank}\left(\nabla I_{1},\ldots,\nabla I_{l}\right)=l\] with the possible exception of sets of Lebesgue measure zero. The Hamiltonian system (1.8) is called _completely integrable_, or simply _integrable_ in Liouville's sense if there exist \(m\) functionally independent first integrals \(I_{1}\equiv H,I_{2},\ldots,I_{m}\) (\(H\) is the Hamiltonian). In addition, Hamiltonian system (1.8) is _meromorphic completely integrable_, or simply _meromorphic integrable_ if its \(m\) functionally independent first integrals \(I_{1}\equiv H,I_{2},\ldots,I_{m}\) are meromorphic. Theorem 1.1 allows us to determine a solution to the optimization problems (1.2) by solving a boundary value problem for critical system (1.4). In other words, the solvability of Hamiltonian system (1.8) means the solvability of problems (1.2). The classical Arnold-Liouville theorem shows that if Hamiltonian system (1.8) is completely integrable, then it can be solved by quadrature, see [49]. Naturally, we will focus on the next problem. **Problem 1**.: _Whether or not Hamiltonian system (1.8) is completely integrable._ The answer is difficult for two reasons: system (1.8) is not a polynomial system for some \(p^{*}\in(1,+\infty)\); and there are no universal techniques to decide the integrability of Hamiltonian systems. On the other hand, of particular interest is the limiting case of problems (1.2), that is, \(p=1\). For this limiting case, like in [46, 51, 55], it is significant to investigate the limiting system of (1.4) as \(p\downarrow 1\), i.e., as \(p^{*}\uparrow\infty\). In such a limiting process, let us pay special attention to exponents \(p\) so that \[p=\frac{k}{k-1}\;\text{and}\;p^{*}=k,\;\text{and}\;k=2,3,\cdots \tag{1.11}\] For these exponents, system (1.8) reduces to the following polynomial Hamiltonian systems \[u_{i}^{\prime}=v_{i},\quad v_{i}^{\prime}=-\mu_{i}u_{i}+\varepsilon\left(\sum_{j=1}^{m}u_{j}^{2}\right)^{k-1}u_{i},\quad i=1,\ldots,m, \tag{1.12}\] where \[H=\frac{1}{2}\sum_{i=1}^{m}\left(v_{i}^{2}+\mu_{i}u_{i}^{2}\right)-\frac{\varepsilon}{2k}\left(\sum_{j=1}^{m}u_{j}^{2}\right)^{k}. \tag{1.13}\] At present, most studies have been dedicated to the integrability of Hamiltonian systems of two degrees of freedom, see for instance [1, 53, 15, 33] and references therein.
Except for natural Hamiltonian systems with homogeneous potentials, there is little literature relating to the integrability of other types of Hamiltonian systems of arbitrary degrees of freedom, see [32, 22, 38]. With the help of the differential Galois theory, we give a complete classification of meromorphic integrability for Hamiltonian system (1.12) as follows.

**Theorem 1.2**.: _Let \(\mathscr{U}=(\mu_{1},\ldots,\mu_{m})\) and \(k\in\mathbb{N}^{+}\) with \(k\geq 2\). The Hamiltonian system (1.12) is meromorphic completely integrable if and only if \(k\) and \(\mathscr{U}\) belong to one of the following two families:_

\begin{table} \begin{tabular}{|c|c|c|c|} \hline Case & \(k\) & \(\mathscr{U}\) & Additional meromorphic first integrals \\ \hline \(1\) & \(k=2\) & \(\mathscr{U}\in\mathbb{R}^{m}\) & See Proposition 4.3. \\ \hline \(2\) & \(k\geq 3\) & \(\mu_{1}=\mu_{2}=\cdots=\mu_{m}\) & \(I_{i}=u_{1}v_{i+1}-u_{i+1}v_{1},\quad i=1,\ldots,m-1\). \\ \hline \end{tabular} \end{table} Table 1: Meromorphic completely integrable cases

Especially when \(\varepsilon=-\), \(m=2\) and \(k=2n\), the next corollary can be obtained by the linear canonical transformation \((u_{1},u_{2},v_{1},v_{2})\mapsto(-u_{1},-\mathrm{i}u_{2},-v_{1},\mathrm{i}v_{2})\).

**Corollary 1.3**.: _Consider the following Hamiltonian system_ \[u_{1}^{\prime}=v_{1},\quad u_{2}^{\prime}=-v_{2},\quad v_{1}^{\prime}=-\mu_{1}u_{1}-u_{1}\left(u_{1}^{2}-u_{2}^{2}\right)^{2n-1},\quad v_{2}^{\prime}=\mu_{2}u_{2}+u_{2}\left(u_{1}^{2}-u_{2}^{2}\right)^{2n-1} \tag{1.14}\] _with Hamiltonian_ \[H=\frac{1}{2}\left(v_{1}^{2}-v_{2}^{2}+\mu_{1}u_{1}^{2}-\mu_{2}u_{2}^{2}\right)+\frac{1}{4n}\left(u_{1}^{2}-u_{2}^{2}\right)^{2n}. \tag{1.15}\] _For \(\mu_{1}\neq\mu_{2}\), \(n\in\mathbb{N}^{+}\) and \(n\geq 2\), system (1.14) is meromorphic non-integrable._

In [42], the authors studied the critical system for optimal eigenvalue gaps and posed the next conjecture. **Conjecture**. For generic \(\mu_{1}\neq\mu_{2}\) and \(n\geq 2\), system (1.14) is not polynomial integrable. Obviously, Corollary 1.3 not only gives a positive answer to the above conjecture, but also extends it to meromorphic non-integrability. The framework of the paper is as follows. We briefly recall some preliminary concepts and results of the differential Galois approach in Section 2. After gathering the complete continuity results on eigenvalues, we deduce the critical system (1.4) in Section 3. To prove Theorem 1.2, we divide the argument into two sections. In Section 4, we show that system (1.12) is completely integrable if the parameters \(k\) and \(\mathscr{U}\) belong to Table 1. In Section 5, by Morales-Ramis theory, we prove that system (1.12) is meromorphic non-integrable when the parameters \(k\) and \(\mathscr{U}\) are outside Table 1. Section 6 presents exemplary Poincare cross sections of the critical system (1.12), which exhibit that system (1.12) has rich dynamical behaviors.

## 2 Preliminaries

In this section, we introduce some necessary concepts and preliminary results, concerning the Morales-Ramis theory, the hypergeometric equation and Kovacic's results.

### Morales-Ramis theory

The Morales-Ramis theory [35] is a powerful tool to determine the non-integrability of complex Hamiltonian systems. Roughly speaking, this theory establishes a relation between the meromorphic integrability and the differential Galois group of the variational equations or the normal variational equations. Next we briefly describe the Morales-Ramis theory.
For some precise notions of differential Galois theory, see [44]. Consider a complex symplectic manifold \(M\subset\mathbb{C}^{2m}\) of dimension \(2m\) with the standard symplectic form \(\boldsymbol{\tilde{\omega}}=\sum_{j=1}^{m}du_{j}\wedge dv_{j}\). Let \(H:M\rightarrow\mathbb{C}\) be a holomorphic Hamiltonian. The Hamiltonian system with \(m\) degrees of freedom is given by \[\frac{d\mathbf{x}}{dt}=X_{H}\left(\mathbf{x}\right)=\left(\frac{\partial H}{ \partial\mathbf{v}},-\frac{\partial H}{\partial\mathbf{u}}\right),\quad t\in \mathbb{C},\quad\mathbf{x}=\left(\mathbf{u},\mathbf{v}\right)\in M, \tag{2.16}\] where \(\mathbf{u}=\left(u_{1},\ldots,u_{m}\right)\) and \(\mathbf{v}=\left(v_{1},\ldots,v_{m}\right)\) are the canonical coordinates. Let \(\Gamma\) be a non-equilibrium solution of system (2.16). Assume that \(\Gamma\) can be parameterized by time \(t\), that is, \[\boldsymbol{\varphi}:\mathbb{C} \to M\subset\mathbb{C}^{2m}\] \[t \mapsto\left(\mathbf{u}\left(t\right),\mathbf{v}\left(t\right) \right).\] Then the _variational equation_ (VE) along \(\Gamma\) is the linear differential system \[\frac{d\mathbf{y}}{dt}=\frac{\partial X_{H}\left(\boldsymbol{\varphi}\left(t \right)\right)}{\partial\mathbf{x}}\mathbf{y},\quad\mathbf{y}\in T_{\Gamma}M, \tag{2.17}\] where \(T_{\Gamma}M\) is the tangent bundle \(TM\) restricted on \(\Gamma\). Let \(N:=T_{\Gamma}M/T\Gamma\) be the normal bundle of \(\Gamma\)[27], and \(\pi:T_{\Gamma}M\to N\) be the nature projective homomorphism. The _normal variational equation_ (NVE) along \(\Gamma\) has the form \[\frac{d\mathbf{z}}{dt}=\pi_{*}\left(T\left(\mathfrak{u}\right)\left(\pi^{-1} \mathbf{z}\right)\right),\quad\mathbf{z}\in N, \tag{2.18}\] where \(\mathfrak{u}=X_{H}\left(\mathbf{x}\right)\) with \(\mathbf{x}\in M\), and \(T\left(\mathfrak{u}\right)\) is the tangential variation of \(\mathfrak{u}\) along \(\Gamma\), that is, \(T\left(\mathfrak{u}\right)=\partial X_{H}/\partial\mathbf{x}\). Note that the above NVE is a \(2\left(m-1\right)\)-dimensional linear differential system. We can employ a generalization of D'Alambert's method to get the NVE (2.18), see [35]. Briefly speaking, we use the fact that \(X_{H}\left(\boldsymbol{\varphi}\left(t\right)\right)\) is a solution of the VE (2.17) to reduce its dimension by one. In effect, we typically restrict the equation (2.16) to the energy level \(h=H\left(\boldsymbol{\varphi}\left(t\right)\right)\). Then the dimension of the corresponding VE (2.17) also can be reduced. Morales and Ramis [35] proved the following classical theorem, which give a necessary condition for the integrability of Hamiltonian system (2.16) in the Liouville sense. **Theorem 2.1**.: (Morales-Ramis theorem, see [35]) _If Hamiltonian system (2.16) is meromorphically integrable in the Liouville sense in a neighbourhood of a particular solution \(\Gamma\), then the identity component of the Galois group of the_ NVE (2.18) _is Abelian._ The next theorem tells us that the identity component of the differential Galois group is invariant under the covering. **Theorem 2.2**.: ([35]) _Let \(\mathcal{M}\) be a connected Riemann surface and \(\nabla\) be a meromorphic connection over \(\mathcal{M}\). Assume that \(f:\mathcal{M}^{\prime}\longrightarrow\mathcal{M}\) is a finite ramified covering of \(\mathcal{M}\) by a connected Riemann surface \(\mathcal{M}^{\prime}\). Let \(\nabla^{\prime}=f^{*}\nabla\), i.e. the pull back of \(\nabla\) by \(f\). 
Then there exists a natural injective homomorphism_ \[\operatorname{Gal}\left(\nabla^{\prime}\right)\rightarrow\operatorname{Gal}\left(\nabla\right)\] _of differential Galois groups which induces an isomorphism between their Lie algebras._

### Hypergeometric equation

The hypergeometric equation is a second order differential equation over the Riemann sphere \(\mathbf{P}^{1}\) with three regular singular points [28, 48]. Let us consider the following form of the hypergeometric equation with three singular points at \(z=0,1,\infty\) \[\frac{d^{2}\zeta}{dz^{2}}+\left(\frac{1-\alpha-\tilde{\alpha}}{z}+\frac{1-\gamma-\tilde{\gamma}}{z-1}\right)\frac{d\zeta}{dz}+\left(\frac{\alpha\tilde{\alpha}}{z^{2}}+\frac{\gamma\tilde{\gamma}}{\left(z-1\right)^{2}}+\frac{\beta\tilde{\beta}-\alpha\tilde{\alpha}-\gamma\tilde{\gamma}}{z\left(z-1\right)}\right)\zeta=0, \tag{2.19}\] where \((\alpha,\tilde{\alpha})\), \((\gamma,\tilde{\gamma})\) and \(\left(\beta,\tilde{\beta}\right)\) are the exponents at the respective singular points, and meet the Fuchs relation \(\alpha+\tilde{\alpha}+\gamma+\tilde{\gamma}+\beta+\tilde{\beta}=1\). The exponent differences can be defined as \(\varrho=\alpha-\tilde{\alpha}\), \(\varsigma=\gamma-\tilde{\gamma}\) and \(\tau=\beta-\tilde{\beta}\). The following theorem goes back to Kimura [24], who gave necessary and sufficient conditions for solvability of the identity component of the differential Galois group of (2.19).

**Theorem 2.3**.: ([24]) _The identity component of the Galois group of the hypergeometric equation (2.19) is solvable if and only if either_

* _at least one of the four numbers_ \(\varrho+\tau+\varsigma,-\varrho+\tau+\varsigma,\varrho-\tau+\varsigma,\varrho+\tau-\varsigma\) _is an odd integer, or_
* _the numbers_ \(\varrho\) _or_ \(-\varrho\)_,_ \(\varsigma\) _or_ \(-\varsigma\) _and_ \(\tau\) _or_ \(-\tau\) _belong (in an arbitrary order) to some of the following fifteen families, see Table_ 2_._

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline 1 & \(1/2+l\) & \(1/2+s\) & Arbitrary complex number & \\ \hline 2 & \(1/2+l\) & \(1/3+s\) & \(1/3+\upsilon\) & \\ \hline 3 & \(2/3+l\) & \(1/3+s\) & \(1/3+\upsilon\) & \(l+s+\upsilon\) even \\ \hline 4 & \(1/2+l\) & \(1/3+s\) & \(1/4+\upsilon\) & \\ \hline 5 & \(2/3+l\) & \(1/4+s\) & \(1/4+\upsilon\) & \(l+s+\upsilon\) even \\ \hline 6 & \(1/2+l\) & \(1/3+s\) & \(1/5+\upsilon\) & \\ \hline 7 & \(2/5+l\) & \(1/3+s\) & \(1/3+\upsilon\) & \(l+s+\upsilon\) even \\ \hline 8 & \(2/3+l\) & \(1/5+s\) & \(1/5+\upsilon\) & \(l+s+\upsilon\) even \\ \hline 9 & \(1/2+l\) & \(2/5+s\) & \(1/5+\upsilon\) & \(l+s+\upsilon\) even \\ \hline 10 & \(3/5+l\) & \(1/3+s\) & \(1/5+\upsilon\) & \(l+s+\upsilon\) even \\ \hline 11 & \(2/5+l\) & \(2/5+s\) & \(2/5+\upsilon\) & \(l+s+\upsilon\) even \\ \hline 12 & \(2/3+l\) & \(1/3+s\) & \(1/5+\upsilon\) & \(l+s+\upsilon\) even \\ \hline 13 & \(4/5+l\) & \(1/5+s\) & \(1/5+\upsilon\) & \(l+s+\upsilon\) even \\ \hline 14 & \(1/2+l\) & \(2/5+s\) & \(1/3+\upsilon\) & \(l+s+\upsilon\) even \\ \hline 15 & \(3/5+l\) & \(2/5+s\) & \(1/3+\upsilon\) & \(l+s+\upsilon\) even \\ \hline \end{tabular} \end{table} Table 2: Schwarz table with \(l,s,\upsilon\in\mathbb{Z}\).

### Kovacic's results

Let \(\mathbb{C}\left(z\right)\) be the field of rational functions in the variable \(z\) with complex coefficients. Consider the second order linear differential equation \[\chi^{\prime\prime}=r\left(z\right)\chi,\quad r\left(z\right)\in\mathbb{C}\left(z\right).
\tag{2.20}\] It is well known that the differential Galois group \(G\) of equation (2.20) is an algebraic subgroup of \(\mathrm{SL}\left(2,\mathbb{C}\right)\). In 1986, Kovacic [26] characterized all possible types of \(G\) as follows.

**Theorem 2.4**.: ([26]) _The differential Galois group \(G\) of equation (2.20) is conjugated to one of the following four types:_

* \(G\) _is conjugated to a subgroup of a triangular group, and equation (_2.20_) admits a solution of the form_ \(\chi=\exp\left(\int\omega\right)\) _with_ \(\omega\in\mathbb{C}\left(z\right)\)_._
* \(G\) _is conjugate to a subgroup of_ \[\mathcal{G}=\left\{\left(\begin{array}{cc}\mathfrak{a}&0\\ 0&\mathfrak{a}^{-1}\end{array}\right)\,\middle|\,\mathfrak{a}\in\mathbb{C}\setminus\left\{0\right\}\right\}\bigcup\left\{\left(\begin{array}{cc}0&\mathfrak{a}\\ \mathfrak{a}^{-1}&0\end{array}\right)\,\middle|\,\mathfrak{a}\in\mathbb{C}\setminus\left\{0\right\}\right\}\] _and equation (_2.20_) admits a solution of the form_ \(\chi=\exp\left(\int\omega\right)\)_, where_ \(\omega\) _is algebraic of degree_ \(2\) _over_ \(\mathbb{C}\left(z\right)\)_._
* \(G\) _is finite and all solutions of equation (_2.20_) are algebraic over_ \(\mathbb{C}\left(z\right)\)_._
* \(G=\mathrm{SL}\left(2,\mathbb{C}\right)\) _and equation (_2.20_) does not admit a Liouvillian solution._

Let \(r\left(z\right)=\mathfrak{p}\left(z\right)/\mathfrak{q}\left(z\right)\) with \(\mathfrak{p}\left(z\right),\mathfrak{q}\left(z\right)\in\mathbb{C}\left[z\right]\) relatively prime. The _pole_ of \(r\left(z\right)\) is a zero of \(\mathfrak{q}\left(z\right)\) and _the order of the pole_ is the multiplicity of the zero of \(\mathfrak{q}\left(z\right)\). _The order of \(r\left(z\right)\) at \(\infty\)_ is defined by \(\deg\mathfrak{q}-\deg\mathfrak{p}\). Kovacic [26] also provided the necessary conditions for types (i), (ii), or (iii) in Theorem 2.4 to occur.

**Proposition 2.5**.: ([26]) _For the first three types in Theorem 2.4, the necessary conditions of occurrence are respectively as follows:_

**Type (i)**: _Each pole of_ \(r\left(z\right)\) _must have even order or else have order_ \(1\)_. The order of_ \(r\left(z\right)\) _at_ \(\infty\) _must be even or else be greater than_ \(2\)_._

**Type (ii)**: _The rational function_ \(r\left(z\right)\) _must have at least one pole that either has odd order greater than_ \(2\) _or else has order_ \(2\)_._

**Type (iii)**: _The order of a pole of_ \(r\left(z\right)\) _cannot exceed_ \(2\) _and the order of_ \(r\left(z\right)\) _at_ \(\infty\) _must be at least_ \(2\)_. If the partial fraction decomposition of_ \(r\left(z\right)\) _is_ \[r\left(z\right)=\sum_{i}\frac{\alpha_{i}}{\left(z-c_{i}\right)^{2}}+\sum_{j}\frac{\beta_{j}}{z-b_{j}},\] _then_ \(\sqrt{1+4\alpha_{i}}\in\mathbb{Q}\) _for each_ \(i\)_,_ \(\sum_{j}\beta_{j}=0\)_, and if_ \(\Delta=\sum_{i}\alpha_{i}+\sum_{j}\beta_{j}\)_, then_ \(\sqrt{1+4\Delta}\in\mathbb{Q}\)_._

**Remark 2.6**.: _For a general second order linear differential equation_ \[y^{\prime\prime}=a_{1}y^{\prime}+a_{2}y,\quad a_{1},a_{2}\in\mathbb{C}\left(z\right),\] _it can be transformed into the form (2.20) with_ \[r\left(z\right)=\frac{a_{1}^{2}}{4}-\frac{a_{1}^{\prime}}{2}+a_{2}\] _via the change_ \[y=\exp\left(\frac{1}{2}\int a_{1}dz\right)\chi. \tag{2.21}\]

## 3 Deduction of the critical systems

To derive the critical system (1.4), we refer to some basic properties of eigenvalues, viewed as functionals of potentials.
**Lemma 3.1**.: ([37, 50]) _Given \(m\in\mathbb{N}\), the \(m\)th eigenvalue \(\lambda_{m}(q)\) is continuously Frechet differentiable in \(q\in(\mathcal{L}^{p},\|\cdot\|_{p})\), \(p\in[1,\infty]\). Moreover, the Frechet derivative \(\partial_{q}\lambda_{m}(q)\), considered as an element of the conjugate space \((\mathcal{L}^{p})^{*}\), is_ \[\partial_{q}\lambda_{m}(q)=(E_{m}(\cdot;q))^{2}, \tag{3.22}\] _where \(E_{m}(x)=E_{m}(x;q)\) is an eigenfunction associated with \(\lambda_{m}(q)\) satisfying the following normalization condition_ \[\|E_{m}\|_{2}=\left(\int_{\Omega}E_{m}^{2}(x)\,\mathrm{d}x\right)^{1/2}=1, \qquad\text{and}\qquad E_{m}^{\prime}(0)>0. \tag{3.23}\] Let \(q_{l},\ q\in\mathcal{L}^{p}\) with \(p\in[1,\infty]\). We say that \(q_{l}\) is _weakly convergent_ to \(q\) in \(\mathcal{L}^{p}\) with respect to the weak topology \(w_{p}\) if \[\lim_{l\rightarrow\infty}\int_{\Omega}q_{l}\left(x\right)\xi\left(x\right)dx= \int_{\Omega}q\left(x\right)\xi\left(x\right)dx,\qquad\forall\xi\in\mathcal{L }^{p^{*}}.\] Such a convergence is also denoted by \(q_{l}\rightharpoonup q\) in \(\mathcal{L}^{p}\). The following lemma is the complete continuity of eigenvalues in weak topologies, see [34, 54, 50] for more details. **Lemma 3.2**.: ([34, 54, 50]) _Given \(m\in\mathbb{N}\), the \(m\)th eigenvalue \(\lambda_{m}(q)\) is completely continuous in \(q\in\left(\mathcal{L}^{p},w_{p}\right)\), \(p\in\left[1,\infty\right]\). Here \(w_{p}\) indicates the topology of weak convergence. More precisely, whenever \(q_{l}\rightharpoonup q\) in \(\mathcal{L}^{p}\), one has_ \[\lim_{l\to\infty}\lambda_{m}(q_{l})=\lambda_{m}(q). \tag{3.24}\] **Theorem 3.3**.: _Let \(p\in\left(1,\infty\right)\), \(r\in\left(0,\infty\right)\) and \(m\in\mathbb{N}\) be given with \(m\geq 2\). Then there exist potentials \(q^{\varepsilon}=q_{m,p,r}^{\varepsilon}\in\mathcal{L}^{p}\), \(\varepsilon=+,\)\(-\) such that_ \[\|q^{-}\|_{p}=\|q^{+}\|_{p}=r, \tag{3.25}\] _and_ \[\mathscr{E}_{m}^{-}=\mathscr{E}_{m}\left(q^{-}\right),\qquad\mathscr{E}_{m}^ {+}=\mathscr{E}_{m}^{+}\left(q^{+}\right). \tag{3.26}\] **Proof** Let \[B_{p,r}=\left\{q\in\mathcal{L}^{p}:\|\;q\;\|_{p}\leq r\right\}\] be the ball of the space \(\left(\mathcal{L}^{p},\|\cdot\|_{p}\right)\). As we know, the ball \(B_{p,r}\) is a compact set of \(\left(\mathcal{L}^{p},w_{p}\right)\) with \(p\in\left(1,\infty\right)\). Since the sum of the first \(m\) eigenvalues \(\mathscr{E}_{m}\left(q\right)\) is a finite sum, by the Lemma 3.2, then \(\mathscr{E}_{m}\left(q\right)\) is completely continuous in \(q\in\mathcal{L}^{p}\). So, there exist \(q^{\pm}(x)=q_{m,p,r}^{\pm}(x)\in\mathcal{L}^{p}\) such that \(\|q^{\pm}\|_{p}\leq r\) and (3.26) is confirmed. Obviously, the Frechet derivatives \[\partial_{q}\left(\mathscr{E}_{m}\left(q\right)\right)\big{|}_{q=q^{\pm}}= \sum_{i=1}^{m}\left(E_{i}\left(\cdot;q^{\pm}\right)\right)^{2}\] is a non-zero function. From the Lagrange theorem, it follows that the optimizing potentials \(q^{\pm}\) cannot be such that \(\|q^{\pm}\|_{p}<r\). This implies that \(\|q^{\pm}\|_{p}=r\), that is, equation (3.25). \(\square\) Theorem 3.3 tells us that problems (1.2) are constrained optimization problems, that is, \[\min\left(\max\right)\left(\mathscr{E}_{m}\left(q\right)\right)\;\text{subject to}\;\|q\|_{p}=r. 
\tag{3.27}\] Note that the Frechet derivatives of \(\mathscr{E}_{m}\left(q\right)\) and the \(L^{p}\) norm are \[\partial_{q}\left(\mathscr{E}_{m}\left(q\right)\right)=\sum_{i=1}^{m}\left(E_ {i}\left(x;q^{\pm}\right)\right)^{2}\] and \[\partial_{q}\|q\|_{p}=\|q\|_{p}^{1-p}|q(x)|^{p-2}q(x),\qquad q\neq 0,\] respectively. One can perform the Lagrange multiplier method to problems (3.27) to obtain that \(q=q_{m,p,r}^{\pm}\) satisfy \[|q(x)|^{p-2}q(x)=c\sum_{i=1}^{m}\left(E_{i}\left(x;q^{\pm}\right)\right)^{2}, \qquad x\in\Omega, \tag{3.28}\] for some \(c\neq 0\). For later convenience, we here write the Lagrangian multiplier \(c\) in the right-hand side. For an exponent \(p\in(1,\infty)\), the increasing homeomorphism \(\phi_{p}:\mathbb{R}\rightarrow\mathbb{R}\) is given by \[\phi_{p}(s):=|s|^{p-2}s\qquad\text{for }s\in\mathbb{R},\] and its inverse is \(\phi_{p^{*}}=\phi_{p}^{-1}\), where \(p^{*}:=p/(p-1)\in(1,\infty)\) is the conjugate exponent of \(p\). **Lemma 3.4**.: _The minimization and maximization problems of (1.2) correspond to the Lagrangian multiplier \(c<0\) and \(c>0\) in (3.28) respectively._ **Proof** For the optimizing potentials \(q=q_{m,p,r}^{\pm}\left(x\right)\), equation (3.28) is equivalent to \[\phi_{p}(q(x))=c\sum_{i=1}^{m}\left(E_{i}\left(x;q^{\pm}\right) \right)^{2}\text{ and }q(x)=\phi_{p^{*}}(c)\phi_{p^{*}}\left(\sum_{i=1}^{m}\left(E_{i} \left(x;q^{\pm}\right)\right)^{2}\right).\] We construct the following parameterized potentials \[Q_{\sigma}:=\sigma\phi_{p^{*}}(c)\phi_{p^{*}}\left(\sum_{i=1}^{m }\left(E_{i}\left(x;q\right)\right)^{2}\right)\in B_{p}[r],\qquad\sigma\in[0,1].\] Clearly, \(Q_{1}=q\). For the minimization problem (1.2), one has \[\mathscr{E}_{m}\left(Q_{\sigma}\right)\geq\mathscr{E}_{m}\left(Q _{1}\right)\qquad\forall\sigma\in[0,1].\] Thereby, the derivative \[0 \geq\left.\frac{\mathrm{d}}{\mathrm{d}\sigma}\left(\mathscr{E}_{ m}\left(Q_{\sigma}\right)\right)\right|_{\sigma=1}\] \[= \int_{\Omega}\left(\sum_{i=1}^{m}\left(E_{i}\left(x;q\right) \right)^{2}\right)\cdot\phi_{p^{*}}(c)\phi_{p^{*}}\left(\sum_{i=1}^{m}\left(E _{i}\left(x;q\right)\right)^{2}\right)\,\mathrm{d}x\] \[= \phi_{p^{*}}(c)\int_{\Omega}\left|\sum_{i=1}^{m}\left(E_{i} \left(x;q\right)\right)^{2}\right|^{p^{*}}\,\mathrm{d}x.\] Since \(c\neq 0\), there must be \(c<0\) for the minimization problem. Analogously, \(c>0\) for the maximization problem. \(\square\) **Proof of Theorem 1.1.** Note that \(E_{i}(x)=E_{i}(x;q)\) are eigenfunctions for \(i=1,\ldots,m\). We define the following \(m\) parameters \[\mu_{i}:=\lambda_{i}(q)\qquad i=1,\ldots,m.\] Thus, \[-E_{i}^{\prime\prime}+q\left(x\right)E_{i}=\mu_{i}E_{i},\;i=1, \ldots,m,\;x\in\Omega. \tag{3.29}\] To simplify the original critical equation (3.28), we need to introduce the next notations \[\varepsilon:=\mathrm{sign}(c)=\pm 1,\qquad u_{i}(x):=\sqrt{|c|}E_{i}(x;q),\quad i= 1,\ldots,m. \tag{3.30}\] Therefore, equation (3.28) is equivalent to \[\phi_{p}(q(x))=\varepsilon\sum_{i=1}^{m}u_{i}^{2}\text{ and }q(x)=\varepsilon\phi_{p^{ \ast}}\left(\sum_{i=1}^{m}u_{i}^{2}\right). \tag{3.31}\] Since \(u_{i}\left(x\right)\) are still eigenfunctions, by equation (3.29), we have \[-u_{i}^{\prime\prime}+q\left(x\right)u_{i}=\mu_{i}u_{i},\qquad i=1,\ldots,m.\] The critical system (1.4) is obtained directly by substituting (3.31) into the above system. From the above analysis, it is not difficult to prove that equalities (1.5)-(1.7) hold. For instance, equality (1.6) is the second equality of (3.31). Equality (1.5) is from the norm \(\|q\|_{p}=r\). 
The proof is finished. \(\square\)

## 4 Complete integrability

In this section, Propositions 4.1 and 4.3 show that system (1.12) is completely integrable if the parameters \(k\) and \(\mathscr{U}\) belong to Table 1.

**Proposition 4.1**.: _For \(\mu_{1}=\mu_{2}=\cdots=\mu_{m}\), the Hamiltonian system (1.12) is completely integrable with \(m-1\) additional functionally independent first integrals_ \[I_{i}=u_{1}v_{i+1}-u_{i+1}v_{1},\quad i=1,\ldots,m-1. \tag{4.32}\]

**Proof** Straightforward calculations show that \(I_{i}=u_{1}v_{i+1}-u_{i+1}v_{1}\) are first integrals of system (1.12) with \(i=1,\ldots,m-1\). Since \[\det\left(\partial_{\mathbf{u}}H,\partial_{\mathbf{u}}I_{1},\partial_{\mathbf{u}}I_{2},\ldots,\partial_{\mathbf{u}}I_{m-1}\right)=\det\left(\begin{array}{cccccc}\partial_{u_{1}}H&v_{2}&v_{3}&v_{4}&\cdots&v_{m}\\ \partial_{u_{2}}H&-v_{1}&0&0&\cdots&0\\ \partial_{u_{3}}H&0&-v_{1}&0&\cdots&0\\ \partial_{u_{4}}H&0&0&-v_{1}&\cdots&0\\ \partial_{u_{5}}H&0&0&0&\cdots&0\\ \vdots&\vdots&\vdots&\vdots&\ddots&\vdots\\ \partial_{u_{m}}H&0&0&0&\cdots&-v_{1}\end{array}\right)\not\equiv 0,\] then \(\mathrm{rank}\left(\nabla H,\nabla I_{1},\nabla I_{2},\ldots,\nabla I_{m-1}\right)=m\), that is, \(H\) and \(I_{i}\) are functionally independent for \(i=1,\ldots,m-1\). \(\square\)

For \(k=2\) and \(\varepsilon=+\), the Hamiltonian system (1.12) becomes a known completely integrable mechanical system, see [14].

**Lemma 4.2**.: ([14]) _For \(k=2\) and \(\varepsilon=+\), the following statements hold._

* _The Hamiltonian system (_1.12_) is completely integrable._
* _Let_ \[\begin{split}\mathcal{I}\left(\mathbf{u},\mathbf{v},\epsilon\right)=&\left(\sum_{j=1}^{m}u_{j}^{2}\right)\left(\sum_{j=1}^{m}\delta_{j}u_{j}^{2}\right)-\left(\sum_{j=1}^{m}\delta_{j}u_{j}^{2}\right)\left(\sum_{j=1}^{m}\delta_{j}v_{j}^{2}\right)+\\ &\left(\sum_{j=1}^{m}\delta_{j}u_{j}v_{j}\right)^{2}+2\sum_{j=1}^{m}\delta_{j}\left(v_{j}^{2}+\mu_{j}u_{j}^{2}\right)\end{split}\] (4.33) _with_ \(\delta_{j}=1/\left(\epsilon-\mu_{j}\right)\) _for_ \(j=1,\ldots,m\)_. Then_ \(\mathcal{I}\left(\mathbf{u},\mathbf{v},\epsilon\right)\) _is a first integral of system (_1.12_) for each_ \(\epsilon\)_, and the expansion of_ \(\mathcal{I}\left(\mathbf{u},\mathbf{v},\epsilon\right)\) _in powers of_ \(\epsilon\) _gives_ \(m\) _functionally independent first integrals in involution for (_1.13_)._

**Proposition 4.3**.: _Let \(I\left(\mathbf{u},\mathbf{v}\right)\) be a first integral of system (1.12) with \(k=2\) and \(\varepsilon=+\). Then the following statements hold._

* _For_ \(k=2\)_, the Hamiltonian system (_1.12_) is completely integrable._
* _For_ \(k=2\) _and_ \(\varepsilon=+\)_,_ \(m\) _functionally independent first integrals are given by statement (_ii_) of Lemma_ 4.2_._
* _For_ \(k=2\) _and_ \(\varepsilon=-\)_,_ \(I\left(-\mathrm{i}\mathbf{u},\mathrm{i}\mathbf{v}\right)\) _is a first integral of system (_1.12_)._

**Proof** Let \(I\left(\mathbf{u},\mathbf{v}\right)\) be a first integral of system (1.12) with \(\varepsilon=+\). Doing the linear canonical change of variables \[\left(\mathbf{u},\mathbf{v},t\right)\mapsto\left(\mathrm{i}\mathbf{u},-\mathrm{i}\mathbf{v},-t\right),\] the integrability of the case \(\varepsilon=-\) is equivalent to the case \(\varepsilon=+\). Thus, \(I\left(-\mathrm{i}\mathbf{u},\mathrm{i}\mathbf{v}\right)\) is a first integral of system (1.12) with \(\varepsilon=-\). By Lemma 4.2, this proposition holds.
\(\square\)

## 5 Meromorphic non-integrability

In this section, our aim is to prove the meromorphic non-integrability of Hamiltonian system (1.12) when the parameters \(k\) and \(\mathscr{U}\) are outside Table 1.

**Proposition 5.1**.: _If the parameters \(k\) and \(\mathscr{U}\) are outside Table 1, then Hamiltonian system (1.12) is meromorphic non-integrable._

**Proof** Let the parameters \(k\) and \(\mathscr{U}\) be outside Table 1. Then, \(k\geq 3\) and there exists a positive integer \(j_{0}\in\{2,\ldots,m\}\) such that \(\mu_{1}\neq\mu_{j_{0}}\). We can assume without loss of generality that \(j_{0}=2\), that is, \(\mu_{1}\neq\mu_{2}\), because otherwise one can interchange the roles of \(\mu_{j_{0}}\) and \(\mu_{2}\), and of \(u_{j_{0}}\) and \(u_{2}\). To be clear, our analysis is divided into two classes: **Class 1:** \(k\geq 3\), \(\mu_{1}\neq\mu_{2}\) and \(\mu_{1}\mu_{2}\neq 0\) (i.e. Lemma 5.2 below); **Class 2:** \(k\geq 3\), \(\mu_{1}\neq\mu_{2}\) and \(\mu_{1}\mu_{2}=0\) (i.e. Lemma 5.3 below). The following Lemma 5.2 and Lemma 5.3 will complete the proof of Proposition 5.1.

**Lemma 5.2**.: _If \(k\geq 3\), \(\mu_{1}\neq\mu_{2}\) and \(\mu_{1}\mu_{2}\neq 0\), then Hamiltonian system (1.12) is meromorphic non-integrable._

**Proof** One can easily observe that system (1.12) has two invariant manifolds \[\mathcal{N}_{1}=\left\{\left(\mathbf{u},\mathbf{v}\right)\in\mathbb{C}^{2m}\mid u_{j}=v_{j}=0,j=2,\ldots,m\right\},\] \[\mathcal{N}_{2}=\left\{\left(\mathbf{u},\mathbf{v}\right)\in\mathbb{C}^{2m}\mid u_{1}=v_{1}=0,u_{j}=v_{j}=0,j=3,\ldots,m\right\}.\] System (1.12) restricted to the first invariant manifold \(\mathcal{N}_{1}\) becomes \[u_{1}^{\prime}=v_{1},\quad v_{1}^{\prime}=-\mu_{1}u_{1}+\varepsilon u_{1}^{2k-1}, \tag{5.34}\] which has first integral \[h=\frac{1}{2}v_{1}^{2}+\frac{1}{2}\mu_{1}u_{1}^{2}-\frac{\varepsilon}{2k}u_{1}^{2k}. \tag{5.35}\] Solving equation (5.35), we have \[\frac{du_{1}}{dt}=\pm\sqrt{2h+\frac{\varepsilon}{k}u_{1}^{2k}-\mu_{1}u_{1}^{2}}. \tag{5.36}\] As we know, the integral of equation (5.36) for \(k=2\) and \(k\geq 3\) is respectively an _incomplete elliptic integral of the first kind_ and a _hyperelliptic integral_, whose expressions are not always _elementary functions_, see [6]. Let \(\Theta\left(h\right)\subset\mathbb{C}^{2}\) be an integral curve of system (5.34) lying on the energy level \(h\). So, \[\varGamma_{h}:=\left\{\left(u_{1}\left(t\right),v_{1}\left(t\right),0,\ldots,0\right)\in\mathbb{C}^{2m}\mid\left(u_{1}\left(t\right),v_{1}\left(t\right)\right)\in\Theta\left(h\right)\right\} \tag{5.37}\] is a particular solution of system (1.12). To apply Theorem 2.1, we need to construct a non-equilibrium particular solution \(\varGamma_{h}\). We fix the energy level \(h=0\). Equation (5.36) has three equilibrium points \(u_{1}=0,\pm\sqrt[2k-2]{\mu_{1}k/\varepsilon}\) in the zero energy level. To exclude these equilibrium points, one can assume that \(u_{1}\left(t\right)\) is not a constant. In this way, we can get a non-equilibrium particular solution \(\varGamma_{0}\). Let \(\boldsymbol{\xi}:=\left(\xi_{1},\ldots,\xi_{m}\right)^{T}\) and \(\boldsymbol{\tilde{\xi}}:=\left(\tilde{\xi}_{1},\ldots,\tilde{\xi}_{m}\right)^{T}\).
We obtain that the variational equation (VE) along \(\Gamma_{0}\) is \[\left(\begin{array}{c}\boldsymbol{\xi}^{\prime}\\ \boldsymbol{\tilde{\xi}}^{\prime}\end{array}\right)=\left(\begin{array}{cc}\boldsymbol{0}&\mathbf{I}\\ \boldsymbol{\Lambda}&\boldsymbol{0}\end{array}\right)\left(\begin{array}{c}\boldsymbol{\xi}\\ \boldsymbol{\tilde{\xi}}\end{array}\right), \tag{5.38}\] where \[\boldsymbol{\Lambda}:=\text{diag}\,\left(\varepsilon\left(2k-1\right)u_{1}^{2k-2}\left(t\right)-\mu_{1},\varepsilon u_{1}^{2k-2}\left(t\right)-\mu_{2},\varepsilon u_{1}^{2k-2}\left(t\right)-\mu_{3},\ldots,\varepsilon u_{1}^{2k-2}\left(t\right)-\mu_{m}\right)\!.\] The VE (5.38) is composed of \(m\) uncoupled Schrodinger equations \[\boldsymbol{\xi}^{\prime\prime}=\boldsymbol{\Lambda}\boldsymbol{\xi},\] that is, \[\xi_{1}^{\prime\prime}=\left(\varepsilon\left(2k-1\right)u_{1}^{2k-2}\left(t\right)-\mu_{1}\right)\xi_{1}, \tag{5.39}\] \[\xi_{j}^{\prime\prime}=\left(\varepsilon u_{1}^{2k-2}\left(t\right)-\mu_{j}\right)\xi_{j},\quad j=2,\ldots,m. \tag{5.40}\] Since \(\xi_{1}=u_{1}^{\prime}\left(t\right)\) is a solution of (5.39), equation (5.39) can be solved by Liouville's formula [28]. Thereby, the normal variational equations (NVE) along \(\Gamma_{0}\) are \[\xi_{j}^{\prime\prime}=\left(\varepsilon u_{1}^{2k-2}\left(t\right)-\mu_{j}\right)\xi_{j},\quad j=2,\ldots,m. \tag{5.41}\] Inspired by Yoshida [52], we introduce the following finite branched covering map \[\begin{split}&\overline{\Gamma}_{0}\to\mathbf{P}^{1},\\ & t\longmapsto z=\frac{\varepsilon}{k\mu_{1}}u_{1}^{2k-2}\left(t\right),\end{split} \tag{5.42}\] where \(\overline{\Gamma}_{0}\) is the compact Riemann surface of the curve \(v_{1}^{2}=\varepsilon u_{1}^{2k}/k-\mu_{1}u_{1}^{2}\) and \(\mathbf{P}^{1}\) is the Riemann sphere. Performing the Yoshida transformation (5.42), the normal variational equations (5.41) can be written as the hypergeometric differential equations in the new independent variable \(z\) \[\frac{d^{2}\xi_{j}}{dz^{2}}+\left(\frac{1}{z}+\frac{1}{2(z-1)}\right)\frac{d\xi_{j}}{dz}-\left(\frac{\mu_{j}}{4\mu_{1}(k-1)^{2}z^{2}}+\frac{k\mu_{1}-\mu_{j}}{4\mu_{1}(k-1)^{2}z(z-1)}\right)\xi_{j}=0,\quad j=2,\ldots,m. \tag{\({\rm ANVE}_{j}\)}\] The above differential system of equations is called the _algebraic normal variational equations_ (ANVE), and is denoted as \[{\rm ANVE}={\rm ANVE}_{2}\oplus{\rm ANVE}_{3}\oplus\cdots\oplus{\rm ANVE}_{m}. \tag{5.43}\] Essentially, equation (5.43) is a direct sum in the more intrinsic sense of linear connections, see Chapter 2 of [35] for more details. From Theorem 2.2, it follows that the identity components of the Galois groups of the NVE (5.41) and the ANVE (5.43) coincide. Obviously, the ANVE (5.43) is integrable if and only if each \({\rm ANVE}_{j}\) is integrable for \(j=2,\ldots,m\). More precisely, the identity component of the Galois group of the ANVE is solvable if and only if the identity component of the Galois group of each \({\rm ANVE}_{j}\) is solvable for \(j=2,\ldots,m\). Now, we consider the \({\rm ANVE}_{2}\): \[\frac{d^{2}\xi_{2}}{dz^{2}}+\left(\frac{1}{z}+\frac{1}{2(z-1)}\right)\frac{d\xi_{2}}{dz}-\left(\frac{\mu_{2}}{4\mu_{1}(k-1)^{2}z^{2}}+\frac{k\mu_{1}-\mu_{2}}{4\mu_{1}(k-1)^{2}z(z-1)}\right)\xi_{2}=0 \tag{5.44}\] with three singular points at \(z=0,1,\infty\).
Comparing (5.44) with the general form of the hypergeometric equation (2.19), one can see that the exponents of (5.44) at the singular points must fulfill the following relations \[\alpha+\tilde{\alpha}=0,\quad\alpha\tilde{\alpha}=-\frac{\mu_{2}}{4\mu_{1}(k-1)^{2}},\] \[\beta+\tilde{\beta}=\frac{1}{2},\quad\beta\tilde{\beta}=-\frac{k}{4(k-1)^{2}},\] \[\gamma+\tilde{\gamma}=\frac{1}{2},\quad\gamma\tilde{\gamma}=0.\] Thus, all the possibilities of the differences of exponents are \[\varrho=\pm\frac{1}{k-1}\sqrt{\frac{\mu_{2}}{\mu_{1}}},\tau=\pm\frac{1}{2}\left(1+\frac{2}{k-1}\right)\text{ and }\varsigma=\pm\frac{1}{2}. \tag{5.45}\] Moreover, we can get all the possibilities of \(\varrho+\tau+\varsigma,-\varrho+\tau+\varsigma,\varrho-\tau+\varsigma\) and \(\varrho+\tau-\varsigma\), see Table 3. If equation (5.44) satisfies statement (i) of Theorem 2.3, by Table 3, then \[\frac{1}{k-1}\left(\sqrt{\frac{\mu_{2}}{\mu_{1}}}+1\right)\text{ or }\frac{1}{k-1}\left(\sqrt{\frac{\mu_{2}}{\mu_{1}}}-1\right)\] must be an integer, that is, \[\frac{\mu_{2}}{\mu_{1}}\in\left\{\left(\left(k-1\right)\ell\pm 1\right)^{2}\,\middle|\,\ell\in\mathbb{N}\right\}. \tag{5.46}\] The statement (ii) of Theorem 2.3 has 15 possibilities in Table 2. If the statement (ii) of Theorem 2.3 is fulfilled for equation (5.44), from equation (5.45), we find that only the first row of Table 2 applies. Note that \(k\geq 3\). Therefore, \[\pm\frac{1}{k-1}\sqrt{\frac{\mu_{2}}{\mu_{1}}}=\frac{1}{2}+\ell,\quad\ell\in\mathbb{Z},\] that is, \[\frac{\mu_{2}}{\mu_{1}}\in\left\{\frac{\left(k-1\right)^{2}\left(2\ell+1\right)^{2}}{4}\Bigg{|}\ell\in\mathbb{Z}\right\}.\] Based on the analysis above, the parameters \(\mu_{1}\) and \(\mu_{2}\) must satisfy \[\frac{\mu_{2}}{\mu_{1}}\in\left\{\left(\left(k-1\right)\ell\pm 1\right)^{2}\,\middle|\,\ell\in\mathbb{N}\right\}\bigcup\left\{\frac{\left(k-1\right)^{2}\left(2\ell+1\right)^{2}}{4}\Bigg{|}\ell\in\mathbb{Z}\right\} \tag{5.47}\] if the identity component of the Galois group of the NVE (5.41) is Abelian. On the second invariant manifold \(\mathcal{N}_{2}\), system (1.12) is written as \[u_{2}^{\prime}=v_{2},\quad v_{2}^{\prime}=-\mu_{2}u_{2}+\varepsilon u_{2}^{2k-1} \tag{5.48}\] with Hamiltonian \[\tilde{h}=\frac{1}{2}v_{2}^{2}+\frac{1}{2}\mu_{2}u_{2}^{2}-\frac{\varepsilon}{2k}u_{2}^{2k}. \tag{5.49}\] Solving equation (5.49), we have \[\frac{du_{2}}{dt}=\pm\sqrt{2\tilde{h}+\frac{\varepsilon}{k}u_{2}^{2k}-\mu_{2}u_{2}^{2}}. \tag{5.50}\] Let \(\widetilde{\Theta}\left(\tilde{h}\right)\subset\mathbb{C}^{2}\) be an integral curve of system (5.48) lying on the energy level \(\tilde{h}\). Thus, \[\widetilde{\Gamma}_{\tilde{h}}:=\left\{\left(0,0,u_{2}\left(t\right),v_{2}\left(t\right),0,\ldots,0\right)\in\mathbb{C}^{2m}\mid\left(u_{2}\left(t\right),v_{2}\left(t\right)\right)\in\widetilde{\Theta}\left(\tilde{h}\right)\right\} \tag{5.51}\] is a particular solution of system (1.12). We select the energy level \(\tilde{h}=0\). Equation (5.50) has three equilibrium points \(u_{2}=0,\pm\sqrt[2k-2]{\mu_{2}k/\varepsilon}\) in the zero energy level. In the same way as for the particular solution \(\Gamma_{0}\), we can find a non-equilibrium particular solution \(\widetilde{\Gamma}_{0}\). Let \(\boldsymbol{\eta}:=\left(\eta_{1},\ldots,\eta_{m}\right)^{T}\) and \(\boldsymbol{\tilde{\eta}}:=\left(\tilde{\eta}_{1},\ldots,\tilde{\eta}_{m}\right)^{T}\).
The variational equations (VE) along \(\widetilde{\Gamma}_{0}\) is given by \[\left(\begin{array}{c}\boldsymbol{\eta}^{\prime}\\ \boldsymbol{\tilde{\eta}}^{\prime}\end{array}\right)=\left(\begin{array}{ ccc}\boldsymbol{0}&\mathbf{I}\\ \boldsymbol{\tilde{\Lambda}}&\boldsymbol{0}\end{array}\right)\left(\begin{array} []{c}\boldsymbol{\eta}\\ \boldsymbol{\tilde{\eta}}\end{array}\right), \tag{5.52}\] where \[\mathbf{\tilde{\Lambda}}:=\text{diag}\,\Big{(}\varepsilon u_{2}^{2k-2}\left(t \right)-\mu_{1},\varepsilon\left(2k-1\right)u_{2}^{2k-2}\left(t\right)-\mu_{2},\varepsilon u_{2}^{2k-2}\left(t\right)-\mu_{3},\ldots,\varepsilon u_{2}^{2k-2 }\left(t\right)-\mu_{m}\Big{)}.\] The VE (5.52) is also composed of \(m\) uncoupled Schrodinger equations \[\boldsymbol{\eta}^{\prime\prime}=\mathbf{\tilde{\Lambda}}\boldsymbol{\eta},\] that is, \[\begin{split}\eta_{1}^{\prime\prime}&=\left( \varepsilon u_{2}^{2k-2}\left(t\right)-\mu_{1}\right)\eta_{1},\\ \eta_{2}^{\prime\prime}&=\left(\varepsilon\left(2k- 1\right)u_{2}^{2k-2}\left(t\right)-\mu_{2}\right)\eta_{2},\\ \eta_{j}^{\prime\prime}&=\left(\varepsilon u_{2}^{2 k-2}\left(t\right)-\mu_{j}\right)\eta_{j},\quad j=3,\ldots,m.\end{split} \tag{5.53}\] Using Liouville's formula [28], the second equation of (5.53) is solvable due to the fact that it has a solution \(\eta_{2}=u_{2}^{\prime}\left(t\right)\). Therefore, the corresponding normal variational equations (\(\widetilde{\text{NVE}}\)) along \(\widetilde{\Gamma}_{0}\) are given by \[\begin{split}\eta_{1}^{\prime\prime}&=\left( \varepsilon u_{2}^{2k-2}\left(t\right)-\mu_{1}\right)\eta_{1},\\ \eta_{j}^{\prime\prime}&=\left(\varepsilon u_{2}^{2 k-2}\left(t\right)-\mu_{j}\right)\eta_{j},\quad j=3,\ldots,m.\end{split} \tag{5.54}\] Similarly, we can carry out the following Yoshida transformation \[t\longmapsto z=\frac{\varepsilon}{k\mu_{2}}u_{2}^{2k-2}\left(t\right),\] and transform \(\widetilde{\text{NVE}}\) (5.54) into the algebraic normal variational equations (\(\widetilde{\text{ANVE}}\)): \[\begin{split}\frac{d^{2}\eta_{1}}{dz^{2}}+\left(\frac{1}{z}+ \frac{1}{2(z-1)}\right)\frac{d\eta_{1}}{dz}-\left(\frac{\mu_{1}}{4\mu_{2}(k-1) ^{2}z^{2}}+\frac{k\mu_{2}-\mu_{1}}{4\mu_{2}(k-1)^{2}z(z-1)}\right)\eta_{1}& =0,\quad\text{($\widetilde{\text{ANVE}}_{1}$)}\\ \frac{d^{2}\eta_{j}}{dz^{2}}+\left(\frac{1}{z}+\frac{1}{2(z-1)} \right)\frac{d\eta_{j}}{dz}-\left(\frac{\mu_{j}}{4\mu_{2}(k-1)^{2}z^{2}}+\frac {k\mu_{2}-\mu_{j}}{4\mu_{2}(k-1)^{2}z(z-1)}\right)\eta_{j}&=0, \quad\text{($\widetilde{\text{ANVE}}_{j}$)}\\ j=3,\ldots,m.\end{split}\] The direct sum form of \(\widetilde{\text{ANVE}}\) is \(\widetilde{\text{ANVE}}=\widetilde{\text{ANVE}}_{1}\oplus\widetilde{\text{ ANVE}}_{3}\oplus\widetilde{\text{ANVE}}_{4}\oplus\cdots\oplus\widetilde{\text{ANVE}}_{m}\). For the \(\widetilde{\text{ANVE}}_{1}\), all the possibilities of the differences of exponents are \[\varrho=\pm\frac{1}{k-1}\sqrt{\frac{\mu_{1}}{\mu_{2}}},\tau=\pm\frac{1}{2} \left(1+\frac{2}{k-1}\right)\text{ and }\varsigma=\pm\frac{1}{2}.\] By the same discussions as NVE (5.41), we obtain that the parameters \(\mu_{1}\) and \(\mu_{2}\) must satisfy \[\frac{\mu_{1}}{\mu_{2}}\in\left\{\left(\left(k-1\right)\ell\pm 1\right)^{2} \left|\ell\in\mathbb{N}\right\}\bigcup\left\{\frac{\left(k-1\right)^{2}\left(2 \ell+1\right)^{2}}{4}\middle|\ell\in\mathbb{Z}\right\} \tag{5.55}\] if the identity components of the Galois groups of the \(\widetilde{\mathrm{NVE}}\) (5.54) is Abelian. 
The conditions (5.47) and (5.55) imply that \[\frac{\mu_{2}}{\mu_{1}}\geq 1\text{ and }\frac{\mu_{1}}{\mu_{2}}\geq 1,\] respectively. This contradicts our assumption \(\mu_{1}\neq\mu_{2}\). Consequently, either the identity components of the Galois groups of the NVE (5.41) or \(\widetilde{\mathrm{NVE}}\) (5.54) is not Abelian. By Theorem 2.1, the Hamiltonian system (1.12) for \(k\geq 3\) is meromorphic non-integrable with \(\mu_{1}\neq\mu_{2}\) and \(\mu_{1}\mu_{2}\neq 0\). The proof is finished. \(\square\) **Lemma 5.3**.: _If \(k\geq 3\), \(\mu_{1}\neq\mu_{2}\) and \(\mu_{1}\mu_{2}=0\), then Hamiltonian system (1.12) is meromorphic non-integrable._ **Proof** Our proof will be distinguished two cases: **Case 1:**\(\mu_{1}=0,\mu_{2}\neq 0\) and **Case 2:**\(\mu_{1}\neq 0,\mu_{2}=0\). **Case 1:**\(\mu_{1}=0\)**and**\(\mu_{2}\neq 0\)**.** For this case, we also restrict system (1.12) on the invariant manifold \(\mathcal{N}_{1}\). Namely, \[u_{1}^{\prime}=v_{1},\quad v_{1}^{\prime}=\varepsilon u_{1}^{2k-1} \tag{5.56}\] with Hamiltonian \[h=\frac{1}{2}v_{1}^{2}-\frac{\varepsilon}{2k}u_{1}^{2k}. \tag{5.57}\] Analogously, we also consider the particular solution \(\Gamma_{0}\) in the proof of Lemma 5.2, and compute the \(\widetilde{\mathrm{NVE}}\) along \(\Gamma_{0}\) \[\xi_{j}^{\prime\prime}=\left(\varepsilon u_{1}^{2k-2}\left(t\right)-\mu_{j} \right)\xi_{j},\quad j=2,\ldots,m. \tag{5.58}\] Doing the change of variable \[t\longmapsto z=\frac{\varepsilon}{2\mu_{2}}u_{1}^{2k-2}\left(t\right),\] we attain the algebraic normal variational equations (\(\widehat{\mathrm{ANVE}}\)): \[\frac{d^{2}\xi_{2}}{dz^{2}}+\frac{3}{2z}\frac{d\xi_{2}}{dz}- \frac{k\left(2z-1\right)}{8(k-1)^{2}z^{3}}\xi_{2}=0,\] ( \[\widehat{\mathrm{ANVE}}_{2}\] ) \[\frac{d^{2}\xi_{j}}{dz^{2}}+\frac{3}{2z}\frac{d\xi_{j}}{dz}- \frac{k\left(2\mu_{2}z-\mu_{j}\right)}{8(k-1)^{2}\mu_{2}z^{3}}\xi_{j}=0,\quad j =3,\ldots,m,\] ( \[\widehat{\mathrm{ANVE}}_{j}\] ) and denote by \[\widehat{\mathrm{ANVE}}=\widehat{\mathrm{ANVE}}_{2}\oplus\widehat{\mathrm{ ANVE}}_{3}\oplus\cdots\oplus\widehat{\mathrm{ANVE}}_{m}. \tag{5.59}\] Making the classical transformation \[\xi_{2}=\chi\exp\left(-\frac{3}{4}\int\frac{dz}{z}\right)=\chi z^{-3/4},\] the \(\widehat{\mathrm{ANVE}}_{2}\) reads \[\chi^{\prime\prime}=r\left(z\right)\chi, \tag{5.60}\] where \[r\left(z\right)=-\left(\frac{(k-3)(3k-1)}{16(k-1)^{2}z^{2}}+\frac{k}{8(k-1)^{2 }z^{3}}\right). \tag{5.61}\] Then, the set of poles of \(r\left(z\right)\) is \(\Upsilon=\{0,\infty\}\). The order of \(z=0\) and \(z=\infty\) is \(o\left(0\right)=3\) and \(o\left(\infty\right)=2\), respectively. Using Proposition 2.5 to equation (5.60), only types (ii) or (iv) of Theorem 2.4 can appear. Working the second part of Kovacic's algorithm (see Appendix A), we obtain that \[\mathcal{E}_{\infty}=\left\{2+\ell\sqrt{1-\frac{(k-3)(3k-1)}{4(k-1)^{2}}}\; \middle|\;\ell=0,\pm 2\right\}\bigcap\mathbb{Z}=\begin{cases}\left\{0,2,4\right\},\; \text{if}\;k=3,\\ \left\{2\right\},\;\text{if}\;k\geq 4.\end{cases}\quad\text{ and }\mathcal{E}_{0}=\left\{3\right\}.\] Straightforward computations show that the number \(d=d\left(\boldsymbol{\varpi}\right)=\left(\varpi_{\infty}-\sum_{c\in\Upsilon }\varpi_{c}\right)/2\) is not a non-negative integer. Therefore, type (iv) of Theorem 2.4 holds. This means that the identity component of the Galois group of the \(\widehat{\mathrm{ANVE}}\) (5.59) is not Abelian. Thereby, the identity component of the Galois group of the \(\widehat{\mathrm{NVE}}\) (5.58) is also not Abelian. 
From Theorem 2.1, it follows that the Hamiltonian system (1.12) for \(k\geq 3\) is meromorphic non-integrable with \(\mu_{1}=0\) and \(\mu_{2}\neq 0\). **Case 2: \(\mu_{1}\neq 0\) and \(\mu_{2}=0\).** Substituting \(\mu_{2}=0\) into (5.54), we get the normal variational equations along \(\widetilde{\Gamma}_{0}\): \[\begin{split}\eta_{1}^{\prime\prime}&=\left(\varepsilon u_{2}^{2k-2}\left(t\right)-\mu_{1}\right)\eta_{1},\\ \eta_{j}^{\prime\prime}&=\left(\varepsilon u_{2}^{2k-2}\left(t\right)-\mu_{j}\right)\eta_{j},\quad j=3,\ldots,m.\end{split} \tag{5.62}\] After the change of variable \[t\longmapsto z=\frac{\varepsilon}{2\mu_{1}}u_{2}^{2k-2}\left(t\right),\] equations (5.62) become the algebraic normal variational equations \[\frac{d^{2}\eta_{1}}{dz^{2}}+\frac{3}{2z}\frac{d\eta_{1}}{dz}-\frac{k\left(2z-1\right)}{8(k-1)^{2}z^{3}}\eta_{1}=0,\] \[\frac{d^{2}\eta_{j}}{dz^{2}}+\frac{3}{2z}\frac{d\eta_{j}}{dz}-\frac{k\left(2\mu_{1}z-\mu_{j}\right)}{8(k-1)^{2}\mu_{1}z^{3}}\eta_{j}=0,\quad j=3,\ldots,m.\] The analysis is exactly the same as in **Case 1**. Thus, the Hamiltonian system (1.12) for \(k\geq 3\) is meromorphic non-integrable with \(\mu_{1}\neq 0\) and \(\mu_{2}=0\). This proves the lemma. \(\square\) **Proof of Theorem 1.2.** Theorem 1.2 follows from Propositions 4.1, 4.3 and 5.1. \(\square\) ## 6 Poincare cross section For integrable Hamiltonian systems, the Liouville-Arnold theorem [49] shows that the dynamical behavior is ordered and regular. For weakly perturbed (originally integrable) Hamiltonian systems, the KAM theorem [49] shows that the dynamics may become stochastic and chaotic, exhibiting phenomena such as chaos and Arnold diffusion (the latter requiring at least three degrees of freedom). Roughly speaking, the phase space of an integrable Hamiltonian system is foliated by KAM tori, which obstruct the stochasticity of the trajectories. Under some weak perturbations, the KAM tori break down, resulting in chaotic motion. In other words, chaotic behavior can destroy meromorphic integrability. The classical Poincare cross section technique can intuitively present this dynamical process: local stability, the transition of trajectories from ordered to chaotic, and many other dynamical properties. In the calculation of the Poincare cross sections below, we focus on the Hamiltonian system (1.12) with two degrees of freedom (i.e. \(m=2\)) and fix the parameter \(\varepsilon=-1\). Consider the energy level \[M_{h}:=\left\{(u_{1},u_{2},v_{1},v_{2})\in\mathbb{R}^{4}|H\left(u_{1},u_{2},v_{1},v_{2}\right)=h,\;h\in\mathbb{R}\right\}.\] On the energy level \(M_{h}\), we select \(u_{1}=0\) as the cross section plane, with coordinates \((u_{2},v_{2})\). Taking the energy \(h=0.85\), Figure 1 shows the integrable case with \(k=2\) for the parameter sets \(\mu_{1}=0.1\), \(\mu_{2}=1\) and \(\mu_{1}=0.1\), \(\mu_{2}=-1\). We can see that their dynamical structures are very regular. The integrable case and some weakly perturbed cases with \(k=3\), \(\mu_{2}=1\) and \(h=0.85\) are presented in Figure 2. In the integrable case, the dynamical behavior is highly regular, see (1) of Figure 2. For sufficiently small perturbations, the KAM tori are deformed or even break down near the fragile top and bottom boundaries, but most of them survive in the internal region, as shown in (2) and (3) of Figure 2. For \(\mu_{1}=0.99\) and \(\mu_{1}=0.5\), the major bifurcations occur in the vertical direction.
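Cross sections of this kind can be reproduced numerically. The following is only a minimal sketch (not the authors' code): it assumes that, for \(m=2\), the Hamiltonian (1.12) has the form \(H=\tfrac{1}{2}(v_{1}^{2}+v_{2}^{2})+\tfrac{1}{2}(\mu_{1}u_{1}^{2}+\mu_{2}u_{2}^{2})-\tfrac{\varepsilon}{2k}(u_{1}^{2}+u_{2}^{2})^{k}\), which is consistent with the restricted subsystems and variational equations used above, and it records the crossings of the plane \(u_{1}=0\) with \(v_{1}>0\).

```python
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

# Assumed form of the two-degree-of-freedom Hamiltonian (1.12), inferred from
# the variational equations above (an assumption, not a statement of the paper):
#   H = (v1^2 + v2^2)/2 + (mu1*u1^2 + mu2*u2^2)/2 - eps/(2k) * (u1^2 + u2^2)^k
k, eps, mu1, mu2, h = 3, -1.0, 0.1, 1.0, 0.85

def rhs(t, y):
    u1, u2, v1, v2 = y
    r2 = u1**2 + u2**2
    return [v1, v2,
            -mu1*u1 + eps*u1*r2**(k - 1),
            -mu2*u2 + eps*u2*r2**(k - 1)]

def crossing(t, y):          # event: the trajectory hits the plane u1 = 0
    return y[0]
crossing.direction = 1       # keep only crossings with v1 > 0

def v1_on_section(u2, v2):
    """v1 >= 0 fixed by the energy h on the section plane u1 = 0."""
    val = 2*h - v2**2 - mu2*u2**2 + (eps/k)*u2**(2*k)
    return np.sqrt(val) if val >= 0 else None

points = []
for u2 in np.linspace(-1.0, 1.0, 12):            # grid of initial conditions
    v1 = v1_on_section(u2, 0.0)
    if v1 is None:
        continue
    sol = solve_ivp(rhs, (0.0, 2000.0), [0.0, u2, v1, 0.0],
                    events=crossing, dense_output=True,
                    rtol=1e-10, atol=1e-12)
    for tc in sol.t_events[0]:
        _, u2c, _, v2c = sol.sol(tc)
        points.append((u2c, v2c))

pts = np.array(points)
plt.plot(pts[:, 0], pts[:, 1], ',k')
plt.xlabel('$u_2$'); plt.ylabel('$v_2$')
plt.title('Poincare section $u_1=0$, $k=3$, $h=0.85$')
plt.show()
```

Varying \(\mu_{1}\) with \(\mu_{2}=1\) fixed in this sketch reproduces the transition between the regular and chaotic pictures described in the text.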
As the perturbation strength is increased \(\mu_{1}=0.1\), the KAM tori progressively break down resulting in the trajectories complete stochastic motion. Accurately speaking, the central domain in (4) of Figure 2 is a large chaotic zone, that is, chaos. Around this chaotic zone, there are many chain of islands which correspond to quasi-periodic trajectories. Except for periodic trajectories, some KAM tori still remain in the annular area at top and bottom, which can be observed in (4) of Figure 2. Finally, our numerical simulations consider some high degree systems (1.12), that is, \(k=10,20,30,40\), see Figure 3. One can see that the KAM tori at central region boundary disappear with the trajectories escaping to nonclosed areas of the phase space. This trajectories stochastic escaping create complex dynamic phenomena of system (1.12), including chaos, quasi-periodic trajectories and periodic trajectories. ## Acknowledgments Yuzhou Tian wants to express his gratitude to the Department of Mathematical Sciences, Tsinghua University for the hospitality and support during the time period in which this work was completed. The second author is partially supported by the National Natural Science Foundation of China (Grant no. 11790273). ## Appendix A Second part of Kovacic's algorithm Here, we recall the second part of Kovacic's algorithm [26]. Let \(r\left(z\right)\in C\left(z\right)\) and \(\Upsilon\) be the set of poles of \(r\left(z\right)\). Set \(\chi^{\prime\prime}=r\chi\). **Step 1.**: To each pole \(c\in\Upsilon\), we calculate the set \(\mathcal{E}_{c}\) as follows. 1. If the pole \(c\) is of order \(1\), then \(\mathcal{E}_{c}=\{4\}\). 2. If the pole \(c\) is of order \(2\) and \(b\) is the coefficient of \(1/\left(z-c\right)^{2}\) in the partial fraction decomposition of \(r\left(z\right)\), then \[\mathcal{E}_{c}=\left\{2+\ell\sqrt{1+4b}\;\big{|}\;\ell=0,\pm 2\right\} \bigcap\mathbb{Z}.\] (A.1) 3. If the pole \(c\) is of order \(o\left(c\right)>2\), then \(\mathcal{E}_{c}=\{o\left(c\right)\}\). 4. If the order of \(r\) at \(\infty\) is \(o\left(\infty\right)>2\), then \(\mathcal{E}_{c}=\{0,2,4\}\). 5. If the order of \(r\) at \(\infty\) is \(2\) and \(b\) is the coefficient of \(1/z^{2}\) in the Laurent expansion of \(r\left(z\right)\) at \(\infty\), then \[\mathcal{E}_{c}=\left\{2+\ell\sqrt{1+4b}\;\big{|}\;\ell=0,\pm 2\right\} \bigcap\mathbb{Z}.\] (A.2) 6. If the order of \(r\) at \(\infty\) is \(o\left(\infty\right)<2\), then \(\mathcal{E}_{c}=\{o\left(\infty\right)\}\). 2. Let \(\boldsymbol{\varpi}=\left(\varpi_{c}\right)_{c\in\Upsilon}\) be a element in the Cartesian product \(\prod_{c\in\Upsilon}\mathcal{E}_{c}\) with \(\varpi_{c}\in\mathcal{E}_{c}\). Define number \[d:=d\left(\boldsymbol{\varpi}\right)=\frac{1}{2}\left(\varpi_{\infty}-\sum_{c \in\Upsilon}\varpi_{c}\right).\] (A.3) We try to find all elements \(\boldsymbol{\varpi}\) such that \(d\) is a non-negative integer, and retain such elements to perform Step 3. If there is no such element \(\boldsymbol{\varpi}\), then statement (ii) of Theorem 2.4 is impossible. 3. 
For each \(\boldsymbol{\varpi}\) retained from Step 2, we introduce the rational function \[\theta=\frac{1}{2}\sum_{c\in\Upsilon}\frac{\varpi_{c}}{z-c}.\] Then, we seek a monic polynomial \(P\) of degree \(d\) defined in (A.3) such that \[P^{\prime\prime\prime}+3\theta P^{\prime\prime}+\left(3\theta^{2}+3\theta^{ \prime}-4r\right)P^{\prime}+\left(\theta^{\prime\prime}+3\theta\theta^{ \prime}+\theta^{3}-4r\theta-2r^{\prime}\right)P=0.\] If such polynomial \(P\) does not exist for all elements \(\boldsymbol{\varpi}\) retained from Step 2, then statement (ii) of Theorem 2.4 is untenable. Assume that such a polynomial \(P\) exists. Let \(\phi=\theta+P^{\prime}/P\) and \(\omega\) be a root of \[\omega^{2}-\phi\omega+\left(\frac{1}{2}\phi^{\prime}+\frac{1}{2}\phi^{2}-r \right)=0.\] Then, \(\chi=\exp\left(\int\omega\right)\) is a solution of differential equation \(\chi^{\prime\prime}=r\chi\).
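As a small sanity check of how Steps 1 and 2 enter the proof of Lemma 5.3 (Case 1), the script below (not part of the original argument) recomputes \(\mathcal{E}_{0}=\{3\}\), \(\mathcal{E}_{\infty}\) and the numbers \(d\) for the function \(r(z)\) in (5.61), and verifies that no non-negative integer \(d\) occurs over a range of \(k\).

```python
from fractions import Fraction
from math import isqrt

# Step 1 / Step 2 check (an illustration, not from the paper) for r(z) in (5.61):
#   r(z) = -[(k-3)(3k-1) / (16(k-1)^2 z^2) + k / (8(k-1)^2 z^3)].
# Pole z = 0 has order 3, so E_0 = {3}; the order at infinity is 2 with
# b = -(k-3)(3k-1)/(16(k-1)^2), so E_inf = {2 + l*sqrt(1+4b) : l = 0, +-2} cut with Z.

def E_inf(k):
    b = Fraction(-(k - 3) * (3 * k - 1), 16 * (k - 1) ** 2)
    disc = 1 + 4 * b                      # must be the square of a rational for l = +-2
    out = {2}                             # l = 0 always contributes the value 2
    num, den = disc.numerator, disc.denominator
    if num >= 0 and isqrt(num) ** 2 == num and isqrt(den) ** 2 == den:
        root = Fraction(isqrt(num), isqrt(den))
        for l in (-2, 2):
            cand = 2 + l * root
            if cand.denominator == 1:     # keep integers only
                out.add(int(cand))
    return out

for k in range(3, 41):
    d_values = {Fraction(w_inf - 3, 2) for w_inf in E_inf(k)}   # E_0 = {3}
    assert not any(d.denominator == 1 and d >= 0 for d in d_values), k
print("Step 2 yields no non-negative integer d for 3 <= k <= 40, as claimed in Case 1.")
```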
2309.06061
Verifiable Fairness: Privacy-preserving Computation of Fairness for Machine Learning Systems
Fair machine learning is a thriving and vibrant research topic. In this paper, we propose Fairness as a Service (FaaS), a secure, verifiable and privacy-preserving protocol to compute and verify the fairness of any machine learning (ML) model. In the design of FaaS, the data and outcomes are represented through cryptograms to ensure privacy. Also, zero knowledge proofs guarantee the well-formedness of the cryptograms and underlying data. FaaS is model-agnostic and can support various fairness metrics; hence, it can be used as a service to audit the fairness of any ML model. Our solution requires no trusted third party or private channels for the computation of the fairness metric. The security guarantees and commitments are implemented in a way that every step is securely transparent and verifiable from the start to the end of the process. The cryptograms of all input data are publicly available for everyone, e.g., auditors, social activists and experts, to verify the correctness of the process. We implemented FaaS to investigate performance and demonstrate the successful use of FaaS for a publicly available data set with thousands of entries.
Ehsan Toreini, Maryam Mehrnezhad, Aad van Moorsel
2023-09-12T09:00:03Z
http://arxiv.org/abs/2309.06061v1
# Verifiable Fairness: Privacy-preserving Computation of Fairness for Machine Learning Systems ###### Abstract Fair machine learning is a thriving and vibrant research topic. In this paper, we propose Fairness as a Service (FaaS), a secure, verifiable and privacy-preserving protocol to computes and verify the fairness of any machine learning (ML) model. In the design of FaaS, the data and outcomes are represented through cryptograms to ensure privacy. Also, zero knowledge proofs guarantee the well-formedness of the cryptograms and underlying data. FaaS is model-agnostic and can support various fairness metrics; hence, it can be used as a service to audit the fairness of any ML model. Our solution requires no trusted third party or private channels for the computation of the fairness metric. The security guarantees and commitments are implemented in a way that every step is securely transparent and verifiable from the start to the end of the process. The cryptograms of all input data are publicly available for everyone, e.g., auditors, social activists and experts, to verify the correctness of the process. We implemented FaaS to investigate performance and demonstrate the successful use of FaaS for a publicly available data set with thousands of entries. ## 1 Introduction Demonstrating the fairness of algorithms is critical to the continued proliferation and acceptance of algorithmic decision making in general, and AI-based systems in particular. There is no shortage of examples that have diminished trust in algorithms because of unfair discrimination of groups within our population. This includes news stories about the human resource decision-making tools used by large companies, which turn out to discriminate against women [28]. There also are well-understood seminal examples studied widely within the academic community, such as the unfair decisions related to recidivism in different ethnicities [20]. In the UK, most recently the algorithm to determine A-levels substitute scores under COVID-19 was widely found to be unfair across demographics [23]. There has been a surge of research that aims to establish metrics that quantify the fairness of an algorithm. This is an important area of research, and tens of different metrics have been proposed, from individual fairness to group fairness. It has been shown that various expressions for fairness cannot be satisfied or optimised at once, thus establishing impossibility results [11]. Moreover, even if one agrees about a metric, this metric on its own does not provide trust to people. It matters not only what the metrics express, but also who computes the metrics and whether one can verify these computations and possibly appeal against them. At the same time, in situations in which verification by stakeholders is possible, the owner of the data wants to be assured that none of the original, typically sensitive and personal, data is leaked. The system that runs the algorithms (later referred to as Machine Learning system or ML system) may have a valid interest in maintaining the secrecy of the model. In other words, if one wants to establish _verifiable fairness_, one needs to tackle a number of security, privacy and trust concerns. In FaaS, we take a fundamentally different design approach. We leak no data or model information, but the FaaS is still able to calculate fairness for a variety of fairness metrics and independent of the ML model. Thus, replacing the model in the ML system will not impact functionality of FaaS protocol. 
Moreover, any other party can verify this calculation since all the necessary encrypted information is posted publicly, on a 'fairness board'. Summarising, our contributions are: * We propose FaaS, a model-agnostic protocol to compute different fairness metrics without accessing sensitive information about the model and the dataset. * FaaS is universally verifiable so everyone can verify the well-formedness of the cryptograms and the steps of the protocol. * We implement a proof-of-concept of the FaaS architecture and protocol using off-the-shelf hardware, software, and datasets and run experiments to demonstrate the practical feasibility of FaaS. ## 2 Background and Related Work One of the benefits of auditing ML-based products relates to trust. Trust and trustworthiness (in socio-technical terms) are complicated matters. Toreini et. al [32] proposed a framework for trustworthiness technologies in AI-solutions based on existing social frameworks on trust (i.e. demonstration of Ability, Benevolence and Integrity, a.k.a. ABI and ABI+ frameworks) and technological trustworthiness [30]. They comprehensively reviewed the policy documents on regulating AI and the existing technical literature and derived any ML-based solution needs to demonstrate fairness, explainability, auditability, and safety and security to establish social trust. When using AI solutions, one cannot be assured of the fairness of such systems without trusting the reputation of the technology provider (e.g., datasets and ML models). It is commonly believed that leading tech companies do not make mistake in their implementation [8]; however, in practice, we often witness that such products indeed suffer from bias in ML [28, 23]. ### Fairness Metrics There exist several fairness definitions in the literature. Designing a fair algorithm requires measuring and assessment of fairness. Researchers have worked on formalising fairness for a long time. Narayanan [24] lists at least 21 different fairness definitions in the literature and this number is growing, e.g., [5, 6]. Fairness is typically expressed as discrimination in relation to data features. These features for which discrimination may happen are known as _Protected Attributes_ (PAs) or sensitive attributes. These include, but are not limited to, ethnicity, gender, age, scholarly, nationality, religion and socio-economic group. The majority of fairness definitions expresses fairness in relation to PAs. In this paper, we consider Group Fairness, which refers to a family of definitions, all of which consider the performance of a model on the population groups level. The fairness definitions in this group are focused on keeping decisions consistent across groups and are relevant to both disparate treatment and disparate impact notions, as defined in [9, 15]. For the following definitions, let \(U\) be an individual in the dataset, where each individual has data features \((X,A)\). In this context, \(A\) denotes the PA and in what follows \(A=1\) and \(A=0\) express membership of a protected group or not. \(X\) constitutes the rest of attributes that are available to the algorithm. 
\(Y\) denotes the actual label of \(U\), while \(\hat{Y}\) is the label predicted by the model: (1) _Demographic Parity (DP)_: A classifier satisfies DP when outcomes are equal across groups: \(F_{DP}=\frac{Pr\left(\hat{Y}=1|A=0\right)}{Pr\left(\hat{Y}=1|A=1\right)}\). (2) _Equalised Odds (EOd)_: A classifier satisfies EOd if equality of outcomes holds across both groups and true labels: \(F_{EOd}=\frac{Pr\left(\hat{Y}=1|A=0,Y=\gamma\right)}{Pr\left(\hat{Y}=1|A=1,Y=\gamma\right)}\), where \(\gamma\in\{0,1\}\). (3) _Equality of Opportunity (EOp)_: EOp is similar to EOd, but only requires equal outcomes across subgroups for _true positives_: \(F_{EOp}=\frac{Pr\left(\hat{Y}=1|A=0,Y=1\right)}{Pr\left(\hat{Y}=1|A=1,Y=1\right)}\). In this paper, we will focus on the computations based on the above three fairness metrics. For this computation, the auditor requires access to three pieces of information for each element in the dataset: (1) the sensitive group membership (a binary value for \(A\) indicating whether a sample belongs to a group with PAs), (2) the actual label of the sample (a binary value for \(Y\)), and (3) the predicted label of the sample (a binary value for \(\hat{Y}\)). The ML system transfers this information for each sample from its test set. Then, the auditor uses this information to compute the above fairness metrics. Note that while we consider the above metrics for our protocol and proof-of-concept implementation in the next sections, our core architecture is independent of the metrics, and the metric set can be replaced by other metrics too (Fig. 1). \begin{table} \begin{tabular}{l|c|c|c|c|c|c} \hline \hline Work & Universal & Ind. of & Ind. of & User & Model & Off-the-shelf \\ & Verifiability & metric & ML model & Privacy & Confidentiality & Hardware \\ \hline Veal \& Binns [33] & ✗ & ✗ & ✗ & ✗ & ✓ \\ Kilbertus et al. [19] & ✗ & ✗ & ✗ & ✓ & ✓ & ✓ \\ Jagielski et al. [17] & ✗ & ✗ & ✗ & ✓ & ✗ & ✓ \\ Hu et al. [16] & ✗ & ✗ & ✗ & ✓ & ✗ & ✓ \\ Segal et al. [29] & ✗ & ✓ & ✓ & ✓ & ✓ & ✓ \\ Park et al. [27] & ✗ & ✓ & ✓ & ✓ & ✓ & ✗ \\ \hline FaaS (this paper) & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular} \end{table} Table 1: Features of FaaS and comparison with other privacy–oriented fair ML proposals (support: full: ✓, partial: \(\clubsuit\), none: ✗) ### Auditing ML Models for Fairness The existing research in fair ML normally assumes that the computation of the fairness metric is done locally by the ML system, with full access to the data, including the private attributes [15, 6, 5]. However, there is a lack of verifiability and independence in these approaches, which will not necessarily lead to trustworthiness. To increase trust in ML products, the providers might make the trained model self-explaining (aka transparent or explainable). There is also the transparent-by-design approach [12, 2, 34]. While this approach has its benefits, it is both model-specific and scenario-specific [25]; thus it cannot be generalised. There is also no trusted authority to verify such claims and explanations. Moreover, in reality, the trained model, datasets and feature extraction mechanisms are company assets. Once exposed, they can make the company vulnerable to its competitors. Another approach to providing transparency for the fairness implementation comes through black-box auditing, also known as ad hoc auditing [12, 22, 26]. In this way, the model is trained and audited for different purposes [1].
This solution is similar to tax auditing and financial ledgers where accountants verify and ensure these calculations are legitimate.However, unlike the well-established body of certifications and qualifications for accountants in tax auditing and financial ledgers; there does not exist any established processes and resources for fairness computation in AI and ML. The concept of a service that calculates fairness has been proposed before, e.g., in [33]. The authors introduced an architecture to delegate the computation of fairness to a trusted third party that acts as a guarantor of its algorithmic fairness. In this model, the fairness service is trusted both by the ML system and the other stakeholders (e.g. users and activists). In particular, the ML system must trust the service to maintain the privacy of data and secrecy of its model, whilst revealing to the trusted third party the algorithm outcome, sensitive input data and even inner parameters of the model. This is a big assumption to trust that the third party would not misuse the information and hence the leakage of data and model information is not a threat. To address these limitations, Kilbertdus et al. [19] proposed a system known as 'blind justice', which utilises multi-party computation protocols to enforce fairness into the ML model. Their proposal considers three groups of participants: User (data owner), Model (ML model owner) and the Regulator (that enforces a fairness metric). These three groups collaborate with each in order to train a fair ML model using a federated learning approach [35]. The outcome is a fair model that is trained with the participation of these three groups in a privacy-preserving way. They only provide a limited degree of verifiability in which the trained model is cryptographically certified after training and each of the participants can make sure if the algorithm has not been modified. It should be noted that since they operate in the training stage of the ML pipeline, their approach is highly dependent on the implementation details of the ML model itself. Jagielski et al. [17] proposed a differential privacy approach in order to train a fair model. Similarly, Hu et al. [16] used a distributed approach to fair learning with only demographic information. Segal et al. [29] used similar cryptographic primitives but took a more holistic approach towards the computation and verification of fairness. They proposed a data-centric approach in which the verifier challenges a trained model via an encrypted and digitally certified dataset using merkle tree and other cryptographic primitives. Furthermore, the regulator will certify the model is fair based on the data received from the clients and a set of dataset provided to the model. Their approach does not provide universal verifiability as the regulator is the only party involved in the computation of fairness. More recently, Park et al. [27] proposed a Trusted Execution Environment (TEE) for the secure computation of fairness. Their proposal requires special hardware components which are cryptographically secure and provide enough guarantees and verification for the correct execution of the code. The previous research generally has integrated fairness into their ML algorithms; therefore, such algorithms should be redesigned to use another fairness metric set. As it can be seen in Table 1, FaaS is the only work which is independent of the ML model and fairness metric with universal verifiability, and hence, can be used as a service. 
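As a plaintext point of reference for the three metrics of Section 2.1 (this snippet is an illustration only and is not part of the FaaS protocol), \(F_{DP}\), \(F_{EOd}\) and \(F_{EOp}\) can be computed directly from the arrays \(A\), \(Y\) and \(\hat{Y}\); FaaS performs the same counting, but over cryptograms, as described in the next section.

```python
import numpy as np

def group_rate(y_hat, mask):
    """P(Y_hat = 1 | mask); returns NaN if the group is empty."""
    return y_hat[mask].mean() if mask.any() else float("nan")

def fairness_ratios(A, Y, Y_hat):
    """Demographic Parity, Equalised Odds and Equality of Opportunity ratios.

    A, Y, Y_hat: binary arrays (protected attribute, true label, prediction).
    """
    A, Y, Y_hat = (np.asarray(x) for x in (A, Y, Y_hat))
    f_dp = group_rate(Y_hat, A == 0) / group_rate(Y_hat, A == 1)
    f_eod = {g: group_rate(Y_hat, (A == 0) & (Y == g)) /
                group_rate(Y_hat, (A == 1) & (Y == g)) for g in (0, 1)}
    f_eop = f_eod[1]    # EOp is the gamma = 1 case of EOd (true positives)
    return f_dp, f_eod, f_eop

# toy usage with made-up data
rng = np.random.default_rng(0)
A = rng.integers(0, 2, 1000)
Y = rng.integers(0, 2, 1000)
Y_hat = rng.integers(0, 2, 1000)
print(fairness_ratios(A, Y, Y_hat))
```

A ratio of 1 indicates parity between the two groups; the further a ratio is from 1, the larger the disparity for that metric.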
Figure 1: FaaS Architecture ## 3 FaaS Architecture In this Section, we present the architecture of our system (Fig. 1) and describe its features. The FaaS architecture includes stakeholders in three roles: A) **ML System:** a system that owns the data and the ML algorithm, B) **Fairness Auditor Service:** a service that computes the fairness performance of the ML system, and C) **Universal Verifier:** anyone who has the technical expertise and motivation to verify the auditing process. ### 3.1 Threat Model The design and implementation of the security of the parties implementing the respective protocol roles (ML system, Fairness Auditor Service, and Universal Verifier) (Fig. 1) are independent of each other. The communications between the roles assume no trust between the parties; thus, all their claims must be accompanied by validation proofs (for which we will use ZKP). We assume the Auditor System is vulnerable to different attacks and is not trustworthy. Thus, the data stored on the Fairness Auditor System must be encrypted, tamper-proof and verifiable at all stages. Moreover, we assume the communication channel between the ML system and the fairness auditor is not protected. Therefore, the sensitive data must be encrypted before the transmission starts. However, there will be an agreement on the cryptographic primitives at the pre-setting stage of the protocol sequence. In FaaS, we assume that the ML system is honest in sending the cryptograms of the original labels of the dataset samples. One might argue against such an assumption, since the ML system might intend to deceive the Auditor Service, and by extension the verifiers, by modifying the actual labels of the dataset. For instance, the ML system could provide cryptograms of the actual labels and the predicted ones that are as similar to each other as possible, so that the auditor concludes the algorithms are fair. This is an interesting area for further research. For instance, it may be addressed by providing the cryptograms of the actual labels to the Auditor Service independently, e.g., the verifier may own a dataset that it provides to the ML system. The verifier then separately decides the desired values for the actual labels and feeds these to the Auditor Service. In this way, it is far less clear to the ML system how to manipulate the data it sends to the auditor, since some of the labels come from elsewhere. The internal security of the roles is beyond the scope of FaaS. The ML system itself needs to consider extra measures to protect its data and algorithms. We assume the ML system presents the data and predictions honestly. This is a reasonable assumption, since the incentive to behave _ethically_ conflicts with being dishonest when participating in a fairness auditing process. This is discussed more in the Discussion Section. ### 3.2 Protocol Overview The main security protocol sequence is between the ML system and the Fairness Auditing Service, or _auditor_ for short. Note that although we suggest three roles in our architecture, the communications are mainly between the above two roles, and any universal verifier can turn to the auditor service (which represents the fairness board) if they want to challenge the computations. The ML system is responsible for the implementation and execution of the ML algorithm. It has data as input and performs some prediction (depending on the use case and purpose) that forms the output (Fig. 1).
The Fairness Auditor Service receives information from the ML system, evaluates its fairness performance by computing a fairness metric. Then, it returns the result for the metric back to the ML system. It also publishes the calculations in a _fairness board_ for public verification. The public fairness board is a publicly accessible, read-only fairness board (e.g. a website). The auditor only has the right to append data (and the sufficient proofs) to the fairness board. Also, the auditor verifies the authenticity, correctness and integrity of data before publishing it. ### Protocol Sequence This protocol has three stages: setup, cryptogram generation and fairness metric computation. #### 3.3.1 Phase I: Setup In this phase, the ML System and Auditor agree on the initial settings. We assume the protocol functions in multiplicative cyclic group setting (i.e. Digital Signature Algorithm (DSA)-like group [18]), but it can also function in additive cyclic groups (i.e. Elliptic Curve Digital Signature Algorithm (ECDSA)-like groups [18]). The auditor and ML system publicly agree on \((p,q,g)\) before the start of the protocol. Let \(p\) and \(q\) be two large primes where \(q|(p-1)\). In a multiplicative cyclic group (\(\mathbb{Z}_{p}^{*}\)), \(G_{q}\) is a subgroup of prime order \(q\) and \(g\) is its generator. For simplicity, we assume the Decision Diffie-Hellman (DDH) problem is out of scope [31]. \begin{table} \begin{tabular}{c|c|c|c|c} \hline Membership & Actual & Predicted & Encoded & Permutation \\ of Sensitive Group & Label & Label & Permutation & \# \\ \hline No & 0 & 0 & 000 & \#1 \\ No & 0 & 1 & 001 & \#2 \\ No & 1 & 0 & 010 & \#3 \\ No & 1 & 1 & 011 & \#4 \\ Yes & 0 & 0 & 100 & \#5 \\ Yes & 0 & 1 & 101 & \#6 \\ Yes & 1 & 0 & 110 & \#7 \\ Yes & 1 & 1 & 111 & \#8 \\ \hline \end{tabular} \end{table} Table 2: Possible permutations of 3-bit representation of an entry in the original data. Next, the ML system generates a public/private pair key by using DSA or ECDSA and publishes the public keys in the fairness board. The protection the private key pair depends on the security architecture of the ML system and we assume the private key is securely stored in an industrial standard practice (e.g. using the secure memory module on board). **Cryptogram Table:** After initial agreements, the ML system produces a cryptogram table with \(n\) rows corresponding to the number of samples in their test dataset. We will refer to this table as _cryptogram table_ in the rest of this paper. In case the ML system does not want to reveal the number of the samples in the test set, the auditor and the ML system can publicly agree on \(n\). In this case, \(n\) must be big enough so that the universal verifiers are satisfied with the outcome. Each row in the cryptogram table summarises three parameters: (1) protected group membership status, (2) its actual label and (3) predicted label by the ML model. Each row contains the encrypted format of the three parameters along with proofs of its correctness. A cryptogram table in the setup phase is shown in Table 3. In the simplest case, each parameter is binary. Therefore, the combined parameters will generate eight permutations in total. In the setup phase, the table is generated to contain all eight possible permutations and their proofs for each data sample. The total structure of the permutations are shown in Table 2. 
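As a small illustration of the encoding in Table 2 (ours, not the authors' code), the mapping from the three binary parameters of a sample to its permutation number can be written as follows.

```python
def permutation_number(membership, actual, predicted):
    """Permutation # (1..8) of Table 2 for one sample.

    membership, actual, predicted: 0/1 flags for sensitive-group membership,
    the true label and the label predicted by the ML model.
    """
    encoded = (membership << 2) | (actual << 1) | predicted   # 3-bit encoding
    return encoded + 1                                         # Table 2 is 1-based

# e.g. a member of the protected group, true label 0, predicted label 1 -> #6
assert permutation_number(1, 0, 1) == 6
```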
Each row will satisfy four properties: (a) one can easily verify whether a single cryptogram is the encrypted version of one of the eight possible permutations, (b) although verifiable, from a single selected cryptogram one cannot tell which permutation it represents, (c) any two cryptograms selected from a single row can be distinguished from one another, and (d) given a set of cryptograms, one arbitrarily selected from each row, one can easily check how many instances of each "permutation" the set contains. The cryptogram table is generated by the following sequence of steps: Step (1): For each of the \(n\) samples, the system generates a random public key \(g^{x_{i}}\), where \(x_{i}\) is the private key and \(x_{i}\in[1,q-1]\). Step (2): Once the computation of the public keys is finished for all samples, the system computes another number \(g^{y_{i}}\), which we refer to as the _reconstructed public key_, as it is computed from a combination of the public keys of all the rows except the current one: \(g^{y_{i}}=\frac{\prod_{j=1}^{i-1}g^{x_{j}}}{\prod_{j=i+1}^{n}g^{x_{j}}}\). \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Sample & Random & Reconstructed & Cryptogram of & \(\cdots\) & Cryptogram of \\ & Public Key & Public Key & Permutation \#1 & & Permutation \#8 \\ \hline 1 & \(g^{x_{1}}\) & \(g^{y_{1}}\) & \(g^{x_{1}.y_{1}}.g^{2^{0}}\), 1-of-8 ZKP & \(\cdots\) & \(g^{x_{1}.y_{1}}.g^{2^{7m}}\), 1-of-8 ZKP \\ \hline 2 & \(g^{x_{2}}\) & \(g^{y_{2}}\) & \(g^{x_{2}.y_{2}}.g^{2^{0}}\), 1-of-8 ZKP & \(\cdots\) & \(g^{x_{2}.y_{2}}.g^{2^{7m}}\), 1-of-8 ZKP \\ \hline \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) \\ \hline n & \(g^{x_{n}}\) & \(g^{y_{n}}\) & \(g^{x_{n}.y_{n}}.g^{2^{0}}\), 1-of-8 ZKP & \(\cdots\) & \(g^{x_{n}.y_{n}}.g^{2^{7m}}\), 1-of-8 ZKP \\ \hline \end{tabular} \end{table} Table 3: Cryptogram Table for \(n\) data samples. Step (3): At this step, the ML system computes the cryptograms and zero knowledge proofs for all the possible parameter permutations. This step occurs before the ML system is trained and deployed to predict data samples. Therefore, it considers all the permutations in order to minimise the overhead in the next stages of the protocol sequence (as we discuss later). **Cryptograms:** Each permutation is encoded into a cryptogram \(C_{i}=g^{x_{i}.y_{i}}.g^{p_{i}}\), computed based on the multi-option voting schemes introduced in [4] and applied in [13, 14]. In their method, \(p_{i}\) is determined by \(n\) (the number of samples, which has already been publicly agreed) and by \(m\), the smallest integer such that \(2^{m}>n\). For each of the eight permutations, \(p_{i}\) is computed using the following equation: \[p_{i}=\left\{\begin{array}{ll}2^{0}&\text{for permutation \#1}\\ 2^{m}&\text{for permutation \#2}\\ \ldots&\ldots\\ 2^{7m}&\text{for permutation \#8}\end{array}\right. \tag{1}\] **Zero Knowledge Proofs:** In addition to the cryptograms, the ML system also generates a 1-out-of-8 ZKP for each of the permutations.
This proof ensures that the value presented as \(C_{i}\) in the cryptogram table is indeed the product of \(g^{x_{i}.y_{i}}\) and \(g^{p_{i}}\), where \(p_{i}\in\left\{2^{0},2^{m},\cdots,2^{7m}\right\}\). As shown in Table 3, each of the computed permutation columns contains a ZKP to guarantee that it is one of the _valid_ values for evaluating the fairness metric in the next stages. We use the widely used 1-out-of-n interactive ZKP technique [7], where \(n=8\) in our protocol. Moreover, by application of the Fiat-Shamir heuristic [10], this ZKP can be converted into a non-interactive one, which makes the verification of the proofs simpler [14]. #### 3.3.2 Phase II: Parameter Assignment This stage starts when the ML system's training and testing are complete. The output of this stage is a table with \(n\) rows, each containing a cryptogram of the encoded permutation parameters with the required ZKPs, the public key (\(g^{x_{i}}\)) and the reconstructed key (\(g^{y_{i}}\)). The outcome of this stage is the final variant of the cryptogram table, which we will call the _fairness auditing table_. **Fairness Auditing Table:** This is derived from the previously computed _cryptogram table_. It combines the outcome of the ML model (in the encoded format) with the cipher-text created in Phase I and forms a ciphered version of the test dataset with \(n\) samples. This table is generated based on the following steps: Step (1): First, the ML system and the fairness service properly authenticate each other to ensure they are communicating with the intended party. The ML system determines the permutation based on the three parameters explained before. For that, the ML system generates the binary encoding for each of the data samples in the test dataset (i.e. the sensitive group membership, the actual label and the predicted label, respectively, as explained in Table 3). Step (2): The ML system generates a ZKP for the knowledge of the encoding as a commitment to its choice (\(p_{i}\) as in Equation 1). The ZKP for the proof of knowledge can be converted to a non-interactive one using the Fiat-Shamir heuristic [10]. Step (3): The column whose number equals the decimal value of the binary encoding is selected from the cryptogram table to complete the fairness auditing table (as shown in Table 2). Finally, the generated fairness auditing table is digitally signed by the ML system and then sent over to the Fairness Auditing Service. #### 3.3.3 Phase III: Fairness Evaluation First, the fairness auditing service receives the fairness auditing table, verifies the digital signature and the ZKPs, and publishes the contents on the fairness board. Then, it starts the process of computing the fairness metric. For this, the auditor service multiplies all the cryptograms (\(C_{i}\)) received in the cryptogram table together. Therefore, we have \(\prod_{i}C_{i}=\prod_{i}g^{x_{i}.y_{i}}.g^{p_{i}}\). At this stage, the key point is the effect that \(y_{i}\) and \(x_{i}\) have on each other, known as the "Cancellation Formula" (Lemma 1 and [14, 13, 3]).
**Lemma 1**.: Cancellation Formula: _for \(x_{i}\) and \(y_{i}\), \(\sum_{i}x_{i}.y_{i}=0\)_ Proof.: From the reconstructed key equation, one can deduce that \(y_{i}=\sum_{j<i}x_{j}-\sum_{j>i}x_{j}\), hence: \[\begin{split}\sum_{i}x_{i}.y_{i}&=\sum_{i=1}^{i=n}x_{i}.\big{(}\sum_{j=1}^{j=i-1}x_{j}-\sum_{j=i+1}^{j=n}x_{j}\big{)}\\ &=\sum_{i=1}^{i=n}\sum_{j=1}^{j=i-1}x_{i}.x_{j}-\sum_{i=1}^{i=n}\sum_{j=i+1}^{j=n}x_{i}.x_{j}\\ &=\sum_{j=1}^{j=n}\sum_{i=j+1}^{i=n}x_{i}.x_{j}-\sum_{i=1}^{i=n}\sum_{j=i+1}^{j=n}x_{i}.x_{j}\\ &=0\end{split} \tag{2}\] where the last step holds because the two double sums run over the same pairs of indices and differ only in the naming of the summation variables. Considering the Cancellation Formula, the multiplication of all the cryptograms reduces to \(\prod_{i}C_{i}=\prod_{i}g^{x_{i}.y_{i}}.g^{p_{i}}=\prod_{i}g^{p_{i}}=g^{\sum_{i}p_{i}}\). The result is the total sum of the permutations (\(p\#1\) to \(p\#8\)) as \(\sum_{i}p_{i}=a.2^{0}+b.2^{m}+c.2^{2m}+\)
In the simplest setting where \(n\) is small, the auditor will determine the overall number of permutations (as in \(\sum p_{i}\), where \(i\in\{1,2,\cdots,8\}\)) by performing an exhaustive search in all possible combinations until it finds the correct one. This process is computationally heavy especially when the number of data samples in the fairness auditing table is large. In this case, the fairness auditor can delegate the declaration of the permutation number to the ML system. The auditor still receives the fairness auditing table and the relevant ZKPs. It can store the fairness auditing table to the fairness board, compute the fairness, and verify the correctness of the declared permutation numbers. The universal verifier can follow the same steps to verify the fairness metric computations through the fairness auditing table that is publicly accessible via fairness board. At the end of this stage, the auditor uses the acquired numbers to compute the fairness metric and release the information publicly. The number of each permutation denotes the overall performance of the ML algorithm for each of the groups with protected attribute. Table 4 demonstrates the permutations and how it relates to the fairness metric of the ML system. The cryptogram table and the results will be published on the fairness board (Fig. 1). ## 4 Implementation and Performance Analysis ### Proof-of-Concept Implementation **Tools and Platform:** The back-end is implemented in Python v3.7.1 and the front-end is implemented with Node.js v10.15.3. In our evaluations, the computations required for generation of the cryptogram table (in the ML system) is developed with Python. The elliptic curve operations make use of the Python package _tinyec_ and the conversion of Python classes to a JSON compatible format uses the Python package _JSONpickle_. All the experiments are conducted on a MacBook pro laptop with the following configurations: CPU 2.7 GHz Quad-Core Intel Core i7 with 16 GB Memory running MacOS Catalina v.10.15.5 for the Operating System. **Case-Study Dataset:** We use a publicly available dataset from Medical Expenditure Panel Survey (MEPS) [21] that contains 15830 data points about the healthcare utilization of individuals. We developed a model (Logistic Regression) that determines whether a specific patient requires health services, such as additional care. This ML system assigns a score to each patient. If the score is above a preset threshold, then the patient requires extra health services. In the MEPS dataset, the protected attribute is "race". A fair system provides such services fairly independent of the patient's race. Here, the privileged race group in this dataset is "white ethnicity". We have used 50% of the dataset as training, 30% as validation and the remaining 20% as test dataset. We set the number of cryptogram table samples to equal the size of test set (\(N=3166\)). In this example we include three attributes in the cryptogram to represent the binary values of \(A\), \(Y\) and \(\hat{Y}\) (section 2.1), thus leading to 8 permutations for each data sample. In our experiment, where \(N=3166\), the total size of the search space is \(\left(\begin{smallmatrix}3166+8-1\\ 8-1\end{smallmatrix}\right)\approx 2^{69}\). The exhaustive search approach is computationally expensive for our experimental hardware configurations, so we decided to use the approach suggested in Section 3.3.3. 
Here, the permutation numbers are declared by the ML system and the auditor service verified the claims by comparing the computations done by the auditor (as in \(\prod_{i}C_{i}=\prod_{i}g^{x_{i}.y_{i}}.g^{p_{i}}=\prod_{i}g^{p_{i}}=g^{\sum_{ i}p_{i}}\)) with the total sum of the received permutations (\(p\#1\) to \(p\#8\)) as \(\sum_{i}p_{i}=a.2^{0}+b.2^{m}+c.2^{2m}+d.2^{3m}+e.2^{4m}+f.2^{5m}+g.2^{6m}+h.2^ {7m}\). This is a reasonable approach since we assumed that the ML system will not attempt to deceive the auditor for its outcome (section 3.1). ### Performance This section presents the execution time per data point for each of the main computational tasks, in each protocol stage. Recall that phase I was executed before the ML system's training and testing. This stage can be developed (and stored separately) in parallel to the implementation of the model in order to mitigate the performance challenge of Phase I. In our implementation, the output of this stage (cryptogram table) is stored in a separate file in JSON format and can be retrieved at the beginning of the phase II. Phase II begins after the ML model is trained, tested, and validated. This stage uses the output of the ML model to generate the fairness auditing table from the cryptogram table as well as ZKP for knowledge of the permutation. The output of this phase is transmitted to the Fairness Auditor Service in JSON format for phase III. At this stage, first the ZKPs are verified and then, the summation of the cryptograms determines the number of permutations for each of the sensitive groups. Once the auditing service has these numbers, it can compute the fairness of the ML system. \begin{table} \begin{tabular}{c c c} \hline \hline Fairness & Corresponding & Computation \\ Component & Permutation \# & \\ \hline \(Pr(\hat{Y}\mid A=0)\) & \#2, \#4 & \((\#2+\#4)/n\) \\ \(Pr(\hat{Y}\mid A=1)\) & \#6, \#8 & \((\#6+\#8)/n\) \\ \(Pr(\hat{Y}\mid A=0,y=0)\) & \#2 & \#2/n \\ \(Pr(\hat{Y}\mid A=1,y=0)\) & \#6 & \#6/n \\ \(Pr(\hat{Y}\mid A=0,y=1)\) & \#4 & \#4/n \\ \(Pr(\hat{Y}\mid A=1,y=1)\) & \#8 & \((\#8)/n\) \\ \hline \hline \end{tabular} \end{table} Table 4: The required permutations to compute the fairness metrics of an ML system In our evaluations (where \(N=3166\)), public/private key pair generation completes in 60 milliseconds (ms) on average with standard deviation of 6ms. The execution time for ZKP of private key was roughly the same (60ms on average with standard deviation of 6ms). The generation of reconstructed public key took around 450ms with standard deviation of 8ms. The most computationally expensive stage in phase I was the 1-out-of-8 ZKP for each of the permutations. This stage took longer than the other ones because first, the algorithm is more complicated and second, it should be repeated 8 times (for each of the permutations separately) for every row in cryptogram table. The computation of 1-out-of-8 ZKPs takes 1.7 seconds for each data sample with STD of 0.1 seconds. Overall, phase I took around 14 seconds with STD of 1 second for each data sample in the test set. In our experiments (where \(N=3166\) samples), the total execution of phase I took roughly 12 hours and 54 minutes. Phase II consists of creation of the auditing table and generation of the ZKP for knowledge of the permutation. The fairness auditing table is derived from the cryptogram table (as it is mapping the encoding to the corresponding permutation number in the cryptogram table). The elapsed time for such derivation is negligible (total: 1ms). 
The generation of the ZKP for knowledge of the permutation executed in less than 60ms on average with a standard deviation of 3ms for each data sample. The completion of both stages took less than 3 minutes. The fairness auditing table is sent to the Fairness Auditor Service for Phase III. The verification of ZKPs in the last phase (Phase III) is a computationally expensive operation. The ZKP for the ownership of the private key took around 260ms on average with a standard deviation of 2ms. The verification of the 1-out-of-8 ZKP for each data point took roughly 2.5 seconds on average with a 20ms standard deviation. The verification of the ZKP for knowledge of the permutation executed in 100ms with a standard deviation of 5ms. The summation of the cryptograms after verification took 450ms overall for \(N=3166\) items. In our experiment, the completion of the stages in phase III took around 2 hours and 30 minutes in total. In summary, for the experimental setup of our architecture, in which we computed the required cryptograms and ZKPs for \(N=3166\) data points of a real-world dataset, the overall time was around 15 hours on the laptop specification given earlier. The main part of the time is consumed by the computation required for phase I (12 hours and 54 minutes). However, as we noted before, Phase I can be executed before the ML model setup; its output is stored in a separate JSON file and is loaded at the beginning of stage II (after the training and validation of the ML model are complete). The other main computational effort, which can only be done after the ML system's outcomes have been obtained, is in Phase III. For our example, the actual computation of fairness takes two and a half hours. In summary, the creation and handling of cryptograms takes considerable computational effort for realistic datasets and for fairness metrics that require three attributes. In what follows we analyse how performance scales with respect to the number of data points as well as with the number of attributes represented in the cryptograms. ## Conclusion This paper proposes Fairness as a Service (FaaS), a trustworthy service architecture and secure protocol for the calculation of algorithmic fairness. FaaS is designed as a service that calculates fairness without asking the ML system to share the original dataset or model information. Instead, it requires an encrypted representation of the values of the data features, delivered by the ML system in the shape of cryptograms. We used non-interactive Zero Knowledge Proofs within the cryptograms to assure that the protocol is executed as it should be. These cryptograms are posted on a public fairness board for everyone to inspect the correctness of the computations of the fairness of the ML system. This is a new approach to privacy-preserving computation of fairness since, unlike other similar proposals that use a federated learning approach, our FaaS architecture does not rely on a specific machine learning model or fairness metric definition for its operation. Instead, one has the freedom of deploying the desired model and fairness metric of choice. In this paper we proved that the security protocol guarantees the privacy of the data and does not leak any model information. Compared to earlier designs, trust in our design lies in the correct construction of the cryptogram by the ML system. Arguably, this is more realistic as a solution than providing full access to the data to a trusted third party, taking into account the many legal, business and ethical requirements of ML systems.
At the same time, this provides a new challenge in increasing the trust one has in the ML system. Increasing trust in the construction of the cryptograms remains an interesting research challenge following from the presented protocol. We implemented a proof-of-concept of FaaS and conducted performance experiments on commodity hardware. The protocol takes seconds per data point to complete, thus demonstrating in performance challenges if the number of data points is large (tens of thousands). To mitigate the performance challenge, the security protocol is staged such that the construction of the cryptogram can be done off-line. The performance of the calculation of fairness from the cryptogram is a challenge to address in future work. All together, we believe FaaS and the presented underlying security protocol provide a new and promising approach to calculating and verifying fairness of AI algorithms. ## Acknowledgement The authors in this project have been funded by UK EPSRC grant "FinTrust: Trust Engineering for the Financial Industry" under grant number EP/R033595/1, and UK EPSRC grant "AGENCY: Assuring Citizen Agency in a World with Complex Online Harms" under grant EP/W032481/1 and PETRAS National Centre of Excellence for IoT Systems Cybersecurity, which has been funded by the UK EPSRC under grant number EP/S035362/1.
2309.10603
Asteroids co-orbital motion classification based on Machine Learning
In this work, we explore how to classify asteroids in co-orbital motion with a given planet using Machine Learning. We consider four different kinds of motion in mean motion resonance with the planet, nominally Tadpole, Horseshoe and Quasi-satellite, building 3 datasets defined as Real (taking the ephemerides of real asteroids from the JPL Horizons system), Ideal and Perturbed (both simulated, obtained by propagating initial conditions considering two different dynamical systems) for training and testing the Machine Learning algorithms in different conditions. The time series of the variable theta (angle related to the resonance) are studied with a data analysis pipeline defined ad hoc for the problem and composed by: data creation and annotation, time series features extraction thanks to the tsfresh package (potentially followed by selection and standardization) and the application of Machine Learning algorithms for Dimensionality Reduction and Classification. Such approach, based on features extracted from the time series, allows to work with a smaller number of data with respect to Deep Learning algorithms, also allowing to define a ranking of the importance of the features. Physical Interpretability of the features is another key point of this approach. In addition, we introduce the SHapley Additive exPlanations for Explainability technique. Different training and test sets are used, in order to understand the power and the limits of our approach. The results show how the algorithms are able to identify and classify correctly the time series, with a high degree of performance.
Giulia Ciacci, Andrea Barucci, Sara Di Ruzza, Elisa Maria Alessi
2023-09-19T13:19:31Z
http://arxiv.org/abs/2309.10603v1
# Asteroids co-orbital motion classification based on Machine Learning ###### Abstract In this work, we explore how to classify asteroids in co-orbital motion with a given planet using Machine Learning. We consider four different kinds of motion in mean motion resonance with the planet, nominally _Tadpole_, _Horseshoe_ and _Quasi-satellite_, building 3 datasets defined as Real (taking the ephemerides of real asteroids from the JPL Horizons system), Ideal and Perturbed (both simulated, obtained by propagating initial conditions considering two different dynamical systems) for training and testing the Machine Learning algorithms in different conditions. The time series of the variable \(\theta\) (angle related to the resonance) are studied with a data analysis pipeline defined _ad hoc_ for the problem and composed by: data creation and annotation, time series features extraction thanks to the _tsfresh_ package (potentially followed by selection and standardization) and the application of Machine Learning algorithms for Dimensionality Reduction and Classification. Such approach, based on features extracted from the time series, allows to work with a smaller number of data with respect to Deep Learning algorithms, also allowing to define a ranking of the importance of the features. Physical Interpretability of the features is another key point of this approach. In addition, we introduce the SHapley Additive exPlanations for Explainability technique. Different training and test sets are used, in order to understand the power and the limits of our approach. The results show how the algorithms are able to identify and classify correctly the time series, with a high degree of performance. keywords: co-orbital motion - machine learning - asteroids + Footnote †: journal: Computer Science and Technology ## 1 Introduction In the last decades, the use of Artificial Intelligence (AI) for data analysis has significantly increased in scientific applications, in particular thanks to its sub-field known as Machine Learning (ML), where an algorithm is said to improve its performance on a specific task by experience (e.g., Hastie et al., 2009; Jordan & Mitchell, 2015). More recently, many authors started to use such methods in astronomy and solar system science (e.g., Ball & Brunner, 2010; Ivezic et al., 2014). Although well-known and broadly applied in several contexts, we recall here the general concepts of AI and ML, for the sake of completeness. With AI we mean methods by which a computer makes decisions or discoveries that would usually require human intelligence, while with ML we mean automated processes that learn by examples in order to classify, predict, discover or generate new data. Part of ML is the class of algorithms known as _Deep Learning_ (DL) which is based on artificial neural networks (e.g., LeCun et al., 2015; Goodfellow et al., 2016). ML and DL are the key of the success of AI nowadays. There are three classes of ML algorithms (see, for example, Hastie et al. (2009) for more details): _supervised learning_, where a labeled dataset is used to help to train and tune the algorithm, with the goal to create a map that links inputs to outputs; _unsupervised learning_, where no labels are provided and the goal is to discover hidden patterns allowing the data to speak for itself; _reinforcement learning_, where an agent learns by interacting with an environment and modifying its behavior to maximize its reward. 
It is important to keep in mind that this line between classes can occasionally become hazy and fluid because numerous applications frequently combine them in inventive and unique ways (e.g. self-supervised learning, see Liu et al. (2021)). These approaches are firmly established in astronomy and an important survey of the state of art can be found in Fluke & Jacobs (2020), who analyse the published articles in the last years. They highlight applications in many sub-fields of astronomy where ML could be used for several activities, as classification, regression, clustering, forecasting, generation of data, discovering, development of new scientific insights. Fluke & Jacobs (2020) also classify the different fields of astronomy where ML is used as "emerging", "progressing" and "established", depending on the progress of its use. The first approach in astronomy to Principal Component Analysis (PCA), an algorithm devoted to Dimensionality Reduction, which is nowadays a standard technique, was introduced in the 1980s for morphological classification of spiral galaxies (e.g., Whitmore, 1984), in the 1990s for quasar detection (e.g., Francis et al., 1992) and spectral classification (e.g., Singh et al., 1998), while more recent applications with ML have been done for discovering extrasolar planets (e.g., Pearson et al., 2018; Shallue and Vanderburg, 2017), for studying gravitationally lensed systems (e.g., Jacobs et al., 2019; Lanusse et al., 2017; Pourrahmani et al., 2018) and for discovering and classifying transient objects (e.g., Connor and van Leeuwen, 2018; Farah et al., 2018). For a complete and detailed bibliography about all the ML applications in the astronomical fields we suggest a careful reading of Fluke and Jacobs (2020). The analysis of motion of the solar system bodies is considered one "progressing" field of application of ML. Several authors in the last years studied problems related to solar system objects as, for example, applications to TransNeptunian objects (e.g., Chen et al., 2018), or detection and classification of asteroids through taxonomies of spectrophotometry, as studied in Erasmus et al. (2017, 2018). One "emerging" field concerns asteroid dynamics (e.g., Carruba et al., 2022). Indeed, the numerical propagation of asteroids' orbits, based on continuous improved information, implies a large volume of data, that requires fast and novel methods to be analyzed. For example, in Smirnov and Markov (2017), the authors use ML methods to identify three-body mean motion resonance asteroids in the main belt without requiring numerical integration. They use proper elements which are quasi-integral of motion that are stable for a long time (e.g., Knezevic and Milani, 1994; Knezevic et al., 2002), and use four different supervised ML methods as reported in Hastie et al. (2009a). The authors compare their results with the ones of the previous paper by Smirnov and Shevchenko (2013) remarking that, with the new approach, the identification of the objects trapped in mean motion resonance is very good and the procedure requires few seconds, while the numerical integration requires days and weeks. Very recently, Smirnov (2023) provides a new open-source package for identifying objects trapped in mean motion resonances (MMR). The main objective they have is to distinguish resonant and non-resonant orbits, but they do not aim at distinguishing different classes of 1:1 MMR, like we will do here. 
Other new works comparing results from ML algorithms with previous known asteroid classifications are, for example, Smullen and Volk (2020), where the authors classify objects of the Kuiper belt into four classes based on their dynamics; Carruba et al. (2019), where hierarchical clustering algorithms for supervised learning are applied to identify 6 new families and 13 new clustering of asteroids; Carruba et al. (2020), where ML classification algorithms are used to identify new families of asteroids based on the orbital distribution in the parameters \((a,e,\sin(i))\) (where \(a,e,i\) are, respectively, the semi-major axis, the eccentricity and the inclination of the asteroid orbit) of previous known family objects. Some other very interesting and recent works explore the use of ML to classify regular or chaotic motions. For example, Kamath (2022) studies and classifies orbits in Poincare maps: the major challenge of this problem is solved by creating high-quality training sets with few mislabeled orbits and converting the coordinates of the points into features that are discriminating, despite the apparent similarities between orbits of different classes. Celletti et al. (2022) use DL methods, such as convolutional neural networks (CNNs), to show how it is possible to classify different types of motion, starting from time series, without any prior knowledge of the dynamics. Indeed, the identification of a motion usually requires a knowledge and the solution of the differential equations governing the dynamical system. Instead using CNNs trained on one dynamical model, the type of motion could be predicted, for example, from observational data. All these examples show how ML algorithms are increasingly used in astronomy, as well as in dynamical systems and in particular in celestial mechanics. In this paper, leveraging on the recent work Di Ruzza et al. (2023), we focus on asteroids that are in co-orbital motion (1:1 Mean Motion Resonance) with a planet of the solar system. We apply ML methods to classify the various types of co-orbital motion that can arise in the planar case, through features derived from time series corresponding to the evolution of a specific variable - the angle \(\theta\), that we will define in the following. The current paper is organized as follows. In Section 2, we recall the averaged problem of circular restricted three-body problem for the co-orbital motion in the planar case and how the approximation can be applied to classify co-orbital objects in the solar system. In Section 3, it is explained how the training and testing data are generated. In Section 4, the whole algorithmic pipeline is detailed, while in Section 5 the results are given together with a critical analysis on the procedure. In Sections 6 and 7 a possible future direction is proposed and the conclusions are drawn. ## 2 Co-planar co-orbital asteroids in the solar system The main idea considered by Di Ruzza et al. (2023) was to show how an integrable approximation of the restricted three-body problem can be applied to describe the dynamics of real natural objects and the goal was to provide a general catalogue of co-orbital objects in the solar system in the co-planar case and a tool to visualize them. We recall here the general setting and main features that will be important for the present work. More details can be found in Pousse and Alessi (2022) and Di Ruzza et al. (2023). 
The theoretical model is the Planar Circular Restricted Three-Body Problem (PCR3BP), where a massless body interacts by gravitational attraction with two massive bodies. The Hamiltonian describing the motion of the massless body can be written as \[\mathcal{H}\left(\mathbf{r},\dot{\mathbf{r}},\lambda_{p}\right)=\frac{\|\dot{\mathbf{r}}\|^{2}}{2}-\frac{\mu}{\|\mathbf{r}\|}-\frac{\left(\mu+\mu_{p}\right)\varepsilon}{\left\|\mathbf{r}-\mathbf{r}_{p}\left(\lambda_{p}\right)\right\|}+\left(\mu+\mu_{p}\right)\varepsilon\,\mathbf{r}\cdot\mathbf{r}_{p}\left(\lambda_{p}\right)\,, \tag{1}\] where \(\mathbf{r},\dot{\mathbf{r}}\in\mathbb{R}^{2}\) are, respectively, the heliocentric position and velocity vectors of the massless body (the asteroid); \(\mu,\mu_{p}\) are the mass parameters of the massive primary body (the Sun) and of the massive secondary body (the planet), respectively; \[\varepsilon:=\frac{\mu_{p}}{\mu+\mu_{p}}\] is a dimensionless parameter characterizing the mass ratio of the Sun-planet system; the heliocentric vector \(\mathbf{r}_{p}\left(\lambda_{p}\right)\) denotes the position of the planet, for a given value of the mean longitude \(\lambda_{p}\), which follows the solution of the two-body problem for the Sun-planet system. Usually, the Hamiltonian (1) is analyzed in the synodic reference frame rotating with the planet. It is well known that the problem admits five equilibrium points, called Lagrangian points and denoted by \(L_{j}\) for \(j=1,\ldots,5\). If \(\varepsilon\) is small enough, we can rewrite the Hamiltonian (1) as \[\mathcal{H}\left(\mathbf{r},\dot{\mathbf{r}},\lambda_{p}\right)=\mathcal{H}_{\mathrm{K}}\left(\mathbf{r},\dot{\mathbf{r}}\right)+\left(\mu+\mu_{p}\right)\varepsilon\,\mathcal{H}_{\mathrm{P}}\left(\mathbf{r},\lambda_{p}\right)\,,\] where \(\mathcal{H}_{\mathrm{K}}\) is the unperturbed Kepler motion of the massless body (around the Sun) and \(\mathcal{H}_{\mathrm{P}}\) is the perturbation depending on the gravitational influence of the planet. We then consider the averaged problem with respect to the fast angle \(\lambda_{p}\), obtaining the new Hamiltonian \[\overline{\mathcal{H}}=\mathcal{H}_{\mathrm{K}}+\overline{\mathcal{H}}_{\mathrm{P}}\,,\] where \(\overline{\mathcal{H}}_{\mathrm{P}}\) is the average over the period of revolution of the planet with respect to the fast angle \(\lambda_{p}\). We assume that the particle and the secondary are in a \(1:1\) Mean Motion Resonance (MMR), that is, their orbits have the same value of the semi-major axis. Within this approximation, the problem can be studied by means of the action-angle variables \((\theta,u)\), defined as follows: \[\theta:=\lambda-\lambda_{p}\] is the resonant angle (\(\lambda\) being the mean longitude of the asteroid) and \[u:=\sqrt{\frac{a}{a_{p}}}-1\] is its conjugated action, whose modulus measures the distance to the exact Mean Motion Resonance, with \(a\) and \(a_{p}\) being the semi-major axes of the asteroid and of the planet orbit, respectively; the exact \(1:1\) MMR is obtained for \((\dot{\theta},u)=(0,0)\). In this system, the quantity \[\Gamma=\sqrt{a_{p}}\left(1-\sqrt{1-e^{2}}\right)\] is a first integral of the problem, \(e\) being the eccentricity of the asteroid orbit. For different values of \(\Gamma\in[0,\sqrt{a_{p}}]\), the phase portrait in the resonant variables \((\theta,u)\) allows us to understand the whole co-orbital motion structure. In the planar circular case we can have three types of co-orbital motion, depicted in Fig. 1 in the synodic reference system.
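As an illustration only, the resonant variables just introduced are straightforward to evaluate numerically. The following minimal Python sketch computes \(\theta\), \(u\) and \(\Gamma\) from the heliocentric elements of the asteroid and of the planet; the variable names and the convention that mean longitudes are given in degrees are our assumptions and not taken from the actual code behind this work.

```python
import numpy as np

def resonant_variables(a, e, lam, a_p, lam_p):
    """Resonant angle theta, conjugate action u and first integral Gamma.

    a, a_p     : semi-major axes of asteroid and planet (same units)
    e          : eccentricity of the asteroid orbit
    lam, lam_p : mean longitudes of asteroid and planet, in degrees (assumed convention)
    """
    theta = (lam - lam_p) % 360.0                       # theta = lambda - lambda_p, wrapped to [0, 360)
    u = np.sqrt(a / a_p) - 1.0                          # |u| measures the distance to the exact 1:1 MMR
    gamma = np.sqrt(a_p) * (1.0 - np.sqrt(1.0 - e**2))  # first integral of the averaged problem
    return theta, u, gamma

# example: an orbit very close to the exact resonance with a planet at a_p = 1
print(resonant_variables(a=1.001, e=0.05, lam=60.0, a_p=1.0, lam_p=0.0))
```

The branch chosen to wrap \(\theta\) is only a matter of convention; what matters for the classification is the libration behaviour of the series.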
The tadpole (TP) motion (on the left) stemming from \(L_{j}\) with \(j=4,5\) is such that \(\theta\) experiences a periodic oscillation around a given \(\theta_{j}(\Gamma)\) satisfying \(23.9^{\circ}<(-1)^{j}\theta_{j}(\Gamma)<180^{\circ}\); the horseshoe (HS) motion (in the middle), stemming from \(L_{3}\) is such that \(\theta\) oscillates around \(180^{\circ}\) with a large amplitude that decreases as long as \(\Gamma\) increases; the quasi-satellite (QS) regime (on the right) is such that \(\theta\) librates around zero for \(\Gamma>0\). In the given phase space, the co-orbital trajectories are solutions located in the neighborhood of \(u=0\) and such that \(\theta\) oscillates around the given value. The crossing with the section \(u=0\), that corresponds to \(a=a_{p}\), provides a way to understand the global evolution of the dynamics at varying \(\Gamma\), or equivalently, the eccentricity \(e\) of the asteroid's orbit. In this way it is possible to derive a \((\theta,e)\)-map, represented in Fig. 2, that allows to classify the different domains of co-orbital motion. We remark that, in first approximation, this map is invariant with respect to the mass parameter \(\varepsilon\), so it has the same features for all the planets. In the upper panels of Fig. 3, the graphs of the evolution of the time series \((t,\theta)\) of the three real examples of asteroids in the different regimes TP, HS, QS are plotted. In these cases, the evolution appears very regular, while in bottom panels, three less regular cases are reported for comparison. It is important to underline that the analysis done in the current work, and described in the next Sections, takes specifically into account the time evolution of the resonant angle \(\theta\). Subsequently, we will exploit the time series \((t,\theta)\) in order to recognize the different kinds of co-orbital regime as shown in Fig. 3. In Di Ruzza et al. (2023), co-orbital asteroids of Venus, Earth and Jupiter have been analyzed to show a practical application of the \((\theta,e)\)-map just explained. After a suitable filtering on the asteroid orbital elements in order to fulfill the resonance condition and the quasi-coplanar configuration at a given epoch, the ephemerides of asteroids have been computed by means of JPL HORIZONS API service (NASA, 2022) for an interval of time of about 900 years. The real data have been compared with the theoretical model and a very good correspondence has been found. Asteroids in quasi-coplanar co-orbital motion with Venus, Earth and Jupiter have been cataloged according to their co-orbital dynamics and their representation can be seen in Fig. 4. A very refined analysis has been done checking Figure 1: In red, a sketch of the tadpole motion (left), horseshoe motion (center), quasi-satellite motion (right), in the synodic reference system. The yellow circle represents the Sun and the green one the planet. Figure 2: The \((\theta,e)\)-map of the co-orbital motion defined by the section \(u=0\). The black and red thick curves stand, respectively, for the singularity of collision and the crossing of the separatrices that originate from \(L_{3}\) (thick red curve). They divide the map in three regions. The QS domain is between the dark curves; the HS region, split in two parts, is between the separatrix (red curve) and the dark curve; the TP regions are inside the separatrices (respectively, TPL4 for positive values of the angle \(\theta\) and TPL5 for negative values of the angle \(\theta\)). 
by hand whether the time series \((t,\theta)\) of each asteroid (as represented in Fig. 3) was in agreement with its position in the \((\theta,e)\)-map (Fig. 4). The results presented in Di Ruzza et al. (2023) are very promising for TP, HS and QS motion: under given assumptions, data of real observations fit very well with theory. The analyzed series also comprised transitions (TR) between different co-orbital regimes, as well as the compound (CP) motion (a particular combination of QS and HS dynamics)1. In this case, the map was not able to accurately catch the behaviour, as expected, since TR and CP are characteristic of the three-dimensional model, not of the planar one. Footnote 1: We refer to Namouni (1999); Namouni et al. (1999) for more details about the appearance of these kinds of motion. At this point, an automatic tool capable of distinguishing the different co-orbital regimes becomes essential in order to improve our study. Indeed, in the future we aim to extend the analysis to a longer time span (of the order of thousands of years or more), to consider the spatial problem including asteroids with very high inclination, and to better understand and classify TR and CP motions. All this information would be needed to create a complete catalogue of asteroids in co-orbital motion with all the planets in the solar system. For these reasons, a ML approach to this problem is highly recommended in order to deal with a huge number of very long time series that can exhibit very rich dynamical behaviors. The aim of the present and forthcoming works is to be able to manage any kind of real data, for short, medium and long timescales, also when transitions between different co-orbital motions occur or when new kinds of motion appear, as, for example, the compound motions. In what follows, we will consider only TP, HS and QS orbits, since the foundations of the work are the results obtained in Di Ruzza et al. (2023). In particular, we will classify co-orbital motions belonging to the four classes QS, HS, TPL4 (a tadpole around the equilibrium position \(L_{4}\)) and TPL5 (a tadpole around the equilibrium position \(L_{5}\)). Figure 3: Upper: evolution of the angle \(\theta\) versus time of three real asteroids in a regular co-orbital motion; from left to right, respectively, TP with Jupiter, HS with Earth, QS with Jupiter. Bottom: evolution of the angle \(\theta\) versus time of three real asteroids in co-orbital motion with non-regular oscillations; from left to right, respectively, TP with Earth, HS with Jupiter, QS with Venus. Figure 4: The \((\theta,e)\)-maps for the three planets; from left to right, respectively, Venus, Earth and Jupiter. The points in magenta represent the distribution of co-orbital asteroids in the \((\theta,e)\)-map at a reference date, while the two horizontal lines stand for the eccentricities of an object in co-orbital motion with the considered planet \(P\) when it crosses the orbit of the inner and the outer planet (respectively in green and purple) with respect to \(P\). The figures are already used in Di Ruzza et al. (2023). ## 3 Data Let us underline that our final goal is to be able to recognize, through the use of ML, co-orbital dynamics of real asteroids for short, medium and long timescales, also when transitions between different co-orbital motions occur or when new kinds of motion appear, as, for example, the compound motions. The data described in this section are the basis to outline the work done by the ML algorithms.
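Before entering into the datasets, it may help to make concrete the correspondence, described in Sec. 2, between the behaviour of \(\theta(t)\) and the four classes. The toy Python heuristic below is purely illustrative: the thresholds are arbitrary choices of ours (the 25 degrees value only loosely echoes the 23.9 degrees boundary quoted above), and this is not the classification procedure adopted in this work, which relies on _tsfresh_ features and ML classifiers precisely because such hand-tuned rules break down on real, noisy series.

```python
import numpy as np

def rough_label(theta_deg):
    """Toy labelling of a resonant-angle series theta(t) given in degrees.

    Illustrative only: the thresholds are arbitrary and not the criteria used in the paper.
    """
    th = np.radians(np.asarray(theta_deg, dtype=float))
    # circular mean of theta, expressed in (-180, 180]
    centre = np.degrees(np.arctan2(np.mean(np.sin(th)), np.mean(np.cos(th))))
    wrapped = np.degrees(th) % 360.0               # same angles in [0, 360)
    if abs(centre) < 25.0:                         # libration centred close to 0 deg
        return "QS"
    if wrapped.min() < 180.0 < wrapped.max():      # the angle sweeps across 180 deg
        return "HS"
    return "TPL4" if centre > 0.0 else "TPL5"      # libration centre on the L4 or the L5 side

# example: a series librating around 60 deg is labelled as a tadpole around L4
t = np.linspace(0.0, 3000.0, 2000)
print(rough_label(60.0 + 25.0 * np.sin(2.0 * np.pi * t / 700.0)))   # -> "TPL4"
```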
As mentioned before, the information used in this work is the time evolution of the angle \(\theta\), computed considering three different sources of data, as summarized in Tab. 1. In general, training a ML algorithm requires large amounts of data in order to provide accurate predictions. In our case, obtaining numerous time series of real asteroids with regular trends and clearly attributable to a single class (QS, HS, TPL4, TPL5) is not straightforward as real cases may present some complex behaviors, sometimes making labeling difficult and unclear. In particular, a high number of asteroids among those considered can escape from the given resonance or experience a co-orbital transitions. We start our work by using the time series of asteroids reported in Table 3, 4, 5 of the paper Di Ruzza et al. (2023). Looking at those tables, it is evident that most of the asteroids exhibit motions with different co-orbital dynamics and, as previously stated, these cases must be excluded so that, as shown in Tab. 1, the real cases dataset used in the current work turns out to be composed by only 50 series, that is an absolutely insufficient number for a training set. To overcome this issue, a dataset containing simulated data of ideal cases is introduced. This kind of data can be produced by using suitable model and initial conditions (as depicted in the following) in order to get the four desired classes. It is possible to obtain as many cases as we need and we produced a total number of 1999 time series of ideal cases. This dataset allows us to train the ML models with a consistent number of cases with well-known labels (i.e., motion clearly attributable to a single class), leaving the real cases dataset for testing purposes. On the other hand, to have more data to evaluate the performance of our pipeline, we decided to increase the number of cases that can be used. To this aim, we generated time series deviating from the ideal ones by perturbing the model used to generate ideal cases. This process only partially enlarges the number of cases to be used; in fact, by adding perturbations, the time series become more similar to real cases and most of them must be eliminated because escapes from the resonance or transitions between different co-orbital regimes appear. For this reason, the number of perturbed cases can not be as large as the ideal ones. As reported in the last row of Tab. 1, the total number of produced perturbed series is 347. A detailed description of how the data are obtained is provided below. 1. Real ephemerides are obtained from the JPL HORIZONS system (NASA, 2022), following the approach adopted in Di Ruzza et al. (2023). In this case, from the database analyzed in Di Ruzza et al. (2023), we have selected 50 asteroids that exhibit a regular tadpole, horseshoe, quasi-satellite behavior, that is, we excluded the compound motions and transitions. In this case, the simulated data cover an interval of time equal at most to 900 years. We refer to these data as _real data_. 2. Ideal cases of TP, HS, QS motions are generated by propagating the equations of motion of the Circular Restricted Three-Body Problem (CR3BP) with initial conditions obtained from the \((\theta,e)\)-map in the corresponding orbital domain (see Fig. 2). 
In this case, the initial condition in the synodic reference system is computed starting from the heliocentric orbital elements (\(a,e,i,\omega,\Omega,M\)) in the inertial system, by assuming the initial semi-major axis \(a\) equal to 1, the eccentricity \(e\) given by the map, the initial inclination \(i\), the longitude of the ascending node \(\Omega\) and the mean anomaly \(M\) equal to 0 and the argument of pericenter \(\omega\) equal to \(\theta\). In this case, the simulated data cover an interval of time equal to 3000 years. We refer to these data as _ideal simulated data_ and we produced a total number of 1999 time series of such cases. 3. Perturbed cases from the ideal cases are computed by propagation of initial conditions obtained from the \((\theta,e)\)-map, considering a dynamical model that accounts for Sun, Moon and the planets from Mercury to Mars. The propagation is performed by means of REBOUND Rein & Liu (2012), taking the initial states for the massive bodies from NASA (2022) assuming as initial epoch \(t_{0}=JD\) 2305537.5. The initial orbital elements for the asteroids are taken as above, except that now the argument of pericenter is set as \(\omega=\theta+\lambda_{Earth}\), where the mean longitude of the Earth \(\lambda_{Earth}\) is given by \(\lambda_{Earth}=\omega_{Earth}+\Omega_{Earth}+M_{Earth}\) with \(\omega_{Earth}\), \(\Omega_{Earth}\), \(M_{Earth}\) being, respectively, the argument of pericenter, the longitude of the ascending node and the mean anomaly of the Earth at \(t_{0}\). Also in this case, the simulated data cover an interval of time equal to 3000 years. We refer to these data as _perturbed simulated data_ and we produced a total number of 347 time series for this dataset. They present variations to the ideal cases that resemble the behavior of real objects, although no further perturbations have been added otherwise the motion more frequently escapes from the resonance. However, we consider this dataset to test algorithms trained on ideal simulated data. We note that data produced as described in point 2. and 3. above could be also interpreted as a good test of the results obtained in the previous paper Di Ruzza et al. (2023). Indeed, we have chosen initial conditions \((\theta,e)\) in the \((\theta,e)\)-map and propagated them in order to obtain the desired kind of co-orbital motion. ## 4 Data analysis workflow As shown in Fig. 5, our data analysis workflow can be conceptually divided in three macro blocks. The first step consists in preparing and labelling the data described in Sec. 3, i.e., the output of the propagation of orbital elements of the asteroids. The data are collected in.out format files: each file is associated with a single asteroid and it contains 7 columns corresponding, respectively, to time (in Julian date), elapsed time in years (starting from \(t_{0}\)), semi-major axis \(a\), eccentricity \(e\), inclination \(i\), resonant angle \(\theta\) and associated action \(u\). The filenames contain acronyms useful to recognize the name of the asteroid, the kind of co-orbital motion, the planet that the asteroid is in resonance with and the kind of \begin{table} \begin{tabular}{c c c c c c} \hline \hline **Series** & HS & QS & TPL4 & TPL5 & Total \\ \hline Real & 14 & 15 & 11 & 10 & 50 \\ \hline Ideal Simulated & 668 & 528 & 581 & 222 & 1999 \\ \hline Perturbed Simulated & 61 & 54 & 147 & 85 & 347 \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of the data available. propagation used to get the data (points 1., 2., 3. 
in Sec. 3). In this way, files can be easily shared if required. It is important to stress that in this work we focus only on the time evolution of the variable angle \(\theta\), but the other information can turn out to be useful for future analysis. These tabular data are passed to the next block, where the _tsfresh_ python package (e.g., Christ et al., 2018) provides a systematic time series feature extraction thanks to the combination of established algorithms from statistics, time series analysis, signal processing and non-linear dynamics. Before giving the extracted features to the Machine Learning classification algorithms, two additional steps can be applied: selection and standardization. Selection can be performed thanks to _tsfresh_, which represents a robust feature selection algorithm (e.g., Li et al., 2017), while standardization can be obtained by any kind of library such as Scikit Learn pre-processing functions (e.g., Pedregosa et al., 2011). The final classification step (last two blocks in Fig. 5) is performed in two parallel branches, with two classes of ML algorithms involved, namely, Dimensionality Reduction and classification algorithms. Before moving into a deeper explanation of all the details regarding the steps involved in the data analysis workflow, it is worth noting how our approach based on features extraction and standard Machine Learning algorithms is very well suited for our case where we have two constraints: data numerosity and physical interpretability. Both these constraints encourage an approach based on Machine Learning algorithms where the requirement on the number of data to train the algorithm is less tight with respect to Deep Learning. At the same time, thanks to the features extraction, a time series of any length can be converted into a finite number of features, all of them holding a physical meaning. This physical meaning is deeply important, because not only at the end of the whole data analysis workflow it is possible to identify the most important features responsible for a good time series classification (Feature Importance), but in addition we can look at the discriminating features between the different classes of signals, recovering a physical understanding of such processes. ### Features extraction and selection: the _tsfresh_ open-source package In order to train a ML model, features need to be extracted from the data. In our case a total of 789 features are extracted from each time series representing the time evolution of the angle \(\theta(t)\) by the Python package _tsfresh_(e.g., Christ et al., 2018). For a detailed description of the meaning of each feature please refer to Christ et al. (2023). After feature extraction, usually, it is worth to introduce a step of _Feature Selection_. This step can be performed in different ways or not performed at all. However, in general, it has been demonstrated (e.g., Guyon and Elisseeff, 2003) that Feature Selection can improve ML performances. Therefore, we decided to implement such step in our workflow using a built-in function of _tsfresh_, which provides a feature selection method based on Mann-Whitney Test. In our case, this step reduces the number of features to 239. ### Features standardization Again, pre-processing data is an essential step to achieve good classification performance, with the importance of data standardization (or normalization) for improving the performance of ML algorithms described in many studies as stated in Singh and Singh (2020). 
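As a concrete, deliberately tiny sketch of the extraction and selection steps of Sec. 4.1, the snippet below builds a long-format container of synthetic \(\theta(t)\) series and feeds it to _tsfresh_. The toy series, the column names and the `multiclass` option of `select_features` (available in recent versions of the package) are assumptions of ours and do not reproduce the exact configuration used for the files described above.

```python
import numpy as np
import pandas as pd
from tsfresh import extract_features, select_features
from tsfresh.utilities.dataframe_functions import impute

# toy long-format container: eight synthetic theta(t) series, two per class, one id each
t = np.arange(0.0, 3000.0, 10.0)
regimes = [("TPL4", 60.0, 20.0), ("TPL5", 300.0, 20.0), ("HS", 180.0, 120.0), ("QS", 0.0, 40.0)]
frames, labels = [], []
for sid in range(8):
    name, centre, amp = regimes[sid % 4]
    theta = centre + amp * np.sin(2.0 * np.pi * t / (500.0 + 50.0 * sid))
    frames.append(pd.DataFrame({"id": sid, "time": t, "theta": theta}))
    labels.append(name)
ts = pd.concat(frames, ignore_index=True)

# one row of features per series id (several hundred features with the default calculators)
X = extract_features(ts, column_id="id", column_sort="time", column_value="theta")
impute(X)                                   # replace NaN/inf left by some feature calculators
y = pd.Series(labels, index=X.index)

# hypothesis-test based selection; on the full dataset this step reduced 789 features to 239
X_sel = select_features(X, y, multiclass=True)
print(X.shape, X_sel.shape)
```

With such a toy container the selection step keeps essentially nothing; it becomes meaningful only with the full dataset of Sec. 3.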
In our study, features are standardized using the Scikit Learn function StandardScaler (e.g., Pedregosa et al., 2011). ### Dimensionality Reduction The process of transforming data from a high-dimensional space into a low-dimensional space with the goal of keeping the low-dimensional representation as close as possible to the inherent dimension of the original data is known as _Dimensionality Reduction_. There exist many different ML algorithms able to perform such transformation on data. In this work, we focus on two of them, namely, _Principal Components Analysis_ (PCA) (e.g., Cozzolino et al., 2019) and _t-distributed Stochastic Neighbor Embedding_ (t-SNE) (e.g., Van der Maaten and Hinton, 2008; Arora et al., 2018; Kobak and Berens, 2019). PCA and t-SNE operate in two different ways: PCA is a linear method that seeks to preserve as much variance as possible and the global structure of the data, while t-SNE is a non-linear optimized technique that concentrates on preserving local similarities between data points. Additionally, PCA uses a well-known transformation Figure 5: Data Analysis Workflow. The first step is the time series preparation, followed by the _tsfresh_ python package block where features are extracted and possibly selected and standardized. The final step regards the Machine Learning analysis performed using Dimensionality reduction algorithms (PCA and t-SNE) and classification algorithms (SVM, Random Forest and XGBoost). making it a deterministic technique. On the other hand, t-SNE is a stochastic optimized method, which tend to preserve points which are close to each other. However, the method doesn't construct an explicit function that maps high dimensional points to a low dimensional space, but it just optimizes low dimensional positions of the data points directly. Since it does not define a data transformation function, the method cannot be applied to newer data, but a newer optimization must run. Both algorithms are Dimensionality Reduction techniques particularly well suited for the visualization of high-dimensional datasets as in this case, where, after the feature selection step, the number of features is still above 200. The utility of such kind of algorithms is twofold: on the one hand they can be used as unsupervised learning methods which allow to visualize the data distribution in two dimension, providing a deep insight on whether and, in case, how the data can be divided in the higher dimensional space. Moreover, they usually can give an idea of how the classifiers will perform. Indeed, well clustered data visualized by Dimensionality Reduction methods are usually well classified by ML algorithms, whereas the contrary is not necessarily true, meaning there could be data with a low degree of clustering where the classification algorithms still perform very well. ### ML classification We use three ML algorithms: Support Vector Machine (SVM) (e.g., Cervantes et al., 2020), Random Forest (RF) (e.g., Biau and Scornet, 2016) and XGBoost (XGB) (e.g., Chen and Guestrin, 2016). We evaluate the performances of these algorithms with different combinations of training and test sets, as reported here: 1. trained on real data and tested on real data; 2. trained on ideal simulated data and tested on real data; 3. trained on ideal simulated data and tested on perturbed simulated data; 4. trained on ideal simulated data and tested on real and perturbed simulated data. 
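As an illustrative sketch of how the three classifiers can be set up on the extracted features, for example for the second combination above (training on ideal simulated data and testing on real data), one could proceed as follows. The random matrices are placeholders standing in for the _tsfresh_ feature matrices, and the small hyperparameter grids simply echo some of the values reported in Tab. 3 rather than the full search spaces actually explored.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.svm import SVC
from xgboost import XGBClassifier

# stand-ins for the tsfresh feature matrices (ideal simulated = training, real = test)
rng = np.random.default_rng(0)
X_ideal, X_real = rng.normal(size=(200, 50)), rng.normal(size=(40, 50))
y_ideal = rng.choice(["QS", "HS", "TPL4", "TPL5"], size=200)
y_real = rng.choice(["QS", "HS", "TPL4", "TPL5"], size=40)
le = LabelEncoder().fit(y_ideal)              # XGBoost expects integer class labels

models = {
    "SVM": GridSearchCV(make_pipeline(StandardScaler(), SVC()),
                        {"svc__C": [1e-4, 1e-3, 1.0], "svc__gamma": [1e-4, 1e-3, 0.1],
                         "svc__kernel": ["linear"]}, cv=5),
    "RF": GridSearchCV(RandomForestClassifier(random_state=0),
                       {"n_estimators": [100, 190, 300]}, cv=5),
    "XGB": GridSearchCV(XGBClassifier(eval_metric="mlogloss"),
                        {"n_estimators": [70], "max_depth": [5], "learning_rate": [0.0765]}, cv=5),
}
for name, model in models.items():
    model.fit(X_ideal, le.transform(y_ideal))                     # 5-fold tuning on the training set
    print(name, "accuracy on the test set:", model.score(X_real, le.transform(y_real)))
```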
#### 4.4.1 Cross-Validation When evaluating the performances of a ML model, it is highly important to validate its stability. This step is called _validation_ and it consists in making sure that the model has learned the right patterns of the data and it is not picking up too much noise. In other words, it evaluates the model's ability to generalize on unseen data. In Machine Learning, the most used validation technique is _Cross-Validation_ (CV). It consists in splitting the dataset into multiple subsets, usually called "folds", then training the model on some of the folds and evaluating it on the remaining fold. This process is repeated multiple times, each time changing the remaining fold. The result is the mean score of all the performed tests. This allows to train and test the model on different data partitions, providing a robust and unbiased estimate of a model's performance. There are many types of Cross-Validation; for this work we use a technique named _k-folds Cross-Validation_(e.g., Fushiki, 2011), where the dataset is divided in \(k\) folds and \(k-1\) folds are used as training set and the remaining one as test set. #### 4.4.2 Hyperparameters Tuning When dealing with a ML model, one of the main aspects of designing the structure is a step called _Hyperparameters Tuning_, which consists in finding the best combinations of hyperparameters' models in order to achieve the best performance. Unfortunately, there are no rules or formulas to calculate these parameters, and an approach based on an extensive exploration of the hyperparameters' space along with some experience is the only way to find them, making hyperparameters tuning a computationally long and tedious process. In Python, many techniques have been developed to automate the tuning of hyperparameters and in this work we apply two of them: _GridSearchCV_ and _RandomizedSearchCV_. Both these techniques make use of _k-fold Cross-Validation_. #### 4.4.3 SHAP: features interpretability Machine Learning models are frequently considered "black boxes", which make their interpretation challenging. In order to understand the main features that affect the output of the model, we can leverage on Explainable Machine Learning techniques that can unravel some of these aspects (e.g., Roscher et al., 2020). One very promising technique is the SHapley Additive exPlanations, more commonly known as SHAP (e.g., Lundberg and Lee, 2017; Lundberg et al., 2018, 2020; Van den Broeck et al., 2022; Mitchell et al., 2022). It is based on Shapley values, which use game theory to assign credit for a model's prediction to each feature or feature value, increasing the transparency and the interpretability of Machine Learning models (e.g., Molnar, 2022). In particular SHAP is known for its "Consistency" property. SHAP values do not change when the model changes unless the contribution of a feature changes. This means that even when the model architecture or parameters change, SHAP values still offer a coherent interpretation of the behaviour of the model. In our case, SHAP is applied to the ML models used for time series classification. ## 5 Results The results are presented in the following, according to the considered techniques. ### Unsupervised ML: PCA and t-SNE As stated in Sec. 4.3, Dimensionality Reduction techniques can be used to discover whether a high dimensional dataset presents separate clusters when projected in lower dimensional space (e.g., bi-dimensional). 
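As an illustration, such a two-dimensional projection can be obtained along the following lines; the feature matrix here is a random placeholder with four artificial clusters, and the t-SNE settings are common defaults rather than the exact configuration behind Fig. 6.

```python
import matplotlib.pyplot as plt
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.preprocessing import StandardScaler

# placeholder feature matrix: four artificial clusters standing in for QS/HS/TPL4/TPL5
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=4.0 * k, size=(50, 100)) for k in range(4)])
y = np.repeat(["QS", "HS", "TPL4", "TPL5"], 50)

Xs = StandardScaler().fit_transform(X)
emb_pca = PCA(n_components=2).fit_transform(Xs)            # linear and deterministic
emb_tsne = TSNE(n_components=2, perplexity=30.0, init="pca",
                random_state=0).fit_transform(Xs)          # non-linear and stochastic

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, emb, title in zip(axes, (emb_pca, emb_tsne), ("PCA", "t-SNE")):
    for cls in np.unique(y):
        ax.scatter(*emb[y == cls].T, s=10, label=cls)
    ax.set_title(title)
axes[0].legend()
plt.show()
```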
Therefore, the first step of our analysis has been to perform PCA and t-SNE on the features extracted from the real time series (real data) to see if they would cluster into four separated groups corresponding to four classes: QS, HS, TPL4, TPL5 (described in Sec. 2). PCA and t-SNE visualizations show four well separated clusters, as can be appreciated in Fig. 6 (a), (b), respectively, where real data are considered. Next, we performed PCA and t-SNE on the ideal simulated data to determine whether the trend of clustering in the four groups was also present in this dataset. As it can be appreciated in Fig. 6 (c), (d), clusters are still well visible. Finally, given the positive results of the previous tests, we have applied the Dimensionality Reduction techniques on a dataset containing both the real and ideal simulated data expecting an overlap between the real and simulated clusters for each class. The encouraging results of this analysis are reported in Fig. 6 (e), (f). It is worth to observe that in these plots, PCA and t-SNE show the overlapping between real and simulated data clusters. In Figure 6: PCA and t-SNE of selected and standardized features extracted from: real data (a) and (b); ideal simulated data (c) and (d); overlapping between ideal simulated and real data clusters (e) and (f). In this last case it is worth to note as the orange points representing the real TPL4 cases overlap the yellow points representing the simulated TPL4 cases; the red points representing the real TPL5 cases overlap the light-red points representing the simulated TPL5 cases; the purple points representing the real HS cases overlap the violet points representing the simulated HS cases; the blue points representing the real QS cases overlap the light-blue points representing the simulated QS cases. particular, the orange points representing the real TPL4 cases overlap the yellow points representing the simulated TPL4 cases; the red points representing the real TPL5 cases overlap the light-red points representing the simulated TPL5 cases; the purple points representing the real HS cases overlap the violet points representing the simulated HS cases; finally, the blue points representing the real QS cases overlap the light-blue points representing the simulated QS cases. This overlapping between clusters of real and simulated data in the reduced space confirms that the features extracted from these two datasets are similar and meaningful. In particular, these results confirm our expectations that both datasets are extracted from the same data distribution, making them suitable for the deeper machine learning analysis shown hereafter. ### Supervised ML While Dimensionality Reduction techniques allow to visualize high-dimensional data and eventual clusters within them, supervised ML algorithms provide an actual classification of the data. In our case, six classification metrics are considered to evaluate the supervised ML algorithms performances: _Accuracy, Balanced Accuracy, ROC AUC, Recall, Precision, f1_. A full description of the metrics can be found in Scikit-Learn (2023a) It is worth to note how some ML algorithms do not require features normalization, such as Random Forest, while for some others, such as Support Vector Machine, the normalization step strongly improves the classification performances (e.g., Singh & Singh 2020; Ozsahin et al. 2022). This peculiarity can be ascribed to the intrinsic differences in the working principles at the basis of each algorithm. 
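For reference, the six metrics listed at the beginning of this section are all available in Scikit Learn and can be computed for the multi-class case as sketched below; the toy predictions and the macro averaging are illustrative choices of ours, the exact averaging convention being documented in Scikit-Learn (2023a).

```python
import numpy as np
from sklearn.metrics import (accuracy_score, balanced_accuracy_score, f1_score,
                             precision_score, recall_score, roc_auc_score)

# toy stand-ins for clf.predict(X_test) and clf.predict_proba(X_test)
rng = np.random.default_rng(2)
classes = np.array(["HS", "QS", "TPL4", "TPL5"])            # alphabetical, as sklearn sorts them
y_true = rng.permutation(np.repeat(classes, 15))
y_prob = rng.dirichlet(np.ones(4), size=60)
y_pred = classes[np.argmax(y_prob, axis=1)]

metrics = {
    "Accuracy": accuracy_score(y_true, y_pred),
    "Balanced Accuracy": balanced_accuracy_score(y_true, y_pred),
    "ovo ROC AUC": roc_auc_score(y_true, y_prob, multi_class="ovo"),   # one-vs-one average
    "Precision": precision_score(y_true, y_pred, average="macro"),
    "Recall": recall_score(y_true, y_pred, average="macro"),
    "f1": f1_score(y_true, y_pred, average="macro"),
}
for name, value in metrics.items():
    print(f"{name}: {value:.3f}")
```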
As was already noted, another crucial step that is typically (but not always) necessary to enhance classification performance is feature selection. For our data this is not the case: the results are unaffected by this pre-processing step. It should nevertheless be highlighted that, in general, such a step should be kept in the data analysis workflow, evaluating its importance case by case. Concerning our work, the results reported in this section are therefore relative to datasets containing all the extracted features. #### 5.2.1 Test results The classification performance of the three supervised ML algorithms used (SVM, RF and XGB, see Sec. 4.4) is reported in Tab. 2 for four different combinations of training and test sets. Although the motivations behind the chosen approach have already been partially described above, we remark the following observations. First of all, the real cases dataset is limited, therefore it is impossible to give a clear answer regarding the generalization capability of our models to unseen data when trained and tested on real data. For this particular reason we introduced the ideal and perturbed simulated datasets, where the ideal one is intended for training purposes, leaving the perturbed one for testing. The hypothesis regarding the use of the ideal simulated data as training set is confirmed by the fact that the classifiers trained in this way correctly classify the real series with an accuracy that reaches 98%. Lastly, classifiers trained on ideal simulated data and tested on perturbed simulated data obtain an accuracy of 100% for all algorithms, while a slightly lower accuracy is achieved when testing on real and perturbed data. All classification results are reported in Fig. 7, where confusion matrices for each performed test are presented. A _Confusion Matrix_ is a type of visualization particularly well suited for evaluating the performance of a ML algorithm. The rows of the matrix represent the actual labels of the test set while the columns represent the labels predicted by the algorithm. Accordingly, the correct predictions can be found along the diagonal of the matrix and the wrong ones outside of it. Tab. 3 reports all the selected hyperparameters for each performed test, divided by algorithm. #### 5.2.2 Cross-Validated results As introduced in Sec. 4.4.1, Cross-Validation is a crucial step to evaluate the model's ability to generalize to unseen data and it provides a more accurate evaluation of the model's performance. Results obtained with a 5-fold Cross-Validation are reported in Tab. 4, where we test on different combinations of the three datasets described in Sec. 3. The mean accuracy relative to the real cases dataset is quite high but, as already mentioned in the previous paragraph, this may be due to the very limited dimensions of the dataset. In fact, this case is the one with the highest CV error score (4%) appearing in the table. Adding the ideal simulated dataset not only increases the mean accuracy (up to 99.9% for XGB) but also decreases the CV error score by an order of magnitude (0.09% for XGB). The third row of Tab. 4 is relative to the combination of the two simulated datasets, where we reach extremely high accuracy and a quite low CV error score for all algorithms.
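For completeness, the kind of 5-fold Cross-Validation behind Tab. 4 can be reproduced along the following lines; the classifier and the random data are placeholders, while the scorer names correspond to the six metrics introduced at the beginning of Sec. 5.2.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

# placeholder pooled dataset (e.g. real + ideal + perturbed features) and one of the three models
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 50))
y = rng.choice(["QS", "HS", "TPL4", "TPL5"], size=300)
clf = RandomForestClassifier(n_estimators=100, random_state=0)

scoring = ["accuracy", "balanced_accuracy", "roc_auc_ovo",
           "precision_macro", "recall_macro", "f1_macro"]
cv_res = cross_validate(clf, X, y, cv=5, scoring=scoring)
for key in scoring:
    scores = cv_res[f"test_{key}"]
    print(f"{key}: {scores.mean():.4f} (+/- {scores.std():.4f})")   # mean and CV error score
```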
Finally, the algorithms' performances is cross-validated using all the available data. Although this is the case with the highest number of series and highest variability we still achieve remarkably good results with a mean accuracy that reaches 99.9 % (for RF and XGB) and overall low CV error score. It is important to note how in the current section we report extremely good results, sometimes reaching up to 100% accuracy, but these high numbers should not mislead the reader. The main purpose of this work is to demonstrate that our approach based on features extraction and Machine Learning algorithms works. For this reason, we have considered about 2400 series with quite regular trends and belonging to only 4 possible classes. Increasing the number of series, the number of classes or the irregularity of the series trends may lead to a worsening of the performances. In other words, in this work we establish that our approach perfectly works in the most basic settings and, considering the extremely satisfactory results obtained, we plan to extend our goal to a more complete analysis increasing the complexity of the data in future works. #### 5.2.3 Features Importance Features Importance is one of the key points when using a Machine Learning algorithm for an application, where the interpretation and/or explanation of the results are as much important as finding good classification/regression results. The term _Features Importance_ relates to methods for scoring each input feature given to the model based on how useful they are when predicting a target variable; the scores indicate what we call "importance" of each feature. A higher score indicates that the particular feature will have a greater impact on the model. There are many ways to assign scores to the features; in our case we have used two different approaches: one based Figure 7: Confusion matrix for SMV (a), RF (b) and XGB (c) algorithms when trained and tested on real data. Confusion matrix for SMV (d), RF (e) and XGB (f) algorithms when trained on ideal simulated data and tested on real data. Confusion matrix for SMV (g), RF (h) and XGB (i) algorithms when trained on ideal simulated data and tested on perturbed simulated data. Confusion matrix for SMV (j), RF (k) and XGB (l) algorithms when trained on ideal simulated data and tested on real and perturbed simulated data. on a function provided by the algorithm library (e.g., Scikit-Learn 2023d,b; xgboost 2023b) and the other based on Shapley Values calculated by the SHAP package. It is important to keep in mind that each algorithm has a tendency to weight features in a different way, even though some of them may be the same across all algorithms. In our case, it appears that there are no features common to all three algorithms, although we can find some common ones when comparing the algorithms two at a time. These common features are reported in Fig. 8. Let us recall that, in this work, we have used three different classification algorithms: Random Forest, Support Vector Machines and XGBoost. Our results, reported in Fig. 9 (a), (b), (c), (d) show that, for RF and SVM, most features are quite difficult to interpret, while the features ranking provided by XGBoost (Fig. 9 (e), (f)) propose a more straightforward and interpretable explanation of the model. For XGBoost in particular, the two approaches for Features Importance point out two similar pools of features, where 7 out of 10 are the same. In addition, as shown in Fig. 
9 (e) and (f), both approaches rank in the top positions features whose physical meaning is quite easy to deduct from their name, such as _theta sum values, theta standard deviation, theta mean_ and _theta variance_. Additionally, for XGBoost in Fig. 10, two other SHAP plots are shown: a _summary plot_ where each feature's bar has a division into colors based on importance for each class and a _beeswarm plot_. A beeswarm plot is a data visualization tool used to display a summary of how the top features impact the model's output. Each point in the scatterplot represents a data point from the dataset, the vertical line represents the baseline value, which may be the model's average prediction or the expected value of the output. The position of the point in relation to the vertical line reveals whether a feature makes a positive (increasing the prediction) or negative (decreasing the prediction) contribution to the prediction and this position is determined by the Shapley value of the data point. What is important to understand is that the farther a point is from the vertical line, the higher its impact \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline Training set & Test set & Accuracy (\%) & Balanced Acc. (\%) & “ovo” Average AUC & Average Recall & Average Precision & Average f1 \\ \hline \multicolumn{8}{c}{**Support Vector Machine**} \\ \hline Real & Real & 100 & 100 & 1.0 & 1.0 & 1.0 & 1.0 \\ Ideal & Real & 98.0 & 98.3 & 0.995 & 0.980 & 0.981 & 0.980 \\ Ideal & Perturbed & 100 & 100 & 1.0 & 1.0 & 1.0 & 1.0 \\ Ideal & Real+Perturbed & 99.7 & 99.7 & 0.999 & 0.997 & 0.998 & 0.997 \\ \hline \multicolumn{8}{c}{**Random Forest**} \\ \hline Real & Real & 100 & 100 & 1.0 & 1.0 & 1.0 & 1.0 \\ Ideal & Real & 98.0 & 98.3 & 0.998 & 0.980 & 0.981 & 0.980 \\ Ideal & Perturbed & 100 & 100 & 1.0 & 1.0 & 1.0 \\ Ideal & Real+Perturbed & 99.5 & 99.2 & 1.0 & 0.995 & 0.995 & 0.995 \\ \hline \multicolumn{8}{c}{**XGBoost**} \\ \hline Real & Real & 100 & 100 & 1.0 & 1.0 & 1.0 & 1.0 \\ Ideal & Real & 98.0 & 97.7 & 1.0 & 0.980 & 0.981 & 0.980 \\ Ideal & Perturbed & 100 & 100 & 1.0 & 1.0 & 1.0 \\ Ideal & Real+Perturbed & 99.7 & 99.8 & 1.0 & 0.997 & 0.998 & 0.997 \\ \hline \hline \end{tabular} \end{table} Table 2: Machine Learning multi-class classifiers results obtained with different combinations of training and test sets divided by algorithm. Because this is a multi-class classification, AUC, Recall, Precision and f1 are averaged. In the Average AUC the acronym “ovo” stands for One-vs-one and it computes the average AUC of all possible pairwise combinations of classes. \begin{table} \begin{tabular}{c c c c c} \hline \hline & \multicolumn{4}{c}{Training set – Test set} \\ Algorithm Hyperparameters & Real-Real & Ideal-Real & Ideal-Pert. & Ideal-Real+Pert. \\ \hline \multicolumn{4}{c}{**Support Vector Machine**} \\ \hline C & 0.0001 & 1 & 0.001 & 1 \\ gamma & 0.0001 & 0.001 & 0.1 & 0.001 \\ kernel & linear & linear & linear & linear \\ \hline \multicolumn{4}{c}{**Random Forest**} \\ \hline n\({}^{\circ}\) estimators & 190 & 100 & 300 & 300 \\ \hline \multicolumn{4}{c}{**XGBoost**} \\ \hline colsample bytree & 0.668 & 0.668 & 0.668 & 0.668 \\ learning rate & 0.0765 & 0.0765 & 0.0765 & 0.0765 \\ max depth & 5 & 5 & 5 & 5 \\ min child weight & 1 & 1 & 1 & 1 \\ n\({}^{\circ}\) estimators & 70 & 70 & 70 & 70 \\ subsample & 0.409 & 0.409 & 0.409 & 0.409 \\ \hline \hline \end{tabular} \end{table} Table 3: Machine Learning selected hyperparameters. 
A full description of their meaning can be found, for instance, in Scikit-Learn (2023b,c); xgboost (2023a). will be on the output of the model, regardless of whether it is on the left or on the right side of the plot. For a more detailed explanation of the plot please refer to SHAP (2023). ## 6 Future perspective: time series with transitions between trends, an approach based on sliding windows We are aware that the general case of observed time series could comprise different kinds of motion (such as the ones described and used in this work) due to transitions. In order to move towards this more complex real scenario, we have begun to work on identifying regions in the time series where the kind of motion is of the same type. This capability would allow our data analysis pipeline to deal with any kind of scenario. As a first approach, we have decided to rely on standard packages for time series data analysis in the case of segmentation of non-stationary signals (e.g., Truong et al., 2020) and anomaly detection (e.g., Gensler and Sick, 2018). We have performed some preliminary tests and some results are reported in this section and in the figure below. Our aim here is to give a possible direction for the next works. The results show that it is possible to arrange a semi-automatic division of the time series into the different trends, looking for example at the average over a fixed window length (in this case made of 8500 points) sliding over the \(|\theta(t)|\) signal. The mean of the signal over a window is compared to the mean over the following window; if the difference between those two values exceeds a certain threshold (empirically determined), a transition is detected. However, although the results can be useful and sometimes impressive (see Fig. 11), we have to investigate further how to generalize the definition of the time windows. This will be left to a future work. ## 7 Conclusions This work deals with the problem of classification of asteroids in co-orbital motion with a given planet using a Machine Learning approach. The main parameter analysed to determine the type of co-orbital motion is a suitable angle \(\theta\), defined following the assumptions of the Planar Circular Restricted Three-Body Problem (PCR3BP) and its averaged approximation. The time evolution of \(\theta\) allows us to identify whether the asteroid is in Tadpole motion, distinguishing between TPL4 (around the equilibrium point \(L_{4}\)) and TPL5 (around the equilibrium point \(L_{5}\)), Horseshoe (HS) motion or Quasi-satellite (QS) motion. We produce three different kinds of datasets, called real, ideal simulated and perturbed simulated, in order to apply Machine Learning algorithms. The datasets are formed by time series of the angle \(\theta\), consisting of its evolution in time over short and medium timescales (about 900 years for real asteroid cases and 3000 years for simulated cases). The Python package _tsfresh_ is applied to such time series, extracting meaningful features, which are selected and, if needed, standardized. Then, a Machine Learning pipeline based on \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline Dataset & Train & Test & Accuracy (\%) & Balanced Acc.
(\%) & \({}^{\prime}\)ow\({}^{\prime}\) AUC & Precision & Recall & f1 \\ \hline \multicolumn{10}{c}{**Support Vector Machine**} \\ \hline Real & 40 & 10 & 98.0(\(\pm\)4.0) & 98.3(\(\pm\)3.3) & 0.994(\(\pm\)0.011) & 0.987(\(\pm\)0.027) & 0.980(\(\pm\)0.040) & 0.980(\(\pm\)0.040) \\ Real+Ideal & 1639 & 410 & 99.3(\(\pm\)1.3) & 99.4(\(\pm\)1.0) & 0.999(\(\pm\)0.001) & 0.993(\(\pm\)0.012) & 0.993(\(\pm\)0.013) & 0.993(\(\pm\)0.014) \\ Ideal+Pert. & 1877 & 469 & 99.95(\(\pm\)0.09) & 99.97(\(\pm\)0.07) & 0.999(\(\pm\)0.001) & 0.999(\(\pm\)0.001) & 0.999(\(\pm\)0.001) & 0.999(\(\pm\)0.001) \\ Real+Ideal+Pert. & 1917 & 179 & 99.42(\(\pm\)1.17) & 99.53(\(\pm\)0.94) & 0.999(\(\pm\)0.001) & 0.994(\(\pm\)0.010) & 0.994(\(\pm\)0.010) & 0.994(\(\pm\)0.010) \\ \hline \multicolumn{10}{c}{**Random Forest**} \\ \hline Real & 40 & 10 & 98.0(\(\pm\)4.0) & 98.3(\(\pm\)3.3) & 0.995(\(\pm\)0.009) & 0.985(\(\pm\)0.030) & 0.980(\(\pm\)0.040) & 0.979(\(\pm\)0.041) \\ Real+Ideal & 1639 & 410 & 99.9(\(\pm\)0.2) & 99.9(\(\pm\)0.2) & 1.0(\(\pm\)0.0) & 0.999(\(\pm\)0.002) & 0.999(\(\pm\)0.002) & 0.999(\(\pm\)0.002) \\ Ideal+Pert. & 1877 & 469 & 100.0(\(\pm\)0.0) & 100.0(\(\pm\)0.0) & 1.0(\(\pm\)0.0) & 1.0(\(\pm\)0.0) & 1.0(\(\pm\)0.0) & 1.0(\(\pm\)0.0) \\ Real+Ideal+Pert. & 1917 & 179 & 99.92(\(\pm\)0.17) & 99.92(\(\pm\)0.17) & 1.0(\(\pm\)0.0) & 0.999(\(\pm\)0.002) & 0.999(\(\pm\)0.002) & 0.999(\(\pm\)0.002) \\ \hline \multicolumn{10}{c}{**XGBoost**} \\ \hline Real & 40 & 10 & 98.0(\(\pm\)4.0) & 98.3(\(\pm\)3.3) & 1.0(\(\pm\)0.0) & 0.985(\(\pm\)0.030) & 0.980(\(\pm\)0.040) & 0.979(\(\pm\)0.041) \\ Real+Ideal & 1639 & 410 & 99.95(\(\pm\)0.10) & 99.96(\(\pm\)0.08) & 1.0(\(\pm\)0.0) & 0.999(\(\pm\)0.001) & 0.999(\(\pm\)0.001) & 0.999(\(\pm\)0.001) \\ Ideal+Pert. & 1877 & 469 & 100.0(\(\pm\)0.0) & 100.0(\(\pm\)0.0) & 1.0(\(\pm\)0.0) & 1.0(\(\pm\)0.0) & 1.0(\(\pm\)0.0) & 1.0(\(\pm\)0.0) \\ Real+Ideal+Pert. & 1917 & 179 & 99.96(\(\pm\)0.08) & 99.97(\(\pm\)0.07) & 1.0(\(\pm\)0.0) & 0.999(\(\pm\)0.001) & 0.999(\(\pm\)0.001) & 0.999(\(\pm\)0.001) \\ \hline \hline \end{tabular} \end{table} Table 4: Machine Learning multi-class classifiers results obtained in 5-fold Cross Validation. Training sets and test sets contain, respectively, 80% and 20% of the dataset. Standard deviation reported in parentheses. In the Average AUC the acronym \({}^{\prime}\)ow\({}^{\prime}\) stands for One-vs-one and it computes the average AUC of all possible pairwise combinations of classes Figure 8: Common important features of the three supervised ML algorithms ranked by SHAP and Feature Importance tools. Figure 10: SHAP results for XGBoost algorithm. On the left the summary plot while in the right the beeswarm plot. Figure 9: Feature Importances for the three different Machine Learning Algorithms, evaluated with Scikit Learn packages and SHAP. algorithms for Dimensionality Reduction and Classification, is built, with the features extracted as input. The results show the power of such approach, with very well evident clusters in Dimensionality Reduction visualization plot and classification accuracy above 99%. This paper aims to define a methodological approach to such kind of data, serving as a backbone model for further studies, where more and more complex cases are faced. 
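As a concrete starting point for the sliding-window strategy outlined in Sec. 6, the rule described there (comparing the mean of \(|\theta|\) over consecutive windows) can be prototyped in a few lines of Python; the threshold below is an arbitrary illustrative value, the actual one being determined empirically as discussed above.

```python
import numpy as np

def detect_transitions(theta_deg, window=8500, threshold=30.0):
    """Flag candidate transitions between co-orbital regimes in a theta(t) series.

    The |theta| signal is cut into consecutive windows of `window` points and the mean
    of each window is compared with the mean of the following one; a jump larger than
    `threshold` degrees (an illustrative value, empirically tuned in practice) marks
    the index where a new trend is suspected to start.
    """
    sig = np.abs(np.asarray(theta_deg, dtype=float))
    n_win = len(sig) // window
    means = np.array([sig[k * window:(k + 1) * window].mean() for k in range(n_win)])
    jumps = np.flatnonzero(np.abs(np.diff(means)) > threshold)
    return [(k + 1) * window for k in jumps]

# example: a synthetic series switching from a QS-like to an HS-like trend
t = np.arange(40000)
theta = np.where(t < 20000, 30.0 * np.sin(t / 900.0), 180.0 + 120.0 * np.sin(t / 900.0))
print(detect_transitions(theta))    # indices of the candidate regime changes
```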
## Acknowledgements Authors express their gratitude to Tiago Azevedo and Pietro Lio from the University of Cambridge (UK), Michela Baccini, Chiara Marzi, Fabrizio Argenti and Simone Marinai from the University of Florence (Italy), Stefano Diciotti from the University of Bologna (Italy), Alessandro Mecocci from the University of Siena (Italy), for fruitful discussion and advices on data analysis and to Lona Ceccherini for her support. ## Data Availability The data underlying this article will be shared on request to the corresponding author. Figure 11: Three cases of real time series data: evolution of the resonant angle \(\theta\) versus time; in the three cases several transitions between QS and HS regimes occur.
2309.13322
From Text to Source: Results in Detecting Large Language Model-Generated Content
The widespread use of Large Language Models (LLMs), celebrated for their ability to generate human-like text, has raised concerns about misinformation and ethical implications. Addressing these concerns necessitates the development of robust methods to detect and attribute text generated by LLMs. This paper investigates "Cross-Model Detection," by evaluating whether a classifier trained to distinguish between source LLM-generated and human-written text can also detect text from a target LLM without further training. The study comprehensively explores various LLM sizes and families, and assesses the impact of conversational fine-tuning techniques, quantization, and watermarking on classifier generalization. The research also explores Model Attribution, encompassing source model identification, model family, and model size classification, in addition to quantization and watermarking detection. Our results reveal several key findings: a clear inverse relationship between classifier effectiveness and model size, with larger LLMs being more challenging to detect, especially when the classifier is trained on data from smaller models. Training on data from similarly sized LLMs can improve detection performance from larger models but may lead to decreased performance when dealing with smaller models. Additionally, model attribution experiments show promising results in identifying source models and model families, highlighting detectable signatures in LLM-generated text, with particularly remarkable outcomes in watermarking detection, while no detectable signatures of quantization were observed. Overall, our study contributes valuable insights into the interplay of model size, family, and training data in LLM detection and attribution.
Wissam Antoun, Benoît Sagot, Djamé Seddah
2023-09-23T09:51:37Z
http://arxiv.org/abs/2309.13322v2
# From Text to Source: Results in Detecting ###### Abstract The widespread use of Large Language Models (LLMs), celebrated for their ability to generate human-like text, has raised concerns about misinformation and ethical implications. Addressing these concerns necessitates the development of robust methods to detect and attribute text generated by LLMs. This paper investigates "Cross-Model Detection," evaluating whether a classifier trained to distinguish between source LLM-generated and human-written text can also detect text from a target LLM without further training. The study comprehensively explores various LLM sizes and families, and assesses the impact of conversational fine-tuning techniques on classifier generalization. The research also delves into Model Attribution, encompassing source model identification, model family classification, and model size classification. Our results reveal several key findings: a clear inverse relationship between classifier effectiveness and model size, with larger LLMs being more challenging to detect, especially when the classifier is trained on data from smaller models. Training on data from similarly sized LLMs can improve detection performance from larger models but may lead to decreased performance when dealing with smaller models. Additionally, model attribution experiments show promising results in identifying source models and model families, highlighting detectable signatures in LLM-generated text. Overall, our study contributes valuable insights into the interplay of model size, family, and training data in LLM detection and attribution. ## 1 Introduction Large Language Models (LLMs), characterized by their ability to generate human-like text Dou et al. (2022), have found applications in various domains, including content generation, chatbots, and language translation. However, as the use of LLMs becomes more widespread, concerns about their misuse, misinformation, and ethical implications have surfaced McGuffie and Newhouse (2020); Bender et al. (2021); Chiesurin et al. (2023). One of the ways to address these concerns is with robust methods that are able to detect and attribute text generated by LLMs Jawahar et al. (2020), allowing us to differentiate between human-authored and machine-generated content, identify the source model, or even the model creator. Such capabilities are crucial for maintaining trust in online communication platforms, content moderation, and ensuring responsible AI deployment. Our motivation for this research stems from real-life scenarios where we often lack knowledge of the specific model used to generate a piece of text. These scenarios can be formulated as a "Cross-Model Detection", where we investigate whether a classifier originally trained to distinguish between text generated by one LM and human-written text, can also identify text generated by a different LM without requiring fine-tuning or training on the text it produces. Our contribution to this area is characterized by the comprehensiveness of our study. While previous works in the literature have been limited in their exploration of a few model sizes and families, we take a more expansive approach. We systematically examine a wide range of LLM sizes, spanning from base models to exceptionally large ones, and encompassing diverse model families such as GPT-2, LLaMA, Pythia, OPT and others Zhao et al. (2023). Additionally, we explore the impact of conversational fine-tuning techniques, including Chat, Instruct Mishra et al. (2022); Wei et al. 
(2022), and Reinforcement Learning from Human Feedback (RLHF) Christiano et al. (2017); Ziegler et al. (2020), on the generalization and transferability of the classifier across this wide array of models. This comprehensive investigation gives us a deeper understanding of the classifier's generalization and transferability across a diverse array of models, thus eliminating a potential source of bias in our results. It also allows us to identify how factors like model size and family impact the detection and attribution of generated text. Our contributions in this study can be summarized as follows: * A comprehensive investigation into cross-model detection, evaluating the classifier's ability to detect text generated by different LLMs, and in model attribution, encompassing a broad range of sizes and model families. * We highlight the role of both model size and family in the detection of text generated by Large Language Models (LLMs). We observed an inverse relationship between classifier effectiveness and LLM size. Detecting larger models can be challenging, but training on similarly sized LLMs can improve performance. * Our experiments in model attribution reveal the potential for identifying the source model of generated text. While human-generated text is distinguishable, confusion primarily occurs between models from the same family or with adjacent sizes. This suggests that LLMs leave distinct signatures, enabling source model identification and model family classification, further enhancing our understanding of how different LLMs generate text. In the subsequent sections, we present a summary of relevant related works followed by the details of our methodology, experiments, and results, shedding light on the interplay between model size, family, and training data in the context of LLM detection and attribution in the ever-evolving landscape of Large Language Models. ## 2 Related Works Detecting AI-generated text is a recent and rapidly growing area of research (Jawahar et al., 2020). Although Sadasivan et al. (2023) demonstrated a theoretical impossibility of distinguishing between human-written and machine-generated text when the total variation (TV) norm between the two is low, a more recent study by Chakraborty et al. (2023) showed that detection is still possible given enough samples. Popular methods to detect AI-generated text can be grouped into three categories: 1) Using statistical features of text such as perplexity, n-grams, entropy, etc. (Gehrmann et al., 2019; Mitchell et al., 2023). 2) Watermarking generated text, which was first demonstrated by Atallah et al. (2001), who embedded a watermark bit in the syntactic structure of the text. More recently, Kirchenbauer et al. (2023) used the LLM's output log probability at each generation step to embed a watermark based on a "green/red" token list, where an LLM will have an artificially increased likelihood of selecting tokens from the "green" list. Other work on watermarking includes (Fernandez et al., 2023; Christ et al., 2023). 3) Classifier-based approaches, which use a classifier trained on a dataset containing both human-written and machine-generated text to detect LM-generated text (Zellers et al., 2019; Solaiman et al., 2019; Uchendu et al., 2020; Fagni et al., 2021; Antoun et al., 2021; Guo et al., 2023; Mitrovic et al., 2023). This approach is vulnerable to adversarial text that mimics, among others, a Wikipedia-like and informative style (Antoun et al., 2023). We highlight recent work by Mireshghallah et al.
(2023) that studies cross-model detection and detector transferability by examining the effect of using classifier models other than the generator itself to detect machine-generated text. The authors studied training LMs from 5 different model families with sizes ranging from 70M to 6.7B and trained the generator LMs to detect machine-generated text generated by other LMs. They demonstrated that using smaller language models for detection can lead to higher performance. Our work differs from Mireshghallah et al. (2023) in that we assume we don't have access to the underlying model but only to a set of texts generated by the model. We hence use a separate encoder classifier to detect generated text instead of using the generator. We also extend the study to more model families, sizes and further finetunings, while also studying model attribution. ## 3 Methodology **Cross-Model Detection.** Our objective is to evaluate whether a classifier, initially trained to distinguish text produced by a source LLM from human-written text, can also detect text generated by a target LLM. We conduct a comprehensive evaluation by using LLMs with a range of sizes (from base models up to very large LLMs) from different families. We consider a model's family as a proxy for pretraining dataset variation, since apart from slight changes in model architecture, namely positional embeddings type, layer-norm order, or activation type, the only difference between the models from different families is the dataset used for pretraining. We also investigate the effect of Chat, Instruct and Reinforcement Learning from Human Feedback (RLHF) finetuning, which we refer to as conversational finetuning. This enables us to measure the generalization and transferability of the classifier across a diverse array of models. **Model Attribution.** We divide this task into three subtasks: * Source Model Identification: We first investigate the ability to identify the source for a given piece of text, the source being either a human-written text or a text generated by an LLM. * Model Family Classification: In the second investigation, we classify the source model into its corresponding family. This classification helps us understand how well a text can be attributed to a specific model family, and identifies instances where confusion arises between different model families. This task is a higher-level generalization of the Source Model Identification task. * Model Size Classification: Lastly, we examine the ability to determine the model size responsible for generating a given text. This experiment aims to determine whether it is feasible to discern whether the text was generated by a small or large LLM. This information is valuable for understanding how model size impacts the generated content. These investigations collectively contribute to a comprehensive understanding of model attribution in the context of our study. Our research methodology for investigating cross-model detection and model attribution involves synthetic text generated using Large Language Models (LLMs) selected from diverse families, sizes, and architectures. ## 4 Experimental Protocol ### LLM Choice We chose the following model families and sizes for our experiments for a total of 55 models: * **BLOOM**(Scao et al., 2022): 560M, 1.1B, 1.7B, 3B, 7.1B. * **Cereberas-GPT**(Dey et al., 2023): 111M, 256M, 1.3B, 2.7B, 6.7B, 13B. * **Falcon**(Almazrouei et al., 2023; Penedo et al., 2023): 7B, 40B. * **GPT-2**(Radford et al., 2019): 124M, 355M, 774M, 1.5B.
* **LLaMA**(Touvron et al., 2023a): 7B, 13B, 30B, 65B. * **LLaMA-v2**(Touvron et al., 2023b): 7B, 13B, 70B. * **MPT**(MosaicML, 2023): 7B, 30B. * **OPT**(Zhang et al., 2022): 125m, 350m, 1.3B, 2.7B, 6.7B, 13B, 30B, 66B. * **OpenLLaMA**(Geng and Liu, 2023): 3B, 13B. * **OpenLLaMA-v2**(Geng and Liu, 2023): 3B, 7B. * **Pythia**(Biderman et al., 2023): 70m, 160m, 410m, 1B, 1.4B, 2.8B, 6.9B, 12B. We select the following conversationally fine-tuned models to compare with their corresponding foundation models: * **Falcon-Instruct**(Almazrouei et al., 2023; Penedo et al., 2023): 7B and 40B. * **Alfred-0723**: 40B, an RLHF finetuned version of Falcon-40B. * **LLaMA-v2-Chat**(Touvron et al., 2023b): 7B, 13B and 70B, an RLHF finetuned version of LLaMA-v2. * **MPT-Chat**(MosaicML, 2023): 7B, 30B, based on MPT finetuned on a large selection of chat datasets. * **Vicuna-v1.3**(Zheng et al., 2023): 7B, 13B, 33B, based on LLaMA fine-tuned on user-shared conversations. ### Data-Generation We generate our data by prompting the different LLMs with the first 10 words of documents sampled from the OpenWebText dataset (Gokaslan et al., 2019). For conversational models, in addition to each model's specific prompt, we explicitly instruct the model to continue generation with the following prompt: _"Give the best continuation of the following text:"_, followed by the first 10 words from the sampled document. We use the HuggingFace Text Generation Inference server 1 to load all models using up to four 48GB NVIDIA GPUs for the largest models with float16 precision. The same set of hyper-parameters is used for all models: a maximum of 256 tokens per generation, with beam-search of size 5, repetition penalty of 1.0, temperature of 1.0, top-k of 10, top-p of 0.9 and typical sampling with 0.9. ### Data Splitting and Filtering We first split our data into 80% for training and 20% for validation. Then we filter each split to remove bad generations, by filtering (i) generations that are too short, (ii) generations that are repetitive and (iii) generations that contain apologies or sentences similar to "As an AI language model" (a minimal sketch of this filtering step is given below). To ensure a fair comparison between classifier trainings, we sample 800 samples for training and 200 samples for validation from all models, except both _pythia-70m_ models, _Cereberas-GPT-110m & 256m_ and _OPT-350m_. For negative human-generated samples, we sample a new set of texts (800 samples for training and 200 for validation) from the same OpenWebText dataset. #### 4.3.1 Cross-Model Detection Training Data For each LLM we merge its own train and test sets with the negative example sets for a total of 1600 training samples and 400 validation samples. To quantify classifier performance, and following [16, 17], we utilize the Area Under the Receiver Operating Characteristic Curve (AUC score). The AUC score provides a robust measure of the classifier's ability to distinguish between different models, taking into account both true positive and false positive rates. #### 4.3.2 Model Attribution Training Data We conduct three distinct investigations as mentioned earlier. We use the F1 score to evaluate classifier performance in all three tasks due to the imbalanced nature of our data. **Source Model Identification.** This task involves classifying the text into one of the 50 LLMs used in our experiments, spanning various families and sizes. We also include a class for human written text for a total of 51 classes.
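As an illustration of the filtering step of Section 4.3, the minimal sketch below applies the three criteria (too short, repetitive, apology/refusal); the thresholds and regular expressions are our own assumptions, since the exact values are not specified in the paper.

```python
import re
from collections import Counter

MIN_WORDS = 30           # assumed threshold for "too short"
MAX_LINE_REPEAT = 0.5    # assumed threshold for "repetitive"
REFUSAL_PATTERNS = [r"as an ai language model", r"\bi apologi[sz]e\b", r"\bi'm sorry\b"]

def keep_generation(text: str) -> bool:
    """Return True if a generation passes the three filters of Section 4.3."""
    words = text.split()
    if len(words) < MIN_WORDS:                        # (i) too short
        return False
    lines = [l.strip() for l in text.splitlines() if l.strip()]
    if len(lines) > 1:                                # (ii) repetitive: one line dominates
        top_count = Counter(lines).most_common(1)[0][1]
        if top_count / len(lines) > MAX_LINE_REPEAT:
            return False
    lowered = text.lower()                            # (iii) apologies / refusals
    return not any(re.search(p, lowered) for p in REFUSAL_PATTERNS)
```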
**Family Classification.** We group models into 12 classes, including one for human-written text, and then sub-sample the data to have a balanced 1600 training samples and 400 validation samples for each class. **Model Size Identification.** We bin the models into 6 bins: <1B, 1-5B, 5-10B, 10-20B, 20-50B, >50B. Refer to Appendix A for the class distribution. ### Classifier Our classifier of choice for all experiments uses the transformer encoder architecture, namely DeBERTaV3-base [14, 20]. All models are trained with a batch size of 32 and a learning rate of 2e-5 for 5 epochs. The classification experiments were conducted using five different random seeds, and the resultant scores were averaged to enhance the robustness of our findings, as this approach helps mitigate the potential impact of seed-specific variations on the results. Figure 1: 5-seed averaged AUC scores for a classifier trained on text from a source model (_Side axis_) and tested on text from a target model (_Top axis_). ## 5 Results ### Cross-Model Detection Results Figure 1 presents a heatmap of the AUC scores for the cross-model detection experiments. The side axis represents the classifier's source model, and the top axis represents the classifier's target model. We sort the models by their size (from left to right, top to bottom). From Figure 1, we observe several interesting patterns in the cross-model detection results: **Model Size Influence.** In general, our findings suggest a clear inverse relationship between the classifier's effectiveness and the size of the test models. The pattern is showcased better in Figure 2, which plots the self-detection and average AUC score trend lines against the model size. This pattern indicates that larger LLMs tend to pose a greater challenge for the classifier, particularly when the classifier is trained on data from a smaller source model. Notably, the detection performance on very large Language Models (LMs) tends to improve when the model is trained on data sourced from similarly sized large LMs. However, it is essential to highlight the trade-off: training only on very large LMs results in decreased performance in detecting smaller-sized models. Figure 2: Average target AUC scores vs model size. OLS trend lines are drawn for each model family. **Model Family Influence.** We observe that performance on detecting GPT-2 and LLaMA generated text tends to be slightly lower than for other model families _(refer to the corresponding heatmap columns and their means in Figure 1)_. This pattern suggests that the two model families are harder to detect relative to their similar-sized counterparts due to their superior language modeling capabilities, which make their output "closer" to human-written text. We can also observe that the performance of a classifier trained on text sourced from _pythia-160m_ and _opt-2.7b_ tends to be worse overall, while a classifier trained on text sourced from _Cereberas-GPT-6.7B_ performs better than similarly sized models _(refer to the corresponding heatmap rows in Figure 1)_. Figure 3: Conversational models cross-model detection with their foundation LLMs. AUC scores (5-seed averaged) for a classifier trained on text from a source model (_Side axis_) and tested on text from a target model (_Top axis_). **The lack of a discernible pattern in the cross-model detection performance across different model families may be attributed to the extensive overlap in their pretraining data**, with a predominant reliance on ThePile Gao et al.
(2020) dataset or its subsets across most models, supplemented by Common Crawl as the primary data source. Consequently, the primary distinguishing factor among these models lies in their respective data cleaning pipelines. **Influence of Conversational Finetuning.** Our experiments reveal a clear pattern in the cross-model detection results, as shown in Figure 3. Specifically, a classifier trained on text generated by chat models exhibits limited capability in detecting text from normal language models (LMs). However, it demonstrates improved performance when tasked with detecting other chat models. Notably, when trained on LLaMA 2 70B chat data, the classifier achieves the highest scores, albeit with a slight decline in detection accuracy when tested on chat models. This observation suggests that the LLaMA 2 70B chat model effectively follows instructions to continue input text. Surprisingly, training the classifier on vanilla LM output also yields commendable results in detecting these distinct model categories. These findings underscore the nuanced relationship between chat models and traditional language models in the context of detection. ### Model Attribution Results **Source Model Identification.** In the Model Attribution experiments, our objective was to investigate the ability of our classifier to identify the source model of generated text accurately. Figure 4 displays the confusion matrix for the Model Attribution experiments, where rows represent the true source models, and columns represent the predicted source models. We can draw the following conclusions from our results: * Human-generated text proved to be the most easily distinguishable source, as it exhibited minimal confusion, primarily with a few 30B+ Large Language Models (LLMs). * The majority of confusions occurred between models from the same family. We also notice that within a model family, the confusions tend to happen between models with adjacent sizes. * An interesting case was the confusion between GPT-2 models and Cereberas-GPT models. It's worth noting that both models share the same GPT-2 architecture but differ in their pretraining data, with Cereberas-GPT being trained on ThePile, which includes an open replication of the OpenWebText dataset. Figure 4: Normalized confusion matrix for Source Model Identification. 5-seed averaged and normalized by the predicted class support. Overall, our classifier achieved an F1-score of 17.7% across 44 distinct labels, indicating that LLMs leave detectable signatures, thus enabling source model identification. **Model Family Classification.** In the Model Family Classification experiments, our primary aim was to evaluate the classifier's efficacy in identifying the model family responsible for generating a given text. This assessment allows us to determine if distinct signatures emerge from the diverse pretraining data and architectural choices utilized by different language models. Figure 5 provides an overview of the Model Family Classification results. Notably, we observe that human-generated text exhibits the highest distinguishability from other model families, followed by the OPT model family. It's worth noting that this distinction might be partly influenced by the subpar generation quality of the OPT-125m model, which stands out and can be easily identified among the models as seen in Section 5.2. Furthermore, we notice a consistent confusion pattern between GPT-2 models and Cereberas-GPT models.
These two model families, sharing the GPT-2 architecture but differing in their pretraining data sources, appear to exhibit a higher degree of similarity in their generated text, leading to increased misclassifications. The overall F1-score across 12 distinct model family labels was 37%, underscoring the potential for detecting model family signatures. Figure 5: Normalized confusion matrix for model family classification. 5-seed averaged and normalized by the predicted class support. ### Model Size Classification In the Model Size Classification experiments, we aimed to assess the classifier's ability to determine the size category of the model responsible for generating a given text. This evaluation allows us to discern whether the differences in model sizes translate into detectable signatures in the generated text. As depicted in Figure 6, the results of the Model Size Classification experiment reveal a discernible pattern. Larger models consistently exhibit the least amount of confusion, while models with closely related sizes tend to be more frequently misclassified. An interesting exception to this pattern is observed in the case of the 10-20B size category, where the classifier tends to confuse other smaller models with it. In summary, the classifier achieves an overall F1-score of 38% across six distinct model size categories. Figure 6: Normalized confusion matrix for model size classification. 5-seed averaged and normalized by the predicted class support. ## 6 Discussion The experiments and results presented in this study provide valuable insights into the challenges and nuances of detecting and attributing text generated by different LLMs. In the cross-model detection experiments, we observed a clear inverse relationship between the effectiveness of the classifier and the size of the test models. Larger LLMs tend to be more challenging to detect, especially when the classifier is trained on data from smaller models. However, training on similarly sized LLMs can improve detection performance on larger models, although it may lead to decreased performance on smaller models. Interestingly, the performance varied across LLM families, with GPT-2 and LLaMA-generated text proving harder to detect due to their advanced language modeling capabilities. These findings emphasize the importance of considering both model size and family when developing detection strategies. In addition to the observations made in the cross-model detection experiments, we also conducted experiments to assess the influence of finetuned chat models, shedding light on the relationship between chat models and traditional language models in the context of detection. In the Model Attribution experiments, our classifier demonstrated the ability to identify the source model of generated text to a certain extent. Human-generated text was the most distinguishable, while confusions mainly occurred between models from the same family and between models with adjacent sizes. Furthermore, in Model Family Classification, the classifier showed promise in identifying the model family responsible for generating text, highlighting the potential for detecting distinct signatures arising from diverse pretraining data and architectural choices. This indicates that LLMs leave detectable signatures, enabling source model identification and model family classification. In Model Size Classification, we observed that larger models were less frequently misclassified, emphasizing the influence of model size on detection.
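For reference, the normalized confusion matrices of Figures 4-6 and the attribution F1-scores can be obtained with a short scikit-learn sketch such as the one below; this is an illustration of the evaluation step, not the exact code used in our experiments, and the macro averaging and 5-seed averaging shown in the comments are our assumptions.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score

def attribution_scores(y_true, y_pred, labels):
    """Macro F1 plus a confusion matrix normalized by predicted-class support,
    matching the normalization used in the attribution heatmaps."""
    f1 = f1_score(y_true, y_pred, labels=labels, average="macro")
    cm = confusion_matrix(y_true, y_pred, labels=labels, normalize="pred")
    return f1, cm

# Hypothetical usage: compute one matrix per random seed and average them.
# cms = [attribution_scores(y_val, preds[seed], labels)[1] for seed in range(5)]
# mean_cm = np.mean(cms, axis=0)
```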
Building upon the findings of Antoun et al. (2023), which demonstrated the challenging nature of identifying adversarial text composed in an academic, pedagogic, or encyclopedic style for state-of-the-art classifiers trained on a mixture of text generated by LLMs and human content, we also investigated how the detection of adversarial text could influence the trends we exposed earlier. As shown in Figure 7, in the Adversarial column, the results are massively inferior to the ones reported in our main experimental setting. The inherent out-of-domain distribution of this content2 compared to our main experiment setting may have indeed contributed significantly to this performance degradation. Nevertheless, it is worth noting that the top-five detection models, with F1-scores ranging from 80 to 72, mostly consist of models trained on text generated by smaller models and primarily from the BLOOM family. Footnote 2: We translated the original data set from French to English using Google Translate. As shown by Antoun et al. (2023), using translated text from English to French has no effect on the detectability of automatically generated content. We believe this result holds in the French to English direction. This observation suggests that these detectors are likely taking advantage of relevant textual features to distinguish between automatically generated text of lower quality and human-produced content. However, it is important to acknowledge that the results exhibit variability across models, with models of similar size encountering difficulties in this task, while larger model-trained classifiers also face challenges in this specific context. Further work is required to investigate the precise factors at play in this scenario. Our key takeaway is that our study was conducted within a controlled environment, aiming to single out variable influences. Therefore, the level of performance we demonstrated should not be interpreted as indicative of real-world expectations for this task. Overall, our results underscore the complex interplay between model size, family, and training data in the context of LLM detection and attribution. We provide all our experiment results in an interactive online repository [https://huggingface.co/spaces/wissamantoun/LLM_Detection_Attribution](https://huggingface.co/spaces/wissamantoun/LLM_Detection_Attribution). ## Acknowledgements This work was partly funded by Benoit Sagot's chair in the PRAIRIE institute funded by the French national research agency (ANR) as part of the "Investissements d'avenir" programme under the reference ANR-19-P3IA-0001. This work also received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No. 101021607. The authors are grateful to the OPAL infrastructure from Universite Cote d'Azur for providing resources and support. We would also like to thank Francis Kulumba, Arij Riabi, and Roman Castagne for the productive discussions.
2309.05540
On the triviality of the shocked map
The (non-spanning) tree-decorated quadrangulation is a random pair formed by a quadrangulation and a subtree chosen uniformly over the set of pairs with prescribed size. In this paper we study the tree-decorated quadrangulation in the critical regime: when the number of faces of the map, $f$, is proportional to the square of the size of the tree. We show that with high probability in this regime, the diameter of the tree is between $o(f^{1/4})$ and $f^{1/4}/\log^\alpha(f)$, for $\alpha >1$. Thus after scaling the distances by $f^{-1/4}$, the critical tree-decorated quadrangulation converges to a Brownian disk where the boundary has been identified to a point. These results imply the triviality of the shocked map: the metric space generated by gluing a Brownian disk with a continuous random tree.
Luis Fredes, Avelio Sepúlveda
2023-09-11T15:24:08Z
http://arxiv.org/abs/2309.05540v1
# On the triviality of the shocked map ###### Abstract. The (non-spanning) tree-decorated quadrangulation is a random pair formed by a quadrangulation and a subtree chosen uniformly over the set of pairs with prescribed size. In this paper we study the tree-decorated quadrangulation in the critical regime: when the number of faces of the map, \(f\), is proportional to the square of the size of the tree. We show that with high probability in this regime, the diameter of the tree is between \(o(f^{1/4})\) and \(f^{1/4}/\log^{\alpha}(f)\), for \(\alpha>1\). Thus after scaling the distances by \(f^{-1/4}\), the critical tree-decorated quadrangulation converges to a Brownian disk where the boundary has been identified to a point. These results imply the triviality of the shocked map: the metric space generated by gluing a Brownian disk with a continuous random tree. ## 1. Introduction The (non-spanning) tree-decorated map was introduced in [10]. It consists of a uniformly chosen couple \((\mathbf{q},\mathbf{t}^{\mathsf{M}})\), where \(\mathbf{q}\) is a planar quadrangulation with \(f\) faces and \(\mathbf{t}^{\mathsf{M}}\) is a subtree of \(\mathbf{q}\) with \(k\) edges containing the root of \(\mathbf{q}\). The main reason for its introduction was to propose a model that interpolates between the uniformly chosen planar quadrangulation and the spanning-tree decorated quadrangulation. This is interesting as these two models belong to different universality classes, as shown in [11, 12, 13, 14]. Figure 1. A simulation, based on the bijection introduced in [10], of a uniformly chosen tree-decorated quadrangulation. In this work, we discuss the scaling limit of the tree-decorated map when \(k\propto\sqrt{f}\). Let us first justify why this regime should be critical. In [11], we introduced a bijection between the set of tree-decorated quadrangulations with \(f\) faces decorated by a tree with \(k\) edges, and the Cartesian product of two sets: the set of planar trees of size \(k\) and the set of planar quadrangulations with a simple boundary with \(f\) internal faces and boundary of size \(2k\). The bijection is simple and can be informally understood as a "gluing" of the boundary using the equivalence relationship defined by the tree. This bijection is close to the one found in [1, 1, 1] to study the planar maps decorated by a self-avoiding walk. The bijection of [11] gives interesting information about the possible scaling limits of tree-decorated maps. For example, as the tree chosen is always uniform, it is clear that the tree \(\mathbf{t}^{\mathsf{M}}\) (properly normalised) converges to a CRT as long as its size, \(k\), goes to infinity. The planar map \(\mathbf{q}\), however, is trickier. In this case, it is interesting to study the case of the uniform quadrangulation with a simple boundary \(\mathbf{q}_{s}\) that appears in the bijection. We start with a more elementary case: the uniform quadrangulation \(\mathbf{q}_{b}\) with general boundary of size \(2k\). It is known, thanks to [1, 1], that this model undergoes a phase transition when \(k\asymp\sqrt{f}\): \[f^{-\alpha}\mathbf{q}_{b}\asymp\begin{cases}\text{Brownian map}&\text{if }k \ll\sqrt{f}\\ \text{Brownian disk}&\text{if }k\asymp\sqrt{f}\\ \text{Continuum random tree}&\text{if }k\gg\sqrt{f},\end{cases}\] where we write \(cM=(M,c\mathbf{d}_{M})\) for a rescaled discrete metric space \((M,\mathbf{d}_{M})\). The simple boundary case is expected to be similar.
The first two points are treated in [1, 1]; however, the third point is still not proven. The phase transition of quadrangulations with a simple boundary \(\mathbf{q}_{s}\) allows us to conjecture that the bijection of [11] also works in the scaling limit when \(k\approx\sigma\sqrt{f}\). In fact, we call _the shocked map_ the result of the bijection of [11] in the continuum (see Definition 2.6). Thus, a shocked map is composed of a pair \((\mathcal{S},\mathbf{T}^{\mathsf{S}})\) where \(\mathcal{S}\) is a continuous compact metric space and \(\mathbf{T}^{\mathsf{S}}\) is a subset of \(\mathcal{S}\). ### Main results The main result of this paper is that the shocked map is trivial. More precisely, after doing the gluing, the metric space \(\mathcal{S}\) does not reduce to a point, but the set \(\mathbf{T}^{\mathsf{S}}\) does; i.e. after the gluing the tree contracts into a point. This result is summarized in the following theorem. **Theorem 1.1**.: _Let \((\mathcal{S},\mathbf{T}^{\mathsf{S}})\) be a shocked map. One has that \(\mathcal{S}\) is the Brownian disk where the boundary is identified to a point and \(\mathbf{T}^{\mathsf{S}}\) corresponds to that point._ As a consequence of this theorem, we can describe the scaling limit of the critical tree-decorated map and obtain an upper bound for the asymptotic behavior of the diameter of the tree with respect to the metric of the map. This upper bound is tight up to a log correction, as seen in the following theorem. **Theorem 1.2**.: _Let \((\mathbf{q}_{f},\mathbf{t}^{\mathsf{M}}_{\sigma})\) be a tree-decorated quadrangulation where \(\mathbf{q}_{f}\) has \(f\) faces and the tree \(\mathbf{t}^{\mathsf{M}}_{\sigma}\) is of size \(\sigma\sqrt{f}\), with \(0<\sigma<\infty\). Then, \((8f/9)^{-1/4}(\mathbf{q}_{f},\mathbf{t}^{\mathsf{M}}_{\sigma})\) converges in law for the Gromov-Hausdorff-Uniform topology 1 towards \((\mathcal{S},\mathbf{T}^{\mathsf{S}})\). Furthermore, for any \(\alpha>1\) and \(\varepsilon>0\), with high probability as \(f\to\infty\),_ Footnote 1: The definition of this topology is given in Section 2.3.2. For more information see for example [1] \[\frac{f^{1/4}}{(\log(f))^{\alpha}}\leq diam(\mathbf{t}^{\mathsf{M}}_{\sigma}) \leq\varepsilon f^{1/4}. \tag{1.1}\] The bounds in (1.1) are proven in Proposition 5.4. Theorem 1.2 follows directly from this. Proof of the convergence assuming (1.1).: The sequence is pre-compact thanks to Remark 2.3. Now, notice that when renormalising by \(f^{1/4}\), the diameter of the tree \(\mathfrak{t}_{\sigma}^{\mathsf{M}}\) converges to \(0\), meaning that it contracts to one point as \(f\to\infty\), and notice that every path having empty intersection with the decoration keeps its length. This lets us upper and lower bound the distances by the distances of the shocked map as \(f\to\infty\), giving the equality. To prove Theorem 1.1, we first work in the continuous infinite volume version of the model. We start with \(\mathbf{H}\) a Brownian half-plane and \(\mathbf{T}^{\mathsf{S}}\) a bi-infinite tree, and we glue them according to boundary length. Then, we show, using Kingman's subadditive ergodic theorem, that there exists a deterministic constant \(c\) so that a.s. \(\lim n^{-1}d(0,\tau_{n})=c\). Here, \(\tau_{n}\) is the point in the right infinite branch at distance \(n\) from \(0\). Afterward, we show that \(c\) is equal to \(0\) by studying how this distance behaves on an event with positive probability.
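To illustrate the shape of this argument (subadditivity giving an a.s. limit via Kingman's theorem, plus a separate estimate identifying the limit as \(0\)), here is a toy numerical sketch that is not part of the actual proof: the Euclidean norm of a centred planar random walk is subadditive with stationary increments, so \(n^{-1}X_{n}\) converges a.s. to a constant, and the diffusive scaling \(X_{n}\approx\sqrt{n}\) shows that this constant is \(0\).

```python
# Toy analogue of the two-step argument above (not the actual proof): X_n is
# subadditive with stationary increments, so Kingman's theorem gives
# n^{-1} X_n -> c a.s.; the diffusive scale sqrt(n) identifies c = 0.
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
steps = rng.choice([-1, 1], size=(n, 2))   # centred i.i.d. increments in Z^2
walk = np.cumsum(steps, axis=0)
X = np.linalg.norm(walk, axis=1)           # X_{m+n} <= X_m + |S_{m+n} - S_m| (triangle inequality)

for k in (10**3, 10**4, 10**5, 10**6):
    print(k, X[k - 1] / k)                 # tends to 0, roughly like k**(-1/2)
```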
We conclude that the diameter is \(0\) by using the rerooting invariance of our objects and that \(d(0,\tau_{n})n^{-1}\) is equal in law to \(d(0,\tau_{1})\). As a consequence, we see that when one glues \(\mathbf{H}\) with an infinite CRT the diameter of the image of CRT is \(0\), as the distance in this case are smaller than in the former one. To obtain the results in the finite volume case, we use the ideas of [1] to see that when one only explores a part of the infinite CRT and of the boundary of the Brownian half-plane, their laws are absolutely continuous with respect to those of the CRT and the Brownian disk respectively. This shows that the diameter of the explored parts of the trees are also zero. The upper bound of Theorem 1.2 is a direct consequence of Theorem 1.1, as discrete distances are smaller than their continuum counterparts. However, the lower bounds can only be obtained from the discrete. To do this, we work in the infinite volume limit of the tree-decorated map. In this case, one can see that the furthest point in the branch of the infinite tree that belongs to the ball of centre \(0\) and radius \(n\) is stochastically dominated by the sum of \(n\) i.i.d. positive random variables with tail decreasing as \(x^{-1}\). This implies that this furthest point has distance smaller than \(n\log^{\alpha}(n)\). Again, the finite volume case follows by absolute continuity. The paper is organised as follows, we start with the preliminaries, where we introduce all the necessary objects and results for what follows. Then, in Section 3 we work on the case where the map is continuous and the volume is infinite to see that the diameter of the tree is \(0\). In Section 4, we work in the case where the map is discrete and the volume is infinite, to obtain lower bounds on the distances in the infinite spine. In Section 5 and 6, we work in the case where the volume of the map is finite and the map is discrete and continuous respectively. **Acknowledgements**.: We thank Armand Riera for fruitful discussions. The research of A.S is supported by Grant ANID AFB170001, FONDECYT iniciacion de investigacion N\({}^{*}\) 11200085 and ERC 101043450 Vortex. Part of this work was done while L.F was working at University Paris-Saclay, he acknowledges support from ERC 740943 GeoBrown. ## 2. Preliminaries ### Planar maps In this section, we present the elementary concepts that appear in this work. For an introduction to planar maps, we recommend [1, 1, 2]. A rooted planar map, or map for short, is a finite connected graph embedded in the sphere that has a marked oriented edge. We consider that two embeddings as the same map if there is an homeomorphism between them that preserves the orientation (i.e. respecting a cyclic order around every vertex). We call root edge the marked oriented edge and root vertex its starting point. We denote \(v_{0}\) the root vertex. A map \(\mathfrak{m}_{1}\) is said to be a submap of \(\mathfrak{m}_{2}\) (with the notation \(\mathfrak{m}_{1}\subset_{M}\mathfrak{m}_{2}\)) if \(\mathfrak{m}_{1}\) can be obtained from \(\mathfrak{m}_{2}\) by suppressing edges and vertices. This definition implies that \(\mathfrak{m}_{1}\) respects the cyclic order of \(\mathfrak{m}_{2}\) in the vertices and edges remaining. A decorated map is a map with a special submap. The faces of a map are the connected components of the complement of the edges in the embedding. The degree of a face is the number of oriented edges for which the face lies at its left. 
In this work, we only work with quadrangulations where all faces (except maybe on one) have degree equal to \(4\). The face to the left of the root edge is called the root face. In what follows, maps with a boundary are maps, \(\mathfrak{m}^{b}\), where the root face plays a special role: it has arbitrary degree \(2m\). The set of oriented edges that have the root face to its left are called the boundary. The number of oriented edges in the boundary will be called its size. The boundary will be seen as \(\gamma^{\mathbf{b}}:\mathbf{S}^{1}\mapsto\mathfrak{m}^{b}\) a cyclic path of vertices mapping each of the \(2m\) roots of the unity to each vertex appearing in counter-clockwise sense starting on the root vertex. All faces different from the root face are called internal faces. When the boundary of the map is simple, i.e., the boundary is not vertex-intersecting, the curve \(\gamma^{\mathbf{b}}\) is a bijection. We call the label of a vertex \(v\) of the boundary the appearance number while going counter-clockwise on the boundary starting by the root vertex. We also label the boundary edges as the label of the vertex where they start from. A rooted plane tree of size \(m\), or tree for short, is a planar map with only one face and \(m\) edges. We will encode plane trees using walks. In the literature, one can find several of these codings, see for example [15, Section 1]. Here we are interested in the contour function. This is a bijection that associates to each rooted plane tree with \(m\) edges a Dyck path \(C\) indexed by \(\llbracket 0,2m\rrbracket\). For a more detailed description we refer to [15, Section 1.1] and [16, Section 2]. A tree \(\mathfrak{t}\) has an intrinsic way of visiting all of its oriented-edges. This visit can be represented by a cyclic path \(\gamma^{\mathfrak{t}}:\mathbf{S}^{1}\to\mathfrak{t}\) that represents a walker that starts from the root vertex and turns around the tree (see Figure 2) associating with each of the \(2m\) roots of unity the vertices that he visits. The walker follows the direction of the root edge touching the tree with his left hand2 as long as it walks. The walker, then, continues until he returns to the root edge. Note that this walk visits every oriented edge only once. Now, we define the contour function of \(\mathfrak{t}\) as \(C^{\mathfrak{t}}:\llbracket 0,2m\rrbracket\to\mathbb{N}\) the function for which \(C^{\mathfrak{t}}(n)\) is the distance to the root vertex (height) of the vertex visited at time \(n\) by the walker (time \(0\) for the root vertex). In this case, the inverse is given by the pseudo-distance in \(\llbracket i,j\rrbracket\) Footnote 2: Note that, in the literature, the walker usually walks following its right hand. In this work, the left hand convention simplify some statements. \[\mathsf{d}_{C}(i,j)=C(i)+C(j)-2\min_{\ell\in\llbracket i,j\rrbracket}C(\ell). \tag{2.1}\] Finally, we define a tree-decorated map as a pair \((\mathfrak{m},\mathfrak{t}^{\mathsf{M}})\) where \(\mathfrak{m}\) is a map (without a boundary) and \(\mathfrak{t}^{\mathsf{M}}\subset_{M}\mathfrak{m}\) is a tree of size \(m\). In this work we will study tree-decorated quadrangulations \((\mathbf{q},\mathfrak{t}^{\mathsf{M}})\), which are tree-decorated maps where \(\mathbf{q}\) is a quadrangulation. Furthermore, for this work we require that the root edge of \(\mathfrak{t}^{\mathsf{M}}\) coincides with the edge-root of \(\mathbf{q}\). 
This allows us to represent \((\mathbf{q},\mathfrak{t}^{\mathsf{M}})\) as a pair \((\mathbf{q},\gamma^{\mathbf{q}})\), where \(\gamma^{\mathbf{q}}:\mathbf{S}^{1}\to\mathfrak{t}^{\mathsf{M}}\subseteq_{M} \mathbf{q}\) is the cyclic path that starts from the root and represents the walker following the contour of the tree \(\mathfrak{t}^{\mathsf{M}}\) (again it associates to the \(2m\) roots of unity the vertices appearing on the path starting from the root vertex and following the root edge). In words, when the objects are decorated in our setting they define curves implicitly as explained before, i.e. \(\gamma:E\to M\), with \(E\) equal to \(\mathbf{S}^{1}\) or \(\mathbb{R}\), plus the condition that the curve extends continuously to \([-\infty,\infty]\). **Remark 2.1**.: _All the curves previously introduced are curves from discrete spaces, but we can transform them into curves of continuous spaces as follows. We linearly interpolate the discrete graph metric spaces along the edges by identifying each edge with a copy of the interval \([0,1]\). We extend the graph metric in such a way that a path \(\gamma\) in \([\![a,b]\!]\) is linearly interpolated to \([a-1,b]\) by traversing each edge \(\gamma(i)\) in the path at unit speed during \([i-1,i]\)._ Rigorously, for a metric space \((M,d)\), we consider \(C_{0}(E,M)\) the space of continuous curves \(\gamma:E\to M\). Notice that each curve defined on an interval \([a,b]\) can be seen as an element of \(C_{0}(\mathbb{R},M)\) by considering \(\gamma(t)=\gamma(a)\) for \(t<a\) and \(\gamma(t)=\gamma(b)\) for \(t>b\). To compare two curves \(\gamma_{1},\gamma_{2}\in C_{0}(E,M)\) we use the uniform metric \(\mathsf{d}_{U}\) defined as \[\mathsf{d}_{U}(\gamma_{1},\gamma_{2})=\sup_{t\in E}d(\gamma_{1}(t),\gamma_{2}(t)).\] In the case of trees, we decorate the metric space by \(\gamma^{\mathbf{t}}\). For the case of maps with a boundary, we decorate them by \(\gamma^{\mathbf{b}}\), the curve that starts from the root and follows the boundary at constant speed. In the case of tree-decorated maps, we decorate them with the "Peano"-type curve \(\gamma^{\mathbf{q}}\) associated to the contour exploration of the tree in the map. In all of these cases \(E=\mathbf{S}^{1}\). In this work, we also need to work in cases where \(E=\mathbb{R}\). In these cases, we need to work with the truncation of the curve. To define this truncation, first take \(r>0\) and define \[\underline{\tau}_{r}^{\gamma}=(-r)\vee\sup\{t<0:d(\gamma(0),\gamma(t))=r\}\ \text{ and }\ \overline{\tau}_{r}^{\gamma}=r\wedge\inf\{t>0:d(\gamma(0),\gamma(t))=r\} \tag{2.2}\] where \(d\) is the metric of the space in which \(\gamma\) is embedded. Then, the \(r\)-truncation of \(\gamma\) is the curve \[\gamma_{r}(t)=\begin{cases}\gamma(\underline{\tau}_{r}^{\gamma})&\text{ if }t< \underline{\tau}_{r}^{\gamma}\\ \gamma(t)&\text{ if }t\in[\underline{\tau}_{r}^{\gamma},\overline{\tau}_{r}^{ \gamma}]\\ \gamma(\overline{\tau}_{r}^{\gamma})&\text{ if }t>\overline{\tau}_{r}^{\gamma}.\end{cases}\] We denote by \(\mathcal{B}_{r}(M,u)\) the ball of radius \(r\) centred at \(u\) in \(M\) and we set \(\mathcal{B}_{r}(M)\) as the ball centred at the root vertex. We also define the \(r\)-truncation of \((M,\gamma)\) as the curve-decorated metric space \(\mathcal{R}_{r}(M,\gamma):=(\mathcal{B}_{r}(M),\gamma_{r})\), where the metric in \(\mathcal{B}_{r}(M)\) is the infimum, with respect to the metric of \(M\), over paths completely contained in \(\mathcal{B}_{r}(M)\). Finally, let us make a remark regarding the notation. At the discrete level, we will only work with quadrangulations, and because of this the notation of some of the elements associated to them has a superscript \(\mathfrak{q}\). Since in the scaling limit settings these maps converge to metric spaces \(M\) that have no "quadrangular" nature, we change the superscript \(\mathfrak{q}\) to \(M\) in the notation of the continuum objects. Additionally, in our notation, to disambiguate some cases we use lower case letters for discrete objects and upper case letters for continuous objects. Figure 2. Tree with part of the contour in green; the corners are numbered as \(c_{i}\) and are shown in gray. All the corners belonging to the same circle belong to the same equivalence class associated with a vertex, for example \(v_{2}=[c_{2}]_{c}=[c_{4}]_{c}=[c_{6}]_{c}\).
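Before moving to the gluing, the following minimal sketch (in Python, with an ad hoc nested-list encoding of plane trees that is not taken from the paper) makes the contour coding of Section 2.1 concrete: it computes the contour function \(C\) of a plane tree and recovers graph distances through formula (2.1), which is exactly the identification used in the bijection below.

```python
def contour(tree):
    """Contour function C on [0, 2m] of a rooted plane tree given as nested lists
    (each node is the list of its children, visited in planar order)."""
    C = [0]
    def visit(node, height):
        for child in node:
            C.append(height + 1)   # walk down the edge towards the child
            visit(child, height + 1)
            C.append(height)       # walk back up to the current vertex
    visit(tree, 0)
    return C

def d_C(C, i, j):
    """Pseudo-distance of equation (2.1) between the vertices visited at times i and j."""
    lo, hi = min(i, j), max(i, j)
    return C[i] + C[j] - 2 * min(C[lo:hi + 1])

# Example: root with two children, the first of which has one child (m = 3 edges).
C = contour([[[]], []])      # C = [0, 1, 2, 1, 0, 1, 0]
assert d_C(C, 2, 5) == 3     # grandchild to second child: up, up, down
```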
Finally, let us make a remark regarding the notation. At the discrete level, we will only work with quadrangulations and because of this the notation of some of the elements associated to them have a superscript \(\mathfrak{q}\). Since in the scaling limit settings these maps converge to metric spaces \(M\) that have no "quadrangular" nature, we change the superscript \(\mathfrak{q}\) by \(M\) in the notation of the continuum objects. Additionally, in our notation to disambiguate some cases we use lower case letters for discrete object and upper case letters for continuous objects. Figure 2. Tree with part of the contour in green, the corners are numbered as \(c_{i}\) and are shown in gray. All the corners belonging to the same circle belong to the same equivalence class associated with a vertex, for example \(v_{2}=[c_{2}]_{c}=[c_{4}]_{c}=[c_{6}]_{c}\). ### The tree-decorated map and its bijection In this subsection, we will describe the so-called gluing procedure introduced in [10]. We direct the reader to Section 3 of that paper to see the proofs and details. Here we give a short summary of the bijection. Take a couple \((\mathfrak{q}^{b},\mathfrak{t})\) of a quadrangulation with a simple boundary of size \(2k\) and \(f\) faces and a tree of size \(k\). We want to construct \((\mathfrak{q},\mathfrak{t}^{\mathsf{M}})\) a tree-decorated quadrangulation with \(f\) faces and a tree of size \(k\). Recall that the vertices of the external face of \(\mathfrak{q}^{b}\) are indexed from \(0\) to \(2k-1\), and call \(C\) the contour function of \(\mathfrak{t}\). The function \(C\) induces an equivalent relation on vertices via the zeros of equation (2.1), and define \(V^{\prime}\) as the set of equivalence classes. Let us now construct \(\mathfrak{q}\). The vertex set is made by the union of \(V^{\prime}\) with all vertices of \(\mathfrak{q}^{b}\) that do not belong to the exterior face. The edge set of \(\mathfrak{q}\) is constructed from those of \(\mathfrak{q}^{b}\) in the following way. Let \((x,y)\) be an oriented edge of \(\mathfrak{q}^{b}\), then the edge \((G(x),G(y))\) is in \(\mathfrak{q}^{b}\), where \[G(x):=\begin{cases}[l]&\text{ if }x\in V^{\prime},\\ x&\text{ else,}\end{cases} \tag{2.3}\] where \(l\) is the label of \(x\) in the boundary, and \([l]\) is the equivalence class of \(l\) under the equivalence relation defined by \(C\). **Remark 2.2**.: _Note that because the boundary of \(\mathfrak{q}^{b}\) is simple, the vertices \(V^{\prime}\) have the same tree-structure in \(\mathfrak{q}\) as in \(\mathfrak{t}\)._ ### Topologies and convergences Now, we describe the topologies that will be used throughout this paper. In this work, we will deal with two types of limits: local limits and scaling limits. For the local limits we use the Benjamini-Schramm topology, and for the scaling limits we use the Gromov-Hausdorff topology in the compact case and the local Gromov-Hausdorff in the non-compact case. When object are decorated, we will consider them decorated by curves, and therefore we will use a strengthen version of these topologies. #### 2.3.1. The Benjamini-Schramm Uniform topology The Benjamini-Schramm uniform topology allows us to describe the local limit of a sequence of curved decorated maps \((\mathfrak{m}_{n},\gamma_{n})\). We say that \((\mathfrak{m}_{n},\gamma_{n})\) converges to \((\mathfrak{m},\gamma)\) if for any \(R>0\), we have that \[\mathcal{R}_{r}(\mathfrak{m},\gamma)=\mathcal{R}_{r}(\mathfrak{m}_{n},\gamma_ {n})\] for all \(n\) big enough. 
This topology is metrizable, nevertheless we will not need the exact description of its metric. The (undecorated) Benjamini-Schramm topology arises as the decorated Benjamini-Schramm topology when the curve is constant equal to the root. Figure 3. A simple sketch of the gluing procedure. #### 2.3.2. The Gromov-Hausdorff Uniform topology This topology will be used to describe the limit of curve-decorated sequence of maps \((\mathfrak{m}_{n},\gamma_{n})\) in the compact case. We define the following distance between two decorated metric spaces \((\mathfrak{m}_{1},\gamma_{1})\) and \((\mathfrak{m}_{2},\gamma_{2})\), \[\mathsf{d}_{GHU}((\mathfrak{m}_{1},\gamma_{1}),(\mathfrak{m}_{2},\gamma_{2}))= \inf_{\phi_{1},\phi_{2}}\{\mathsf{d}_{Haus}(\phi_{1}(\mathfrak{m}_{1}),\phi_{ 2}(\mathfrak{m}_{2}))+\mathsf{d}_{Unif}(\phi_{1}(\gamma_{1}),\phi_{2}(\gamma_{ 2})),\}\] where the infimum is taken over all isometries \(\phi_{1}\) and \(\phi_{2}\) taking \(\mathfrak{m}_{1}\) and \(\mathfrak{m}_{2}\) (respectively) to a common metric space. Here, the Haussdorf distance \(\mathsf{d}_{Haus}\) is a distance between closed set of a given metric space \(E\) and is defined as \[\mathsf{d}_{Haus}(C,C^{\prime})=\inf\{\varepsilon>0:C\subseteq C^{\prime}+B_{ \varepsilon}(E),C^{\prime}\subseteq C+B_{\varepsilon}(E)\}.\] The (undecorated) Gromov-Hausdorff topology appears in this context as the decorated Gromov-Hausdorff topology when the curve is constant equal to the root. **Remark 2.3** (**GhU Compactness**).: _From Lemma 2.6 in [1], if a set \(S\) satisfies: \(S\) compact in the Gromov-Hausdorff sense and \(S\) equicontinuous, then \(S\) is pre-compact in the Gromov-Hausdorff Uniform topology. Here equicontinuous applies for the curves; and of course it depends on \(E\) and its topology._ #### 2.3.3. The Local Gromov-Hausdorff uniform topology This topology will be used to describe the limit of curve-decorated sequence of maps \((\mathfrak{m}_{n},\gamma_{n})\) in the non-compact case. We define the following distance between two decorated metric spaces \((\mathfrak{m}_{1},\gamma_{1})\) and \((\mathfrak{m}_{2},\gamma_{2})\), \[\mathsf{d}_{LGHU}((\mathfrak{m}_{1},\gamma_{1}),(\mathfrak{m}_{2},\gamma_{2}) )=\int_{0}^{\infty}e^{-r}(1\wedge d_{GHU}(\mathcal{R}_{r}(\mathfrak{m}_{1}, \gamma_{1}),\mathcal{R}_{r}(\mathfrak{m}_{2},\gamma_{2})))dr\] The (undecorated) local Gromov-Hausdorff topology appears in this context as the decorated Gromov-Hausdorff topology when the curve is constant equal to the root. ### Infinite trees We discuss now about the limit of plane trees in different topologies. The first limit that one can study is the local limit. One object that arise naturally as this type of limit is the infinite critical geometric tree. **Definition 2.4**.: _The infinite critical geometric tree \(\mathbf{t}_{\infty}\) is defined by the following construction_ 1. _Take a copy of the graph given by the natural numbers_ \(\mathbb{N}\)_, this is called the spine. We denote the elements of the spine_ \(\tau_{-n}\)_._ 2. _Associate to each vertex_ \(v\) _of the spine two critical geometric GW trees and hang one to the positive and one to the negative half-plane with_ \(v\) _as the root of both trees._ 3. _Root the tree in the edge_ \(\overline{\tau_{0}\tau_{-1}}\)_._ The way to obtain this object as limit is presented in the following theorem. **Theorem 2.5** (Lemma 1.14 [11] and Proposition 5.28 [12]).: _Let \(\mathbf{t}_{m}\) be a uniform tree with \(m\) edges. 
One has that \(\mathbf{t}_{m}\to\mathbf{t}_{\infty}\) in law for the Benjamini-Schramm topology as \(m\to\infty\). The resulting random object \(\mathbf{t}_{\infty}\) is an a.s. one-ended tree called the infinite critical geometric tree._ **Definition 2.6**.: _For an infinite critical geometric tree \(\mathbf{t}_{\infty}\) and \(m\in\mathbb{R}\) we define \(\mathbf{t}_{\infty}(m)\) the finite tree created by all the vertices of the spine that are at distance smaller than or equal to \(m\) to the root and all the trees attached to them._ It is useful for us to construct trees using the so-called contour function. **Definition 2.7** (Real trees).: _Let \(f:I\subseteq\mathbb{R}\to\mathbb{R}\) be a continuous function. For \(t_{1},t_{2}\in I\) we define the following pseudo-distance_ \[\mathsf{d}_{f}(t_{1},t_{2})=f(t_{1})+f(t_{2})-2\inf_{s\in[\![t_{1},t_{2}]\!]} f(s).\] _We define the tree coded by \(f\) as the metric space \((\mathcal{T}_{f},\mathsf{d}_{f})\) consisting of \(I/\{\mathsf{d}_{f}=0\}\) and we associate as the root the equivalence class of \(0\) under \(\mathsf{d}_{f}\). Notice that \(\mathsf{d}_{f}\) in this space induces a metric (that we also call \(\mathsf{d}_{f}\))._ We can use this to write the infinite critical geometric tree by means of a simple random walk3. Footnote 3: In this paper, all discrete functions will be interpolated linearly so they are continuous. **Proposition 2.8**.: _Let \(X^{+}\) and \(X^{-}\) be two independent simple random walks taking value \(0\) at \(0\). Define_ \[X(t)=\begin{cases}X^{+}(t)&\text{ if }t\geq 0,\\ X^{-}(-t)&\text{ if }t<0.\end{cases}\] _Then, \((\mathcal{T}_{X},\mathsf{d}_{X})\) is the infinite critical geometric tree4._ Footnote 4: Here we represent the infinite critical geometric tree as a metric space where the edges are replaced by copies of \([0,1]\). First of all, notice that the pseudo-distance gives an infinite tree with a unique infinite branch. To show that the distribution of the simple random walk description coincides with the one given in Definition 2.4, we use the decomposition of the simple random walk by records. Starting from \(0\), consider \(X_{t}\) up to the first time \(\tau_{-1}\) it hits \(-1\). It is well known5 that \(\mathbb{P}(\tau_{-1}=2n+1)\) is equal to the probability of a critical geometric GW tree having size \(n\). We define, for \(i\geq 2\), the record \(\tau_{-i}\) as the first time that \(X_{\tau_{-(i-1)}+t}\) hits \(-i\). We identify that the pseudo-distance associates to each record of the simple random walk a tree in the positive half-plane with the same distribution as the critical geometric GW tree. It is clear that one can do the same for the negative part, giving trees in the negative half-plane distributed as the critical geometric GW tree. Footnote 5: Consult [15, Prop. 1.5.] together with the bijection between the Lukasiewicz function and the contour function of a tree. Now, we introduce the bi-infinite critical geometric tree, which can roughly be seen as unzipping the infinite critical geometric tree along the spine. **Definition 2.9**.: _The bi-infinite critical geometric tree \(\mathfrak{t}^{\infty}_{-\infty}\) is constructed as follows_ 1. _Take a copy of the graph given by the integer numbers_ \(\mathbb{Z}\)_, we call it the bi-infinite spine. We denote the element associated to_ \(n\in\mathbb{Z}\) _in the bi-infinite spine_ \(\tau_{-n}\)_._ 2.
_Associate to each vertex_ \(v\) _of the spine one critical geometric GW tree and hang it in the positive half-plane with respect to the line_ \(\mathbb{Z}\)_._ 3. _Root the tree in the edge_ \(\overline{\tau_{0}\tau_{-1}}\)_._ **Definition 2.10**.: _For a bi-infinite critical tree \(\mathfrak{t}^{\infty}_{-\infty}\) and \(m\in\mathbb{R}\) we define \(\mathfrak{t}^{\infty}_{-\infty}(m)\) the finite tree created by all the vertices of the (negative and positive) spines that are at distance smaller than or equal to \(m\) to the root and all the trees attached to them._ Again we can construct the bi-infinite critical geometric tree by means of a simple random walk as follows. **Proposition 2.11**.: _Let \(Y^{+}\) and \(Y^{-}\) be two independent simple random walks started at height 0 with \(Y^{+}\) conditioned to be positive. Define_ \[Y(t)=\begin{cases}Y^{+}(t)&\text{ if }t\geq 0,\\ Y^{-}(-t)&\text{ if }t<0.\end{cases}\] _Then, \((\mathcal{T}_{Y},\mathsf{d}_{Y})\) is the bi-infinite critical geometric tree._ This follows from the same idea as in the deduction of Proposition 2.8, with the difference that records in the negative part have to be taken as the last time the random walk escapes each positive level. Another way to understand how big a uniform tree looks is through a renormalisation of the distance of the tree, so that its diameter remains of constant order. We do this by dividing the distance of \(\mathbf{t}_{m}\) by \(\sqrt{m}\), leading to a limiting object which is continuous and has finite volume, called the continuum random tree (CRT). **Definition 2.12**.: _Let \((\mathtt{e}_{t}:t\in[0,1])\) be a Brownian excursion. The CRT, \(\mathcal{T}\), is the random tree (metric space) defined by \((\mathcal{T}_{\mathrm{e}},\mathsf{d}_{\mathrm{e}})\)._ The image of the Lebesgue measure of the map from \([0,1]\) to \(\mathcal{T}_{\mathrm{e}}\) gives a natural parametrisation to explore the contour of the real tree at unit speed and formalises the length of the CRT. In the sequel we consider the contour curve \(\gamma^{\mathbf{t}}\) parametrised in such a way. As a consequence, if \(\mathcal{T}\) has length 1, then \(\sigma\mathcal{T}\) has length \(\sigma\). The following theorem formalises the renormalisation of the finite volume limit. **Theorem 2.13** (Theorem 8 [1]).: _Let \(\mathbf{t}_{m}\) be a uniformly chosen tree with \(m\) edges and consider it as a metric space with its natural graph distance. Then \((2m)^{-1/2}\mathbf{t}_{m}\) converges in law to the CRT \(\mathcal{T}\) for the Gromov-Hausdorff topology._ Another renormalisation technique is used to obtain continuous limits with infinite volume; this is obtained when the size and the diameter tend to infinity in a suitable way. **Definition 2.14**.: _Let \(W^{+}\) and \(W^{-}\) be two independent standard Brownian motions started at 0. We define the process \((W(t):t\in\mathbb{R})\) as_ \[W(t)=\begin{cases}W^{+}(t)&\text{ if }t\geq 0,\\ W^{-}(-t)&\text{ if }t<0.\end{cases}\] _The Infinite continuous random tree (ICRT) \(\mathbf{T}_{\infty}\) is defined as the random tree \((\mathcal{T}_{W},\mathsf{d}_{W})\)._ The next result says how we can obtain the ICRT from a discrete tree. **Proposition 2.15** (Theorem 11 (ii) [1]).: _Let \(\mathbf{t}_{m}\) be a uniformly chosen tree with \(m\) edges and consider it as a metric space with its natural graph distance. Consider also any sequence \((k_{m}:m\in\mathbb{N})\) satisfying \(k_{m}\to\infty\) and \(k_{m}m^{-1/2}\to 0\).
Then \(k_{m}^{-1}\mathbf{t}_{m}\) converges in law to the metric space ICRT for the Local Gromov-Hausdorff topology._ Again, it is possible to obtain a description by means of a random walk. **Definition 2.16**.: _Let \(Z^{+}\) and \(Z^{-}\) be two independent standard Brownian motions started at 0 with \(Z^{-}\) conditioned to stay positive (i.e. \(Z^{-}\) has the law of a Bessel-3 process). We define the process \((Z_{t}:t\in\mathbb{R})\) as_ \[Z(t)=\begin{cases}Z^{+}(t)&\text{ if }t\geq 0,\\ Z^{-}(-t)&\text{ if }t<0.\end{cases}\] _The bi-infinite continuous random tree \(\mathbf{T}_{-\infty}^{\infty}\) is the random tree defined by \((\mathcal{T}_{Z},\mathsf{d}_{Z})\)._ ### Infinite quadrangulations with a boundary We present here the limiting objects of quadrangulations with a boundary in different topologies. Again we start with the local limit. #### 2.5.1. UIHPQ and its peeling The uniform infinite half-plane quadrangulation with simple boundary (UIHPQ) is the local limit of a well-chosen quadrangulation with a simple boundary. In this section, we will shortly present it and describe its Markov property. Let \(\mathfrak{q}^{b}_{f,m}\) be a uniformly chosen element in \(Q^{b}_{f,m}\) the set of planar quadrangulation with a simple boundary of size \(2m\) and \(f\) internal faces. It is easy to see that there exists a random variable \(\mathfrak{q}^{b}_{m}\) on infinite quadrangulation with a simple boundary of size \(2m\), such that \(\mathfrak{q}^{b}_{f,m}\) converges to \(\mathfrak{q}^{b}_{m}\) as \(f\) goes to \(\infty\) for the Benjamini-Schramm topology. We define the UIHPQ, \(\mathfrak{q}\), as the law on random quadrangulation with a simple boundary of infinite size as that arises as the limit in law of \(\mathfrak{q}^{b}_{m}\) as \(m\) grows to \(\infty\) (see [15, 15]). This limit is one ended, in the sense that if one takes out one square of \(\mathfrak{q}\) the complement of the graph has only one infinite connected component. In this limit the curves \(\gamma^{\mathbf{b}}_{m}\) coding the boundary of \(\mathfrak{q}^{b}_{m}\) also converge to a limit \(\gamma^{\mathbf{b}}\) which is a parametrisation of \(\mathbb{Z}\) and therefore the convergence holds in the Benjamini-Schramm Uniform topology. Furthermore, if one shifts the root-edge in the boundary to another boundary point, the law of the resulting map is the same as the original one [15], this property is called invariance under rerooting. The UIHPQ \(\mathfrak{q}\) satisfies an interesting Markov property. Assume that one conditions on all the quadrilaterals \(Q^{r}\in\mathfrak{q}\) that contain the root vertex \(r\) of \(\mathfrak{q}\). Then, the unbounded connected component of \(\mathfrak{q}\backslash Q^{r}\) (rooted properly) also has the law of a UIHPQ. For the UIHPQ, we associate its boundary vertices with \(\mathbb{Z}\) and its root edge to the one going from \(0\) to \(-1\). Let us define the simple overshoot \(O^{s}(0)\) from the vertex \(0\). To do this, we take all edges connected to \(0\) and we look for the ones that intersect \(\mathbb{Z}^{+}\), the over-shoot is then taken as the maximum value of that intersection (and it is \(0\) if not). One has that [15] \[\mathbb{P}(O^{s}(0)\geq k)\asymp k^{-3/2},\quad\text{ as }k\nearrow\infty. \tag{2.4}\] We can define, now, the (infinite) overshoot from \(0\). The (infinite) overshoot \(O(0)\) is the biggest positive \(z\) such that there is a face containing \(z\) and a vertex in the negative boundary (it is \(0\) if there is no such an edge). 
By a summation over \(O^{s}(-j)\) one has that [15] \[\mathbb{P}(O(0)\geq k)\asymp k^{-1/2},\quad\text{ as }k\nearrow\infty.\] #### 2.5.2. Brownian half-plane The Brownian half-plane (BHP) arises as the limit in distribution of the UIHPQ in the Local Gromov-Hausdorff topology. More formally, \(\lambda\cdot\text{UIHPQ}\) converges in law in the Local Gromov-Hausdorff topology to the BHP as \(\lambda\to 0\) [1, Theorem 3.6]. **Proposition 2.17**.: _The Brownian half-plane is invariant under rerooting and this operation is strong mixing._ Figure 4. Markov property sketch. In a UIHPQ, conditioning on \(Q^{r}\), the light blue parts are given by Boltzmann quadrangulations conditioned on their boundary size and the yellow part is again a UIHPQ; the cyan line represents the simple boundary of the yellow part. Proof of Proposition 2.17.: Invariance under rerooting is inherited from the UIHPQ, and the strong mixing property is a consequence of the Markov property (on filled-in balls with target point at infinity) of the Brownian half-plane (Proposition A.1) and its invariance under rerooting. More precisely, combining these properties and following the same lines as in [11, Lemma 2], one obtains that thanks to the Markov property the balls around the root and the remainder of the map (which is also distributed as a Brownian half-plane properly rooted, thanks to the invariance under rerooting) are asymptotically independent as the distance between the two roots goes to infinity; this asymptotic independence lets us conclude the strong mixing condition. #### 2.5.3. Brownian disk The Brownian disk appears as the scaling limit of a planar map with a boundary of appropriate size. This is done in the following theorem, which is a slight improvement of the main result of [1] and whose proof can be found in Section B. **Theorem 2.18**.: _Fix \(\sigma\in\mathbb{R}^{+}\). Let \((\mathbf{q}^{b}_{f,\sigma\sqrt{f}},\gamma^{\mathbf{b}})_{f\in\mathbb{N}}\) be a sequence of uniformly chosen quadrangulations with a simple boundary of size \(2\lfloor\sigma\sqrt{f}\rfloor\) and \(f\) internal faces. Then, \(((9f/8)^{-1/4}\mathbf{q}^{b}_{f,\sigma\sqrt{f}},\gamma^{\mathbf{b}})_{f\in\mathbb{N}}\) converges in law to the (decorated) Brownian disk \((\mathfrak{Q}_{3\sigma},\gamma^{\mathbf{B}})\) with perimeter \(3\sigma\) and area 1 for the Gromov-Hausdorff uniform topology. Furthermore, \(\mathfrak{Q}_{3\sigma}\) has the topology of the disk._ This theorem was first proved for the uniform case when the boundary is not simple in [1], and also in the case of Free Boltzmann quadrangulations with simple boundaries in [1], and then generalized to the case of uniform quadrangulations with a simple boundary in [1]. The topology of the limit was first described in Section 2.3 of [1], and it can be shown that a.s. \(\gamma^{\mathbf{B}}(t)\in\partial\mathfrak{Q}_{3\sigma}\) for all \(t\in\mathbf{S}\). ### Definition of the shocked map Here we introduce the candidate for the scaling limit of the critical tree decorated map by doing the analogue of the bijection of Section 2.2 in the continuum setting. **Definition 2.19** (Shocked map).: _Let \(\sigma>0\), let \((\mathfrak{Q}_{\sigma},\mathsf{d}^{BD})\) be a Brownian disk with boundary of size \(\sigma\) and let \(\mathcal{T}\) be an independent CRT.
Take \(\gamma^{\mathbf{B}}:\mathbf{S}^{1}\to\partial\mathfrak{Q}_{\sigma}\) the continuous curve that visits \(\partial\mathfrak{Q}_{\sigma}\) at unit speed starting at the root edge and \(\gamma^{\mathbf{T}}:\mathbf{S}^{1}\to\mathcal{T}\) the contour exploration6 of \(\mathcal{T}\). The shocked map of size \(\sigma\) is the (curve-decorated) metric space \((\mathcal{S}_{\sigma},\mathsf{d}^{SM},\gamma^{\mathbf{S}})\), obtained by starting with \((\mathfrak{Q}_{\sigma},\mathsf{d}^{BD})\) and identifying all points in \(x,y\in\partial\mathfrak{Q}_{\sigma}\) such that_ Footnote 6: The contour exploration associated to the CRT is the curve generated by the image of the identity function in \([0,1]\) under the glueing of the Brownian excursion used to create it. \[\gamma^{\mathbf{T}}\circ(\gamma^{\mathbf{B}})^{-1}(x)=\gamma^{\mathbf{T}} \circ(\gamma^{\mathbf{B}})^{-1}(y).\] _Here \(\gamma^{\mathbf{S}}\) is the curve defined as the image of \(\gamma^{\mathbf{B}}\) under this identification._ Let us give another equivalent description of the shocked map in the usual language of metric spaces defined as equivalent classes of pseudo-distances. **Remark 2.20**.: _To identify \(\mathfrak{Q}_{\sigma}\) using \(\gamma^{\mathbf{B}}\) and \(\gamma^{\mathbf{T}}\), we define the pseudo-distance_ \[\mathsf{d}^{SM}(x,x^{\prime})=\inf\Big{\{}\sum_{i=0}^{k}\min\{\mathsf{d}_{D}(x _{i}^{b},y_{i}^{b})\}\Big{\}},\] _where the infimum is taken over all \(k\geq 0\), sequences \(t_{0},s_{1},t_{1},s_{2},t_{2},\ldots,s_{k}\in\mathbf{S}^{1}\), such that \(x_{0}^{b}=x\), \(y_{k}^{b}=x^{\prime}\) and for all other \(i\in\llbracket 0,k\rrbracket\)_ \[x_{i}^{b}:=\gamma^{\mathbf{B}}(s_{i})\text{ and }y_{i}^{b}:=\gamma^{\mathbf{B}}(t_{ i}),\] _such that \(\gamma^{\mathbf{T}}(t_{i})=\gamma^{\mathbf{T}}(s_{i+1})\). Then, \((\mathcal{S}_{\sigma},\mathsf{d}^{SM},\gamma^{\mathbf{S}})\) is the (curve-decorated) metric space where \(\mathcal{S}_{\sigma}\) is given by \([0,1]/\{\mathsf{d}^{SM}=0\}\)._ Again we can define the infinite volume version of this object **Definition 2.21** (Infinite shocked map).: _Let \((\mathbf{H}_{\infty},\mathsf{d}_{\mathbf{H}_{\infty}})\) be a Brownian half-plane and let \((\mathbf{T}_{\infty},\mathsf{d}_{\mathbf{T}})\) be an ICRT. Take \(\gamma^{\mathbf{B}}:\mathbb{R}\to\partial\mathbf{H}\) the continuous curve that visits \(\partial\mathbf{H}_{\infty}\) at unit speed starting at the root edge and \(\gamma^{\mathbf{T}}:\mathbb{R}\to\mathbf{T}_{\infty}\) the contour exploration of \(\mathbf{T}_{\infty}\). The infinite shocked map is the (curve-decorated) metric space \((\mathcal{S}_{\infty},\mathsf{d}_{\mathcal{S}_{\infty}},\gamma^{\mathbf{S}})\), obtained by starting with \((\mathbf{H}_{\infty},\mathsf{d}_{\mathbf{H}_{\infty}})\) and identifying all points in \(x,y\in\partial\mathbf{H}_{\infty}\) such that_ \[\gamma^{\mathbf{T}}\circ(\gamma^{\mathbf{B}})^{-1}(x)=\gamma^{\mathbf{T}} \circ(\gamma^{\mathbf{B}})^{-1}(y).\] _Here \(\gamma^{\mathbf{S}}\) is the curve defined as the image of \(\gamma^{\mathbf{B}}\) under this identification._ We will use the bi-infinite version as it appears as an "intermediate" object for our proof. **Definition 2.22** (Bi-infinite shocked map).: _Let, \((\mathbf{H}_{\infty},\mathsf{d}_{\mathbf{H}_{\infty}})\) be a Brownian half-plane and let \((\mathbf{T}_{-\infty}^{\infty},\mathsf{d}_{\mathbf{T}})\) be a bi-infinite continuous random tree. 
Take \(\gamma^{\mathbf{B}}:\mathbb{R}\to\partial\mathbf{H}\) the continuous curve that visits \(\partial\mathbf{H}_{\infty}\) at unit speed starting at the root edge and \(\gamma^{\mathbf{T}}:\mathbb{R}\to\mathbf{T}_{-\infty}^{\infty}\) the contour exploration of \(\mathbf{T}_{-\infty}^{\infty}\). The bi-infinite shocked map is the (curve-decorated) metric space \((\mathcal{S}_{-\infty}^{\infty},\mathsf{d}_{\mathcal{S}_{-\infty}^{\infty}}, \gamma^{\mathbf{S}})\), obtained by starting with \((\mathbf{H}_{\infty},\mathsf{d}_{\mathbf{H}_{\infty}})\) and identifying all points in \(x,y\in\partial\mathbf{H}_{\infty}\) such that_ \[\gamma^{\mathbf{T}}\circ(\gamma^{\mathbf{B}})^{-1}(x)=\gamma^{\mathbf{T}} \circ(\gamma^{\mathbf{B}})^{-1}(y).\] _Here \(\gamma^{\mathbf{S}}\) is the curve defined as the image of \(\gamma^{\mathbf{B}}\) under this identification._ **Remark 2.23**.: _The meaning of "intermediate" comes from the fact that the boundary of the bi-infinite shocked map can be seen as a copy of \(\mathbb{R}\) such that when identifying the part associated with \(\mathbb{R}^{+}\) and \(\mathbb{R}^{-}\), one gets the infinite shocked map._ ## 3. Infinite continuous volume In this section we show that two elements at distance \(n\) on the spine of the ICRT are mapped by the gluing to two points that are at distance \(o(n)\). To be more precise, we consider a Brownian half-plane \(\mathbf{H}_{\infty}\), an independent infinite continuous random tree \(\mathbf{T}_{\infty}\). Let \(\gamma^{\mathbf{H}}:\mathbb{R}\mapsto\partial\mathbf{H}_{\infty}\) be the exploration of the boundary of \(\mathbf{H}_{\infty}\) such that \(\gamma^{\mathbf{H}}(0)\) is the root vertex of \(\mathbf{H}_{\infty}\) and \(\gamma^{\mathbf{T}}:\mathbb{R}\mapsto\partial\mathbf{T}_{\infty}\) be the contour exploration of \(\mathbf{T}_{\infty}\). We define \((\mathbf{M}_{\infty},\gamma^{\mathbf{M}})\) as the glueing of \(\mathbf{H}_{\infty}\) using the equivalence class generated by the curves \(\gamma^{\mathbf{H}}\) and \(\gamma^{\mathbf{T}}\) as for Definition 2.19. The aim of this section is to show that the image of the boundary under the glueing is just one point. **Theorem 3.1**.: _The exploration function \(\gamma^{\mathbf{M}}:\mathbb{R}\mapsto\mathbf{M}_{\infty}\) is constant._ To show this, it is easier to work with \(\mathbf{M}_{-\infty}^{\infty}\), which is the result of glueing a Brownian half-plane with a bi-infinite CRT \(\mathbf{T}_{-\infty}^{\infty}\). Recall from Definition 2.16 that \(\mathbf{T}_{-\infty}^{\infty}\) is defined from a process \(Z\) which is a Brownian motion started from \(0\) in the positive axis and a Brownian motion started from \(0\) and conditioned to be positive (seen backwards) in the negative axis. For simplicity in this section we denote by \(\mathsf{d}_{\mathbf{T}}\) the metric on \(\mathbf{T}_{-\infty}^{\infty}\) and for any \(y\in\mathbb{R}^{+}\), we define \[\tau_{y}=\inf\{t\in\mathbb{R}:Z(t)=y\} \tag{3.1}\] Furthermore, for \(y\in\mathbb{R}\), we define \(\kappa_{y}:=\gamma^{\mathbf{M}}(\tau_{-y})\) which is the image of the root of the \(y\)-th tree under the glueing. To show the theorem, we first start by showing that the distance in the infinite branch in the tree-decorated map is a constant times the distance in the tree itself. 
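The proof of the next proposition combines the subadditivity of the glued distance with Kingman's subadditive ergodic theorem. For the reader's convenience we recall a standard form of that theorem (this formulation is ours and is not quoted from the paper): if \(\mathsf{S}\) is a measure-preserving transformation of a probability space and \((g_{n})_{n\geq 1}\) are nonnegative integrable functions satisfying \(g_{n+m}\leq g_{n}+g_{m}\circ\mathsf{S}^{n}\), then \(g_{n}/n\) converges almost surely to an \(\mathsf{S}\)-invariant limit; when \(\mathsf{S}\) is ergodic (in particular when it is strong mixing, as will be shown below), the limit is the constant
\[\lim_{n\to\infty}\frac{g_{n}}{n}=\inf_{m\geq 1}\frac{\mathbb{E}[g_{m}]}{m}\qquad\text{almost surely.}\]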
**Proposition 3.2**.: _There exists a constant \(c\in[0,1]\) such that_ \[\frac{\mathsf{d}_{\mathbf{M}_{-\infty}^{\infty}}(\Phi^{\mathbf{M}_{-\infty}^{\infty}}(\kappa_{0}),\Phi^{\mathbf{M}_{-\infty}^{\infty}}(\kappa_{n}))}{n}\xrightarrow[n\to\infty]{a.s.}c.\] We define the shift \(\mathsf{S}\) on the pair \((\mathbf{T}_{-\infty}^{\infty},\mathbf{H}_{\infty}):=((\mathbf{T}_{-\infty}^{\infty},\gamma^{\mathbf{T}}),(\mathbf{H}_{\infty},\gamma^{\mathbf{H}}))\) as \[\mathsf{S}(\mathbf{T}_{-\infty}^{\infty},\mathbf{H}_{\infty})=(\mathsf{S}_{1}(\mathbf{T}_{-\infty}^{\infty}),\mathsf{S}_{2}(\mathbf{H}_{\infty})),\] where \(\mathsf{S}_{1}\) and \(\mathsf{S}_{2}\) are the shifts of \(\gamma^{\mathbf{T}}\) and \(\gamma^{\mathbf{H}}\) so that they start at \(\tau_{-1}\) instead of \(0\) (see Footnote 7). For the sake of notation, _until the end of this section_ we drop the indices \(-\infty\) and \(\infty\) in the objects appearing in the bi-infinite shocked map construction. Footnote 7: This is equivalent to rerooting \(\mathbf{T}_{-\infty}^{\infty}\) and \(\mathbf{H}_{\infty}\) so that the root is now at \(\gamma^{\mathbf{T}}(\tau_{-1})\) and \(\gamma^{\mathbf{H}}(\tau_{-1})\) respectively. **Lemma 3.3**.: _The shift \(\mathsf{S}\) is a measure preserving transformation. Moreover it is strong mixing, i.e. for every \(A,B\in\sigma(\mathbf{T})\times\sigma(\mathbf{H})\) one has that_ \[\mathbb{P}(\mathsf{S}^{-n}(A)\cap B)\to\mathbb{P}(A)\mathbb{P}(B)\quad\text{ as }n\to\infty.\] Proof.: We start by proving that this transformation is measure preserving. Consider \(f\) and \(g\) bounded \(\sigma(\mathbf{T})\)-measurable and \(\sigma(\mathbf{H})\)-measurable functions, respectively. \[\mathbb{E}(f(\mathsf{S}_{1}(\mathbf{T}))g(\mathsf{S}_{2}(\mathbf{H}))) =\mathbb{E}(\mathbb{E}(f(\mathsf{S}_{1}(\mathbf{T}))g(\mathsf{S}_{2}(\mathbf{H}))|\sigma(\mathbf{T})))\] \[=\mathbb{E}(f(\mathsf{S}_{1}(\mathbf{T}))\mathbb{E}(g(\mathsf{S}_{2}(\mathbf{H}))|\sigma(\mathbf{T})))\] \[=\mathbb{E}(f(\mathsf{S}_{1}(\mathbf{T})))\mathbb{E}(g(\mathbf{H})),\] where the second equality comes from measurability and the third comes from the fact that, knowing \(\mathbf{T}\), the shift \(\mathsf{S}_{2}\) becomes deterministic, together with the fact that the Brownian half-plane is invariant under rerooting on the boundary. With this we conclude the independence of \(\mathsf{S}_{1}(\mathbf{T})\) and \(\mathsf{S}_{2}(\mathbf{H})\), and moreover that \(\mathsf{S}_{2}(\mathbf{H})\) is equal in distribution to \(\mathbf{H}\). For the strong mixing property, it suffices to test it on a \(\pi\)-system \(\Pi\) generating \(\sigma(\mathbf{T})\times\sigma(\mathbf{H})\). Thus, we take \(A=A_{1}\times A_{2}\) and \(B=B_{1}\times B_{2}\) with \(A_{1},B_{1}\in\sigma(\mathbf{T})\) and \(A_{2},B_{2}\in\sigma(\mathbf{H})\).
Then for every \(\varepsilon>0\), there exists \(n_{0}\) such that for all \(n\geq n_{0}\) one has \[\mathbb{E}\left(\mathds{1}_{\mathsf{S}^{-n}(A)\cap B}\right) =\mathbb{E}\left(\mathds{1}_{\mathsf{S}_{1}^{-n}(A_{1})\cap B_{1}}\mathbb{E}\left(\mathds{1}_{\mathsf{S}_{2}^{-n}(A_{2})\cap B_{2}}\Big{|}\mathbf{T}\right)\right)\] \[\leq\mathbb{E}\left(\mathds{1}_{\mathsf{S}_{1}^{-n}(A_{1})\cap B_{1}}\left(\mathbb{P}\left(A_{2}\right)\mathbb{P}\left(B_{2}\right)+\varepsilon\right)\right)\] \[\leq\left(\mathbb{P}\left(A_{1}\right)\mathbb{P}\left(B_{1}\right)+\varepsilon\right)\left(\mathbb{P}\left(A_{2}\right)\mathbb{P}\left(B_{2}\right)+\varepsilon\right)\] \[\leq 3\varepsilon+\mathbb{P}\left(A\right)\mathbb{P}\left(B\right).\] The first inequality follows from the strong mixing property of the Brownian half-plane and the second inequality comes from the strong mixing property of the bi-infinite continuous Brownian tree (see Footnote 8). The converse inequality (with \(-\varepsilon\) instead of \(\varepsilon\)) follows along the same lines; with it we conclude. Footnote 8: This comes from the invariance under rerooting together with the Markov property of the Brownian motion. We can now prove Proposition 3.2. Proof of Proposition 3.2.: We start by proving the subadditivity of \(g_{n}(\mathbf{T},\mathbf{H})=\mathsf{d}_{\mathbf{M}}(\Phi^{\mathbf{M}}(0),\Phi^{\mathbf{M}}(n))\) for the shift \(\mathsf{S}\). \[\mathsf{d}_{\mathbf{M}}(\Phi^{\mathbf{M}}(\kappa_{0}),\Phi^{\mathbf{M}}(\kappa_{n+m})) \leq\mathsf{d}_{\mathbf{M}}(0,\Phi^{\mathbf{M}}(n))+\mathsf{d}_{\mathbf{M}}(\Phi^{\mathbf{M}}(n),\Phi^{\mathbf{M}}(n+m))\] \[\leq\mathsf{d}_{\mathbf{M}}(0,\Phi^{\mathbf{M}}(n))+\mathsf{d}_{\mathbf{M}}(\Phi^{\mathbf{M}}(\mathsf{S}^{n}(0)),\Phi^{\mathbf{M}}(\mathsf{S}^{n}(m)))\] \[\leq g_{n}(\mathbf{T},\mathbf{H})+g_{m}(\mathsf{S}^{n}(\mathbf{T},\mathbf{H})).\] This, together with Lemma 3.3, allows us to apply Kingman's subadditive ergodic theorem, which gives the result. The fact that \(c\in[0,1]\) follows since \(\mathsf{d}_{\mathbf{M}}(\Phi^{\mathbf{M}}(0),\Phi^{\mathbf{M}}(n))\leq n\) after the gluing. In order to establish that \(\mathsf{d}_{\mathbf{M}}(\kappa_{0},\kappa_{n})=o(n)\), we prove that the constant \(c\) appearing in Proposition 3.2 is equal to \(0\). **Proposition 3.4**.: _The constant \(c\) in Proposition 3.2 is equal to 0._ The proposition is proven using the following lemma. **Lemma 3.5**.: _For every point \(u\in\mathbb{R}^{+}\) one has_ \[\frac{\mathsf{d_{M}}(\kappa_{0},\kappa_{u})}{u}\stackrel{{(d)}}{{=}}\mathsf{d_{M}}(\kappa_{0},\kappa_{1}).\] Proof.: Recall that for a Brownian motion \(B_{t}\) and \(\ell\in\mathbb{R}^{+}\) one has that \(B_{t}\stackrel{{(d)}}{{=}}B_{\ell^{2}t}/\ell\) and \(\tau_{1}\stackrel{{(d)}}{{=}}\tau_{\ell}/\ell^{2}\). We denote by \((H(s,t):\ s,t\in\mathbb{R})\) the process where \(H(s,t)=\mathsf{d_{H}}(\gamma^{\mathbf{H}}(s),\gamma^{\mathbf{H}}(t))\) for \(s,t\in\partial\mathbf{H}\), which has the property that \(H(s,t)\) is equal in law to \(H(\ell^{2}s,\ell^{2}t)/\ell\) (this follows from the renormalisation applied to the UIHPQ\({}_{S}\) to obtain the Brownian half-plane in Theorem 1.12 [1]). Also recall the definition of the pseudometric associated to \(\mathbf{T}\) and notice that since the contour process of \(\mathbf{T}\) in the positive spine is associated to a standard Brownian motion \(Z=(Z(t):t\in\mathbb{R})\) (Definition 2.16), then \(\mathsf{d_{Z}}(s,t)=\mathsf{d_{Z}}(\ell^{2}s,\ell^{2}t)/\ell\) and we write \(s\sim_{Z}t\) if \(\mathsf{d_{Z}}(s,t)=0\).
We define the set \(J(K,u)\) as the set of all sequences \(t=(t_{i}\in\mathbb{R}:i\in\{0,1,\ldots,K\})\) of length \(K\) such that \(t_{i}\sim_{Z}t_{i+1}\) and such that \(t_{0}=0\) and \(t_{K}\sim_{Z}\tau_{-u}\). We have the following equalities \[\inf_{K\in\mathbb{N}}\inf_{t\in J(K,u)}\Big{\{}\sum_{i=0}^{K}\frac{H(t_{i},t_{i+1})}{u}\Big{\}} \stackrel{{(d)}}{{=}} \inf_{K\in\mathbb{N}}\inf_{t\in J(K,u)}\Big{\{}\sum_{i=0}^{K}H\left(\frac{t_{i}}{u^{2}},\frac{t_{i+1}}{u^{2}}\right)\Big{\}}\] \[\stackrel{{(d)}}{{=}} \inf_{K\in\mathbb{N}}\inf_{s\in J(K,1)}\Big{\{}\sum_{i=0}^{K}H\left(s_{i},s_{i+1}\right)\Big{\}},\] from where we conclude. We can now prove the proposition. Proof of Proposition 3.4.: From Proposition 3.2 and Lemma 3.5 one has that \(c\) is equal in law to \(\mathsf{d_{M}}(\kappa_{0},\kappa_{1})\), and since the gluing operation decreases distances one has that \(c\leq\mathsf{d_{H}}(\kappa_{0}^{\mathbf{H}},\kappa_{1}^{\mathbf{H}})\), where \(\kappa_{y}^{\mathbf{H}}=\gamma^{\mathbf{H}}(\tau_{-y})\) and \(\tau_{y}\) is defined in (3.1). Recalling that the \(\tau_{y}\) are independent of \(\mathbf{H}\), we get that for any \(\varepsilon>0\) there exists \(\delta=\delta(\varepsilon)>0\) such that \[\mathbb{P}(c\leq\varepsilon)\geq\mathbb{P}(\mathsf{d_{H}}(\kappa_{0}^{\mathbf{H}},\kappa_{1}^{\mathbf{H}})\leq\varepsilon)\geq\delta,\] but since \(c\) is constant a.s. we conclude that \(c=0\). We now note that by combining Lemma 3.5 and Proposition 3.4, together with the fact that the curve \(\gamma^{\mathbf{T}}\) is continuous in \(\mathbf{M}\), we obtain the following corollary. **Corollary 3.6**.: _For every \(y\in\mathbb{R}\) one has a.s._ \[\mathsf{d_{M}}(\kappa_{0},\kappa_{y})=0.\] Now, we generalise Corollary 3.6 to show that any two points in the decoration have distance equal to 0. **Proposition 3.7**.: _Almost surely, for any \(s,t\in\mathbb{R}\)_ \[\mathsf{d_{M}}(\gamma^{\mathbf{M}}(s),\gamma^{\mathbf{M}}(t))=0.\] Proof.: Since the curve \(\gamma^{\mathbf{M}}\) is continuous, it is enough to show that for any \(q,r\in\mathbb{Q}\) almost surely \(\mathsf{d_{M}}(\gamma^{\mathbf{T}}(q),\gamma^{\mathbf{T}}(r))=0\). To do that, let us define \(\mathfrak{p}(q\to+\infty)\) the unique simple path starting from \(\gamma^{\mathbf{T}}(q)\) that goes to \(+\infty\) in \(\mathbf{T}\). Note that \(\mathfrak{p}(q\to+\infty)\cap\mathfrak{p}(r\to+\infty)\) is non-empty, and take \(u\in\mathbb{R}\) such that \(\gamma^{\mathbf{T}}(u)\) is the smallest element in that intersection (in the order given by the path \(\mathfrak{p}(q\to+\infty)\)). It is enough to show that \(d_{\mathbf{M}}(\gamma^{\mathbf{T}}(r),\gamma^{\mathbf{T}}(u))=0\). To do that, we use the invariance of the distribution under re-rooting (Lemma 3.3): we re-root \(\mathbf{M}\) at the point \(\gamma^{\mathbf{T}}(r)\) and we call this rerooting \(\mathbf{M}^{\prime}\). It is clear now that \(\kappa_{0}^{\prime}=\gamma^{\mathbf{T}}(r)\) and that there exists \(u^{\prime}\) such that \(\kappa_{u^{\prime}}=\gamma^{\mathbf{T}}(u)\). We conclude from Corollary 3.6. We now conclude with the proof of Theorem 3.1. Proof of Theorem 3.1.: We note that the gluing distance for \((\mathbf{M}_{\infty},\mathbf{H}_{\infty})\) is bigger than that for \((\mathbf{M}_{-\infty}^{\infty},\mathbf{H}_{\infty})\), as one can obtain the first gluing by first doing the second gluing and then identifying the points \(\kappa_{y}\) with \(\kappa_{-y}\) for all \(y\in\mathbb{R}^{+}\). We conclude by Proposition 3.7. ## 4. Infinite discrete volume
### Local limit of the tree-decorated quadrangulation The objective of this section is to understand what a tree-decorated quadrangulation looks like when both the map and the tree are big. This will be done by obtaining a local limit of the map seen from its root, i.e., the root of the tree. Let \(\mathbf{q}_{f}^{\mathsf{T},m}=(\mathbf{q},\gamma^{\mathbf{M}})\) be a pair uniformly chosen in \(Q_{f}^{\mathsf{T},m}\), the set of pairs with first coordinate a quadrangulation with \(f\) faces and second coordinate describing the contour of a tree with \(m\) edges which is a submap containing the root edge of the first coordinate. Even though our results are expressed by means of a quadrangulation decorated by a curve, we will interchangeably regard them as decorated by a tree, since the two descriptions are in bijection. **Proposition 4.1**.: _As \(f\to\infty\), \(\mathbf{q}_{f}^{\mathsf{T},m}\) converges in distribution (for the Benjamini-Schramm Uniform topology) to a limit we call \(\mathbf{q}_{\infty}^{\mathsf{T},m}\). Furthermore, as \(m\to\infty\) we have that \(\mathbf{q}_{\infty}^{\mathsf{T},m}\) converges in distribution (for the Benjamini-Schramm Uniform topology) towards a limit \(\mathbf{q}_{\infty}^{\mathsf{T},\infty}\) that we call the infinite tree-decorated quadrangulation (ITQ). In brief,_ \[\mathbf{q}_{f}^{\mathsf{T},m}\xrightarrow[local\;(f\to\infty)]{(d)}\mathbf{q}_{\infty}^{\mathsf{T},m}\xrightarrow[local\;(m\to\infty)]{(d)}\mathbf{q}_{\infty}^{\mathsf{T},\infty}.\] Let us note that this proposition can be extended to other random objects, for example a half-plane tree-decorated quadrangulation (see [12, Chapter 5] for a more general statement). #### 4.1.1. Description of the local limit Let \(\mathbf{t}_{\infty}\) be an infinite critical geometric tree as in Theorem 2.5, \(\gamma_{\infty}^{\mathbf{t}}\) its contour curve and let \(\mathbf{q}_{\infty,\infty}^{b}\) be a UIHPQ. Note that the vertices that are in the boundary of \(\mathbf{q}_{\infty,\infty}^{b}\) can be identified with \(\mathbb{Z}\), in a way that the root edge is \(\overrightarrow{0(-1)}\) (as the infinite face lies to the left of this edge). We define the infinite tree-decorated quadrangulation (ITQ) \((\mathbf{q}_{\infty},\gamma_{\infty}^{\mathbf{M}})\) as the graph obtained by taking the quotient of (the boundary of) \(\mathbf{q}_{\infty,\infty}^{b}\) by the equivalence relation given by \(\gamma_{\infty}^{\mathbf{t}}\), i.e., two vertices \(v_{1},v_{2}\in\mathbf{q}_{\infty,\infty}^{b}\) are equivalent if \(v_{1},v_{2}\in\partial\mathbf{q}_{\infty,\infty}^{b}\) and \(d_{\mathbf{t}_{\infty}}(v_{1},v_{2})=0\). The curve \(\gamma_{\infty}^{\mathbf{M}}\) is the contour of the image of the boundary curve, which is a copy of \(\mathbf{t}_{\infty}\) in \(\mathbf{q}_{\infty}\). #### 4.1.2. Proof of the local limit Denote the gluing function by \[\phi:\bigcup_{f,m\in\mathbb{N}}Q_{f,m}^{b}\times T_{m}\to\bigcup_{f,m\in\mathbb{N}}Q_{f}^{\mathsf{T},m},\] where \[Q_{f,m}^{b}=\text{set of quadrangulations with $f$ internal faces and a boundary of length $2m$}\] \[T_{m}=\text{set of planar trees with $m$ edges}\] Let \(T_{\infty}\) be the set of one-ended infinite planar trees. Proposition 4.1 will be a consequence of the following lemma.
**Lemma 4.2**.: _The function \(\phi\) admits an extension when \(f,m\in\mathbb{N}\cup\{\infty\}\), which is continuous with respect to the product local topology and the Benjamini-Schramm Uniform topology._ This implies Proposition 4.1 by the continuous mapping theorem, Theorem 2.5 and the convergence (see [1, 1]) \[\mathbf{q}_{f,m}^{b}\xrightarrow[local\ (f\to\infty)]{(d)}\mathbf{q}_{\infty,m}^{b}\xrightarrow[local\ (m\to\infty)]{(d)}\text{UIHPQ},\] where \(\mathbf{q}_{f,m}^{b}\) is a uniform element of \(Q_{f,m}^{b}\). The natural extension is to glue the infinite boundary, starting from the root-edge, to the contour of the tree following its root-edge. This extension will never glue the other side of the infinite branch in the one-ended tree, since it will never cross it. This problem can be fixed in a simple way, namely by gluing edges in both directions starting from the root-edge. Proof of Lemma 4.2.: Consider \((\mathfrak{q},\mathfrak{t})\in Q_{f,m}^{b}\times T_{m}\) for \(f,m\in\mathbb{N}\cup\{\infty\}\). Define \(\overline{\phi}(\mathfrak{q},\mathfrak{t})\) as the result of gluing in parallel the left (right) side of the boundary in \(\mathfrak{q}\), starting from its root and following its (counter) root-edge sense, to the tree starting from its root and following the (counter) root-edge sense. This procedure finishes when the gluing meets from the left and the right (this is well defined since there is an even number of edges). This is clearly an extension of \(\phi\). For the continuity, consider a sequence \((\mathfrak{q}_{n},\mathfrak{t}_{n})\) converging to \((\mathfrak{q},\mathfrak{t})\) in the product local topology. It is easy to see that \(\overline{\phi}(\mathfrak{q}_{n},\mathfrak{t}_{n})\) converges to \(\overline{\phi}(\mathfrak{q},\mathfrak{t})\) in the local Benjamini-Schramm Uniform topology, since by the locally finite property, for any \(R>0\) the ball \(\mathcal{R}_{R}(\overline{\phi}(\mathfrak{q},\mathfrak{t}))\) is determined by finite radius balls \(\mathcal{B}_{R^{\prime}}(\mathfrak{q})\) and \(\mathcal{B}_{R^{\prime\prime}}(\mathfrak{t})\). Since for \(n\) big enough \(\mathcal{B}_{R^{\prime}}(\mathfrak{q}_{n})=\mathcal{B}_{R^{\prime}}(\mathfrak{q})\) and \(\mathcal{B}_{R^{\prime\prime}}(\mathfrak{t}_{n})=\mathcal{B}_{R^{\prime\prime}}(\mathfrak{t})\), by applying the gluing procedure we see that \(\mathcal{B}_{R}(\overline{\phi}(\mathfrak{q}_{n},\mathfrak{t}_{n}))\) and \(\mathcal{B}_{R}(\overline{\phi}(\mathfrak{q},\mathfrak{t}))\) coincide for large enough \(n\). ### Peeling of the infinite tree-decorated quadrangulation In this section, we are going to work with the infinite tree-decorated quadrangulation, that is to say, the infinite map defined in Section 4.1. We are going to define a specific peeling, i.e. a Markovian way of exploring it. The nice property of the peeling we are going to define is that in its \(k\)-th step we will have discovered a set that contains the ball of radius \(k\) around a given set. The described peeling is based on a closely related peeling that can be found in [1, 1]. #### 4.2.1. Description of the peeling Let us start with an instance of the \(ITQ\), say \((\mathfrak{q}_{\infty},\mathfrak{t}_{\infty}^{\mathsf{M}})=\overline{\phi}(\mathfrak{q}_{\infty,\infty}^{b},\mathfrak{t}_{\infty})\) where \(\mathfrak{q}_{\infty,\infty}^{b}\) and \(\mathfrak{t}_{\infty}\) are instances of the UIHPQ and of the infinite critical geometric tree, respectively, according to Section 4.1.1.
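Before describing the peeling in detail, here is a minimal computational illustration of the identification that the gluing \(\overline{\phi}\) performs along the boundary (a sketch in Python; it is not part of the original construction, and the exact indexing conventions, such as how contour times are matched to boundary vertices and how the root edge is handled, are simplified here). Two contour times of a tree visit the same vertex exactly when the contour function takes the same value at both times and does not go below that value in between, and these are precisely the groups of boundary vertices that get glued to a single point.

```python
def gluing_classes(contour):
    """Group the contour times of a plane tree into classes of times visiting
    the same vertex: s and t are identified iff contour[s] == contour[t] and
    the contour does not go below that common value between s and t."""
    n = len(contour)
    classes = []          # list of lists of identified contour times
    rep = [None] * n      # class index of each contour time
    for t in range(n):
        assigned = False
        running_min = contour[t]
        for s in range(t - 1, -1, -1):   # scan backwards for an equivalent time
            running_min = min(running_min, contour[s])
            if contour[s] == contour[t] and running_min == contour[t]:
                rep[t] = rep[s]
                classes[rep[s]].append(t)
                assigned = True
                break
            if running_min < contour[t]:
                break                    # no earlier time can be equivalent
        if not assigned:
            rep[t] = len(classes)
            classes.append([t])
    return classes

# Example: contour [0,1,2,1,2,1,0] encodes a root whose unique child has two
# children; times 1, 3 and 5 visit the same vertex, times 0 and 6 visit the root.
print(gluing_classes([0, 1, 2, 1, 2, 1, 0]))  # [[0, 6], [1, 3, 5], [2], [4]]
```

In the infinite setting the same rule is applied to the contour of \(\mathbf{t}_{\infty}\), read in both directions from the root edge, as in the proof of Lemma 4.2.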
We describe the spine of \(\mathfrak{t}_{\infty}^{\mathsf{M}}\) (isometric to \(\mathfrak{t}_{\infty}\)) as a copy of \(\mathbb{N}\), in which each vertex has two critical GW trees attached to it as in Proposition 2.4 (the one to the left at the top and the one to the right at the bottom). We are going to be interested in peelings of the spine, i.e., \(\mathbb{N}\). Figure 5. Extended gluing function. We highlight that the right contour in blue and the left contour in cyan are glued to the left and right boundary, respectively. Let \(r\in\mathbb{N}\); we will study a peeling of the set \(\llbracket 0,r\rrbracket\) together with all the trees that are attached to those vertices (see Figure 6). For simplification let us call this set \(\mathfrak{p}_{r}=\mathfrak{p}\). The objective of the peeling will be to construct a sequence of sets \(\mathfrak{p}^{(l)}\subseteq\mathfrak{q}_{\infty}\) such that the ball of radius \(l\) around \(\mathfrak{p}\) is contained in \(\mathfrak{p}^{(l)}\). In other words, all points in \(\mathfrak{q}_{\infty}\backslash\mathfrak{p}^{(l)}\) should have distance to \(\mathfrak{p}\) strictly bigger than \(l\). For each \(l\) the peeling process will also define an infinite quadrangulation with infinite simple boundary \(\mathfrak{q}^{(l)}\subset_{M}\mathfrak{q}^{b}_{\infty,\infty}\) (see Footnote 9) and an interval \(\mathfrak{b}^{(l)}\) on the boundary of \(\mathfrak{q}^{(l)}\). Define \(\mathfrak{q}^{(0)}=\mathfrak{q}^{b}_{\infty,\infty}\), \(r^{(0)}=r\) and note that \(\mathfrak{p}\subset\mathfrak{q}_{\infty}\) corresponds (by the gluing) to an interval which we define as \(\mathfrak{b}^{(0)}\). Now, iterate the following for \(l\geq 1\). Footnote 9: This peeling can be read in the preimage, so that \(\mathfrak{q}^{(l)}\) is the part to be explored in the preimage. * Peel all the faces of \(\mathfrak{q}_{\infty}\) that are the image of a face in \(\mathfrak{q}^{(l-1)}\) having a vertex contained in \(\mathfrak{b}^{(l-1)}\). Let \(r^{(l)}\) be the biggest \(n\in\mathbb{N}\) such that \(\mathfrak{t}^{\mathsf{M}}_{\infty}\) has a tree attached at \(n\) containing a vertex peeled in this step. Note that if we take away from \(\mathfrak{q}_{\infty}\) all the peeled faces and all the vertices associated to \(\mathfrak{p}_{r^{(l)}}\), there is a unique infinite connected component which is the image of an infinite quadrangulation with infinite simple boundary \(\mathfrak{q}^{(l)}\). We define \(\mathfrak{b}^{(l)}\) as the union of * All the vertices of \(\mathfrak{p}_{r^{(l)}}\) that are images of vertices belonging to \(\mathfrak{q}^{(l)}\). * All the vertices that were explored in this process whose image belongs to a face in \(\mathfrak{q}^{(l)}\). Note that \(\mathfrak{b}^{(l)}\) is an interval in the boundary of \(\mathfrak{q}^{(l)}\). We define \(\mathfrak{p}^{(l)}\subseteq\mathfrak{q}_{\infty}\) as the union of the complement of the image of \(\mathfrak{q}^{(l)}\) and the image of \(\mathfrak{b}^{(l)}\) (also known in the literature as the filled-in of the explored part; see Footnote 10). Footnote 10: Filled-in explorations here refer to explorations with a target such that after each step of the exploration we reveal the unexplored parts that do not contain the target point. See Figure 7 for an idea of the peeling. By construction of the peeling we obtain the following lemma.
**Lemma 4.3**.: _We have that for all \(l\in\mathbb{N}\), \(\bigcup_{v\in\mathfrak{p}}B(v,l)\subseteq\mathfrak{p}^{(l)}\)._ Define the random variable \(\mathbf{q}^{(l)}\) with value \(\mathfrak{q}^{(l)}\) rooted at the edge with image \((r^{(l)}r^{(l-1)})\) when \((\mathbf{q},\mathbf{t})=(\mathfrak{q}^{b}_{\infty,\infty},\mathbf{t}_{\infty})\). Also define the random variable \(\mathbf{t}^{(l)}\) equal to the unexplored part of \(\mathfrak{t}^{\mathsf{M}}_{\infty}\) (an infinite tree) rooted at the edge \((r^{(l)}r^{(l+1)})\) when \((\mathbf{q},\mathbf{t})=(\mathfrak{q}^{b}_{\infty,\infty},\mathbf{t}_{\infty})\). This peeling is Markovian in the following sense. **Lemma 4.4**.: \(\mathbf{q}^{(l)}\) _is distributed as the UIHPQ and \(\mathbf{t}^{(l)}\) is distributed as the infinite critical geometric tree._ Figure 6. Sketch of the infinite tree \(\mathfrak{t}\). The vertices in blue are those belonging to \(\mathfrak{p}_{r}\), when \(r=5\). **Remark 4.5**.: _This can be established as a Markovian property on a decorated map with simple boundary (see Figure 8); however, we skip this since it will not be needed in the proof._ #### 4.2.2. Overshoot estimate in the peeling with target Now, we prove that distances after gluing are bigger than \(n/\log(n)\) for points belonging to trees whose roots are bigger than \(n\) in the natural parametrization of the spine in the decoration. Recall from Definition 2.6 that \(\mathfrak{t}_{\infty}^{\mathsf{M}}(m)\) is the finite subtree of \(\mathfrak{t}_{\infty}^{\mathsf{M}}\) consisting of the spine \([\![0,m]\!]\) and all the trees that are attached to this part of the spine. Figure 8. The Markovian property seen from the decorated map, where in each successive step we take out the \(\mathfrak{p}^{(l)}-\mathfrak{p}^{(l-1)}\) discovered by the peeling in the preceding step. The red part represents the points that are covered up to \(r^{(l)}\) in the tree, the grey parts represent the increment between balls of \(\mathfrak{p}\) in each stage of the peeling, and the black boundary represents the intervals \(\mathfrak{b}^{(l)}\). The map \(\mathfrak{q}^{(l)}\) consists of the yellow part delimited by the black outer boundary and the green line that is not contained in the grey part. Figure 7. Sketch of the peeling procedure. The grey faces are the faces peeled at stage \(i\). The set of red vertices forms \(\mathfrak{p}_{r^{(i+1)}}\) and the yellow region corresponds to \(\mathfrak{q}^{(i+1)}\).
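The estimates that follow repeatedly use the classical correspondence between the critical geometric GW tree and the simple random walk: the total number of vertices \(N\) of a GW tree with Geometric(1/2) offspring satisfies \(2N-1\stackrel{(d)}{=}T_{-1}\), where \(T_{-1}\) is the first hitting time of \(-1\) by a simple random walk (the text below works with the essentially equivalent quantity \(2N\)). The following small Monte Carlo check is only an illustration of this correspondence (a sketch in Python, not part of the paper); the caps on the sample sizes are there because the critical tail is heavy.

```python
import random
from collections import Counter

def geometric_offspring():
    # Geometric(1/2) offspring on {0,1,2,...}: P(k) = 2^{-(k+1)}
    k = 0
    while random.random() < 0.5:
        k += 1
    return k

def gw_total_progeny(cap=10**4):
    # number of vertices of a critical GW tree with Geometric(1/2) offspring,
    # truncated at `cap` only to keep the simulation fast
    alive, size = 1, 0
    while alive > 0 and size < cap:
        alive += geometric_offspring() - 1
        size += 1
    return size

def srw_hitting_time(cap=2 * 10**4):
    # first time a simple +/-1 random walk started at 0 hits level -1 (truncated)
    pos, steps = 0, 0
    while pos > -1 and steps < cap:
        pos += 1 if random.random() < 0.5 else -1
        steps += 1
    return steps

random.seed(0)
trials = 5000
tree_side = Counter(2 * gw_total_progeny() - 1 for _ in range(trials))
walk_side = Counter(srw_hitting_time() for _ in range(trials))
for k in (1, 3, 5, 7):   # exact probabilities: 1/2, 1/8, 1/16, 5/128
    print(k, tree_side[k] / trials, walk_side[k] / trials)
```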
For the left inequality it is enough to show that for every \(\alpha>1\) with high probability as \(n\to\infty\) \[\mathsf{d}_{\mathsf{M}}(0,x)\geq\frac{n}{\log(n)^{\alpha}}+1\ \ \forall x\in \mathfrak{t}_{\infty}^{\mathsf{M}}\backslash\mathfrak{t}_{\infty}^{\mathsf{M} }(n).\] We show this by studying the process \(r^{(l)}\). An implication of Lemma 4.7 is that one can couple \(r^{(l)}-r^{(l-1)}\) with an i.i.d. sequence \(s^{(l)}\) such that \(r^{(l)}-r^{(l-1)}\leq s^{(l)}\) and furthermore there exist a constant \(C\) such that \[\mathbb{P}(s^{(l)}\geq s)\leq Cs^{-1}+o(s^{-3})\quad\forall s\in\mathbb{N} \setminus\{0\}.\] Since \(r^{(l)}\leq\sum_{j=1}^{l}s^{(j)}\) it is enough to study the last sum. From [1, Theorem 3] and Lemma 4.7, we obtain that for some choices of \(a_{l}\) and \(b_{l}\) the random variable \(a_{l}^{-1}\sum_{j=1}^{l}s^{(j)}-b_{l}\) converges in distribution to an asymmetric Cauchy random variable, whose distribution we denote by \(F_{C}\). This convergence has uniform error of size \(o(1/\ln(l))\). Now, we use [1, Theorem 8.3.1] to chose \(a_{l}=l\) and \(b_{l}=C\log(l)\) (this is the same \(C\) as in Lemma 4.7). For \(C^{-}=C-\nu\), with \(0<\nu<C\), this gives \[\mathbb{P}\left(\frac{\sum_{j=1}^{l}s^{(j)}}{l}-C\log(l)>-\nu\log(l)\right)=F _{C}(-\nu\log(l))+o\left(\frac{1}{\log(l)}\right)\] If we consider \(l=n/\log(n)\) this gives as \(n\to\infty\) that \[\mathbb{P}\left(\sum_{j=1}^{n/\log(n)}s^{(j)}\geq C^{-}n+o(n)\right)\leq F_{ C}\left(-\nu\log\left(\frac{n}{\log(n)}\right)\right)+o\left(\log\left( \frac{n}{\log(n)}\right)^{-1}\right) \tag{4.2}\] Here we use the fact that if \(f(y)=y\log(y)\), then \(f^{-1}(x)=\exp(W(x))\), where \(W\) is the \(W\) Lambert function. We also used the fact that \(W(x)\sim\log(x)-\log(\log(x))+\kappa\frac{\log(\log(x))}{\log(x)}\) as \(x\to\infty\). We conclude by noting that the right hand side of (4.2) goes to \(0\) as \(n\to\infty\) and \[B_{l}(\mathsf{M})\cap\mathfrak{t}_{\infty}^{\mathsf{M}}\subseteq\mathfrak{t} _{\infty}^{\mathsf{M}}(r^{(l)}).\] We know discuss the proof of Lemma 4.7. Proof of Lemma 4.7.: We start by noting that conditioning on \(\mathcal{F}_{l-1}\), we peel all faces of \(\mathfrak{q}^{(l-1)}\) (that is an UIHPQ) that have a vertex in the interval of the boundary \(\mathfrak{b}^{(l-1)}=[x^{-},x^{+}]\). Let us first study how far this peeling goes in the boundary of \(\mathfrak{q}^{(l-1)}\). As usual we associate to the boundary of \(\mathfrak{q}^{(l-1)}\) a copy of \(\mathbb{Z}\). Let us first define \(\tilde{O}^{+}=\tilde{x}^{+}-x^{+}\), where \(\tilde{x}^{+}\) is the largest point in \(\mathfrak{b}^{(l-1)}\) that is peeled in the step \(l\). We do the analogue definition for \(\tilde{O}^{-}=x^{-}-\tilde{x}^{-}\). Let us note that \(\tilde{O}^{+},\tilde{O}^{-}\) are stochastically dominated by \(O^{s}\), where \(O^{s}\) is the overshoot defined in Section 2.5.1. We now define \(r^{(l),+}\), resp. \(r^{(l),-}\), as the biggest \(n\in\mathbb{N}\) such that \(\mathfrak{t}^{\mathsf{M}}_{\infty}\) has a tree attached to its top, resp. bottom, part at step \(l\). In other words, \(r^{(l)}=\max\{r^{(l),+},r^{(l),-}\}\). 
Let us now note that for any \(a\in\mathbb{N}\backslash\{0\}\) \[\mathbb{P}\left(r^{(l),+}-r^{(l-1)}\geq a\mid\mathcal{F}_{l-1}\right)=\mathbb{P}\left(\tilde{O}^{+}>\sum_{i=1}^{a-1}\tau_{i}\mid\mathcal{F}_{l-1}\right)\leq\mathbb{P}\left(O^{s}\geq\sum_{i=1}^{a}\tau_{i}\right), \tag{4.3}\] where the \(\tau_{i}\) are i.i.d. random variables having the law of twice the number of vertices of a critical geometric GW tree (not conditioned to survive, part 2) of Definition 2.4). Inequality (4.3) also holds when replacing \(r^{(l),+}\) by \(r^{(l),-}\). Let us recall that the law of \(\tau_{i}\) is the same as the law of the first time a simple random walk hits level \(-1\). Thus, to finish the proof of this lemma we need to prove the following claim. **Claim 4.8**.: _We have that_ \[\mathbb{P}\left(O^{s}\geq\sum_{i=1}^{a}\tau_{i}\right)\leq Ca^{-1}+o(a^{-3}).\] Before proving the claim let us note that it implies the lemma because \[\mathbb{P}(r^{(l)}-r^{(l-1)}\geq a\mid\mathcal{F}_{l-1}) \leq\mathbb{P}(r^{(l),-}-r^{(l-1)}\geq a\mid\mathcal{F}_{l-1})+\mathbb{P}(r^{(l),+}-r^{(l-1)}\geq a\mid\mathcal{F}_{l-1})\] \[\leq 2\mathbb{P}\left(O^{s}\geq\sum_{i=1}^{a}\tau_{i}\right).\] As we are peeling all faces that have a vertex belonging to \(\mathfrak{b}^{(l-1)}\), let us call \(O^{l}_{-}\), resp. \(O^{l}_{+}\), the furthest point of the boundary of \(\mathfrak{q}^{(l)}\) that got discovered to the left, resp. right. We now provide the proof of the claim. Proof of Claim 4.8.: We start by constructing the \(\tau_{i}\) by considering a simple random walk \((X(k):k\in\mathbb{N})\) started at \(0\) and defining \(\tau_{i}=t_{-i}-t_{-(i-1)}\), where \(t_{i}\) is the first time the random walk hits level \(i\). We can now note that under this coupling the events \(\{\sum_{i=1}^{a}\tau_{i}\leq j\}\) and \(\{\inf_{k\in\llbracket 0,j\rrbracket}X(k)<-a\}\) are equal. Thus, we have that \[\mathbb{P}\left(O^{s}\geq\sum_{i=1}^{a}\tau_{i}\right) =\mathbb{P}\left(\inf_{k\in\llbracket 0,O^{s}\rrbracket}X(k)<-a\right)\] \[\leq C\sum_{l=1}^{\infty}\mathbb{P}\left(\inf_{k\in\llbracket 0,l\rrbracket}X(k)<-a\right)l^{-3/2}.\] We separate the last sum according to whether \(l>a^{2}\) or \(l\leq a^{2}\). The sum in the first case, i.e. for \(l>a^{2}\), can be easily bounded: \[\sum_{l=a^{2}+1}^{\infty}\mathbb{P}\left(\inf_{k\in\llbracket 0,l\rrbracket}X(k)<-a\right)l^{-3/2}\leq\sum_{l=a^{2}+1}^{\infty}l^{-3/2}\leq Ca^{-1}.\] For the sum of the second case, i.e.
for \(l\leq a^{2}\), we use that for a simple random walk \[\mathbb{P}\left(\inf_{k\in\llbracket 0,l\rrbracket}X(k)<-a\right)=\mathbb{P} \left(\sup_{k\in\llbracket 0,l\rrbracket}X(k)>a\right)=2\mathbb{P}\left(X(l)>a \right).\] And that furthermore by the Hoeffding's inequality we have that \[\mathbb{P}\left(X(l)\geq a\right)\leq\exp\left(-\frac{a^{2}}{2l}\right).\] Thus, we can finally bound \[\sum_{l=1}^{a^{2}}\mathbb{P}\left(\inf_{k\in[0,l]}X(k)<-a\right)l^{ -3/2} \leq 2\sum_{l=1}^{a^{2}}\exp\left(-\frac{a^{2}}{2l}\right)l^{-3/2}\] \[\leq 2\int_{0}^{a^{2}+1}\exp\left(-\frac{a^{2}}{2x}\right)x^{-3/2 }dx+\sup_{x\geq 0}\left(\exp\left(-\frac{a^{2}}{2x}\right)x^{-3/2}\right)\] \[\leq\frac{2}{a}\int_{0}^{1+1/a^{2}}\exp\left(-\frac{1}{2u}\right) u^{-3/2}du+o(a^{-3})\leq Ca^{-1}+o(a^{-3})\] where in the second inequality we used the fact that the monotonicity of the function changes only once (at the point \(x=a^{2}/3\)) **Corollary 4.9**.: _The infinite discrete volume shocked map is not absolutely continuous with respect to the UIHPQ\({}_{S}\) with simple boundary._ Proof.: To show this it is enough to study the distances in the boundary. On one side we know from Prop 6.1[1] that distances from the root to the boundary point labeled \(n\) on the boundary of the UIHPQ\({}_{S}\) scale as \(\sqrt{n}\) and from Proposition 4.6 in the discrete volume shocked map we know that for any \(\alpha>1\) \[\mathsf{d}_{\mathfrak{q}_{\infty,\infty}^{b}}(0,n)<\mathfrak{c}\sqrt{n}< \frac{n}{\log(n)^{\alpha}}<\mathsf{d}_{\mathfrak{q}_{\infty}^{\mathsf{T}, \infty}}(0,n) \tag{4.4}\] From this we conclude. ## 5. Finite volume for the discrete map The objective of this section is to transfer Proposition 4.6 to the case of finite volume. To prove this, we first note that the event that any two different points on the tree are at positive distance on the ITQ, only depends on the tree itself and on the behaviour of the quadrangulation with a simple boundary at close distance from the boundary. Then we use the techniques of [1] to prove that close to the tree, the behaviour of both the tree and the quadrangulation with a simple boundary are not so different from their infinite counterpart. ### "Typical case for finite volume uniform tree is not-unlikely for infinite volume uniform tree" In this subsection, we construct an exploration of a uniformly chosen tree and we show that the result of this exploration is not unlikely to be seen in the infinite critical geometric tree. Take \(\mathbf{t}\) a tree of size \(k\) and \((C_{k}(i))_{i\in[\![0,2k]\!]}\) its contour function. We mark the vertex \(\bar{v}\) as the vertex visited at time \(k\) by the contour function. For any \(m\in\mathbb{N}\), we define \(\mathbf{t}^{(m)}\) as follows. If \(d_{\mathbf{t}}(0,\bar{v})\leq m\), \(\mathbf{t}^{(m)}\) is equal to \(\mathbf{t}\). However if \(d_{\mathbf{t}}(0,\bar{v})>m\), then we take \(\bar{\mathbf{t}}^{(m)}\) the subtree defined as the connected component of \(\overline{v}\) in \(\mathbf{t}\backslash\mathcal{B}_{m}(\mathbf{t},0)\) rooted at the unique edge where this connected component is attached to \(\mathcal{B}_{m}(\mathbf{t},0)\). We define \(\mathbf{t}^{(m)}\) as the (marked and rooted) tree generated by the vertices on \(\mathbf{t}\backslash\bar{\mathbf{t}}^{(m)}\), marked on the corner to where \(\bar{\mathbf{t}}^{(m)}\) is attached and rooted in the same (oriented) edge as the \(\mathbf{t}\) was. 
This finally allows us to define, for \(j\leq k\), the tree \(\mathbf{t}_{j}\) as the tree \(\mathbf{t}^{(\widehat{m})}\) where \(\widehat{m}\) is the first \(m\) such that \(\mathbf{t}^{(m)}\) has size greater than or equal to \(j\). In fact, it is possible to compute the probability of \(\mathbf{t}_{j}\). This is given in the following lemma. **Lemma 5.1**.: _For any \(j,k\geq 0\), take \(\mathbf{t}\) a random uniform tree of size \(k\in\mathbb{N}\). We have that_ \[\mathbb{P}\left(\mathbf{t}_{j}=t_{j}\right)=\frac{\frac{1}{k-r_{j}+1}\binom{2k-2r_{j}}{k-r_{j}}}{\frac{1}{k+1}\binom{2k}{k}},\] _where \(r_{j}\geq j\) is the size of \(t_{j}\), and \(t_{j}\) is a marked rooted tree such that the size of \((t_{j})^{(m-1)}\) is less than or equal to \(j\), where the exploration goes to the base point of the marked corner._ Proof.: We analyse the probability of the Dyck path associated to \(\mathbf{t}\). For this, consider the Dyck path associated to \(\mathbf{t}_{j}\) and notice that from the marked corner on it we can identify the place where to insert the Dyck path of the unexplored tree, which has size \(k-r_{j}\). To conclude we use that the number of plane trees with \(n\) edges is the \(n\)-th Catalan number. For the next lemma, we need to explore the infinite critical geometric tree \(\mathbf{t}\). In this case, to define \(\mathbf{t}_{j}\), we only need to define \(\bar{\mathbf{t}}^{(m)}\); this is done by taking the (unique) infinite connected component of \(\mathbf{t}\backslash\mathcal{B}_{m}(\mathbf{t},0)\). We can now prove that the finite volume exploration is not-unlikely for the infinite volume one. **Lemma 5.2**.: _Take \(\mathbb{P}_{k,j}\) (resp. \(\mathbb{P}_{\infty,j}\)) the law of \(\mathbf{t}_{j}\) where \(\mathbf{t}\) is a tree with size \(k\) (resp. an infinite critical geometric tree). Then for any \(\varepsilon,\delta>0\) and \(k\in\mathbb{N}\), there exists a set of trees \(T^{k}:=T^{k,\varepsilon,\delta}\) and a deterministic constant \(K^{T}_{\varepsilon,\delta}\) such that_ \[\mathbb{P}_{k,(1-\varepsilon)k}\left(\mathbf{t}_{(1-\varepsilon)k}\notin T^{k}\right)\leq\delta, \tag{5.1}\] _and such that for any \(t\in T^{k}\) we have that_ \[\frac{\mathbb{P}_{k,(1-\varepsilon)k}(t)}{\mathbb{P}_{\infty,(1-\varepsilon)k}(t)}\leq K^{T}_{\varepsilon,\delta}. \tag{5.2}\] Proof.: We define \(T^{k}\) to be the set of trees with size bigger than or equal to \((1-\varepsilon)k\) and smaller than or equal to \((1-\gamma)k\), where \(\gamma:=\gamma(\delta)\) is a parameter to be tuned; the condition \(\mathbf{t}_{(1-\varepsilon)k}\in T^{k}\) means that a macroscopic part of \(\mathbf{t}\) is left to be explored. We prove that (5.1) is satisfied by studying the continuous limit of the trees. From the definition of \(\mathbf{t}_{(1-\varepsilon)k}\), we see that it has size bigger than or equal to \((1-\varepsilon)k\), so we just need to prove that, with high probability, it has size smaller than or equal to \((1-\gamma)k\). Consider the contour function \(C_{k}\) of the tree \(\mathbf{t}_{k}\). We know from Theorem 2.5 in [10] that, for the topology of uniform convergence, \[\widetilde{C}_{k}=\left(\frac{C_{k}(2kt)}{\sqrt{2k}}:t\in[0,1]\right)\xrightarrow[k\to\infty]{law}(\mathtt{e}_{t}:t\in[0,1]), \tag{5.3}\] where \((\mathtt{e}_{t}:t\in[0,1])\) is a standard Brownian excursion. Let us now describe how to read \(\mathbf{t}_{j}\), the filled-in exploration of the tree with target point \(\overline{v}\), using \(\widetilde{C}_{k}\); to do that, define \(j_{C}:=\frac{j}{2k}\).
To construct \(\mathbf{t}^{(m)}\), the exploration of the ball of radius \(m\) in the tree of size \(k\), we expose the values of the function in the set \(\widetilde{C}_{k}^{-1}([0,\sqrt{2k}m])\subseteq[0,1]\) and then we expose the values of \(\widetilde{C}\) in all connected components of \([0,1]\backslash\widetilde{C}_{k}^{-1}([0,\sqrt{2k}m])\) that do not contain the point \(1/2\); we mark the corner where the unseen interval should be connected. Denote by \(\hat{L}(\widetilde{C},m)\) one minus the length of that unexplored interval, take \(m_{j}(\widetilde{C})\) the infimum over \(m\in\mathbb{R}\) such that \(\hat{L}(\widetilde{C},m)>j_{C}\), and define \(L(\widetilde{C},j_{C})=\hat{L}(\widetilde{C},m_{j}(\widetilde{C}))\geq j_{C}\). Note that this construction can be done for any renormalised contour function \(\widetilde{C}\) and any \(j_{C}\in[0,1]\). Finally, we remark that for any renormalised contour function \(\widetilde{C}\) the function \(\hat{L}(\widetilde{C},\cdot)\) is increasing and cadlag. Thanks to Skorohod's representation theorem, we may assume that we work on a probability space where the convergence (5.3) holds almost surely. The fact that \(\hat{L}(\widetilde{C},\cdot)\) is increasing and cadlag implies that in this coupling, for any \(m\in\mathbb{R}\), \[\lim_{l\nearrow m}\hat{L}(\widetilde{C},l)\leq\hat{L}(\widetilde{C},m).\] For the following lemma we need to define two probability laws, \(\mathbb{Q}_{f,\sigma,a,b,rf^{1/4}}\) and \(\mathbb{Q}_{\infty,\infty,a,b,rf^{1/4}}\), as the laws of \(I_{\sigma,f}^{rf^{1/4}}\) and \(I_{\infty}^{rf^{1/4}}\), respectively.
In words, the event \(Q^{f}\) is where the unexplored part has macroscopic size. Figure 9. Explorations of Section 5.1 and Section 5.2. **Left:** Filled-in exploration of the tree with target \(\overline{v}\), where \(\mathbf{t}_{j}\) is coloured in pink with its contour exploration in red. **Right:** Filled-in exploration of \(I_{\sigma,f}\) with target point \(\gamma^{\mathbf{b}}(e^{i\pi})\); the blue region represents the first filled-in ball covering \(I_{\sigma,f}\) of radius \(r_{0}\) and the green region is what is added to obtain the filled-in ball centred at the root vertex of radius \(r_{0}+r\). We need to prove properties (5.5) and (5.6). The fact that there exists \(r>0\) such that property (5.5) holds follows directly from Lemma 9 [1], since our exploration is a fast-forward stage of the exploration used by them. We are left prove that property (5.6) holds. Consider \(q\) such that \(q\in Q^{f}\), we claim that \[\frac{\mathbb{Q}_{f,\sigma,a,b,rf^{1/4}}(q)}{\mathbb{Q}_{\infty,a,b,rf^{1/4}}( q)}\leq K_{\varepsilon,\delta}^{b} \tag{5.7}\] Consider \(f^{\prime}\in\mathbb{N}\) (large enough) then by noting that if a marked map \(q\) has positive probability for \(\mathbb{Q}_{f,\sigma,a,b}\), then it also has positive probability for \(\mathbb{Q}_{f+f^{\prime},\sigma,a,b}\) we note that \[\frac{\mathbb{Q}_{f,\sigma,a,b,rf^{1/4}}(q)}{\mathbb{Q}_{f+f^{ \prime},\sigma,a,b,rf^{1/4}}(q)}\] \[=\frac{q_{f+f^{\prime},\sigma\sqrt{f+f^{\prime}}}}{q_{f,\sigma \sqrt{f}}}\times\frac{q_{m,\ell}}{q_{m+f^{\prime},\ell+\sigma(\sqrt{f+f^{ \prime}}-\sqrt{f})}}\] \[\left(\frac{f+f^{\prime}}{f}\frac{m}{m+f^{\prime}}\right)^{-5/2 }\left(\frac{\sqrt{f+f^{\prime}}}{\sqrt{f}}\frac{\ell}{\ell+\sigma\left( \sqrt{f+f^{\prime}}-\sqrt{f}\right)}\right)^{1/2}\] \[\quad\times\exp\left(\frac{9}{4}\left(\frac{\left(\ell+\sigma \left(\sqrt{f+f^{\prime}}-\sqrt{f}\right)\right)^{2}}{m+f^{\prime}}-\frac{\ell ^{2}}{m}\right)\right)\] \[\sim\left(\frac{f}{m}\right)^{5/2}\left(\frac{\ell}{\sigma\sqrt{ f}}\right)^{1/2}\exp\left(\frac{9}{4}\left(\sigma^{2}-\frac{\ell^{2}}{m} \right)\right)\] \[\leq\left(\frac{f}{m}\right)^{5/2}\left(\frac{\ell}{\sigma\sqrt{ f}}\right)^{1/2}\exp\left(\frac{9}{4}\sigma^{2}\right)\] \[\leq\alpha^{-3}\sigma^{-1/2}\exp\left(\frac{9}{4}\sigma^{2}\right) \tag{5.8}\] Again this bound does not depend on \(f^{\prime}\), so taking the limit we obtain the result. Here we used a "diagonal" version of the convergence to the UIHPQ with simple boundary when the area and the boundary tend to infinite simultaneously, with the boundary of order square root of the area. This "diagonal" version follows from Prop. 2.6 and Lemma 2.7 in [1]. ### "Close to the tree a tree decorated quadrangulation is not unlikely for the ITQ" Now, we use the results before to prove that high probability events that depend only on small neighbourhoods of a finite part of the tree in the ITQ also have high probability for a finite tree decorated quadrangulation. **Proposition 5.4**.: _Let \((\mathbf{q}_{f},\mathbf{t}_{\sigma}^{\mathsf{M}})\) be a tree decorated quadrangulation where \(\mathbf{q}_{f}\) has \(f\) faces and the tree \(\mathbf{t}_{\sigma}^{\mathsf{M}}\) is of size \(\sigma\sqrt{f}\), with \(0<\sigma<\infty\). For any \(\alpha>1\) and \(\varepsilon>0\), we have that with high probability as \(f\to\infty\)_ \[\frac{f^{1/4}}{(\log(f))^{\alpha}}\leq diam(\mathbf{t}_{\sigma}^{\mathsf{M}}) \leq\varepsilon f^{1/4}. 
\tag{5.9}\] Proof.: Let us first look at the diameter of the exploration tree \((\mathbf{t}_{\sigma}^{\mathsf{M}})_{\frac{3}{4}\sigma\sqrt{f}}\), with the notation of Subsection 5.1; i.e. the first tree of size bigger than \(\frac{3}{4}\sigma\sqrt{f}\) in the filled-in exploration of the tree \(\mathbf{t}_{\sigma}^{\mathbf{M}}\). We define the event \(E(f)\) as \[\frac{f^{1/4}}{(\log(f))^{\alpha}}\leq diam(\mathbf{t}_{\sigma}^{\mathbf{M}})_{ \frac{3}{4}\sigma\sqrt{f}}\leq\frac{\varepsilon}{2}f^{1/4}, \tag{5.10}\] We note that if with high probability \(E(f)\) holds, then we can conclude, by triangular inequality and the rerooting invariance, that (5.9) also holds. We note that the event \(E(f)\) only depends on an \(\frac{\varepsilon}{2}f^{1/4}\) neighbourhood of \((\mathbf{t}_{\sigma}^{\mathbf{M}})_{\frac{3}{4}\sigma\sqrt{f}}\). Using the bijection, to an independent pair \((\mathfrak{q}_{f}^{b},\mathbf{t}_{\sigma\sqrt{f}})\), let us define \(a\) and \(b\) such that the contour function of \(\mathbf{t}_{\sigma\sqrt{f}}\) visits a vertex of \((\mathbf{t}_{\sigma\sqrt{f}})_{\frac{3}{4}\sigma\sqrt{f}}\) at time \(b\) but not at \(b+1\), and visits a point of \((\mathbf{t}_{\sigma\sqrt{f}})_{\frac{3}{4}\sigma\sqrt{f}}\) at \([\sigma\sqrt{f}]-a\) but not at \([\sigma\sqrt{f}]-a-1\). All this gives that \(E(f)\) depends only on \(\left(I_{\sigma,f}^{\varepsilon f^{1/4}},(\mathbf{t}_{\sigma\sqrt{f}})_{\frac {3}{4}\sigma\sqrt{f}}\right)\). We define the event \(E_{\infty}(f)\) as \[\frac{f^{1/4}}{(\log(f))^{\alpha}}\leq diam((\mathbf{t}_{\infty}^{\mathbf{M}}) _{\frac{3}{4}\sigma\sqrt{f}})\leq\frac{\varepsilon}{2}f^{1/4}.\] By Lemmas 5.2 and 5.3 together with the definition of \(T^{k}\), we see that the probability of the complement of \(E(f)\) is upper bounded by \[2\delta+K_{3/4,\delta}^{T}K_{\eta,\delta}^{b}\mathbb{P}((E_{\infty}(f))^{c}),\] where we first chose \(\delta>0\), and then take \(\eta<\delta\) such that \(,(\mathbf{t}_{\sigma\sqrt{f}})_{\frac{3}{4}\sigma\sqrt{f}}\in T^{\sigma\sqrt{ f}}\). The probability of the event \(E_{\infty}(f)\) goes to \(1\) as \(f\to\infty\) thanks to Proposition 4.6 and the fact that with high probability, as \(C\to\infty\), \((\mathbf{t}_{\infty}^{\mathbf{M}})_{\frac{3}{4}\sigma\sqrt{f}}\) is contained in \(\mathbf{t}_{\infty}^{\mathbf{M}}(Cf^{1/4})\) and contains \(\mathbf{t}_{\infty}^{\mathbf{M}}(C^{-1}f^{1/4})\). ## 6. Finite continuous volume The objective of this section is to finally prove Theorem 1.1. This result is proven in a way that is analogous to that of the finite discrete volume case, so we only give a quick idea of how to adapt the result. The key result that allows this is the following lemma. **Lemma 6.1**.: _Take a sequence \((\mathbb{P}_{n}:n\in\mathbb{N})\) and \((\mathbb{Q}_{n}:n\in\mathbb{N})\) sequences of probability measures in a Polish space converging to \(\mathbb{P}\) and \(\mathbb{Q}\) respectively, where \(\mathbb{Q}_{n}\) is absolutely continuous with respect to \(\mathbb{P}_{n}\)\((\mathbb{Q}_{n}\ll\mathbb{P}_{n})\). Assume that \((X_{n}:n\in\mathbb{N})\) is a sequence of random variables in the same probability space such that the law of \(X_{n}\) is \(\mathbb{P}_{n}\) and converge a.s. toward \(X\) with law \(\mathbb{P}\). Take \(f\) and \(g\) two continuous function such that_ \[f(X_{n})\frac{d\mathbb{Q}_{n}}{d\mathbb{P}_{n}}(X_{n})\stackrel{{ L^{1}}}{{\to}}g(X), \tag{6.1}\] _where \(g\) is a deterministic function. Then_ \[\mathbb{P}\left[g(X)\right]=\mathbb{Q}\left[f(X)\right]. 
\tag{6.2}\] Proof.: We have that \(\left|\mathbb{E}\left[g(X)-\frac{d\mathbb{Q}}{d\mathbb{P}}(X)f(X)\right]\right|\) is upper bounded by \[\left|\mathbb{E}\left[g(X)-\frac{d\mathbb{Q}_{n}}{d\mathbb{P}_{n}}(X_{n})f(X_{n})\right]\right|+\left|\mathbb{E}\left[\frac{d\mathbb{Q}_{n}}{d\mathbb{P}_{n}}(X_{n})f(X_{n})-\frac{d\mathbb{Q}}{d\mathbb{P}}(X)f(X)\right]\right|.\] We conclude by noting that the left term goes to \(0\) by hypothesis, and the right one goes to \(0\) by the convergence of \(\mathbb{Q}_{n}\) towards \(\mathbb{Q}\). This allows us to show the following. **Lemma 6.2**.: _In the context of Lemma 6.1, assume that there is an open set \(\mathcal{O}\) of the Polish space such that for every continuous function \(f\) with compact support in \(\mathcal{O}\) there is a \(g\) such that (6.1) holds. Then for every such \(f\), a.s._ \[g(X)=f(X)\frac{d\mathbb{Q}}{d\mathbb{P}}\] Proof.: Note that (6.1) implies that \(\frac{d\mathbb{Q}_{n}}{d\mathbb{P}_{n}}(X_{n})\) converges a.s. to \(g(X)/f(X)\) for any \(X_{n}\) that converges to \(X\) with \(f(X)\neq 0\). We conclude by using the characterisation of the integrals of the limit (6.2). We can finally give (a sketch of) the proof of Theorem 1.1. Proof of Theorem 1.1.: We follow the strategy of Section 5. * **Typical case for the CRT is not unlikely for the infinite CRT** As before, we explore a CRT with a marked point and we obtain a result analogous to Lemma 5.2, i.e. that this exploration is not unlikely to happen in the infinite CRT. This could be done directly in the continuum, comparing the Brownian motion with a Brownian excursion. It also follows directly from Lemma 6.2, as the Radon-Nikodym derivative is continuous with respect to the size of the leftover tree, together with the fact that one can take \(T^{k}\) to be an open set. * **Typical case for the Brownian disk is not unlikely for the Brownian half-plane** As before, we mark a boundary point of the Brownian disk, explore it and obtain a result analogous to Lemma 5.3. To do that we use Lemma 6.2 for decorated metric spaces with the Gromov-Hausdorff-Uniform topology. As the explored sets are not themselves Brownian disks, we have to be careful. One just needs to restrict to those maps that live in \(Q^{f}\); in that case, as the length of the boundary remains bounded, one can extract a subsequence along which \(\mathbb{Q}_{\infty,a,b,rf^{1/4}}\mathbf{1}_{Q^{f}}\) and \(\mathbb{Q}_{a,b,rf^{1/4}}\mathbf{1}_{Q^{f}}\) both converge. As the Radon-Nikodym derivative (5.8) is continuous in the length of the map, we can apply Lemma 6.2 to conclude. * **Typical case for the Shocked map near the boundary is not unlikely for the infinite Shocked map** For the final part, we note that the event where the diameter of the image of the tree is \(0\) depends only on a small neighbourhood of the tree itself. Then we proceed as in the proof of Proposition 5.4: we perform an exploration towards a mid-point of the tree, see that this exploration is not unlikely for the infinite shocked map, and use Theorem 3.1 to see that the diameter of the exploration of the tree is \(0\). We then re-root our map and do the same exploration again, to conclude that the whole diameter of the tree is \(0\). To conclude the theorem, notice that we have just proved that after the gluing the CRT has diameter zero, and since every path in the interior of the disk does not change its length, we can upper bound the distance of the gluing by the distance of the Brownian disk where the boundary is identified with one point.
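The comparison used both in the proof of Theorem 3.1 and in the last sentence above rests on a simple monotonicity of quotient pseudo-distances; we record it here in our own words (it is not stated explicitly in the text). With the notation of Remark 2.20, if every pair of points identified by a relation \(\sim_{1}\) is also identified by \(\sim_{2}\), then every chain of jumps allowed in the computation of the \(\sim_{1}\)-pseudo-distance is also allowed for \(\sim_{2}\), so that for all \(x,y\)
\[\mathsf{d}_{\sim_{2}}(x,y)\leq\mathsf{d}_{\sim_{1}}(x,y)\leq\mathsf{d}(x,y).\]
In particular, identifying the whole boundary of the Brownian disk to one point can only decrease distances compared with the gluing along the CRT; the diameter-zero statement proved above is what supplies the reverse inequality in the last sentence of the proof.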
## Appendix A Markov property of the Brownian half-plane In this section, we discuss the Markov property of the Brownian half-plane in the special case where a filled-in ball is used. A general result of this type has already been announced in [13]; however, as that paper is not yet published, we give a short proof here. **Proposition A.1**.: _Let \(\mathbf{H}\) be a Brownian half-plane. The filled-in ball with target point at infinity, \(B_{r}^{\bullet}(\mathbf{H})\), and its complement with respect to \(\mathbf{H}\), i.e. \(\mathbf{H}^{\prime}=\mathbf{H}\setminus B_{r}^{\bullet}(\mathbf{H})\), are independent; moreover, \(\mathbf{H}^{\prime}\), properly rooted, has the law of a BHP._ Proof.: We already know that this is the case for the UIHPQ, \(\mathfrak{q}\). If we renormalise the UIHPQ as \(n^{-1}\mathfrak{q}\), we have convergence to \(\mathbf{H}\) in the local Gromov-Hausdorff topology. As \(B_{r}^{\bullet}(n^{-1}\mathfrak{q})\) and \(n^{-1}\mathfrak{q}\setminus B_{r}^{\bullet}(n^{-1}\mathfrak{q})\) are independent and converge in law to \(B_{r}^{\bullet}(\mathbf{H})\) together with \(\mathbf{H}\setminus B_{r}^{\bullet}(\mathbf{H})\), we conclude. ## Appendix B Convergence of the map with a simple boundary In our proofs we make reference to Theorem 1 of [1]; we present it here for completeness. Consider \(\tilde{Q}_{f,p}\), a uniform random quadrangulation with \(f\) internal faces and with a simple boundary of length \(2p\). **Theorem B.1** (Theorem 1 [1]).: _For a sequence \((p_{f}:f\in\mathbb{N})\) satisfying \(p_{f}\sim 2\alpha\sqrt{2f}\), it holds_ \[\left(\frac{9}{8f}\right)^{1/4}\tilde{Q}_{f,p_{f}}\xrightarrow[f\to\infty]{(d)}\mathfrak{D}_{3\alpha}\] _in distribution for the Gromov-Hausdorff topology._ In fact, here we use a strengthened version of this theorem in which the Gromov-Hausdorff-Prohorov-Uniform distance (see [11, Eq. (1.3)]) plays the role of the Gromov-Hausdorff distance. The Gromov-Hausdorff-Prohorov-Uniform topology keeps track of both the area and perimeter measures of the map. The reason why this generalization of Theorem 1 of [1] holds is the following. We studied the \(\varepsilon\)-restrictions \((\mathcal{R}_{f}^{\varepsilon})\)13, which are roughly the filled-in explorations started from an interior vertex with target point placed at \(1/3\) of the counter-clockwise perimeter and stopped the first time the exploration hits a point in between \(1/3-\varepsilon\) and \(1/3\) of the perimeter (see fig. 10). In order to control the perimeter and area of the complement of the \(\varepsilon\)-restriction \((\overline{\mathcal{R}}_{f}^{\varepsilon})\), we proved that with high probability they have the same order as the perimeter and area of the map, respectively, and they both go to zero as \(\varepsilon\) goes to zero14. Footnote 13: Defined in Section 2.2. of [1]. Footnote 14: This is a consequence of the volume and perimeter estimates given in Section 4.2 of [1]. Figure 10. The exploration starts at \(\rho\) and targets the green point \(t\); it grows from lighter to darker purple until it hits the red segment for the first time at the point \(v_{-}\), where the exploration stops. The complement of the restriction is highlighted in orange. The counter-clockwise perimeter starting from \(t\) and going to \(v_{+}\)
2309.11535
Access to the full 3D Brillouin zone with time resolution, using a new tool for pump-probe ARPES
Here we report the first time- and angle-resolved photoemission spectroscopy (TR-ARPES) with the new Fermiologics "FeSuMa" analyzer. The new experimental setup has been commissioned at the Artemis laboratory of the UK Central Laser Facility. We explain here some of the advantages of the FeSuMa for TR-ARPES and discuss how its capabilities relate to those of hemispherical analyzers and momentum microscopes. We have integrated the FeSuMa into an optimized pump-probe beamline that permits photon-energy- (i.e., kz-) dependent scanning, using probe energies generated from high harmonics in a gas jet. The advantages of using the FeSuMa in this situation include the possibility of taking advantage of its "fisheye" mode of operation.
Paulina Majchrzak, Yu Zhang, Andrii Kuibarov, Richard Chapman, Adam Wyatt, Emma Springate, Sergey Borisenko, Bernd Büchner, Philip Hofmann, Charlotte E. Sanders
2023-09-20T17:24:45Z
http://arxiv.org/abs/2309.11535v1
# Access to the full 3D Brillouin zone with time resolution, using a new tool for pump-probe ARPES ###### Abstract Here we report the first time- and angle-resolved photoemission spectroscopy (TR-ARPES) with the new Fermiologies "FeSuMa" analyzer. The new experimental setup has been commissioned at the Artemis laboratory of the UK Central Laser Facility. We explain here some of the advantages of the FeSuMa for TR-ARPES and discuss how its capabilities relate to those of hemispherical analyzers and momentum microscopes. We have integrated the FeSuMa into an optimized pump-probe beamline that permits photon-energy- (_i.e.,_\(k_{z}\)-) dependent scanning, using probe energies generated from high harmonics in a gas jet. The advantages of using the FeSuMa in this situation include the possibility of taking advantage of its "fisheye" mode of operation. pacs: 73.20.At,73.20.r,07.81.a,07.05.Fb Introduction Pump-probe time- and angle-resolved photoemission spectroscopy (TR-ARPES) presents challenges, both with respect to light sources and to detection, that do not arise in "static" ARPES measurements of systems at equilibrium. In this paper, we describe the commissioning of the newly-developed "FeSuMa" analyser on a beamline for high-harmonic generation (HHG) at the UK Central Laser Facility's Artemis Laboratory. We demonstrate efficient acquisition of high-quality ARPES spectra of optically pumped excitations close to the Fermi level, and we use the FeSuMa's "fisheye" measurement mode in combination with the beamline's capability to switch rapidly between ultraviolet photon energies that are generated as high harmonics in an Ar gas jet. We suggest that this measurement configuration offers major benefits as a cost-efficient laboratory-scale approach to time-resolved TR-ARPES. ### State of the Art in Pump-Probe TR-ARPES Compared to its static equivalent, an ultrafast TR-ARPES measurement adds a "pump" laser pulse, which--arriving at a well-defined delay time \(\Delta\) before the system is probed--promotes the system into an optically excited state. The transient, out-of-equilibrium state and its evolution in time are the subject of study. Both the probe and the pump pulses must be short (_i.e.,_ must have narrow width in the time domain) relative to the time scales of the physical phenomena to be measured, and the pulse train of the pump must be well synchronized to that of the probe. The pulse widths and the pulse synchronization determine the limits of time resolution in the experiment. #### i.1.1 Background: Light Sources Pump-probe methods require the simultaneous generation of synchronized pulse trains of very different energies. In TR-ARPES, an infrared (IR) or visible pulsed beam is needed for excitation, while an ultraviolet (UV) or extreme ultraviolet (XUV) beam probes the system _via_ photoexcitation. The Fourier limit places a strict boundary on the energy resolution achievable in an experiment, depending on what time resolution is needed (or, vice versa, if energy resolution is critical, then the Fourier limit determines the maximum achievable time resolution). The demands on the resolution are set by the time and energy scales of the physical phenomena of interest: for example, studies of electron-electron interactions typically demand time resolution of no worse than tens of fs [1]. Light sources commonly used to supply the pulsed ultraviolet probe beam include tabletop laser setups [2; 3] and free-electron lasers [2; 4; 5]. This paper deals with tabletop laser setups. 
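As a rough illustration of the Fourier-limit trade-off mentioned above, the following back-of-envelope sketch relates pulse duration to the minimum achievable probe bandwidth. It assumes transform-limited Gaussian pulses, which is an idealisation introduced here for illustration rather than a statement about the Artemis pulses.

```python
# Time-bandwidth product for transform-limited Gaussian pulses:
# dE [meV] * dt [fs] >= 0.441 * h  (~ 1825 meV*fs).
PLANCK_MEV_FS = 4.135667e-15 * 1e3 * 1e15  # Planck constant in meV*fs (~ 4135.7)
TBP_GAUSSIAN = 0.441                       # FWHM time-bandwidth product of a Gaussian

def fourier_limited_bandwidth_mev(pulse_fwhm_fs: float) -> float:
    """Minimum spectral FWHM (meV) for a Gaussian pulse of the given duration (fs)."""
    return TBP_GAUSSIAN * PLANCK_MEV_FS / pulse_fwhm_fs

for dt in (10, 30, 100):  # illustrative pulse durations in fs
    print(f"{dt:4d} fs  ->  bandwidth >= {fourier_limited_bandwidth_mev(dt):6.1f} meV")
```

A 10 fs pulse therefore cannot be narrower than roughly 180 meV in energy, which is why studies of fast electron-electron dynamics necessarily trade away meV-scale energy resolution.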
Wavelengths down to approximately 115 nm (_i.e.,_ energies up to approximately 11 eV) are achievable with commercial off-the-shelf laser systems [6]. However, even at 11 eV, one can access only a relatively small section of momentum space up to approximately 1.3 A\({}^{-1}\). Off-the-shelf laser systems cannot generate photons in the tens-of-eV range that allows ARPES to access the full three-dimensional (3D) Brillioun zone of most crystalline materials, or that gives access to shallow-lying core-level states. To reach this range in a tabletop setup, one typically relies on HHG--usually in a gas jet [3; 7]. It is possible to use a single powerful laser to generate both the IR pump and the HHG XUV beam. An advantage of this approach is that the pulse trains of the two beams are automatically synchronized. The method works by taking the IR beam from a commercial laser system and splitting it into two parts, one of which is used to generate HHG, and the other of which is sent along a separate beam path for use as a pump. A movable delay stage in one of the beamlines (typically the pump beamline) controls the pulse separation \(\Delta\), and then the two beams are recombined. In a 3D-dispersing system, the probe photon energy determines which part of the 3D Brillouin zone is measured. Control of the probe energy also allows for optimization of photoemission intensity in electronic states of interest, via control of final state and matrix element effects [8; 9; 10; 11; 12]. While this is the basis for photon-energy-dependent synchrotron-based studies of out-of-plane-dispersing "\(k_{z}\)" states, the situation is more challenging for laser-based experiments: while a synchrotron (or FEL) undulator can generate probe photons with continuously tunable energy across a wide range [13], no such continuous spectrum is possible with laser-based HHG. Rather, HHG produces a "frequency comb" of odd-ordered harmonics [14] (Fig. 1(b)). A single frequency from the comb can be selected with a combination of reflective and transmissive optics [15]. At Artemis, we take an alternative approach, using a grating monochromator, which spatially separates the frequencies of the HHG comb into a "fan", and a slit that picks out a single frequency from the fan [7; 16]. When the monochromator is properly aligned, any frequency in the comb can be quickly selected on-the-fly, which provides great flexibility to choose different photon energies [7]. The power of this type of approach has recently been demonstrated [16]. #### ii.1.2 Background: Photoelectron Detection and Analysers The best-established technology for photoelectron spectroscopy is the hemispherical analyser (HA). This tool measures photoemission intensity as a function of momentum and energy across a wide range of binding energies and with energy resolution that is better than 1 meV [17]. The HA has been the workhorse of the photoemission community, and is likely to remain so for the foreseeable future. Moreover, state-of-the art HA technology increasingly incorporates advanced features; for example, spin detection. However, if electronic states of interest do not correspond to a single set of emission angles along the slit direction of the analyzer, then multiple HA measurements must be taken--either by rotating the sample in front of the analyzer, or by applying "deflector" voltages to the electron lens column in order to sample emission angles away from the slit direction. 
This works well, but can be a challenge in the context of pump-probe measurements, where each data set is intrinsically time-consuming (on account of the need to acquire spectra at multiple time delays). When important physics arise simultaneously in multiple parts of the Brillouin zone, or when photon-energy-dependent measurements will cause the 3D Brillouin zone to shrink and expand on the detector, an alternative approach to detection is desirable. Moreover, in the case of short-pulse pump-probe applications, the high energy resolution of the analyser greatly exceeds the Fourier limit of the short light pulses. Recent years have seen rapid advancements in new types of analyser technologies [18; 19; 20]. Some are based on time-of-flight (ToF), among which are various types of photoemission electron microscopes (PEEMs) and momentum microscopes (MMs) [21]. These new techniques are powerful, permitting sophisticated momentum-space and real-space mapping, but they also present new challenges. In the case of PEEM and ToF-MM, a large potential difference must be applied between the sample and the objective lens of the electron optics. Because these are close together (on the order of several mm), there is a possibility of dielectric breakdown across the small vacuum gap and, as a result, sample damage. Another challenge, in the case of pulsed-probe measurements, arises with space charge effects in the electron optics [5]. This latter issue is discussed below. Very recently, a new type of simple, economical, and yet highly effective photoelectron analyser has been developed. The working principle--similar to that of velocity map imaging [22; 23]--has been described in a recent paper [24]. From the point of view of pump-probe measurements, we find that this new analyser, which is available commercially under the name "FeSuMa" ("Fermi Surface Mapper"), offers advantages over other types of analysers for some of the most commonly required types of pump-probe ARPES measurements. The technology offers an efficient approach to measurement of the full Brillouin zone. It has a very straightforward application to states near the Fermi surface, which are the primary states of interest for many pump-probe ARPES studies; however, it can also probe deeper-lying states, including shallow-lying core levels. When used on a beamline with photon-energy control, it offers an efficient method for 3D measurements of \(k_{z}\)-dispersing states. Additionally, its user-friendly operation and compact profile make it easy to incorporate into crowded lab spaces and into vacuum chambers that might contain several other tools for various other types of measurements. In the sections that follow, we describe how these advantages have been integrated into our tabletop HHG beamline at Artemis to take advantage of the special capabilities of the FeSuMa in the context of pump-probe ARPES measurements. ## II Experimental Details ### Optical Setup In Fig. 1, we present the layout of the Artemis optical setup for the tests described here. The pump and probe pulses are generated from the output of a 1-kHz Ti:Sapphire laser (upgraded from the RedDragon, KMLabs) with a pulse energy of about 3 mJ at 790 nm. The bandwidth is approximately 50 nm, as shown in Fig. 1(a). The output laser beam is split into two parts, with 80% focused onto a 200 \(\mu\)m Ar gas jet, via a lens of 500-mm focal length, for HHG. An example of the resulting XUV spectrum--_i.e._ the frequency comb--is shown in panel (b).
For these conditions, the usable high-harmonic energies range from approximately 17 eV to 45 eV, with a maximum photon flux of about \(10^{10}\) photons/second/harmonic at 27 eV. A single harmonic is selected by a time-preserving grating monochromator [25]. To avoid space-charge effects, which are discussed in Section IV.2, the photon flux was reduced to \(10^{8}\) photons/second/harmonic by an adjustable slit after the monochromator. The remaining 20 percent of the output beam is used for pumping, either at its fundamen tal wavelength or after frequency-doubling or -quadrupling by beta barium borate crystals. A delay stage in the pump beamline enables time-resolved measurements. A half-wave plate (HWP) and a quarter-wave plate (QWP) are added into the pump beamline for polarization control: Fig. 1(c) shows calibration data for the HWP rotation angle (QWP angle was held fixed). The pump beam is finally focused on the sample, using a lens with focal length of 1.5 m. The pump and probe beams reach the sample almost collinearly, with an angle of 45\({}^{\circ}\) relative to the sample normal when the sample is at normal emission relative to the detector. The pump fluence is about 2 mJ/cm\({}^{2}\), with pump spot size of approximately 250 \(\mu\)m at FWHM (450 \(\mu\)m at 1/e\({}^{2}\) width). The XUV spot size of about 80 \(\mu\)m is measured roughly by the size of the spot on a scintillating crystal, and confirmed by the FeSuMa in Direct Mode (see Section II.2). The images of the two beam spots recorded by FeSuMa are presented in Fig. 1(d). The time resolution is determined from the auto-correlation spectrum, as shown in the Supplementary Material [26]. Figure 1: (a) The spectrum of the Ti:Sapphire laser, centered at about 790 nm with a bandwidth of approximately 50 nm. (b) High-harmonics spectrum generated in the argon gas jet. The maximum photon flux is approximately 10\({}^{10}\) photons/second/harmonic at 27 eV. A time-preserving monochromator is used to choose between the probe energies. (c) Calibration curves for the incident pump polarisation, acquired as measured intensity through a polarizer after the QWP as the HWP is rotated. (d) The spot sizes of the probe (80 \(\mu\)m, FWHM) and pump beams (250 \(\mu\)m, FWHM), captured by FeSuMa. (e) A simplified schematic of the experimental setup. The locations of delay stages are labeled “DS,” while “BS” indicates an 80-20 beam splitter. ### Working Principles of FeSuMa The FeSuMa is a new type of ARPES analyser that combines Fourier electron optics with retarding field techniques [24]. The lens of the device consists of several cylindrical elements that represent the simplest element of electron optics--the Einzel lens. It focuses parallel electron beams, originating from the sample surface, into corresponding points in the focal plane. This is similar to the action of a convex optical lens which makes a Fourier transformation of light. The novelty of the approach is in placing the detector, a multichannel plate (MCP), directly in the focal plane, and applying a retarding potential, \(V_{r}\), to the front of the MCP. In practice, the focal points lie not on a plane but on a curved surface, and the detector is placed so as to achieve a reasonable balance between angular acceptance and angular resolution. The signal is amplified by a pair of MCPs in "chevron" geometry, and is converted into photons by a phosphorus screen. A camera outside the vacuum captures the image and sends it to the computer for further processing. 
By setting \(V_{r}\) such that only Fermi-level electrons can reach the detector from an unpumped sample, one can observe the Fermi surface map directly on the screen. In order to obtain information about electrons with higher binding energies, \(V_{r}\) is reduced step-by-step while the detector collects the integrated signal. Subsequent differentiation results in a conventional photoemission spectrum. An example of such a measurement is in Fig. 2(a), where we show a Bi core-level spectrum acquired from the Bi(111) surface [26]. The spin-orbit splitting in the Bi \(5d\) doublet is well resolved when the spectrum is differentiated, in Fig. 2(b). In like manner, to obtain the intensity distribution of a photoemission signal from valence states as a function of momentum and energy, a three-dimensional data set is recorded and then differentiated along the energy axis across a smaller range of energies close to the Fermi level. We show the example for the case of Bi(111) in Fig. 2(c), where the Fermi surface, momentum distribution, and underlying dispersion of the electronic states are visible. Due to the semimetallic nature of bulk Bi, the photoemission intensity at the Fermi level is dominated by surface states [27]. The bulk and surface Brillouin zone (BZ) of Bi(111) is provided in Fig. 2(d) for reference. The FeSuMa operates in three regimes: Fourier Mode, Direct Mode and Optical Mode. Within the first of these regimes, there are actually three settings, characterized by angular acceptances of \(\pm 8^{\circ}\), \(\pm 14^{\circ}\) and \(\pm 16^{\circ}\). Angular acceptance in the Fourier modes can be extended by applying a bias potential (see discussion below and Fig. 5)--a technique that is also used in conventional ARPES [29]. The FeSuMa's ability to instantly detect the angular distribution of intensity allows the parameters to be quickly adjusted, minimising the distortion of the electric field caused by any non-cylindrical symmetry in the sample environment. In the Direct Mode, the lens projects an image of the electron source in real coordinates; thus, it can be used to characterize and track the beam spot in two dimensions (see Fig. 1(d)). This is a significant advantage in comparison with conventional HAs, where only one spatial coordinate, corresponding to the direction along the entrance slit, is accessible. Since the MCP is sensitive to UV photons, Direct Mode can also be used to detect reflected or scattered light from surface features and sample edges, and thus either to track the position of the photon beam or to find flat portions of the surface (since no photons should enter the Figure 2: (a) Example of static angle-integrated raw data obtained from Bi(111) by scanning the retarding potential \(V_{r}\) (\(h\nu\) = 37.4 eV, sample measurement temperature 300K). (b) Same data as in (b), after differentiation. The core-level spectrum of Bi(111) is now well resolved. (c) Example of an (\(E\), \(k_{x}\), \(k_{y}\))-resolved data set, acquired without optical pumping from Bi(111) (\(h\nu\) = 22.4 eV, sample measurement temperature 78K). The directions of the cuts correspond to the directions of the lines (matched in colour to the frames of the two spectra) in the constant-energy contour at left. Note the scale bar at the bottom: high-symmetry points \(\mathrm{\bar{K}}\) and \(\mathrm{\bar{M}}\) are outside the range shown in the panel. 
The asymmetry of intensity in the cut along \(\mathrm{\bar{M}}\)-\(\mathrm{\bar{\Gamma}}\)-\(\mathrm{\bar{M}}\) arises from matrix element effects that can be seen clearly in the constant energy slices at left. (d) Schematic of the Bi BZ, with high-symmetry points labeled. Panel (d) is adapted from Ref. [28]. analyser from a flat sample region if the electron signal is optimised). We finally mention here an advantage of the FeSuMa for pump-probe experiments: unlike in HAs and MMs, electron trajectories in the FeSuMa (being an order of magnitude shorter) do not pass through auxiliary focal planes or crossing points. In HAs, there are two imaging planes and one crossing point where electron trajectories are brought together (_e.g.,_ Ref. [30]), and Coulombic electron-electron interactions are presumably enhanced at such points. It is generally desirable to avoid such space charge effects, as they degrade angular and energy resolution. In the case of MMs, electron-electron interactions both inside the focusing column and in front of the objective lens are complex and problematic [19; 5; 31]. The FeSuMa's design, which reduces the effects of space charge inside the electron optics, is beneficial to pump-probe measurements. This will be discussed further below. ## III Proof of principle data In the following, we summarise the versatile applications of the FeSuMa analyser when coupled with a pump-probe setup. To facilitate comparison with similar approaches involving HAs and MMs [19], we benchmark the capabilities of the system using a widely studied layered transition metal dichalcogenide, cleaved bulk trigonal prismatic tungsten diselenide (\(2H\)-WSe\({}_{2}\)). Bulk WSe\({}_{2}\) is an indirect bandgap semiconductor [32] with a hexagonal BZ that is sketched in Fig. 3(a). Its valence band maximum (VBM) is located at the \(\Gamma\)-point, and the conduction band minimum (CBM) at the \(\Sigma\)-valley, in between \(\Gamma\) and \(K\). Upon optical excitation with a circularly polarized infrared pulse, the material exhibits spin-, valley-, and layer-polarisation [33]. We start by demonstrating a simple approach to a common (but historically challenging) application of TR-ARPES: namely, characterization of excited carrier relaxation between local conduction band minima in different parts of the BZ. In Fig. 3(b), we present the evolution of excited state signals that have been collected with \(V_{r}\) set so as to probe just above the Fermi level. Since every electron with a kinetic energy greater than \(eV_{r}\) is collected by the FeSuMa, all the unoccupied states can be monitored concurrently, regardless of their energy dispersion. A comprehensive discussion of the dynamics, both for bulk and single-layer WSe\({}_{2}\), can be found in multiple publications (_e.g._ Refs. [34; 35; 33]). Here, we simply highlight that the FeSuMa allows detection of localised charge populations in a large portion of the BZ simultaneously, allowing for identification of scattering pathways in the material. The time traces in Fig. 3(c) were collected over 20 mins, corresponding to 36 s of acquisition per frame. As can be seen in the figure, the statistics are excellent, despite having been acquired with a low probe flux of only \(10^{8}\) photons/second. We note certain limitations of the efficient approach just described: here, time-resolved measurements are performed by integration, maintaining \(V_{r}\) at a set value. 
The analysis of data acquired in this way can be challenging if there are multiple excitations at different binding energies but similar \(k\); furthermore, access to information about band curvatures is restricted. Of course, the dataset can be extended to four dimensions (\(k_{x}\), \(k_{y}\), \(E\), \(\Delta\)), simply by sweeping \(V_{r}\) in the manner described above (leading to longer acquisition times). An advantage that the FeSuMa shares with PEEM and momentum microscopy is the capability for maintaining a fixed sample geometry while mapping the momentum space. Figure 3: (a) Surface (dark blue) and three-dimensional (green) Brillouin zones of \(2H\)-WSe\({}_{2}\). (b) (\(k_{x}\), \(k_{y}\)) slices corresponding to different stages of the ultrafast evolution of the system after pumping with \(s\)-polarized light at 800 nm (probe photon energy \(h\nu=22.6\) eV, sample temperature 78K). The sample is rotated such that \(\bar{\Gamma}\) is at the left edge of the detector. The \(\bar{\rm K}\) and \(\bar{\Sigma}\) points of the Brillouin zone are labeled. (i) Just after the optical excitation, only \(\bar{\rm K}\)-points are populated. (ii) Within 50 fs of the optical excitation, electrons can be seen to transfer from the \(\bar{\rm K}\)-valleys to \(\bar{\Sigma}\)-valleys. Retarding potential is set such that \(E-E_{F}=0.65\) eV. (iii) At longer times after the pump arrival, all of the excited electronic population has either transferred to the \(\bar{\Sigma}\)-points or has already relaxed fully. (c) Ultrafast dynamics of \(2H\)-WSe\({}_{2}\): orange (green) markers denote photoemission intensity difference, relative to pre-pumped intensity, integrated over the \(\bar{\rm K}(\bar{\Sigma})\)-points of the Brillouin zone. These temporal dynamics are in good agreement with previously published results [19; 33]. Incident light polarisation can remain fixed and photoemission matrix elements unchanged throughout an experiment, and one can straightforwardly extract information such as dichroism from excited-state populations. In Fig. 4(a), we show the excited carrier distributions that arise in 2\(H\)-WSe\({}_{2}\) pumped with four polarisations: linear vertical (LV), linear horizontal (LH), circular right (CR), and circular left (CL). Here, the choices of photon energy and acceptance angle do not image the whole BZ, but allow us to simultaneously see dynamics at the inequivalent \(\bar{\rm K}\)- and \(\bar{\rm K}\)'-points and at the corresponding \(\bar{\Sigma}\)- and \(\bar{\Sigma}\)'-points. The excitation with linearly polarised light leads to negligible linear dichroic (LD) contrast in the population at \(\bar{\rm K}\)- and \(\bar{\Sigma}\)-points. Pumping with LH light produces a strong signal around the \(\bar{\Gamma}\)-point; this is a known consequence of multi-photon photoemission process enhanced by this polarisation[36; 37]. On the other hand, we see significant circular dichroic (CD) signal at the adjacent \(\bar{\rm K}\)- and \(\bar{\rm K}^{\prime}\)-points. This arises due to a combination of (1) the primarily two-dimensional character of the states at \(\bar{\rm K}\)- and \(\bar{\rm K}^{\prime}\), and (2) the surface sensitivity of the ARPES measurement[33; 38]. Indeed, the low photoelectron kinetic energies in the measurements described here mean that these spectra are highly sensitive to the physics of the topmost atomic layer of the crystalline structure[39]. A full movie of dynamics in a different material system--Bi(111)--is available in the Supplementary Materials[26]. 
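For concreteness, the following is a hedged sketch of how valley-resolved traces such as those in Fig. 3(c) can be extracted from a stack of momentum maps recorded at a fixed retarding potential. The array layout, region-of-interest definition and baseline convention are illustrative assumptions, not the analysis code used for the data shown here.

```python
import numpy as np

def valley_trace(stack, kx, ky, centre, radius):
    """Integrate intensity inside a circular ROI around one valley, for every delay.

    stack: momentum maps of shape (n_delays, n_ky, n_kx)
    kx, ky: 2-D momentum grids with shape (n_ky, n_kx)
    centre, radius: ROI centre (kx0, ky0) and radius in the same momentum units
    """
    mask = (kx - centre[0]) ** 2 + (ky - centre[1]) ** 2 <= radius ** 2
    return stack[:, mask].sum(axis=1)

def pump_induced_change(trace, delays, t0=0.0):
    """Subtract the mean pre-pump signal so the trace shows the pump-induced change."""
    baseline = trace[np.asarray(delays) < t0].mean()
    return trace - baseline
```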
A powerful aspect of the Artemis setup is its ability to switch efficiently between different HHG probe energies. (See also Ref.[16].) This is possible because of carefully optimized optical alignment in the beamline and fine angular control of the final toroidal focusing mirror. Thus, we can coarsely map the out-of-plane dispersion of unoccupied states, in a manner analogous to that by which the occupied-state \(k_{z}\)-dispersion is obtained at synchrotron light sources. We demonstrate this principle in Fig. 4(b). Varying the probe energy leads to strikingly different excited state signals across the BZ. The lowest-lying conduction-band states along the \(K\)-\(H\) path are nearly non-dispersive[40], and are visible at all photon energies. However, the scattering from \(K\) to \(\Sigma\) is well captured at only one probe energy, 22.4 eV. In this connection, we note both that the out-of-plane dispersion along the \(\Sigma\)-\(X\) path is more pronounced than that along \(K\)-\(H\) path[40], and also that the photoemission matrix elements are presumably enhanced at particular probe energies[10; 16; 41]. The first of these points highlights the importance of thoughtful HHG photon-energy selection in studies of materials in which 3D-dispersing band structures play an important role; this is the case, for example, in Weyl candidates Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\)[42] and PtTe\({}_{2}\)[43]. The second points to the possibility of using matrix element effects to optimise signal-to-noise for all types of samples, including those with a primarily 2D electronic character. Figure 4: (a) Spin- and valley-polarised excited carriers in the surface electronic band structure of 2\(H\)-WSe\({}_{2}\) along the \(\bar{\Gamma}\)-\(\bar{\text{K}}\) high-symmetry line. At left, adapted from Ref. [33], red and blue color-coding in the band structure plot refers to the spin polarisation of the bands. Light and dark arrows symbolise right- and left-circular polarisation of the pump pulse. The data shows pump polarization-dependent measurements of excited-state spectra. The labels at the upper right-hand corners indicate linear vertical (LV) and horizontal (LH) polarizations, linear dichroism (LD) as a difference plot of LH-LV, circular right (CR) and left (CL) polarizations, and circular dichroism (CD) as a difference plot of CL-CR. Probe energy was 22.6 eV, and the spectra were collected at the time delay of 200 fs. (b) Probe-photon-energy-dependent excited-state spectra at the peak of excitation (top row) and at 200 fs after the excitation (bottom row). The probe photon energy is indicated at the upper right-hand corner at the top of each column. The out-of-plane electronic dispersion along \(\Sigma\)-X leads to a photon-energy-dependence in the photoemission intensity of the states projected into \(\bar{\Sigma}\). Meanwhile, the states along K-H are nearly non-dispersing, and thus the photoemission intensity in those states is nearly independent of probe photon energy. The schematic of the out-of-plane-dispersing band structure is adapted from Ref. [33]. Data were acquired with \(V_{r}\) set such that \(E-E_{F}=0.65\) eV. ## IV Additional Technical Considerations ### "Fisheye" Data Acquisition Applying a bias voltage to the sample holder is a very convenient approach to increase the momentum field of view [29]. Due to the additional component of the field towards the analyser (shown schematically in Fig. 
5(a)), electron trajectories are bent, and electrons that initially Figure 5: (a) Electric fields between sample and FeSuMa analyser for normal operation mode (top) and when ”fisheye voltage” is applied (bottom). Calculated with SIMION [44]. (b) Fermi surface of Bi(111) taken with probe energy of 16.2 eV, with increasing “fisheye voltage” applied. (The focusing conditions were optimized at condition ii, with the result that the focusing conditions are slightly non-optimal in (i) and (iii) and the image appears off-centre on the detector.) (c) Workflow for applying the corrections required for datasets acquired with “fisheye voltage”. (i) Conversion between position on the MCP, \((x,y)\), to emission angle, \(\theta_{x},\theta_{y}\), based on the calibration radial function (red curve) obtained from ray tracing. (ii) Angle-to-momentum transformation. (iii) Angular and radial corrections to the image based on the expected symmetry of the intensity distribution. Black dot-dashed lines represent the portions of the image that need correction, while arrows indicate the direction of the corrections. deviate strongly from the lens axis are nevertheless able to enter the analyser. Thus, using photon energies of only 16.2, 22.2, 28.6, and 34.2 eV, we cover portions of the momentum space at the Fermi level that are much larger than we would otherwise be able to access without the fisheye voltage, achieving radii of 0.81, 0.88, 0.9, and 0.97 A\({}^{-1}\), respectively. The drawback of this approach is that it can lead to distortions resulting from the presence of the electrical field, especially when cylindrical symmetry around the lens axis is broken by the sample's immediate environment (_i.e.,_ non-cylindrical sample holder, manipulator shape, cables, etc.). Because we can easily see the momentum distribution "live" before acquiring a spectrum, we can take some steps to minimize distortions by adjusting of the geometry of the experiment. Further processing after the measurement, based on purely symmetry-driven considerations, allows us to eliminate all visible distortions of the angular distribution. This will now be explained. We introduce two types of corrections to deal with angular and radial distortions, taking as our starting point the known symmetries of our material systems. In the angular case, we are concerned with a segment of the dataset where there are distortions like those illustrated schematically by black dashed lines in the left panel of Fig. 5(c)(iii). We take the two axes A and B, as indicated by dashed lines leading to the red and green triangles, respectively, in the left panel of Fig. 5(c)(iii). In the affected segment of the data we then shift all points that lie along the A-axis onto the B-axis. For all other points in this segment, a linear interpolation then squeezes the part of the image that lies to the left of B and stretches the part of the image that lies to the right of B. For the radial correction, we show an illustrative example in Fig. 5(iii). In this simple cartoon, we only need to correct one portion of the image that is obviously compressed relative to the others. Identifying the two points C and D that lie along the same axis (orange and purple triangles in the central panel of Fig. 5(c)(iii)), we perform a linear interpolation such that C is moved onto D, and all other points in a segment are stretched (or squeezed) linearly while keeping the centre of the image intact. ### Space-charge As sketched schematically in Fig. 
6(a), space charge arises due to Coulombic repulsive interactions within the dense cloud of photoelectrons emitted from the sample surface, leading to energy shifts and distortions of electron trajectories as they move towards the analyser [45; 46]. The resultant photoemission spectra exhibit reduced effective energy- and momentum-resolution, as well as other artifacts, such as shifting of spectra and possible "ghost" peaks [46]. The energy shift and broadening are illustrated schematically in Fig. 6(b). In addition to the fact that a dense cloud of Coulombically interacting photoelectrons can be generated by the probe pulse, the pump beam can produce an unwanted cloud of "slow" secondary electrons via multiphoton photoemission and emission from surface defects [47]. This latter effect can contribute additional space-charge effects. In our setup, photoemitted electrons are tightly confined in space and time only once, at the sample surface, before they interact with the MCP [24]. This is an advantageous situation relative to HAs and MMs, where additional focal planes and spatial confinement can cause further Coulombic interaction [5; 18; 19]. Moreover, in a ToF, a long-range electric field develops as slow electrons produced by the pump propagate through the lens tube, and fast valence electrons experience an accelerating or decelerating force, depending on the time delay, culminating in a "fake" time-zero at a large (tens-of-ps) time delay [5]. These effects are largely avoided in the FeSuMa. The retarding voltage readily repels the slowest secondaries--possibly even at the very entrance of the lens column, depending on their kinetic energies--so as to reduce their interaction with the other photoelectrons in the lens tube. Of course, the severity of distortions always depends also on XUV beam diameter and on pulse energy [48]. At the relatively low 1-kHz repetition rate of the Artemis set-up that was used for this particular experiment, the choice of photon flux was a compromise between the space-charge and the acquisition time required to achieve sufficient signal-to-noise ratio. In future experiments on the Artemis 100-kHz beamline, we expect to see this issue partially remedied. In Fig. 6(c), we characterise the spectral modifications due to probe-induced space-charge. We use Bi(111) spectra to estimate the shift of the Fermi edge, which occurs across the entire investigated XUV range as a function of flux [46]. Fig. 6(d) shows the pump-induced Fermi-edge shift. The pump-induced spectral distortions exhibit a complex dependence on the time delay, and are present over a range of several picoseconds after temporal overlap [49]. The secondary-electron population scales non-linearly with the \(n^{th}\)-power of the laser fluence and, in general, affects primarily the low kinetic energy portion of a spectrum. These pump-fluence-dependent measurements were made at a time delay of 150 fs before the optical excitation, in order to exclude the effects of real ultrafast dynamics happening in the sample. Up to a fluence of approximately 5 mJ cm\({}^{-2}\), the spectra are virtually unaltered by any pump-induced space-charge. Above this threshold, the spectral shift shows a power dependence of \(F^{x}\) (\(x=2.7\pm 0.1\)), in agreement with a previous study of the excitation with a 1.55 eV pump [47].
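The kind of power-law fit quoted above (shift \(\propto F^{x}\) with \(x=2.7\pm 0.1\)) can be sketched as follows; the threshold handling, function names and starting values are assumptions for illustration, not the analysis code used for Fig. 6(d).

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(fluence, amplitude, exponent):
    # shift = amplitude * F**exponent
    return amplitude * fluence ** exponent

def fit_shift_vs_fluence(fluence_mj_cm2, shift_mev, threshold=5.0):
    """Fit only the points above the ~5 mJ/cm^2 threshold where shifts are observed."""
    fluence_mj_cm2 = np.asarray(fluence_mj_cm2, dtype=float)
    shift_mev = np.asarray(shift_mev, dtype=float)
    above = fluence_mj_cm2 > threshold
    popt, pcov = curve_fit(power_law, fluence_mj_cm2[above], shift_mev[above], p0=(1.0, 2.7))
    return popt, np.sqrt(np.diag(pcov))  # (amplitude, exponent) and their 1-sigma errors
```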
## V Conclusions The FeSuMa offers a simple, affordable approach to high-quality pump-probe photoemission measurements, particularly for time-resolved ARPES of valence and conduction states near the Fermi level. Like certain PEEM-based approaches, it permits measurement of dynamics spanning the entire Brillouin zone. In the context of an HHG beamline that permits scanning of the probe energy, the fisheye mode of operation offers particular benefit for Figure 6: (a) Cartoon of space-charge generated in a pump-probe photoemission experiment. Coulombic interactions occur between charged particles in a dense cloud of photoemitted electrons. (b) Qualitative impact of space-charge on photoemission spectra. Dark (light) blue peaks represent the electron distribution just after photoemission (after travel towards the analyser). (c) Measured probe-induced peak shift of the spectrum at the peak of excitation, as a function of probe photon flux (measured as photocurrent \(I_{PD}\) induced in a photodiode that can be extended into the beampath. Error bars are estimated from the standard deviation in detected photoelectron counts.) (d) Measured pump-induced peak shift of the Fermi edge before the excitation, as a function of pump fluence. (Horizontal error bars estimated from typical fluctuation in pump power as measured with power meter [26], vertical error bars determined by uncertainty in fitting the FL shift). studies of 3D-dispersing states. The FeSuMa is highly complementary to hemispherical analyzers, and constitutes an attractive option for laboratory-scale measurements of electron dynamics. Measurements of conduction-band dynamics in layered 2H-WSe\({}_{2}\) yield excellent agreement with previously published results based on momentum microscopy. ###### Acknowledgements. We thank Phil Rice, Alistair Cox, and the CLF Engineering Section for technical support; and Drs. James O. F. Thompson and Marco Bianchi for helpful discussion. We acknowledge funding from VILLUM FONDEN through the Centre of Excellence for Dirac Materials (Grant No. 11744) and from the Independent Research Fund Denmark (Grant No. 1026-00089B). Work at the Artemis Facility is funded by the UK Science and Technology Facilities Council. The research leading to these results has received funding from LASERLAB-EUROPE (grant agreement no. 871124, European Union's Horizon 2020 research and innovation programme). Supplementary material is available online. It includes a movie of data acquired with the FeSuMa across a range of delay times before and after an optically pumped excitation in Bi(111)/Bi\({}_{2}\)Se\({}_{3}\), and information about the time resolution and laser stability on the beamline used for this experiment.
2309.17144
Prototype Generation: Robust Feature Visualisation for Data Independent Interpretability
We introduce Prototype Generation, a stricter and more robust form of feature visualisation for model-agnostic, data-independent interpretability of image classification models. We demonstrate its ability to generate inputs that result in natural activation paths, countering previous claims that feature visualisation algorithms are untrustworthy due to the unnatural internal activations. We substantiate these claims by quantitatively measuring similarity between the internal activations of our generated prototypes and natural images. We also demonstrate how the interpretation of generated prototypes yields important insights, highlighting spurious correlations and biases learned by models which quantitative methods over test-sets cannot identify.
Arush Tagade, Jessica Rumbelow
2023-09-29T11:16:06Z
http://arxiv.org/abs/2309.17144v1
# Prototype Generation: Robust Feature Visualisation for Data Independent Interpretability ###### Abstract We introduce _Prototype Generation_, a stricter and more robust form of feature visualisation for model-agnostic, data-independent interpretability of image classification models. We demonstrate its ability to generate inputs that result in natural activation paths, countering previous claims that feature visualisation algorithms are untrustworthy due to the unnatural internal activations. We substantiate these claims by quantitatively measuring similarity between the internal activations of our generated prototypes and natural images. We also demonstrate how the interpretation of generated prototypes yields important insights, highlighting spurious correlations and biases learned by models which quantitative methods over test-sets cannot identify. ## 1 Introduction Interpretability techniques have become crucial in the era of increasingly powerful artificial intelligence (AI) systems [1, 2, 3, 4]. As AI models continue to outperform human benchmarks across numerous tasks and domains, the importance of understanding their decision-making processes has never been more pressing. This is particularly true for black-box deep learning models, which are increasingly employed across various industries ranging from healthcare to autonomous vehicles [5, 6]. Apart from safety concern in these high-stakes domains, EU law requires certain AI systems to comply with the 'right to explanation' [7] making interpretability crucial for business operations. Over the past decade, various methods have been developed to improve human understanding of complex AI models. Techniques such as LIME [8], SHAP [9], CAM [10] and Network Dissection [11, 12] have targeted local interpretability, offering explanations of model decisions for individual data points. However, these techniques cannot provide a global understanding of _what a model has learned overall_, which is necessary for comprehensive analysis and trust in automated systems. In this work, we focus on feature visualisation [13] as a powerful interpretability tool able to extract such holistic insights from arbitrary neural networks. Despite its promise, feature visualisations have not been without criticism. Past research has pointed out the disparity between internal processing of feature visualisations as compared to other natural images [14] by observing path similarity. We discuss these criticisms further in Section 2. Addressing these limitations, we introduce _Prototype Generation_ in Section 3, a robust visualisation tool that not only contains determining features for any given class but also maintains equal or better path similarity with natural images. Our experiments using Resnet-18[15] and InceptionV1[16] show that prototypes generated using our method are highly similar to natural images in terms of internal path activations. Understanding the model at a global level helps in identifying systemic biases, uncovering spurious correlations, and potentially refining the model for better performance and fairness. We use prototype generation to discover undesired correlations and identify failure modes on unseen data in Section 4, demonstrating how our method provides _data-independent_ insights, removing the need for time-consuming manual inspection of training datasets to subjectively identify unwanted biases. 
Through this, our contribution serves the broader goal of enhancing global data-independent interpretability in deep learning models, thereby making them more transparent, accountable, and trustworthy. ## 2 Related Work Feature visualisation is a method to extract information from a model about what it has learned [17, 18, 19, 13, 20]. Unlike local interpretability methods that focus on individual predictions, feature visualisation is a _global_ interpretability method that aims to visualise the features that different neurons inside a model have learned to respond to. Observing feature visualisations to understand model behaviour is a data-independent approach to interpretability, allowing for qualitative assessment of a model's internal logic irrespective of any test dataset - and so, can be used to find failure modes irrespective of whether examples of those failures exist in a test set. This technique works by generating an input \(\hat{x}\) that maximises a chosen output logit or internal activation (in this case, output logit \(c\) with respect to model \(h\)): \(\hat{x}=\arg\max\limits_{x}h_{c}(x)\). Feature visualisation has been used for a number of purposes, such as identifying specialised circuits within neural networks [21] and understanding the learned features of individual nodes or groups of nodes [22]. Despite its utility, feature visualisation is not without its detractors. One prominent line of criticism comes from Geirhos et al.[14], arguing that the visualisations may not truly represent what the model has learned, and so cannot be reliably used to predict its behaviour on unseen data in the future. These criticisms are substantiated by experiments that manipulate feature visualisations to produce misleading or contradictory representations without changing the model's decision-making process. They also introduce the path similarity metric to quantify this. This metric measures the similarity between internal activation 'paths' caused by two different inputs across the layers of a neural network. If two inputs excite similar neurons, this leads to a high path similarity between these two inputs. The measure of similarity chosen by Geirhos et al.[14] is Spearman's rank order correlation (referred to as spearman similarity (SS) in the rest of this paper). This path similarity metric is used to show the disparity between internal activations in response to natural images versus feature visualisations of the same class. Geirhos et al.[14] also provide a proof that feature visualisation cannot formally guarantee trustworthy results, claiming that without additional strong assumptions about the neural network feature visualisations can't be trusted. However, this is also the case with any existing evaluation metric - on a given test set, two models may perform exactly alike, but there is always the possibility that they will differ on some unknown future input. Therefore, we argue that feature visualisation based approaches should not be seen as a magic bullet - but rather as an important and practically useful complement to quantitative assessment metrics. In the sections that follow, we show that feature visualisations of a specific kind - _prototypes_ - generated using our method contain key features for the class they represent, and maintain a consistent path similarity with natural images. By doing so, we overcome some of the limitations previously associated with feature visualisation. 
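To make the objective \(\hat{x}=\arg\max_{x}h_{c}(x)\) concrete, the following is a minimal PyTorch sketch of plain activation maximisation by gradient ascent on a single output logit. It implements only the bare objective discussed above, not the authors' method (their additions are described in Section 3); the class index, learning rate and step count are arbitrary illustrative choices.

```python
import torch
from torchvision import models

# Plain activation maximisation: gradient ascent on one output logit of a frozen classifier.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1).eval()
for p in model.parameters():
    p.requires_grad_(False)

target_class = 654                      # arbitrary ImageNet index, purely illustrative
x = torch.rand(1, 3, 224, 224, requires_grad=True)
optimiser = torch.optim.Adam([x], lr=0.05)

for _ in range(256):
    optimiser.zero_grad()
    loss = -model(x)[0, target_class]   # maximise the chosen class logit
    loss.backward()
    optimiser.step()
    x.data.clamp_(0.0, 1.0)             # keep pixel values in a valid image range
```

Without further constraints this naive objective tends to produce high-frequency, out-of-distribution inputs, which is precisely the shortcoming the prototypes of Section 3 are designed to avoid.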
## 3 Prototype Generation For a given model \(M\), we define a prototype \(P\) as an input that maximally activates the logit corresponding to \(c\), while keeping the model's internal activations in response to that input close to the distribution of 'natural' inputs. Let \(\mathbb{I}\) represent the set of all possible natural inputs that would be classified by model \(M\) as belonging to class \(c\). We aim to generate a prototype \(P\) such that it aggregates the representative features of a majority of inputs in \(\mathbb{I}\). Formally, we posit that the activations \(\textbf{A}_{P}\) of \(P\) are 'closer' to the mean activations \(\textbf{A}_{\mathbb{I}}\) of all \(I\in\mathbb{I}\) than any individual natural image \(I\) across all layers \(\mathbb{L}\) in \(M\). We measure 'closeness' between \(\textbf{A}_{P}\) and \(\textbf{A}_{\mathbb{I}}\) using two metrics: L1 distance and spearman correlation. We use spearman similarity as per Geirhos et al.[14] to allow for direct comparison of our methods with their published work. We also use the L1 distance to further substantiate this comparison. Calculating spearman similarity involves ranking the activations in terms of magnitude, whereas calculating L1 distance preserves the magnitude of activations. Since the input images are subject to preprocessing and so belong to a set training distribution, the magnitude of activations is relevant information that provides a more complete picture of path similarity. Using L1 distance our formal assertion is that, \[\sum_{l\in\mathbb{L}}|\textbf{A}_{l_{l}}-\textbf{A}_{P_{\mathcal{L}_{l}}}| \leq\sum_{l\in\mathbb{L}}|\textbf{A}_{l_{l}}-\textbf{A}_{I_{l}}| \tag{1}\] For the rest of the paper we will denote \(\sum_{l\in\mathbb{L}}|\textbf{A}_{l_{l}}-\textbf{A}_{P_{\mathcal{L}_{l}}}|\) as \(D_{P}\) and \(\sum_{l\in\mathbb{L}}|\textbf{A}_{l_{l}}-\textbf{A}_{I_{l}}|\) as \(D_{\textbf{I}}\). Denoting spearman similarity as SS, our formal assertion is that: \[SS(\textbf{A}_{l_{l}},\textbf{A}_{P_{\mathcal{L}_{l}}})\geq SS(\textbf{A}_{l_ {l}},\textbf{A}_{I_{l}}),\forall l\in L,\forall\textbf{I}\in\mathbb{I} \tag{2}\] If both of these conditions are satisfied, we can confidently assert that prototype P shows prototypical qualities of the class \(c\), and contains features representative of the model's understanding of that class. ### Our Method Existing feature visualisation methods aim to generate an input that maximally activates a selected neuron inside a neural network. Prototype generation is similarly a technique that generates an input, but with the objective of Figure 1: Example prototypes generated by our method for the ImageNet classes Mosquito Net, Alaskan malamute and Flute with their average spearman similarity across all layers of Resnet-18 denoted in brackets Figure 3: Comparison between our method and feature visualisation method proposed by Olah et al.[13] Figure 2: Example prototypes generated by Olah et al.[13]’s method for the ImageNet classes Mosquito Net, Alaskan malamute and Flute with their average spearman similarity across all layers of Resnet-18 denoted in brackets maximally activating a selected output logit (rather than an internal activation), as shown in Figure 1. This positions prototype generation as a specialised form of feature visualisation, distinguished by its focus on class-specific logits rather than internal activations. 
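The two closeness measures in Equations 1 and 2 can be computed in practice with forward hooks; a hedged sketch follows, assuming PyTorch hooks on convolutional and linear layers and SciPy's spearmanr. The choice of which layers to hook is an assumption made here for illustration.

```python
import torch
from scipy.stats import spearmanr

def capture_activations(model, x, layer_types=(torch.nn.Conv2d, torch.nn.Linear)):
    """Run a forward pass and return one flattened activation vector per hooked layer."""
    acts, handles = [], []
    for module in model.modules():
        if isinstance(module, layer_types):
            handles.append(module.register_forward_hook(
                lambda m, inp, out, store=acts: store.append(out.detach().flatten().cpu())))
    with torch.no_grad():
        model(x)
    for h in handles:
        h.remove()
    return acts

def path_closeness(acts_a, acts_b):
    """Per-layer L1 distance (Eq. 1) and Spearman rank correlation (Eq. 2)."""
    l1 = [torch.abs(a - b).sum().item() for a, b in zip(acts_a, acts_b)]
    ss = [spearmanr(a.numpy(), b.numpy())[0] for a, b in zip(acts_a, acts_b)]
    return l1, ss
```

Comparing the prototype's activation path against the class-mean activations layer by layer then reduces to calling `path_closeness` on the two captured lists.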
When a generated input satisfies the criteria we have laid out, remaining within the domain of 'natural' inputs and capturing representative features of its corresponding class, we term it as the model's 'learned prototype' for that class. Here, 'ideal representation' is used to signify that this learned prototype encapsulates what the model perceives as the most representative or 'ideal' features for categorising inputs into that particular class. Our approach differs from the existing feature visualisation methodology in a number of ways, as shown in Figure 3, We compare our implementation with the publicly available _Lucent_[23] library - the PyTorch implementation of the methodology proposed by Olah et al.[13]. Both implementations begin with a randomly initialised image. Lucent converts this randomly initialised image to the Fourier basis, but we find (as shown later) that this causes the resulting feature visualisations to be unrepresentative of natural class features. In contrast, we do not optimise in the Fourier basis, instead optimising the input pixels directly. We first optimise to minimise what we call probability variance loss, \(L_{pv}\) to generate a baseline input. This loss ensures that the output logits for our input image are balanced i.e. the input image has roughly an equal chance of being predicted to be a member of any class. Our preprocessing steps vary depending on the model's expectations; for models trained on ImageNet, this involves a normalisation shift using the mean and standard deviation of the ImageNet training set - of whatever preprocessing the model expects for inference. Additionally we apply random affine transformations to constrain the optimisation process and discourage the generation of out-of-distribution (OOD) adversarial inputs - we further discuss the effect of these transformations in Appendix A. Lucent uses similar random transformations, but does not tune them for path similarity. The difference in the resultant prototypes for _Lucent_ and our method can be seen clearly by comparing Figures 1 and 2. We define two losses: \(L_{c}\), the negative of the logit for the desired class; and \(L_{hf}\), the high-frequency loss that penalises large differences between adjacent pixel values. We use both \(L_{c}\) and \(L_{hf}\) to define our combined loss whereas Olah et al.[13] employ only \(L_{c}\). ### Experiments Figure 4: **Comparison of L1 Distance**. The L1 distances are normalised such that 0 corresponds to the mean L1 distance between natural image activations of the same class and 1 corresponds to the mean L1 distance between natural image activations of different classes. We assess the prototypes generated by observing how closely the prototype's activations mirror the average activations of natural images in the same class. We quantify closeness between activations by calculating L1 distance and spearman similarity as defined in Section 2. Appendix A contains information about hyperparameters and other implementation details. **L1 distance.** We generate prototypes for 11 random classes from Resnet-18 and InceptionV1 and collect 100 random images from each of these classes. We use these prototypes and images to calculate average \(D_{P}\) and average \(D_{I}\) across all 11 classes. In the case of Resnet-18 we find that for the 67 layers in the model, \(D_{P}\) is lower than \(D_{I}\) for 55/67 i.e. 82.1% of Resnet-18 layers. For InceptionV1, we find that \(D_{P}\) is lower than \(D_{I}\) for 150/223 layers i.e. 67.2% of all layers. 
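For concreteness, the following is a minimal sketch of the optimisation loop described in Section 3.1: direct pixel optimisation of a combined loss \(-\mathrm{logit}_{c}+\lambda\,L_{hf}\) under random affine jitter and ImageNet normalisation. The loss weighting, transform ranges and step count are illustrative assumptions (the paper's tuned values are in its Appendix A), and the probability-variance initialisation step is omitted here.

```python
import torch
import torchvision.transforms as T
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1).eval()
for p in model.parameters():
    p.requires_grad_(False)

normalise = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
jitter = T.RandomAffine(degrees=5, translate=(0.05, 0.05), scale=(0.95, 1.05))

def high_freq_loss(img):
    """L_hf: penalise large differences between adjacent pixel values."""
    return (img[..., 1:, :] - img[..., :-1, :]).abs().mean() + \
           (img[..., :, 1:] - img[..., :, :-1]).abs().mean()

def generate_prototype(target_class, steps=512, lr=0.05, lambda_hf=1.0):
    """Directly optimise pixels to maximise one class logit (L_c) plus lambda_hf * L_hf."""
    x = torch.rand(1, 3, 224, 224, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = model(normalise(jitter(x)))
        loss = -logits[0, target_class] + lambda_hf * high_freq_loss(x)
        loss.backward()
        opt.step()
        x.data.clamp_(0.0, 1.0)
    return x.detach()
```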
Figure 4 shows \(D_{P}\) and \(D_{I}\) across all layers in Resnet-18 and InceptionV1. We also plot \(D_{I_{dc}}\) where \(I_{dc}\) denotes the set of all images that belong to different classes than the class \(P\) is generated for. It is clear to see that \(D_{P}\) is lower than both \(D_{I}\) and \(D_{I_{dc}}\) for most of the layers for prototypes generated from both Resnet-18 and InceptionV1 showing that the prototypes approximately satisfy our formal assertion related to L1 distances as specifed in Equation 1. For the majority of the model's layers, generated prototypes results in activations that are closer to the mean _natural_ activation, than any individual natural input image. **Path similarity per layer.** To characterise the path similarity of our generated prototypes, we select 11 random classes from the ImageNet dataset. We approximate \(A_{\mathbb{I}}\) by averaging the activations of 100 randomly selected images from each of the 11 classes. For each class \(c\), we generate a prototype \(P\), capture its activations \(A_{P}\), and also capture activations from the individual images in \(\mathbb{I}\). To allow for direct comparison of our results with those reported by Geirhos et al.[14], we also select 100 random images from other classes and capture their activations denoted by \(A_{I_{dc}}\). Our raw results consist of three sets of spearman similarity scores between, * Approximated \(A_{\mathbb{I}}\) and \(A_{P}\) * Approximated \(A_{\mathbb{I}}\) and \(A_{\boldsymbol{I}}\), averaged across all \(\boldsymbol{I}\in\mathbb{I}\) * Approximated \(A_{\mathbb{I}}\) and \(A_{\boldsymbol{I}_{dc}}\), averaged across all \(\boldsymbol{I}_{dc}\in\mathbb{I}_{dc}\) This raw data is normalised such that 1 corresponds to the spearman similarity obtained by comparing natural images of the same class, and 0 corresponds to the spearman similarity obtained by comparing images of one class against images of different classes. To reduce noise in our plots showing spearman similarity between \(A_{\mathbb{I}}\) and \(A_{P}\), we smooth the curve using _scipy.ndimage.convolve_ with a window size of 10 for the mean values, and 5 for the standard deviations. Figure 5 shows the normalised path similarity obtained by making the above comparisons. Our experimental results show that these prototypes support our formal assertion for Spearman similarity specified in Equation 2 for 38/67 i.e. 56.7% of all layers and have higher path similarity than natural images for all layers of Resnet-18 averaged across 11 different classes. In the case of InceptionV1 we find that our formal assertion holds for 147/224 i.e 65.6% of all layers in InceptionV1 averaged across the same 11 classes. Using our raw results, we also quantify the mean spearman similarity across all layers of Resnet-18 and InceptionV1 in Table 1. We can see that the average spearman similarity of our generated prototypes is higher than other natural images belonging to the same class on average, for both Resnet-18 and InceptionV1. 
\begin{table} \begin{tabular}{|c|c|} \hline & **Average Spearman similarity** \\ \hline Prototype \(P\) & \(0.54\pm 0.06\) \\ \hline Same class images \(\mathbb{I}_{c}\) & \(0.50\pm 0.05\) \\ \hline Diff class images \(\mathbb{I}_{dc}\) & \(0.41\pm 0.06\) \\ \hline \end{tabular} (a) Resnet-18 \begin{tabular}{|c|c|} \hline & **Average Spearman similarity** \\ \hline Prototype \(P\) & \(0.56\pm 0.07\) \\ \hline Same class images \(\mathbb{I}_{c}\) & \(0.50\pm 0.05\) \\ \hline Diff class images \(\mathbb{I}_{dc}\) & \(0.40\pm 0.06\) \\ \hline \end{tabular} (b) InceptionV1 \end{table} Table 1: Comparison of average Spearman similarity Figure 5: **Comparison of Spearman Similarity**. Spearman similarities are normalised such that 1 corresponds to the Spearman similarity between natural image activations of the same class and 0 corresponds to the Spearman similarity between natural image activations of different classes. Here we show examples on two different networks, and for comparative purposes also provide the results of the same experiment from Geirhos et al.[14]. Note that our method produces super-normal results, with early layer activations from prototypes being closer to the mean natural activation than any natural input. ## 4 Prototype Insights Since our method generates prototypes that have high path similarity with natural images, we might expect to be able to better understand _what_ models have learned about given classes simply by observing their generated prototypes for those classes. Here follows a case study to test whether information present in our prototypes can reliably predict model behaviour on unseen data. We focus on prototypes for two ImageNet classes: the academic gown prototype and the mortarboard prototype, generated by Resnet-18, as shown in Figure 6. **Hypothesis 1:** Resnet-18 will have higher accuracy classifying lighter-skinned people wearing academic gowns than darker-skinned people wearing academic gowns. This hypothesis emerges from the observation that the academic gown prototype in Figure 6(a) shows a lighter-skinned face prominently. We test this hypothesis by observing the performance of Resnet-18 on two different sets of images, one containing lighter-skinned people wearing academic gowns and the other containing darker-skinned people wearing academic gowns. We collect 50 random images for each of these sets from the internet, taking care to maintain a general parity between the two sets in image quality, setting and size. As shown in Table 2, lighter-skinned people wearing academic gowns are more likely to be classified as the academic gown class than darker-skinned people. **Hypothesis 2:** Resnet-18 is likely to misclassify images of mortarboards as academic gowns if the images contain both a mortarboard and a face. By observing differences in the mortarboard and academic gown prototypes, we see that the mortarboard prototype has a much weaker representation of a face, compared to the academic gown prototype. This leads us to hypothesise that an image containing both a mortarboard and a face is likely to be misclassified as an academic gown. To test this hypothesis we again observe the performance of Resnet-18 on a set of images containing mortarboards with no faces and a set of images containing mortarboards and a face. We ensure that the mortarboards with faces have no presence of academic gowns. Results were again as expected: Table 3 shows that mortarboards with faces are more likely to be misclassified as academic gowns.
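The checks behind these hypotheses can be scripted directly. The sketch below is illustrative rather than the exact evaluation script we used: it assumes each curated set of images sits in its own folder (with at least one class subdirectory, as `ImageFolder` expects), and the target class index shown is a placeholder that should be looked up in the ImageNet class mapping.

```python
import torch
from torchvision import models, transforms
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

ACADEMIC_GOWN_IDX = 400   # placeholder: confirm against the ImageNet class mapping

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def evaluate_set(folder, target_idx=ACADEMIC_GOWN_IDX):
    """Top-1 accuracy for `target_idx` and its mean softmax probability over a folder of images."""
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1).eval()
    loader = DataLoader(ImageFolder(folder, transform=preprocess), batch_size=16)
    hits, probs, n = 0, 0.0, 0
    with torch.no_grad():
        for imgs, _ in loader:
            p = torch.softmax(model(imgs), dim=1)
            hits += (p.argmax(dim=1) == target_idx).sum().item()
            probs += p[:, target_idx].sum().item()
            n += imgs.size(0)
    return hits / n, probs / n
```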
\begin{table} \begin{tabular}{|l|c|c|} \hline & **Accuracy** & **Probability (AG)** \\ \hline **Lighter-skinned people** & 72.5\% & 0.62 \\ \hline **Darker-skinned people** & 60\% & 0.54 \\ \hline \end{tabular} \end{table} Table 2: Comparison of Resnet-18 performance on lighter- and darker-skinned people wearing academic gowns, along with the probability of prediction for academic gown (AG) Figure 6: Example prototypes from Resnet-18 Inspection of these prototypes directly hints at biases embedded in the training data. For instance, the prominence of a lighter-skinned face in the academic gown prototype suggests that the training dataset might have had an over-representation of lighter-skinned individuals - and inspection of the ImageNet training dataset shows that this is indeed the case. This imbalance, when unaddressed, could lead to real-world consequences where certain demographic groups consistently experience lower accuracy, perpetuating biases. The differences observed between prototypes can further shed light on potential areas of misclassification. Using the mortarboard example, understanding how the model interprets and prioritises features can help identify when and why an image might be misclassified; this model seems to be great at classifying mortarboards in a vacuum, but the inclusion of facial features leads to misclassification. Crucially, all of this is done without reference to any test set of data, meaning that our insights are not constrained only to misclassifications contained within that limited test set. In both these cases, we didn't have to comb through the entire dataset; rather, observing the prototypes provided us with a data-independent way of understanding Resnet-18's behaviour on unseen data. While metrics on a test set can provide a broad overview of a model's performance _with respect to that test set_, they often don't provide the granularity needed to understand _why_ a model might be likely to fail on future, unseen data. Prototypes can meaningfully augment existing evaluation metrics in a number of ways: * **Identifying dataset bias.** If a model shows bias, as in the lighter-skinned academic gown prototype, this points at bias in the data. Armed with this knowledge, the dataset can be modified or augmented to remove this bias, and the model retrained to improve performance on underrepresented classes or features. * **Spotting spurious correlations.** By comparing prototypes for closely related classes, one can discern which features are given undue importance, enabling deeper understanding of model failures due to the presence of potentially misleading features. * **Rapid Iteration.** Model developers can generate prototypes during training, spotting issues like biases or potential misclassifications early in the process. These insights also enable targeted data augmentation, requiring the collection and preprocessing of only those data samples needed to correct a specific problem in the model, rather than just throwing more (potentially also biased) data at the problem. This means more rapid iteration and correction, saving both time and resources. ## 5 Discussion The primary advantage of our methodology is its ability to furnish insights into what a model has learned. In situations where the stakes are high - medical diagnoses, financial predictions, or autonomous vehicles, to name a few - a deeper comprehension of what a model has learned is important for safety.
It moves us from a regime of blindly trusting the outputs of our models if the test set accuracy is high, to one where we can verify that a model has learned sensible features and will therefore perform reliably in deployment. Our prototypes allow us to essentially engage in an iterative feedback loop that continually enhances a model's performance: * **Prototype Generation.** Initially, we generate prototypes to visualise the model's understanding of different classes. * **Insight Extraction.** Once these prototypes are available, they can reveal specific biases or tendencies in the model's learning. As shown in Section 4, if the prototype of an 'academic gown' predominantly features a lighter-skinned individual, it highlights a potential bias in the model's understanding of that class. * **Targeted Retraining.** Based on the insights derived from the prototypes, targeted retraining can be conducted. Using our earlier example, the model can be retrained with a more diverse set of images representing 'academic gowns', thus rectifying its inherent bias. Furthermore, if a model is underperforming on a specific class and the reason is not immediately clear from the validation data, generating a prototype can shed light on the shortcomings in its learning. This proactive approach facilitates the identification of what the model has learned or perhaps, more importantly, what it has failed to learn. \begin{table} \begin{tabular}{|c|c|c|c|} \hline & **Accuracy** & **Probability (MB)** & **Probability (AG)** \\ \hline **Mortarboard without face** & 92.3\% & 0.77 & 0.05 \\ \hline **Mortarboard with face** & 70.5\% & 0.67 & 0.25 \\ \hline \end{tabular} \end{table} Table 3: Comparison of Resnet-18 performance on mortarboards with and without faces, along with the probability of prediction for mortarboard (MB) and academic gown (AG) Moreover, interpretability techniques of this kind make _knowledge discovery_ possible - that is, if we are able to train a model to perform a task that humans cannot, we can use interpretability to understand what patterns it has identified in its training data that we are unaware of, and thereby gain new insights about that data. **Future Work** While our method has shown promise, it's essential to acknowledge its limitations. We cannot, as of now, provide a formal proof that feature visualisation of this nature will consistently offer useful insights across all use cases and models. Geirhos et al.[14] also raise the issue of fooling circuits, and demonstrate that it is possible to make visual changes in the prototype that can still maximally activate a given class logit without containing any representative features of the class, which we do not address. We wish to address these limitations in future work, and expand our analysis with further case studies of models deployed in the real world, with reference to both bias detection and knowledge discovery. We will also apply prototype generation to other modalities such as tabular data and language, to see if insights similar to Section 4 can be gleaned from prototypes in these other modalities as well.
2308.00077
A Novel Deep Learning based Model to Defend Network Intrusion Detection System against Adversarial Attacks
Network Intrusion Detection System (NIDS) is an essential tool in securing cyberspace from a variety of security risks and unknown cyberattacks. A number of solutions have been implemented for Machine Learning (ML), and Deep Learning (DL) based NIDS. However, all these solutions are vulnerable to adversarial attacks, in which the malicious actor tries to evade or fool the model by injecting adversarial perturbed examples into the system. The main aim of this research work is to study powerful adversarial attack algorithms and their defence method on DL-based NIDS. Fast Gradient Sign Method (FGSM), Jacobian Saliency Map Attack (JSMA), Projected Gradient Descent (PGD) and Carlini & Wagner (C&W) are four powerful adversarial attack methods implemented against the NIDS. As a defence method, Adversarial Training is used to increase the robustness of the NIDS model. The results are summarized in three phases, i.e., 1) before the adversarial attack, 2) after the adversarial attack, and 3) after the adversarial defence. The Canadian Institute for Cybersecurity Intrusion Detection System 2017 (CICIDS-2017) dataset is used for evaluation purposes with various performance measurements like f1-score, accuracy etc.
Khushnaseeb Roshan, Aasim Zafar, Shiekh Burhan Ul Haque
2023-07-31T18:48:39Z
http://arxiv.org/abs/2308.00077v1
A Novel Deep Learning based Model to Defend Network Intrusion Detection System against Adversarial Attacks ###### Abstract Network Intrusion Detection System (NIDS) is an essential tool in securing cyberspace from a variety of security risks and unknown cyberattacks. A number of solutions have been implemented for Machine Learning (ML), and Deep Learning (DL) based NIDS. However, all these solutions are vulnerable to adversarial attacks, in which the malicious actor tries to evade or fool the model by injecting adversarial perturbed examples into the system. The main aim of this research work is to study powerful adversarial attack algorithms and their defence method on DL-based NIDS. Fast Gradient Sign Method (FGSM), Jacobian Saliency Map Attack (JSMA), Projected Gradient Descent (PGD) and Carlini & Wagner (C&W) are four powerful adversarial attack methods implemented against the NIDS. As a defence method, Adversarial Training is used to increase the robustness of the NIDS model. The results are summarized in three phases, i.e., 1) before the adversarial attack, 2) after the adversarial attack, and 3) after the adversarial defence. The Canadian Institute for Cybersecurity Intrusion Detection System 2017 (CICIDS-2017) dataset is used for evaluation purposes with various performance measurements like f1-score, accuracy etc. Adversarial Machine Learning, Adversarial Attacks, Adversarial Defence, Network Intrusion Detection, Deep Neural Network. ## I Introduction Machine Learning (ML) and Deep Learning (DL) based algorithms are widely popular and extensively adopted in various sectors like Transportation [1], Healthcare [2], Image and Speech Recognition [3, 4, 5], Machine Translation [5], Network Intrusion Detection Systems (NIDS) [6], Cybersecurity [7, 8, 9, 10, 11] and much more. The tremendous growth of ML/DL based systems has become possible due to the ready availability of affordable computational power, such as cloud services and multi-GPU and TPU support, which leads to promising results for future automation. The research community has been working to improve the efficiency of ML and DL algorithms in terms of various performance metrics for more than a decade [12][13]. However, the generalization and robustness capability of the ML and DL algorithms cannot be ignored in today's era. This includes the model's ability to deal with adversarial cyber attacks. ML and DL methods are vulnerable to adversarial attacks, which are intentionally crafted inputs (perturbation examples) that mislead the system into producing incorrect results. Adversarial examples are the biggest vulnerability of ML and DL algorithms, hindering their adoption in mission-critical applications such as NIDS and streaming and online ML/DL learning systems. Evasion, Extraction, Poisoning, and Inference are the four types of adversarial machine learning. In the case of NIDS, the Evasion attack leads the model to misclassify the malicious network traffic as benign. The Poisoning attack aims to corrupt the NIDS model by inserting adversarial points during its training phase that cause the model to act in a way that is advantageous to the malicious user. The other type of attack is Extraction, where the malicious actor gathers information about the learning algorithms and their parameters to rebuild the same model and later uses it to attack and learn the targeted system.
In the Inference attack, the malicious actor attempts to analyse the dataset information on which the learning system is trained or tested without accessing it. The black-box and white-box approaches are the two categories of adversarial machine learning. In the white-box approach, the malicious actor knows the learning algorithm; however, in the black-box method, little or no information is known about the learning algorithm. Adversarial machine learning is extensively explored in the unconstrained domain (e.g. image and object classification and recognition); however, it is less explored in constrained areas. In the case of the unconstrained domain, the adversary can fully exploit the features or pixels of the object/image. But in a constrained domain, the situation is different as 1) features may be correlated with each other, 2) features can be binary, continuous or categorical, and 3) some feature values cannot be changed by the adversary and remain fixed. All these factors make it unclear "whether the constrained domain is less vulnerable to adversarial examples". This hypothesis was tested by Sheatsley et al. [14]. The authors concluded that the generated misclassification rate is greater than 95% when experimenting with two algorithms, adaptive JSMA and histogram sketch generation. In this research work, we empirically analysed the effects of four powerful adversarial attack algorithms: FGSM [15], JSMA [16], PGD [17] and C&W [18] on the DL-based NIDS. We further studied one of the most powerful defence methods to safeguard NIDS from adversaries by applying the adversarial training defence mechanism. The overall research is organized as follows: Section II discusses the related study, and Section III explains the proposed approach. Section IV discusses the experimental results, and Section V concludes the overall research work. ## II Related Work Adversarial machine learning is a common area of machine learning and computer security. Szegedy et al. [19] first discovered the intriguing properties of Neural Networks (NN) in 2014, which revealed that NNs are vulnerable to adversarially crafted inputs. The authors empirically proved this by generating adversarial inputs with a box-constrained optimization-based algorithm. Three datasets, namely MNIST, ImageNet and approximately 10M image samples from YouTube, are utilized with different NN architectures and hyperparameter settings. Since then, the research community has been exploring and searching for new methods for adversarial machine learning. Wang [20] explored adversarial machine learning algorithms in a supervised NIDS. The author examined the effect of four methods, namely FGSM, JSMA, DeepFool, and C&W, to analyse the impact of the top features that are altered and contribute to generating adversarial examples. Through extensive experimentation, the reduced confidence score is evaluated with various performance measures like f1-score, accuracy, precision, AUC value etc. However, the research work can be further extended to include recent defence strategies to secure NIDS from adversarial attacks. Pawlicki et al. [21] studied the impact of adversarial machine learning on ML and DL based NIDS. The authors used DL, Evolutionary Computation and Monte Carlo methods to generate perturbed examples to fool NIDS into producing incorrect predictions. The latest CICIDS-2017 dataset is used for evaluation purposes over five ML and DL algorithms, namely ANN, Random Forest, AdaBoost, SVM and KNN. Guo et al.
[22] studied adversarial machine learning in cybersecurity and compared the generation of adversarial examples in computer vision and NIDS. The authors applied the Basic Iterative Method (BIM) [23], which extends the FGSM [15] method by applying it iteratively with a small step size. The authors built two models: the first was the target model and the second was the substitute model. Two benchmark datasets, KDDCUP-99 and CICIDS-2018, are used for experimentation. The BIM method generates adversarially perturbed examples to attack the target system. However, the research work can be extended to explore more complex adversarial attack strategies as well as their defences to increase the robustness of the model. Qureshi et al. [24] proposed a novel algorithm, Random Neural Network based Adversarial NIDS (RNN-ADV), along with the JSMA algorithm. The JSMA algorithm is more efficient in terms of resource utilization as it changes only a few features to create perturbed examples. The NSL-KDD dataset is used for experimentation, and a comparison is made between the MLP and the proposed RNN-ADV algorithm to show its effectiveness. Alhajjar et al. [25] studied adversarial machine learning in NIDS and explored two evolutionary algorithms, the Genetic Algorithm and Particle Swarm Optimization, together with Generative Adversarial Networks (GAN) and Monte Carlo simulation for adversarial example generation. Two publicly available datasets, NSL-KDD and UNSW-NB15, are used for extensive experimentation over eleven ML algorithms, with the evasion rate as the performance evaluation metric. The authors also discussed the transferability phenomenon of ML and DL models, which implies that an input meant to fool one model can cause a similar behaviour to occur in a different model. Usama et al. [26] explored the use of GAN for both adversarial attacks and defence mechanisms in ML and DL based NIDS. The authors evaluated eight ML/DL NIDS classifiers: DNN, Random Forest, Logistic Regression, Support Vector Machine, K-Nearest Neighbour, Decision Trees, Gradient Boosting, and Naive Bayes, as black-box IDS. As a defence strategy, adversarial training with GAN is used to mitigate the effect of adversarial attacks and to increase the robustness of the NIDS model. Furthermore, to preserve the functional behaviour of the network traffic, the complete feature set is divided into functional and non-functional attributes, and adversarial examples are then created based on the non-functional set. The KDD CUP-99 dataset is used, and the results are evaluated based on accuracy, precision, recall and f1-score. The extensive literature review shows that adversarial machine learning in the constrained domain has not been explored much. Hence, in this research article, we have explored and examined various adversarial attack and defence methods in the network intrusion detection domain. ## III Proposed Approach This section describes the proposed approach starting from dataset selection, pre-processing, and model building, followed by adversarial attacks and defence methods. ### _Dataset and Pre-processing_ A subset of the CICIDS-2017 [27] dataset is used for experimentation. It was developed by the Canadian Institute for Cybersecurity and is organised into five files, from Monday to Friday. The Monday file contains only benign data, and the remaining files include both benign and malicious network traffic. The dataset is publicly available in PCAP, Generated Labelled Flows and CSV file formats for ML and DL applications.
It consists of seventy-nine features, which are divided into statistical measurements such as average packet flow, standard deviation, min-max packet counts, and other packet flow and packet size distributions. The CICIDS-2017 dataset is up to date and contains a variety of the latest insider and outsider attacks with fourteen classes, such as DDoS, DoS, Heartbleed, Bot, Infiltration etc. Hence, this dataset is a suitable choice for efficiently evaluating the NIDS model under both normal and adversarial attack conditions. The dataset is pre-processed to remove any null and infinity values and divided into training, validation and testing sets using scikit-learn utility functions. ### _Deep Learning based NIDS Model_ A Deep Neural Network (DNN) is an Artificial Neural Network (ANN) which consists of one input layer, one output layer and one or more hidden layers. Each layer consists of artificial neurons and activation functions. The number of hidden layers and neurons may vary depending on the complexity of the problem to be solved. In this research work, we used a supervised DNN algorithm to build the NIDS system with optimal hyperparameters. The selection of the optimal DNN structure and the other parameters, such as the learning rate, activation function etc., is based on the random search [28] algorithm, which converges faster than grid search while yielding semi-optimal parameter sets [29]. The NIDS model is trained and validated on the subset of the CICIDS-2017 dataset with 50 epochs, as shown in Fig. 1. Fig. 2 represents the conceptual architecture of the proposed approach. The initial phase consists of dataset selection, pre-processing and splitting into training and testing sets. The next phase is model training and testing. Adversarial attacks are encountered during the testing phase and detected by the administrator. As a defence mechanism, adversarial training is implemented to safeguard the NIDS against adversarial attacks. ### _Adversarial Examples Generation_ In this research study, FGSM, JSMA, PGD and C&W, four powerful adversarial attack generation algorithms, are utilized to generate perturbed examples to fool the DL-based NIDS model. Goodfellow et al. [15] proposed the FGSM method to generate adversarial examples based on the gradient sign method using backpropagation. The algorithm is based on the optimization of the \(L_{p}\) norm (distance). It is an untargeted attack approach used to obtain the max-norm constrained perturbation \(\eta\) expressed in Equation (1). Here, \(\theta\) represents the model parameters, \(x\) is the input vector to the model, \(y\) is the associated label of the input, and \(J(\theta,x,y)\) is the cost function. FGSM rapidly generates perturbation samples with a small noise parameter \(\epsilon\). \[\eta=\epsilon\,\mathrm{sign}\left(\nabla_{x}J(\theta,x,y)\right) \tag{1}\] Papernot et al. [16] proposed the JSMA method, based on the Jacobian matrix, which aims to calculate the forward derivative of the model function \(f(x)\). The JSMA algorithm is more efficient as it iteratively calculates the saliency map, thereby identifying the most significant input features, i.e. those that contribute most to the model predictions and trigger large variations. The problem can be formulated as shown in Equation (2). Here \(x\) is the input feature vector to the NN-based model.
\[J_{f}(x)=\frac{\partial f(x)}{\partial x}=\left[\frac{\partial f_{j}(x)}{\partial x_{i}}\right]_{i\times j} \tag{2}\] The PGD [17] adversarial example generation algorithm is based on the first-order \(L_{\infty}\) norm; it searches iteratively for the perturbation and optimizes the saddle point (min-max) formulation. Madry et al. [17] addressed two main issues of adversarial machine learning. The first is "generating strong adversarial examples with only small noise," and the second is "model training should be done in such a way that no perturbed examples are possible or difficult to find by adversaries". The problem is formulated as in Equation (3), where \(\alpha\) is the step size of each iteration and \(\mathrm{Clip}_{x,\epsilon}\) keeps the perturbed sample within the \(\epsilon\)-ball around \(x\). \[x_{0}^{\mathrm{adv}}=x,\qquad x^{(n+1)}=\mathrm{Clip}_{x,\epsilon}\left\{x^{(n)}+\alpha\,\mathrm{sign}\left(\nabla_{x}J(\theta,x^{(n)},y)\right)\right\} \tag{3}\]
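Before defining the evaluation metrics, the pipeline described so far can be illustrated with a short TensorFlow/Keras sketch. The 50-30-10 hidden-layer architecture, ReLU/sigmoid activations and learning rate of 0.001 follow the experimental setup reported in Section IV, while the CSV path, label column, noise parameter \(\epsilon\) and retraining epochs are illustrative assumptions rather than the exact settings used here.

```python
import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

# Pre-processing: drop null/infinite rows, scale features, split the data.
df = pd.read_csv("cicids2017_subset.csv")                  # path and column names are illustrative
df = df.replace([np.inf, -np.inf], np.nan).dropna()
X = MinMaxScaler().fit_transform(df.drop(columns=["Label"]).values.astype("float32"))
y = (df["Label"] != "BENIGN").astype("float32").values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Supervised DNN-based NIDS: 50-30-10 hidden units, ReLU hidden layers, sigmoid output.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(X.shape[1],)),
    tf.keras.layers.Dense(50, activation="relu"),
    tf.keras.layers.Dense(30, activation="relu"),
    tf.keras.layers.Dense(10, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, epochs=50, validation_split=0.25, verbose=0)

# FGSM, Eq. (1): perturb each test sample along the sign of the loss gradient.
def fgsm(model, x, y, eps=0.05):
    x = tf.convert_to_tensor(x)
    y = tf.convert_to_tensor(y.reshape(-1, 1))
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = tf.keras.losses.binary_crossentropy(y, model(x))
    grad = tape.gradient(loss, x)
    return tf.clip_by_value(x + eps * tf.sign(grad), 0.0, 1.0).numpy()

X_adv = fgsm(model, X_test, y_test)
# Adversarial training defence: retrain on clean plus perturbed examples.
model.fit(np.vstack([X_train, fgsm(model, X_train, y_train)]),
          np.concatenate([y_train, y_train]), epochs=10, verbose=0)
```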
The built defensive model is again evaluated over the perturbated examples. As a result, the model performance is significantly improved. The resulting improved accuracy, precision, recall and f1-Score, under the FGSM, JSMA, PGD and C&W scenarios are illustrated in Table 1. Fig. 4 graphically demonstrates the results for clear visualization under all three conditions, namely, before the attack, after the adversarial attack and after the adversarial defence. C&W is regarded as one of the most potent adversarial sample generation algorithms, but after experimentation, we see quite similar effects in terms of reducing confidence score compared to other algorithms (FGSM, JSMA and PGD). After the adversarial defence implementation, significant improvement is achieved under FGSM, JSMA and PGD. However, in C&W method, the maximum reach is up to 80.35% only in terms of precision. Hence, future work may involve investigating and increasing model robustness in C&W adversarial attacks. The adversarial transferability is another interesting phenomenon of ML and DL models in which the adversarial perturbation examples also similarly impact other models having different architecture and parameter settings. ## V Conclusion This research study concluded that no domain weather constraints or un-constraints are secure from adversarial attacks. The same idea is demonstrated with the FGSM, JSMA, PGD and C&W, four powerful adversarial algorithms to tool the DL-based NIDS model to misclassify the benign samples into anomalies and vice-versa. The NIDS model is first evaluated under a normal situation. The achieved accuracy and f1-score of the model is 98.54%. Later, the same model is evaluated under the adversarial attack situation with FGSM, JSMA, PGD, and C&W methods. As a result, the accuracy, f1-score, and AUC value have significantly reduced. The adversarial defence approach is used to mitigate the effect of adversarial attacks and to improve the robustness and confidence score of the model. After adversarial training, the improved accuracy under FGSM, JSMA, PGD and C&W are 98.7%, 98.47%, 98.68%, and 71.56%, respectively. In future work, the proposed approach could be extended with ML and DL architecture and recent intrusion detection datasets to see the impact of adversarial attack and defence methods.
2309.12982
Eigenstate correlations, the eigenstate thermalization hypothesis, and quantum information dynamics in chaotic many-body quantum systems
We consider the statistical properties of eigenstates of the time-evolution operator in chaotic many-body quantum systems. Our focus is on correlations between eigenstates that are specific to spatially extended systems and that characterise entanglement dynamics and operator spreading. In order to isolate these aspects of dynamics from those arising as a result of local conservation laws, we consider Floquet systems in which there are no conserved densities. The correlations associated with scrambling of quantum information lie outside the standard framework established by the eigenstate thermalisation hypothesis (ETH). In particular, ETH provides a statistical description of matrix elements of local operators between pairs of eigenstates, whereas the aspects of dynamics we are concerned with arise from correlations amongst sets of four or more eigenstates. We establish the simplest correlation function that captures these correlations and discuss features of its behaviour that are expected to be universal at long distances and low energies. We also propose a maximum-entropy Ansatz for the joint distribution of a small number $n$ of eigenstates. In the case $n = 2$ this Ansatz reproduces ETH. For $n = 4$ it captures both the growth with time of entanglement between subsystems, as characterised by the purity of the time-evolution operator, and also operator spreading, as characterised by the behaviour of the out-of-time-order correlator. We test these ideas by comparing results from Monte Carlo sampling of our Ansatz with exact diagonalisation studies of Floquet quantum circuits.
Dominik Hahn, David J. Luitz, J. T. Chalker
2023-09-22T16:28:15Z
http://arxiv.org/abs/2309.12982v2
# The statistical properties of eigenstates in chaotic many-body quantum systems ###### Abstract We consider the statistical properties of eigenstates of the time-evolution operator in chaotic many-body quantum systems. Our focus is on correlations between eigenstates that are specific to spatially extended systems and that characterise entanglement dynamics and operator spreading. In order to isolate these aspects of dynamics from those arising as a result of local conservation laws, we consider Floquet systems in which there are no conserved densities. The correlations associated with scrambling of quantum information lie outside the standard framework established by the eigenstate thermalisation hypothesis (ETH). In particular, ETH provides a statistical description of matrix elements of local operators between pairs of eigenstates, whereas the aspects of dynamics we are concerned with arise from correlations amongst sets of four or more eigenstates. We establish the simplest correlation function that captures these correlations and discuss features of its behaviour that are expected to be universal at long distances and low energies. We also propose a maximum-entropy Ansatz for the joint distribution of a small number \(n\) of eigenstates. In the case \(n=2\) this Ansatz reproduces ETH. For \(n=4\) it captures both the growth with time of entanglement between subsystems, as characterised by the purity of the time-evolution operator, and also operator spreading, as characterised by the behaviour of the out-of-time-order correlator. We test these ideas by comparing results from Monte Carlo sampling of our Ansatz with exact diagonalisation studies of Floquet quantum circuits. ## I Introduction Although textbook approaches to the thermodynamic equilibrium of quantum systems rely on invoking a weak coupling to a heat bath, it was understood over the course of the last four decades that this is not strictly necessary. In a large class of many-body quantum systems, the interactions between its constituents enable an isolated system to act as its own heat bath and to reach a thermal equilibrium state at long times when starting from most nonequilibrium initial states. While the late-time state remains a pure state, it approaches a _typical state_, representative of the Gibbs ensemble with small fluctuations away from this state for large system size [1]. This phenomenon emerges from the pseudo-random nature of physical observables in the energy-eigenbasis, which was suggested as a criterion for quantum chaos [1] and demonstrated numerically early on [2], with diagonal matrix elements clustering around equilibrium expectation values [3]. Integrable systems can evade this behaviour, but are not robust in the sense that very weak perturbations suffice to recover thermalisation [4]. These observations were subsequently formalised and now constitute the eigenstate thermalisation hypothesis (ETH) [1; 2; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13], which can be derived from the assumption that the behaviour of the quantum system within a narrow energy window is essentially captured by random matrix theory. This generically leads to Gaussian distributions of the matrix elements of local operators in the energy eigenbasis [14; 8; 9; 10], with possible exceptions in tails of the distribution for systems with slow particle transport [15; 16; 17; 18]. 
In these Gaussian distributions, the mean values of diagonal matrix elements are fixed to reproduce statistical mechanical averages of observables, while off-diagonal matrix elements have zero mean and an energy structure in their variance that governs the dynamics of autocorrelation functions of the local operators [19; 12]. It was pointed out recently on general grounds that this picture cannot be complete [20; 21]. If matrix elements of local operators in the eigenbasis were independent Gaussian random variables, then their mean and variance would determine not only autocorrelation functions, but also all higher-order correlators. In particular, the implications for the out-of-time-order correlators (OTOC) [22; 23] are in stark contrast to the known behaviour of ergodic many-body quantum systems. This leads to the conclusion that there are necessarily correlations between matrix elements, which contain information characterised by higher order cumulants [24; 25; 20; 21]. More specifically, in quantum systems with local interactions, there are strong bounds on how fast correlations can spread [26], and this limits for instance the rate of growth of the entanglement entropy [27; 28]. This behaviour is reflected in the typically linear growth of the operator entanglement entropy of the time evolution operator of local systems [29; 30], and is captured by light-cone structures of out-of-time-order correlators [31]. Our focus here is on the resulting correlations between matrix elements of observables, and related correlations between eigenstates of the time-evolution operator. A separate potential source of correlations between matrix elements is provided by locally conserved densities. Such correlations were considered early on [32], have been investigated via transport timescales [33] and subsequently observed in larger systems [34; 35]. To isolate features arising because of the dynamics of quantum information from those due to locally conserved densities, we focus in the following on Floquet systems, for which time-dependence of the Hamiltonian eliminates energy conservation, and in which there are no other local conservation laws. Recent work has presented a generalised formulation of ETH using Free Probability theory and numerical tests for higher-order correlations between matrix elements [36; 37]. That perspective considers matrix elements of local operators as fundamental objects, revealing the frequency structure of the higher-order free cumulants, particularly the fourth-order free cumulant, which encodes the leading correlations of matrix elements beyond standard ETH. An alternative perspective, adopted in Ref. [38], is to consider eigenstates of the time-evolution operator, rather than matrix elements of observables, as the relevant set of variables. In particular, the typical time evolution of Renyi entropy in local systems implies non-trivial correlations between eigenstates. Separately, in Ref. [39] a derivation is given of ETH via a study of eigenstates in random Floquet quantum circuits using a field-theoretic approach. In the present paper, we discuss correlations in chaotic many-body quantum systems that are associated with the dynamics of quantum information scrambling and that can be expected to be universal in spatially extended systems with local interactions. 
While ETH is formulated as a statement about the statistical properties of matrix elements of operators, we find that it is more transparent to consider instead eigenstates and correlators constructed from them, without reference to particular operators. The first main contribution of our work is to identify the leading-order eigenstate correlator that contains information about this scrambling dynamics, and to discuss its behaviour at long distances and low energy differences. The second main contribution is to introduce an Ansatz for the joint probability distribution of a small number \(n\) of eigenstates of the time-evolution operator. In the case of a pair of eigenstates, this reproduces the Gaussian distribution for matrix elements of local observables that constitutes ETH. Extending this Ansatz, we show that a constraint on the joint distribution of four eigenstates is sufficient to capture the essential features of the OTOC and of the operator entanglement entropy of the time-evolution operator. As a third main contribution, we test these ideas by comparing results from exact diagonalisation of the time-evolution operator with those from Monte Carlo sampling of our Ansatz for the joint eigenstate distribution. The ideas and results that we set out in this paper are consistent with, but broadly complementary to, the recent discussion of a generalised version of ETH, formulated to describe the average of products of arbitrary numbers of matrix elements of observables using the language of Free Probability [36; 37]. In particular, our emphasis is different from that of [37] in two ways. First, we focus on correlations at large distances and small energy separations that we expect to be a universal consequence of the dynamics of quantum information. Second, we find that it is advantageous to consider correlations between eigenstates in place of matrix elements. The remainder of this paper is organised as follows. In Sec. II we provide a compact overview of our main results. In Sec. III we give details of our calculations, including the microscopic models we use for numerical studies, the determination of parameters in our Ansatz for the joint probability distribution of eigenstates, and a summary of the numerical methods used. We develop a treatment of our Ansatz based on a perturbative expansion in Sec. IV and we present numerical results for additional models in Sec. V. We conclude with a summary and outlook in Sec. VI. ## II Overview In this section we provide an overview of our results. We introduce the class of model studied and the eigenstate correlators of interest in Sec. II.1. We set out the relationship between these correlators and the OTOC in Sec. II.2, and indicate generalisations in Sec. II.3. We review the sense in which the original form of ETH fails to capture these correlators in Sec. II.4 and we propose a representation for them in terms of the joint distribution function for sets of four eigenstates in Sec. II.5. We present results for the correlators from exact diagonalisation and from Monte Carlo sampling of this distribution in Sec. II.6. ### Models and Correlation Functions We start by setting out some essential notation. We consider a one-dimensional Floquet system consisting of \(L\) sites, each with a local Hilbert space dimension \(q\), coupled by local interactions. We use \(W\) to denote the Floquet operator for the system, which is then a unitary \(q^{L}\times q^{L}\) matrix that generates evolution through one time period. 
Defining eigenstates \(|a\rangle\) and quasienergies \(\theta_{a}\) satisfying \(W|a\rangle=\mathrm{e}^{-i\theta_{a}}|a\rangle\), the evolution operator \(W(t)\) for an integer number of time-steps \(t\) has the spectral decomposition \[W(t)=\sum_{a}\mathrm{e}^{-\mathrm{i}\theta_{a}t}|a\rangle\langle a|\,. \tag{1}\] The dynamics can be characterised in terms of correlators of local operators \(X_{\alpha},Y_{\beta}\ldots\). Here we use upper case letters \(X,Y,\ldots\) as labels for subsystems on which local operators act, with subscripts \(\alpha,\beta\ldots\) to distinguish different operators acting on a given subsystem. Later we will use \(\overline{X}\), \(\overline{Y},\ldots\) to denote the complements of these subsystems. The time evolution in the Heisenberg picture is \(X_{\alpha}(t)=W^{\dagger}(t)X_{\alpha}W(t)\), and for a Floquet system it is natural to evaluate correlators using the infinite-temperature density matrix. In addition, it is useful to consider an ensemble of realisations of \(W\) and to average physical quantities over the ensemble. We indicate this average by \([\ldots]_{\rm av}\). An alternative average is over a Haar distribution of vectors, which we indicate by \([\ldots]_{0}\). The simplest correlator is the autocorrelation function of a single operator, which has the spectral decomposition \[q^{-L}{\rm Tr}[X_{\alpha}(t)X_{\alpha}]=q^{-L}\sum_{ab}|\langle a|X_{\alpha}|b \rangle|^{2}{\rm e}^{{\rm i}(\theta_{a}-\theta_{b})t}\,. \tag{2}\] In an ergodic system this is expected to decay on a timescale that is microscopic in the sense that it is of order a few Floquet periods. Evidently, its behaviour reflects statistical properties of pairs \(|a\rangle,|b\rangle\) of eigenstates, which are therefore expected to show features as a function of the quasienergy difference \(\theta_{a}-\theta_{b}\) that vary on a scale only a few times smaller than the spectral width \(2\pi\)[21]. The OTOC has the definition and spectral decomposition \[q^{-L}{\rm Tr}[X_{\alpha}(t)Y_{\beta}X_{\alpha}(t)Y_{\beta}]=q^{ -L} \sum_{abcd}\langle a|X_{\alpha}|b\rangle\langle b|Y_{\beta}|c\rangle\times \tag{3}\] \[\times\langle c|X_{\alpha}|d\rangle\langle d|Y_{\beta}|a\rangle e^{i(\theta_{a}-\theta_{b}+\theta_{c}-\theta_{d})t}\,.\] If the subsystems \(X\) and \(Y\) are separated by a large distance \(s\), the main features of the OTOC appear on a large timescale. More specifically, the support of the operator \(X_{\alpha}(t)\) is expected [26] to grow with a butterfly velocity \(v_{\rm B}\); the OTOC is constant and non-zero if this support is disjoint from that of \(Y_{\beta}\), but falls towards zero when the support of \(X_{\alpha}(t)\) expands to contain that of \(Y_{\beta}\). Clearly, behaviour of the OTOC reflects statistical properties of sets of four eigenstates, \(|a\rangle,|b\rangle,|c\rangle\) and \(|d\rangle\), which must show features on a quasienergy scale \(2\pi v_{\rm B}/s\) that is much smaller for large \(s\) than the spectral width. Besides the OTOC, a second way to characterise quantum information dynamics is via the spread of entanglement. Consider an initial state \(|\psi\rangle\) with low entanglement in the site basis. Although the corresponding density matrix \(\rho(t)=W(t)|\psi\rangle\langle\psi|W^{\dagger}(t)\) remains pure at all \(t\), for a subsystem \(X\) that is much smaller than its complement \(\overline{X}\), the reduced density matrix \(\rho_{X}(t)={\rm Tr}_{\overline{X}}\rho(t)\) is expected to evolve towards an equilibrium one. 
This is probed at the simplest level by considering the purity \({\rm Tr}_{X}|\rho_{X}(t)|^{2}\). Since the definition of the purity involves two powers of \(W(t)\) and two of \(W^{\dagger}(t)\), its behaviour, like that of the OTOC, reflects correlations amongst sets of four eigenstates. Both the OTOC and the purity require choices in their definitions - of the operators denoted by \(X_{\alpha}\) and \(Y_{\beta}\) in Eq. (3) for the former, and of the initial wavefunction \(|\psi\rangle\) for the latter. This arbitrariness can be eliminated by averaging the OTOC over two complete sets of operators \(\{X_{\alpha}\}\) and \(\{Y_{\beta}\}\) with given supports \(X\) and \(Y\), and by considering the operator entanglement entropy of \(W(t)\) [29; 30; 40] in place of the purity of \(\rho_{X}(t)\). Both routes lead to an identical correlator which is defined solely in terms of sets of four eigenstates and the choice of \(X\) and \(Y\). We defer discussion of details and present first an alternative argument that singles out the same correlator. As a starting point, consider the Schmidt decomposition of an eigenstate in terms of tensor products of orthonormal basis states \(|i_{X}\rangle\) and \(|i_{\overline{X}}\rangle\) for subsystem \(X\) and its complement \(\overline{X}\), which we write as \[|a\rangle=\sum_{i_{X}i_{\overline{X}}}[C_{X}(a)]_{i_{X}i_{\overline{X}}}|i_{X}\rangle\otimes|i_{\overline{X}}\rangle\,. \tag{4}\] Here \(C_{X}(a)\) is the matrix version of the eigenstate \(|a\rangle\), separating states on subsystem \(X\) into row indices and states on \(\overline{X}\) into column indices. It hence has dimensions \(q^{L(X)}\times q^{L(\overline{X})}\), where \(L(X)=|X|\), the number of sites in \(X\), and similarly for \(L(\overline{X})\). The problem of constructing correlators from sets of eigenstates is equivalent to one of forming scalars from sets of matrices \(C_{X}(a)\). This can be done by taking the trace of products of an even number of terms, in which the matrices alternate with their Hermitian conjugates. The matrices within such a trace must all refer to a given choice of subsystem \(X\) but may refer to multiple eigenstates \(|a\rangle\), \(|b\rangle\ldots\). At lowest order this recipe simply yields the quantity \[{\rm Tr}[C_{X}(a)C_{X}^{\dagger}(b)]=\delta_{ab}, \tag{5}\] which has a value fixed by orthonormality of the eigenstates. At next order it gives \[M_{X}(abcd)={\rm Tr}[C_{X}(a)C_{X}^{\dagger}(b)C_{X}(c)C_{X}^{\dagger}(d)]\,. \tag{6}\] Such quantities can be represented diagrammatically as shown in Fig. 1. We now invoke two guiding ideas. One follows from the fact that we want to characterise dynamics in space and time, which suggests that we should consider more than one way of subdividing the system into subsystems, with different alternatives labelled \(X\), \(Y\), \(\ldots\). The other follows from the fact that the overall phases of individual eigenstates can be chosen arbitrarily, but physical quantities should be invariant under the transformation \(|a\rangle\to e^{i\phi_{a}}|a\rangle\). To eliminate the phases \(\phi_{a}\), each \(C_{X}(a)\) appearing in a correlator must be accompanied by a Hermitian conjugate \(C^{\dagger}_{Y}(a)\), referring to the same eigenstate but possibly with a different division into subsystems.
Employing both these ideas, we are led to the main quantity of interest in this paper, \(M_{X}(abcd)M^{*}_{Y}(abcd)\) and its appropriately normalised ensemble average \[F_{4}(X,Y,\theta)= q^{-L(X,Y)}\Big{[}\sum_{abcd}M_{X}(abcd)M^{*}_{Y}(abcd)\times \tag{7}\] \[\times\delta(\theta-\theta_{a}+\theta_{b}-\theta_{c}+\theta_{d}) \Big{]}_{\rm av}\,.\] Here and in the following, the argument of the \(\delta\)-function on quasienergy differences is taken modulo \(2\pi\). The length \(L(X,Y)\) appearing in the normalisation is defined by \[L(X,Y) = 2L-|\overline{X}\setminus\overline{Y}|-|X\setminus Y| \tag{8}\] \[= 2L-|\overline{Y}\setminus\overline{X}|-|Y\setminus X|=L(Y,X)\,,\] where notation of the form \(|A\setminus B|\) indicates the number of sites that are in subsystem \(A\) but not in \(B\). The choice of subscript on \(F_{4}(X,Y,\theta)\) indicates that this quantity characterises correlations within sets of four eigenstates. Somewhat surprisingly, at this order in powers of the eigenstates, Eq. (7) is the unique outcome of interest from the approach we have sketched for constructing correlators. To see this, consider potential alternatives. Any such alternative should involve two factors, each consisting of a trace over products of Schmidt matrices \(C_{X}(a)\), since each trace carries a subsystem label and we are interested in correlations between a pair of subsystems. Moreover, at this order the two factors together involve four such matrices and four Hermitian conjugates. If these matrices are equally distributed between the two traces, then alternatives to Eq. (7) must all be generated by replacing \(M^{*}_{Y}(abcd)\) with a similar factor that preserves invariance under changes of eigenstate phases. These can all be obtained from Eq. (7) using the equalities \(M_{Y}(abcd)=M_{Y}(cdab)=M^{*}_{Y}(adcb)=M^{*}_{Y}(badc)\) and so are equal to \(F_{4}(X,Y,\theta)\) or \(F_{4}(X,\overline{Y},\theta)\). Finally, one might regroup matrices under the trace, so that one trace involves six matrices and the other trace involves only two. Then, however, the value of the second trace is fixed via Eq. (5) and is independent of subsystem label, eliminating the spatial dependence of interest. Amongst correlators involving only a single \(X\), the lowest order quantity that is independent of eigenstate phases is \(M_{X}(abba)\) (equal to \(M_{\overline{X}}(aabb)\)). From this we define \[F_{2}(X,\theta) = q^{-(L+L(X))}\times \tag{9}\] \[\times \Big{[}\sum_{ab}M_{X}(abba)\delta(\theta-\theta_{a}+\theta_{b}) \Big{]}_{\rm av}\,.\] The subscript on \(F_{2}(X,\theta)\) indicates that this correlator characterises correlations between pairs of eigenstates. The two correlators \(F_{2}(X,\theta)\) and \(F_{4}(X,Y,\theta)\) are the central quantities of interest in the following, together with their counterparts in the time domain, defined by \[f_{2}(X,t)=\int_{-\pi}^{\pi}\mathrm{d}\theta\,F_{2}(X,\theta)e^{i\theta t} \tag{10}\] and \[f_{4}(X,Y,t)=\int_{-\pi}^{\pi}\mathrm{d}\theta\,F_{4}(X,Y,\theta)e^{i\theta t }\,. \tag{11}\] The initial values \(f_{2}(X,0)=f_{4}(X,Y,0)=1\) follow from completeness of the set of eigenstates and the choices of normalisation in Eqns. (7) and (9) (see Eqns. (25) and (26) for a discussion). The late-time limits are also system independent. For \(f_{4}(X,Y,t)\) this limit comes (assuming no degeneracies) from terms in (7) with pairwise equal labels \(a\)=\(b\),\(c\)=\(d\) or \(a\)=\(d\),\(b\)=\(c\). For \(f_{2}(X,t)\) it comes from terms in (9) with \(a\)=\(b\). 
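As an aside, the quantities defined in Eqs. (4), (6) and (11) are straightforward to evaluate numerically for small systems. The following numpy sketch, which uses a Haar-random unitary as a stand-in for a Floquet operator rather than the circuit models studied below, builds the Schmidt matrices \(C_{X}(a)\), evaluates \(M_{X}(abcd)\), and checks the completeness normalisation \(f_{4}(X,X,0)=1\); the system size and subsystem choice are illustrative.

```python
import numpy as np
from scipy.linalg import schur
from scipy.stats import unitary_group

q, L, LX = 2, 4, 2                             # 4 qubits; subsystem X = first two sites
dim, dX = q**L, q**LX

W = unitary_group.rvs(dim, random_state=0)     # Haar-random stand-in for a Floquet operator
T, Z = schur(W, output='complex')              # W is normal, so the Schur vectors form an orthonormal eigenbasis
theta = -np.angle(np.diag(T))                  # quasienergies: W|a> = exp(-i theta_a)|a>

# Schmidt matrices C_X(a): reshape each eigenvector into (X, X-bar) indices (X = leading sites), Eq. (4).
C = Z.T.reshape(dim, dX, dim // dX)

# M_X(abcd) = Tr[C_X(a) C_X(b)^dag C_X(c) C_X(d)^dag], Eq. (6), for all index combinations.
M = np.einsum('aij,bkj,ckl,dil->abcd', C, C.conj(), C, C.conj(), optimize=True)

# Completeness check: f_4(X,X,0) = q^{-2L} sum_{abcd} |M_X(abcd)|^2 = 1.
print(np.sum(np.abs(M)**2) / dim**2)           # ~1.0

# Time dependence f_4(X,X,t), Eq. (11), for a few Floquet periods.
for t in (1, 2, 3):
    ph = np.exp(1j * t * (theta[:, None, None, None] - theta[None, :, None, None]
                          + theta[None, None, :, None] - theta[None, None, None, :]))
    print(t, np.sum(M * M.conj() * ph).real / dim**2)
```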
All these terms can be simplified by noting that \(C_{X}(a)C^{\dagger}_{X}(a)\equiv\mathrm{Tr}_{\overline{X}}|a\rangle\langle a|\) is the reduced density matrix on subsystem \(X\) formed from the eigenstate \(|a\rangle\). If \(L(X)\ll L\), one expects from ETH that to an excellent approximation \(C_{X}(a)C^{\dagger}_{X}(a)=q^{-L(X)}\openone_{X}\), where \(\openone_{X}\) is the identity on \(X\). This implies that \(\lim_{|t|\to\infty}f_{2}(X,t)=q^{-2L(X)}\) and that \(\lim_{t\to\infty}f_{4}(X,Y,t)=q^{2L-L(X,Y)-L(X)-L(Y)}\) for \(L(X),L(Y)\ll L\). ### Relation to autocorrelation functions of observables and OTOC As we now discuss, these correlators are related respectively to the autocorrelation function [Eq. (2)] and the OTOC [Eq. (3)] via averages over the operators appearing in the latter two quantities. We begin by stating a key relation between \(M_{X}(abcd)\) and an operator average of matrix elements. Given a subsystem \(X\), choose a complete basis of \(q^{2L(X)}\) Hermitian operators \(\{X_{\alpha}\}\) that act on the subsystem and obey the orthonormality condition \[q^{-L(X)}\mathrm{Tr}_{X}[X_{\alpha}X_{\beta}]=\delta_{\alpha\beta}\,. \tag{12}\] Using the resolution of the identity in the vector space of operators, one finds \[q^{-L(X)}\sum_{\alpha}\langle a|X_{\alpha}|b\rangle\langle c|X_{\alpha}|d\rangle=M_{X}(abcd)\,. \tag{13}\] The operator resolution of the identity and the relation given in Eq. (13) are represented diagrammatically in Fig. 2. We first apply this to the simpler case of the autocorrelation function. The autocorrelation function averaged over all choices of operator (and over the ensemble of systems) is \[q^{-2L(X)}\sum_{\alpha}\Big{[}q^{-L}\mathrm{Tr}[X_{\alpha}(t)X_{\alpha}]\Big{]}_{\rm av}=f_{2}(X,t)\,. \tag{14}\] The special case \(X_{\alpha}=\openone_{X}\) is the only contribution to this average that survives at late times, giving a value of \(\lim_{|t|\to\infty}f_{2}(X,t)\) consistent with the discussion above. Similar arguments apply to the OTOC. In this case we choose two complete sets of operators. Operators in one set act on the subsystem labelled \(X\) and are denoted by \(X_{\alpha}\). Operators in the other set act on the subsystem labelled \(Y\) and are denoted by \(Y_{\beta}\). Then \[q^{-(L(X)+L(Y))}\sum_{\alpha\beta}\langle a|X_{\alpha}|b\rangle\langle b|Y_{\beta}|c\rangle\langle c|X_{\alpha}|d\rangle\langle d|Y_{\beta}|a\rangle \tag{15}\] \[=M_{X}(abcd)M_{Y}(bcda)\,.\] This can be rewritten using \(M_{Y}(bcda)=M_{\overline{Y}}^{*}(abcd)\), and so the average of the OTOC over both sets of operators is \[q^{-2(L(X)+L(Y))}\sum_{\alpha\beta}\Big{[}q^{-L}\mathrm{Tr}[X_{\alpha}(t)Y_{\beta}X_{\alpha}(t)Y_{\beta}]\Big{]}_{\mathrm{av}} \tag{16}\] \[=q^{-S(X,Y)}f_{4}(X,\overline{Y},t)\] with \(S(X,Y)=L(X)+L(Y)+L-L(X,\overline{Y})=|X\setminus\overline{Y}|+|Y\setminus\overline{X}|\). Contributions to this average from the special cases \(X_{\alpha}=\openone_{X}\) and/or \(Y_{\beta}=\openone_{Y}\) survive at long times and are responsible for the limiting value given above. The correlator \(f_{4}(X,Y,t)\) also arises from a discussion of the operator entanglement entropy of the evolution operator. This quantity stems from considering the operator \(W(t)\) as a state on a doubled Hilbert space, with components given by the matrix elements \([W(t)]_{i_{X}i_{\overline{X}},j_{Y}j_{\overline{Y}}}\).
The corresponding reduced operator density matrix, obtained by tracing out the degrees of freedom in the subsystems \(\overline{X}\) and \(\overline{Y}\), is \[[\rho(X,Y,W(t))]_{i_{X}j_{Y},l_{X}m_{Y}}=\sum_{ab}[C_{X}(a)C_{X}^{\dagger}(b)]_{i_{X}l_{X}}\times \tag{17}\] \[\times [C_{Y}(b)C_{Y}^{\dagger}(a)]_{m_{Y}j_{Y}}e^{i(\theta_{b}-\theta_{a})t}\,.\] The ensemble-averaged operator purity arising from this reduced density matrix is simply \[\big{[}\mathrm{Tr}[\rho(X,Y,W(t))^{2}]\big{]}_{\mathrm{av}}=q^{L(X,Y)}f_{4}(X,Y,t)\,. \tag{18}\] The proportionality between \(f_{4}(X,Y,t)\) and the operator purity of the evolution operator implies a straightforward link to the idea of an entanglement membrane, which has been proposed as a coarse-grained description of entanglement dynamics in chaotic many-body quantum systems [41; 42]. For the one-dimensional models we are considering, the entanglement membrane is a curve in space-time, and to discuss the link to operator purity we build on the exposition of Ref. [42]. In outline, coarse-grained features of entanglement dynamics are determined by the line tension \(\mathcal{E}(v)\) of this membrane, which is a function of a velocity \(v\). In our notation, \(v=s/t\), where \(s\) is the distance between the ends of subsystems \(X\) and \(Y\), defined in Fig. 3(d). For a fixed choice of \(X\) and \(Y\) with \(s\) large, the operator purity of the evolution operator is proportional to the line tension, and so traces out the function \(\mathcal{E}(v)\) as \(t\) varies. Hence the correlators \(f_{4}(X,Y,t)\) and \(F_{4}(X,Y,\theta)\) can be seen as representations of the line tension \(\mathcal{E}(v)\). ### Multi-time and multi-quasienergy correlators An obvious generalisation [21] of the OTOC [Eq. (3)] is to introduce three time arguments, by considering the quantity \(q^{-L}\mathrm{Tr}[X_{\alpha}(t+t_{2})Y_{\beta}(t_{1})X_{\alpha}(t)Y_{\beta}]\). Correspondingly, in the quasienergy domain we have a generalisation of the correlator \(F_{4}(X,Y,\theta)\), defined by \[F_{4}(X,Y;\theta,\theta_{1},\theta_{2}) = q^{-L(X,Y)}\Big{[}\sum_{abcd}M_{X}(abcd)M_{Y}^{*}(abcd)\times \tag{19}\] \[\times \delta(\theta-\theta_{a}+\theta_{b}-\theta_{c}+\theta_{d})\times\] \[\times \delta(\theta_{2}-\theta_{a}+\theta_{b})\times\delta(\theta_{1}-\theta_{b}+\theta_{c})\Big{]}_{\mathrm{av}}\] with the Fourier transform \[f_{4}(X,Y;t,t_{1},t_{2})=\int_{-\pi}^{\pi}\mathrm{d}\theta\int_{-\pi}^{\pi}\mathrm{d}\theta_{1}\int_{-\pi}^{\pi}\mathrm{d}\theta_{2}\,F_{4}(X,Y;\theta,\theta_{1},\theta_{2})\,e^{i(\theta t+\theta_{1}t_{1}+\theta_{2}t_{2})}\,. \tag{20}\] 
This motivates the approximation \[f_{4}(X,Y;t,t_{1},t_{2})\approxeq f_{4}(X,Y,t)f_{2}(X,t_{2})f_{2}(Y,t_{1})\,, \tag{23}\] which in the quasienergy domain is \[F_{4}(X,Y;\theta,\theta_{1},\theta_{2})\approxeq F_{4}(X,Y,\theta)F_{2}(X, \theta_{2})F_{2}(Y,\theta_{1})\,. \tag{24}\] For the models we study in this paper, \(F_{2}(X,\theta_{2})\) is only weakly dependent on \(\theta\) and so the multi-quasienergy correlator carries only limited extra information compared to the single-quasienergy version. For this reason we leave study of \(F_{4}(X,Y;\theta,\theta_{1},\theta_{2})\) for future work. ### Existence of correlations beyond ETH Our objective in the remainder of this work is to find a form for the joint distribution function (JDF) of a small number of eigenstates that reproduces these correlations. We do this using a maximum entropy Ansatz with a final form that we build up by considering first individual vectors, then pairs of vectors, and finally sets of four vectors. To place our approach in context, it is useful to recall (following Refs. [20] and [21]) the limitations of ETH in its standard formulation when applied to the OTOC. As a starting point, consider the spectral decomposition of the OTOC in terms of operator matrix elements, as displayed in Eq. (3). ETH asserts that matrix elements of the form \(\langle a_{1}|X_{\alpha}|a_{2}\rangle\) and \(\langle a_{3}|Y_{\beta}|a_{4}\rangle\) appearing in this expression are Gaussian random variables, and are independent apart from the constraint implied by Hermiticity of the operators \(X_{\alpha}\) and \(Y_{\beta}\). The mean values of off-diagonal matrix elements are automatically zero, and those of diagonal matrix elements are zero for traceless operators in the Floquet setting of interest. Finally, the variance of these matrix elements is set by the Hilbert space size and is \(\mathcal{O}(q^{-L})\). Applying these ideas to Eq. (3), the OTOC is given by \(q^{-L}\) times a sum of \(q^{4L}\) random \(\mathcal{O}(q^{-2L})\) terms. Of these, only the \(q^{L}\) terms with \(a\)=\(b\)=\(c\)=\(d\) are expected from ETH to have a non-zero average. This would imply an average value for the OTOC of \(\mathcal{O}(q^{-2L})\). Treating the remaining terms as independent random variables, one expects \(\mathcal{O}(q^{-L})\) fluctuations around this average. In contrast, the true value is \(\mathcal{O}(1)\) at short times. To resolve this discrepancy it is necessary that a product of four matrix elements of the form \(\langle a|X_{\alpha}|b\rangle\langle b|Y_{\beta}|c\rangle\langle c|X_{\alpha} |d\rangle\langle d|Y_{\beta}|a\rangle\) has a non-zero average that is \(\mathcal{O}(q^{-3L})\) in addition to the \(\mathcal{O}(q^{-2L})\) fluctuations captured by the standard version of ETH [21]. These additional correlations are the central concern in this paper and in the generalisation of ETH discussed in Refs. [36; 37]. A simple demonstration that such correlations must be present, regardless of details of the dynamics, is provided by a sum rule related to the value of the OTOC at \(t=0\). From the left-hand side of Eq. (3), assuming for simplicity that the subsystems \(X\) and \(Y\) do not overlap and using the operator normalisation of Eq. (12), we have \[q^{-L}\text{Tr}[X_{\alpha}(t)Y_{\beta}X_{\alpha}(t)Y_{\beta}]\Big{|}_{t=0}=1\,. \tag{25}\] Using this in Eq. (16) with Eq. (11) we have \[\int_{-\pi}^{\pi}\text{d}\theta\,F_{4}(X,\overline{Y},\theta)=1\,. 
\tag{26}\] This sum rule for \(F_{4}(X,\overline{Y},\theta)\) is automatically satisfied if eigenstates are Haar-distributed vectors, and in that case \(F_{4}(X,\overline{Y},\theta)\) is independent of \(\theta\). The eigenstate correlations that we are concerned with generate a dependence of \(F_{4}(X,\overline{Y},\theta)\) on \(\theta\) but do not alter the fact that, with the normalisation of Eq. (7), it has an order of magnitude that is independent of the Hilbert space dimension \(q^{L}\). ### Describing correlations beyond ETH Some constraints on the eigenstate JDF are implied by ETH, which we now consider. ETH specifies statistical properties of both diagonal and off-diagonal matrix elements of local observables between eigenstates, and we discuss the two classes of matrix elements separately. For a system with a time-independent Hamiltonian, a key part of ETH is that diagonal matrix elements of observables vary smoothly with energy, taking average values compatible with a thermal ensemble at the same energy density, and with fluctuations of a characteristic size that vanishes rapidly as the thermodynamic limit is approached. By contrast, for Floquet systems, statistical properties of diagonal matrix elements of observables are independent of quasienergy. We capture this property of diagonal matrix elements in a Floquet system by taking individual eigenstates to be isotropically distributed vectors in the Hilbert space for the model. We outline in Sec. VI the alternative choice required to model the energy dependence of diagonal matrix elements in Hamiltonian systems within our approach. We denote the isotropic (Haar) distribution for one, two or four orthonormal vectors by \(P_{1}^{(0)}(a)\), \(P_{2}^{(0)}(a,b)\) and \(P_{4}^{(0)}(a,b,c,d)\) respectively, and set out to modify these distributions in a way that introduces the correlations of interest. Statistical properties of off-diagonal matrix elements determine the approach to equilibrium and the autocorrelation functions of observables. ETH applied to Floquet systems asserts that these matrix elements are independent Gaussian random variables with a variance that depends only on quasienergy separation. A central idea in our work is that strict independence is incompatible with the correlations implied by the dynamics of quantum information. Instead there are correlations (albeit weak) and these are better handled by considering distributions for eigenstates rather than matrix elements. We take the joint distribution of a pair of eigenstates to have the Maximum Entropy form \[P_{2}(a_{1},a_{2})=Z_{2}^{-1}P_{2}^{(0)}(a_{1},a_{2})e^{-S_{2}(a_{1},a_{2})} \tag{27}\] with \(Z_{2}\) a normalisation constant and \[S_{2}(a,b)=\sum_{X}G_{2}(X,\theta_{a}-\theta_{b})M_{X}(abba)\,, \tag{28}\] where the coefficients \(G_{2}(X,\theta)\) act as Lagrange multipliers and should be chosen to reproduce the behaviour of \(F_{2}(X,\theta)\) as determined for a particular system. Extending this pattern, we take the joint distribution of four eigenstates to have the form \[P_{4}(a_{1},a_{2},a_{3},a_{4}) =Z_{4}^{-1}P_{4}^{(0)}(a_{1},a_{2},a_{3},a_{4})\times \tag{29}\] \[\times e^{-\sum_{j<k}S_{2}(a_{j},a_{k})-S_{4}(a_{1},a_{2},a_{3},a_{4})}\,,\] with \(Z_{4}\) a normalisation constant and \[S_{4}(a,b,c,d) = \sum_{XY}G_{4}(X,Y,\theta_{a}-\theta_{b}+\theta_{c}-\theta_{d})\times \tag{30}\] \[\times M_{X}(abcd)M_{Y}^{*}(abcd)\,.\] Here the Lagrange multipliers \(G_{4}(X,Y,\theta)\) should be chosen to reproduce the behaviour of \(F_{4}(X,Y,\theta)\). 
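To indicate how the Ansatz of Eqs. (27)-(30) can be used in practice, the sketch below assembles the unnormalised weight \(e^{-\sum_{j<k}S_{2}(a_{j},a_{k})-S_{4}(a_{1},a_{2},a_{3},a_{4})}\) for four orthonormal vectors, restricted to single-cut subsystems. The Lagrange-multiplier tables \(G_{2}\) and \(G_{4}\) used here are placeholders chosen only for illustration, not values determined for any model; the fitting of these quantities is the subject of Sec. III.

```python
import numpy as np

q, L = 2, 6
dim = q**L
rng = np.random.default_rng(2)

def M_sub(states, X, i, j, k, l):
    """M_X(ijkl) for subsystem X = {0,...,X-1}, i.e. a single left cut of size X."""
    def C(n):                                   # Schmidt matrix of eigenstate n
        return states[:, n].reshape(q**X, q**(L - X))
    return np.trace(C(j) @ C(i).conj().T @ C(l) @ C(k).conj().T)

# Toy stand-ins: four orthonormal vectors, their quasienergies, and placeholder
# Lagrange-multiplier tables G2[X](theta) and G4[(X, Y)](theta).
V, _ = np.linalg.qr(rng.normal(size=(dim, 4)) + 1j * rng.normal(size=(dim, 4)))
thetas = rng.uniform(-np.pi, np.pi, size=4)
cuts = range(1, L)                              # subsystem sizes 1 ... L-1
G2 = {X: (lambda th: 0.0) for X in cuts}        # placeholder: G2 set to zero, as in the text
G4 = {(X, Y): (lambda th: 0.1 * q**(2 * L) * np.cos(th))   # purely illustrative values
      for X in cuts for Y in cuts}

def S2(a, b):
    """Eq. (28), restricted to single-cut subsystems."""
    th = thetas[a] - thetas[b]
    return sum(G2[X](th) * M_sub(V, X, a, b, b, a).real for X in cuts)

def S4(a, b, c, d):
    """Eq. (30), restricted to single-cut subsystems."""
    th = thetas[a] - thetas[b] + thetas[c] - thetas[d]
    M = {X: M_sub(V, X, a, b, c, d) for X in cuts}
    return sum(G4[(X, Y)](th) * (M[X] * np.conj(M[Y])).real for X in cuts for Y in cuts)

# Unnormalised weight of the four-eigenstate Ansatz, Eq. (29):
a, b, c, d = 0, 1, 2, 3
pairs = [(a, b), (a, c), (a, d), (b, c), (b, d), (c, d)]
weight = np.exp(-sum(S2(i, j) for i, j in pairs) - S4(a, b, c, d))
print(weight)
```

A weight of this form is what the Monte Carlo sampling described in Sec. III.3.2 evaluates at each proposed update.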
Two further ingredients are required. One is to establish a practical method for deducing the values of the Lagrange multipliers from information on the correlators. The other is to test the approach by sampling \(P_{2}(a,b)\) or \(P_{4}(a,b,c,d)\) and comparing the results with correlators calculated from exact diagonalisation (ED) of \(W\). In this work we focus on the correlator \(F_{4}(X,Y,\theta)\) since it contains the long-distance, low-energy information related to the dynamics of quantum information. Moreover, in the models we study, the lower-order correlator \(f_{2}(X,t)\) decays rapidly in time. This implies that \(F_{2}(X,\theta)\) is approximately independent of quasienergy and so we simply set \(G_{2}(X,\theta)\) to zero in our initial treatment. We return to consideration of a \(\theta\)-dependent \(F_{2}(X,\theta)\) and non-zero \(G_{2}(X,\theta)\) immediately after our discussion of \(F_{4}(X,Y,\theta)\). The determination of the Lagrange multipliers from ED data for the eigenstate correlator can be seen simply as a fitting problem, but since this involves a high-dimensional parameter space, alternative approaches are desirable. Fortunately, as we describe in Sec. III.2, we have been able to find direct and straightforward methods to derive \(G_{4}(X,Y,\theta)\) from \(F_{4}(X,Y,\theta)\) and \(G_{2}(X,\theta)\) from \(F_{2}(X,\theta)\). ### Results We implement and test these ideas using an open chain with the Floquet operator defined by a brickwork circuit (see Sec. III.1). Our fastest method (see Sec. III.2.1) for determining the Lagrange multipliers \(G_{4}(X,Y,\theta)\) is effective if the probability distribution of \(M_{X}(abcd)\) is well-approximated by a Gaussian, which is the case for the model studied provided the subsystem sizes \(L(X)\) and \(L(\overline{X})\) are not too small. In order to satisfy this requirement, and in order to limit the total number of Lagrange multipliers under consideration, we include \(G_{4}(X,Y,\theta)\) for all \(L-5\) subsystems \(X\) with \(2<L(X)<L-2\) that can be obtained from the full system by means of a single spatial cut, and similarly for \(Y\). Taking account of symmetry under interchange of \(X\) and \(Y\), this gives \((L-5)(L-4)/2\) (i.e. 28 for the example studied of \(L=12\)) independent Lagrange multipliers. Using as input the values of \(F_{4}(X,Y,\theta)\) for these subsystem choices from ED, we arrive at a final form for the JDF. We use Monte Carlo (MC) sampling of this distribution to compute the correlator [denoted by \(F_{4}^{\rm MC}(X,Y,\theta)\)] with two objectives. First, for the simplest choices of \(X\) involving single spatial cuts (and similarly for \(Y\)), comparison with ED is a test of our Ansatz for the JDF and of our procedure to determine the Lagrange multipliers. Second, it is interesting to see whether this input alone is sufficient to capture correlations more generally. To probe this we compare ED and MC results for the correlator, making choices of \(X\) (and also \(Y\)) that are defined by more than one spatial cut. This is a test of the extent to which the proposed JDF captures long-distance, low energy correlations in general. In particular, taking \(X\) and \(Y\) each to consist of a small number of sites acting as the support for a local observable, we test the implications of the JDF for the (operator-averaged) OTOC. Some of our principal results are shown in Fig. 3 and discussed in the figure caption. The main conclusions are as follows. 
(i) As expected from its relation to the OTOC, the correlator \(f_{4}(X,Y,t)\) is time-independent at short times and falls off at a timescale that is long if the spatial separation \(s\) between subsystems \(X\) and \(\overline{Y}\) is large. All aspects of this behaviour are apparent in Fig. 3(a). (ii) In turn, this implies structure in \(F_{4}(X,Y,\theta)\) at small quasienergies \(\theta\), as is visible in Fig. 3(b); the width in quasienergy of this structure decreases with increasing \(s\). (iii) Monte Carlo sampling of the JDF of Eq. (29), with Lagrange multipliers determined as described in Sec. III.2, generates results for \(F_{4}(X,Y,\theta)\) that are in excellent agreement with those from ED, as demonstrated in Fig. 3(c). Further important results are shown in Fig. 4. Here we examine how well the JDF constructed using correlators for single-cut subsystems can capture correlators for two-cut subsystems. It is apparent from Fig. 4(a) that MC sampling of the JDF generates a moderately good representation of \(F_{4}(X,Y,\theta)\) for the two-site choices of \(X\) and \(Y\) shown in Fig. 4(b). Equivalently, the JDF determined using information from the geometries of Fig. 3(d) reproduces the main features of the OTOC, as a function of time and spatial separation, for two operators, each supported on a pair of sites in the geometry of Fig. 4(b). As a final indication of the effectiveness of our approach, we return to the behaviour of \(F_{2}(X,\theta)\), which characterises the correlations that are incorporated by ETH. For simplicity we consider only the joint distribution of a pair of eigenstates, in this way treating \(F_{2}(X,\theta)\) separately from \(F_{4}(X,Y,\theta)\). By determining the Lagrange multiplier \(G_{2}(X,\theta)\) from ED data as described in Sec. III.2 we generate and sample from this joint distribution, with results that are shown in Fig. 5. As is evident, our MC data are in excellent agreement with ED results for all values of quasienergy difference \(\theta\) and all subsystem choices \(X\). The fact that deviations are small from the value \(F_{2}(X,\theta)=(2\pi)^{-1}\approxeq 0.159\) for a pair of Haar-distributed orthogonal unit vectors is justification for our omission of \(F_{2}(X,\theta)\) in our discussion of \(F_{4}(X,Y,\theta)\). A more complete treatment would require the simultaneous inclusion of both \(G_{2}(X,\theta)\) and \(G_{4}(X,Y,\theta)\), following Eq. (29). Some consequences have been discussed previously in Ref. [21] and we do not consider them further here. The overall aim of MC sampling from our Maximum Entropy Ansatz for the JDF of a small number of eigenstates is to test whether, with a suitable choice of Lagrange multipliers, the JDF reproduces correlations measured from ED. This test of our approach is a crucial one, and we believe Fig. 3(c) offers excellent evidence that the JDF can indeed reproduce the required correlations for the model and parameter range investigated there. Further discussion of the determination of Lagrange multipliers, including a treatment of other models, is presented in Sec. V and the Appendix. Figure 3: Overview of main results, calculated for the brickwork circuit model defined in Sec. III.1 with \(L=12\), \(q=2\) and open boundary conditions: behaviour from ED in (a) and (b); comparison of ED and MC in (c); and geometries of subsystems \(X\) and \(Y\) in (d). (a) The correlator \(f_{4}(X,Y,t)\) [Eq. (11)] vs \(t\), obtained using ED. Recall [Eq. 
(16)] that this correlator is proportional to the OTOC \(q^{-L}\text{Tr}[X_{\alpha}(t)\overline{Y}_{\beta}X_{\alpha}(t)\overline{Y}_{ \beta}]\) averaged over operators \(X_{\alpha}\) and \(\overline{Y}_{\beta}\) with support on subsystems \(X\) and \(\overline{Y}\) respectively. Decay of the correlator reflects operator spreading, and the onset time for decay increases with \(s\). (b) The correlator \(F_{4}(X,Y,\theta)\) [Fourier transform of \(f_{4}(X,Y,t)\): see Eq. (7)] vs quasienergy difference \(\theta\), obtained using ED [contributions to Eq. (7) in which any of the state labels \(a\), \(b\), \(c\) and \(d\) are equal have been omitted: they are atypical and carry vanishing weight in the thermodynamic limit]. It has a peak centred on \(\theta=0\) which grows narrower and higher with increasing \(s\), reflecting the short-time plateau in (a); black dashed line: behaviour when the Floquet operator \(W\) is modelled using a \(q^{L}\times q^{L}\) Haar unitary, showing for this structureless case that \(F_{4}(X,Y,\theta)\) is non-zero but \(\theta\)-independent. (c) Comparison of ED results with MC results from the Ansatz for the JDF of Eq. (29) fitted to behaviour in the geometries of (d), showing excellent agreement between \(F_{4}^{\text{MC}}(X,Y,\theta)\) (open circles from MC) and \(F_{4}(X,Y,\theta)\) (lines from ED) vs \(\theta\) for various \(s\). (d) Illustration of two ways of dividing the 12-site system with open boundary conditions into subsystems by means of a single spatial cut. In one case the subsystems are labelled \(X\) and \(\overline{X}\); in the other the labels are \(Y\) and \(\overline{Y}\). The distance between the spatial cuts in the two cases is denoted by \(s\). ## III Models, Lagrange multipliers and numerical methods In Sec. III.1 we describe the microscopic models we use to generate the numerical results shown in this paper. In Sec. III.2 we set out efficient methods to determine the Lagrange multipliers \(G_{2}(X,\theta)\) and \(G_{4}(X,Y,\theta)\) that appear in the JDF for eigenstates. In Sec. III.3 we give details of the numerical methods used in this paper. ### Models We use the brick-wall circuit depicted in Fig. 6 as a simple model of a periodically driven many-body quantum system with local interactions in one dimension. Such Floquet models have been studied extensively in past work: see for example [43; 21; 44]. This circuit is defined in terms of two-site gates \(w_{i}\in\mathbb{C}^{q^{2}\times q^{2}}\) and the driving period is decomposed into two parts: in the first half of the period couplings are active only on even bonds of the system, while in the second half the couplings are active only on odd bonds. Thus, the time evolution operator for the first half period is \(W_{1}=w_{0}\otimes w_{2}\otimes w_{4}\dots\) and for the second half period is \(W_{2}=\mathbbm{1}\otimes w_{1}\otimes w_{3}\otimes\dots\). The evolution operator over one full period is \(W=W_{2}W_{1}\), and for \(t\) periods we write \(W(t)\equiv W^{t}\). In order to define an ensemble of systems, a natural choice would be to draw each unitary matrix \(w_{i}\) independently from the Haar distribution. We find, however, that in this case determination of the Lagrange multipliers \(G_{4}(X,Y,\theta)\) is complicated by effects that we attribute to realisations containing weak links \(i\) on which the gate \(w_{i}\) is close to the identity (especially in small systems or with small subsystems). 
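A minimal sketch of this construction (not the code used for the results reported here) is given below; it assembles \(W=W_{2}W_{1}\) for a small open chain, and includes as an optional rejection step the operator-purity cutoff on gates that is introduced in the next paragraph. The reshaping convention used to evaluate a gate's operator purity is our own choice, fixed so that the two-site identity has purity \(q^{4}\), consistent with the normalisation of Eq. (18).

```python
import numpy as np

def haar_unitary(n, rng):
    """Sample an n x n unitary from the Haar measure (QR of a complex Ginibre matrix)."""
    z = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    Q, R = np.linalg.qr(z)
    return Q * (np.diag(R) / np.abs(np.diag(R)))   # fix column phases so the measure is Haar

def gate_operator_purity(w, q):
    """Operator purity of a two-site gate, normalised so that the identity gate gives q^4."""
    m = w.reshape(q, q, q, q).transpose(0, 2, 1, 3).reshape(q * q, q * q)  # group (in, out) per site
    s = np.linalg.svd(m, compute_uv=False)
    return np.sum(s**4)

def layer(gates, idle_left, idle_right, q):
    """Tensor two-site gates together, padding idle edge sites with identities."""
    U = np.eye(q**idle_left)
    for w in gates:
        U = np.kron(U, w)
    return np.kron(U, np.eye(q**idle_right))

def brickwork_floquet(L, q, rng, purity_cutoff=None):
    """One-period evolution operator W = W2 W1 of the brickwork circuit (L even, open ends)."""
    def draw_gate():
        while True:
            w = haar_unitary(q * q, rng)
            if purity_cutoff is None or gate_operator_purity(w, q) < purity_cutoff:
                return w
    even_gates = [draw_gate() for _ in range(L // 2)]      # bonds (0,1), (2,3), ...
    odd_gates = [draw_gate() for _ in range(L // 2 - 1)]   # bonds (1,2), (3,4), ...
    W1 = layer(even_gates, 0, 0, q)
    W2 = layer(odd_gates, 1, 1, q)
    return W2 @ W1

rng = np.random.default_rng(0)
W = brickwork_floquet(L=6, q=2, rng=rng, purity_cutoff=0.3 * 2**4)
theta = np.angle(np.linalg.eigvals(W))                     # quasienergies of W
print(W.shape, np.allclose(W @ W.conj().T, np.eye(2**6)))
```

In a sketch of this kind, a draw of \(w_{i}\) with operator purity close to \(q^{4}\) is precisely the kind of weak link referred to above.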
To avoid such weak links, we draw the \(w_{i}\) from a truncated version of the Haar distribution in which all gates with an operator purity above a cutoff are discarded. With operator purity defined as in Eq. (18) (so that the two-site identity operator has a purity of \(q^{4}\)) we take the cutoff to be \(0.3\times q^{4}\) for local Hilbert space dimension \(q=2\). The consequences of changing or omitting this cutoff are discussed in Sec. V.1 and Appendix A.2, and results for \(q=3\) are given in Sec. V.2. Figure 4: Test of JDF fitted to behaviour in the geometries of Fig. 3(d) but applied to geometries of Fig. 4(b). (a) Comparison of data from MC (open circles) and ED (solid lines). (b) Partition used for (a), in which the 12-site system is divided by two spatial cuts into a two-site subsystem \(X\) and its complement \(\overline{X}\), or a two-site subsystem \(\overline{Y}\) and its complement \(Y\). Calculations are for the brickwork circuit model defined in Sec. III.1 with \(L=12\), \(q=2\) and open boundary conditions. Figure 5: Numerical results for \(F_{2}(X,\theta)\) in a system with \(L=12\) and \(q=2\) as a function of the left cut position \(k\): ED results (solid lines) vs. Monte-Carlo (circles). [Delta-function contributions at \(\theta=0\) have been omitted; they are responsible for ensuring that the sum rule \(\int_{-\pi}^{\pi}\mathrm{d}\theta\,F_{2}(X,\theta)=1\) is satisfied for all \(X\).] The almost perfect agreement between both sets of data indicates that the correlations between pairs of eigenstates, as captured by ETH, may also be represented accurately within our approach. Figure 6: Unitary time evolution operator \(W\) of the Floquet circuit written as a tensor network. ### Determining Lagrange multipliers in the JDF We now discuss the problem of determining the Lagrange multipliers that appear in Eq. (28) and Eq. (30), and that define the JDFs, Eq. (27) and Eq. (29), of a small number of eigenstates. We present a method specific to \(G_{4}(X,Y,\theta)\) in Sec. III.2.1 and one specific to \(G_{2}(X,\theta)\) in Sec. III.2.2. These are both single-shot methods that make explicit use of information about the probability distributions of \(M_{X}(abcd)\) and \(M_{X}(abba)\) respectively, and for this reason they are particularly efficient. In Sec. III.2.3 we outline a third, more generally applicable iterative approach that is agnostic to the probability distributions involved. #### iii.2.1 Determination of \(G_{4}(X,Y,\theta)\) As noted above, pairwise correlations between eigenstates are very weak in the Floquet model we consider, in the sense that the correlator \(F_{2}(X,\theta)\) is only weakly dependent on \(\theta\), taking values close to that for a Haar-distributed pair of orthogonal vectors. As a simplifying approximation, we therefore set to zero the Lagrange multipliers \(G_{2}(X,\theta)\) [see Eq. (28)] that control these pairwise correlations. We make further choices concerning the set of subsystems \(X\) and \(Y\) for which Lagrange multipliers \(G_{4}(X,Y,\theta)\) are included in the JDF. Without restrictions there are \(2^{L}-1\) distinct subsystems: to reduce this number we include only Lagrange multipliers for connected subsystems - those that can be obtained from the full system (which has open boundary conditions) by means of a single cut - and we omit them for all subsystems obtained using multiple cuts. 
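As simple bookkeeping for these choices, the short sketch below enumerates the single-cut subsystems of the open chain as sets of sites and evaluates the length \(L(X,Y)\) of Eq. (8). The observation that \(L(X,Y)=2L-s\) for two nested single-cut subsystems separated by a distance \(s\) [cf. Fig. 3(d)] is our own remark, but follows directly from Eq. (8).

```python
L = 12
sites = set(range(L))

def length_XY(X, Y):
    """L(X,Y) of Eq. (8): 2L minus the number of sites on which X and Y differ."""
    X, Y = set(X), set(Y)
    return 2 * L - len((sites - X) - (sites - Y)) - len(X - Y)

# The L-1 single-cut subsystems of the open chain are the left segments X_k = {0,...,k-1};
# the fit described above keeps those with 2 < L(X) < L-2, i.e. L-5 of them.
single_cut = [set(range(k)) for k in range(1, L)]
kept = [X for X in single_cut if 2 < len(X) < L - 2]
print(len(single_cut), len(kept))            # 11 and 7 for L = 12

# For two nested left segments, L(X,Y) reduces to 2L - s, with s the distance between the cuts.
X, Y = set(range(4)), set(range(9))
print(length_XY(X, Y), 2 * L - abs(4 - 9))   # both give 19
```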
Our objective is to find the values of the Lagrange multipliers \(G_{4}(X,Y,\theta)\) for which the JDF reproduces the eigenstate correlator \(F_{4}(X,Y,\theta)\) (known from ED) as accurately as possible. We are able to simplify this task and avoid attacking directly a high-dimensional fitting problem if the quantities \(M_{X}(abcd)\) [Eq. (6)] are Gaussian distributed. Our motivation for treating a model with a truncated Haar distribution of gate unitaries is that in this system \(M_{X}(abcd)\) is well-approximated by a Gaussian. Evidence for this is presented in Fig. 7. Here we consider only the magnitude \(|M_{X}(abcd)|\) since the phase of \(M_{X}(abcd)\) is dependent on the phase convention used for the eigenstates, and we compare the distribution of \(|M_{X}(abcd)|\) with one in which the real and imaginary parts of \(M_{X}(abcd)\) are assumed to be uncorrelated Gaussian variables with equal variances and zero means. An analysis of the dependence of these distributions on the system size \(L\) and subsystem size \(L(X)\) (see Appendix A) suggests that deviations from a Gaussian vanish in the limit of large \(L\) and \(L(X)\). Since deviations are significant for small subsystem size, we omit Lagrange multipliers \(G_{4}(X,Y,\theta)\) for subsystems \(X\) with \(L(X)\) or \(L(\overline{X})\leq 2\). This leaves \((L-5)^{2}\) Lagrange multipliers, or \((L-5)(L-4)/2\) independent quantities after taking account of symmetry relations. Similarly, a Gaussian distribution for \(M_{X}(abcd)\) also arises from the Haar distribution for four orthogonal vectors \([P_{4}^{(0)}(a,b,c,d)\) in the notation of (29)] in the large \(q^{L}\), \(q^{L(X)}\) limit. In this case, and in contrast to the Floquet model, the covariance is independent of the quasienergy difference \(\theta\) and has the value \[[M_{X}(abcd)M_{Y}^{*}(abcd)]_{0}=q^{L(X,Y)-4L}\,. \tag{31}\] In order to represent these Gaussian distributions in a compact way, it is convenient to introduce notation in which the \(L-1\) single-cut subsystems \(X\) used to define our Lagrange multipliers are labels for basis states in a \((L-1)\)-dimensional vector space. Then the values of \(M_{X}(abcd)\) for different \(X\) are components of an \((L-1)\)-component column vector \(\mathsf{M}\), so that \([\mathsf{M}]_{X}=M_{X}(abcd)\), while the Lagrange multipliers are entries in the \((L-1)\times(L-1)\) matrix \(\mathsf{G_{4}}\) with rows and columns labelled by \(X\) and \(Y\) respectively, so that \([\mathsf{G_{4}}]_{X,Y}=G_{4}(X,Y,\theta)\). Similarly, the eigenstate correlators \(F_{4}(X,Y,\theta)\) are elements of a matrix \(\mathsf{F_{4}}\). Then Eq. (30) can be rewritten in the compact form \[S_{4}=\mathsf{M^{T}G_{4}M^{*}}\,. \tag{32}\] The fact that the distributions \(P_{4}(a,b,c,d)\) and \(P_{4}^{(0)}(a,b,c,d)\) are [at large \(L\) and \(L(X)\)] Gaussian for \(\mathsf{M}\) suggests that a convenient coordinate system consists of the components of \(\mathsf{M}\) together with additional variables \(\Omega\) that we do not specify explicitly. We indicate this by writing \(P_{4}(\mathsf{M},\Omega)\) and \(P_{4}^{(0)}(\mathsf{M},\Omega)\). Figure 7: Probability distribution of \(q^{L}|M_{X}(abcd)|\) in the Floquet model of Sec. III.1 (coloured data) compared with fitted Gaussian distributions (black dashed lines). 
Data are for \(L=12\), \(q=2\) and a subsystem \(X\) consisting of the four sites closest to the end of an open system, and are shown for the four indicated values of the relative phase \(\theta=\theta_{a}-\theta_{b}+\theta_{c}-\theta_{d}\). Results for other \(q\) and \(X\) are shown in Appendix A. The result given in Eq. (31) for the covariance within the Haar distribution implies that \[\int\mathrm{d}\Omega P_{4}^{(0)}(\mathsf{M},\Omega) = [\mathcal{Z}_{4}^{(0)}]^{-1}e^{-\mathsf{M}^{\mathsf{T}}\mathsf{G}_{4}^{(0)}\mathsf{M}^{*}} \tag{33}\] \[\text{with}\quad[(\mathsf{G}_{4}^{(0)})^{-1}]_{X,Y} = \left[[\mathsf{M}]_{X}[\mathsf{M}^{*}]_{Y}\right]_{0}=q^{L(X,Y)-4L}\,.\] and \(\mathcal{Z}_{4}^{(0)}=\pi^{L-1}/\det\mathsf{G}_{4}^{(0)}\). This in turn implies that \[\int\mathrm{d}\Omega P_{4}(\mathsf{M},\Omega)=[\mathcal{Z}_{4}]^{-1}e^{-\mathsf{M}^{\mathsf{T}}[\mathsf{G}_{4}+\mathsf{G}_{4}^{(0)}]\mathsf{M}^{*}} \tag{34}\] and hence that \[\left[[\mathsf{M}]_{X}[\mathsf{M}^{*}]_{Y}\right]_{\mathrm{av}}=\left[(\mathsf{G}_{4}+\mathsf{G}_{4}^{(0)})^{-1}\right]_{X,Y}. \tag{35}\] In addition, we have from Eq. (7) \[[\mathsf{F}_{4}]_{X,Y}=(2\pi)^{-1}q^{4L-L(X,Y)}\big{[}[\mathsf{M}]_{X}[\mathsf{M}^{*}]_{Y}\big{]}_{\mathrm{av}}\,. \tag{36}\] Eq. (35) allows the determination of the Lagrange multipliers in Eq. (29) in terms of the matrix \(\mathsf{F}_{4}\), which is obtained using ED. This method was used to generate Fig. 3(c). #### iii.2.2 Determination of \(G_{2}(X,\theta)\) Next we describe the method we use to determine the Lagrange multipliers \(G_{2}(X,\theta)\), treating explicitly the case of \(L-1\) subsystems \(X\) generated by single cuts. A different approach is required to that for \(G_{4}(X,Y,\theta)\) because the probability distribution of \(M_{X}(abba)\) is quite different to that of \(M_{X}(abcd)\) for \(a\neq b\neq c\neq d\). Indeed, while (as discussed) \(M_{X}(abcd)\) has a complex Gaussian distribution with zero mean, \(M_{X}(abba)\) is from its definition real and non-negative. To understand the distribution of \(M_{X}(abba)\) it is useful to start from Eq. (13), which specialises here to \[M_{X}(abba)=q^{-L(X)}\sum_{\alpha}|\langle a|X_{\alpha}|b\rangle|^{2}\,. \tag{37}\] With \(a\neq b\) we expect from ETH that \(\langle a|X_{\alpha}|b\rangle\) for each \(\alpha\) is an independent complex Gaussian random variable. From this we can conclude, first, that the two quantities \(M_{X}(abba)\) and \(M_{X^{\prime}}(abba)\) are correlated if the sets \(\{X_{\alpha}\}\) and \(\{X^{\prime}_{\alpha}\}\) have operators in common, and second, that statistically independent quantities can be constructed using a transformation that organises the operators into disjoint sets. In detail, this transformation is defined recursively by considering the \(L-1\) subsystems in order of increasing size. Again we introduce an \((L-1)\)-dimensional vector space, and define the vector \(\mathsf{M}\) to have components \([\mathsf{M}]_{X}=M_{X}(abba)\). Similarly, we introduce the notation \(\mathsf{T}\) for our target vector with statistically independent components \([\mathsf{T}]_{X}\), and \(\mathsf{G}\) with \([\mathsf{G}]_{X}=G_{2}(X,\theta)\) for the vector of Lagrange multipliers. To write the transformation we abuse notation and substitute in place of the component label \(X\) the value \(\ell=L(X)\). Then using Eq. (37) we write \[\mathsf{T}_{1} =\mathsf{M}_{1} \tag{38}\] \[\text{and}\quad\mathsf{T}_{\ell} =\mathsf{M}_{\ell}-q^{-1}\mathsf{M}_{\ell-1}\] for \(\ell=2\) to \(L-1\). 
This can be recast in the matrix form \[\mathsf{T}=\mathsf{VM} \tag{39}\] where \[\mathsf{V}=\begin{pmatrix}1&0&0&\dots&0\\ -q^{-1}&1&0&\dots&0\\ \vdots&&\ddots&&\\ 0&0&\dots&-q^{-1}&1\end{pmatrix}\,. \tag{40}\] The effect of this transformation is that Eq. (37) is replaced by \(\mathsf{T}_{\ell}=q^{-\ell}\sum_{\alpha}^{\prime}|\langle a|X_{\alpha}|b\rangle|^{2}\,,\) where the sum runs over the subset of \(n_{\ell}\equiv(q^{2}-1)q^{2(\ell-1)}\) operators \(X_{\alpha}\) that act non-trivially at the rightmost site in \(X\) [see illustration in Fig. 3(d)]. Since \(\langle a|X_{\alpha}|b\rangle\) is Gaussian with mean zero, the variable \(s_{\alpha}\equiv|\langle a|X_{\alpha}|b\rangle|^{2}\) has the distribution \(p_{\alpha}(s_{\alpha})=\sigma_{\alpha}e^{-\sigma_{\alpha}s_{\alpha}}\), where \(\sigma_{\alpha}\equiv\big{[}\big{[}|\langle a|X_{\alpha}|b\rangle|^{2}\big{]}_{\mathrm{av}}\big{]}^{-1}\). We make the approximation that \(\sigma_{\alpha}\) takes the same value \(\sigma_{\ell}\) for all \(X_{\alpha}\) that contribute to a given \(\mathsf{T}_{\ell}\). This framework applies not only to the true eigenstate distribution under consideration, but also to vectors with a Haar distribution, and in the latter case we denote the value of \(\sigma_{\alpha}\) by \(\sigma_{\ell}^{(0)}\). Then \[[\mathsf{T}_{\ell}]_{\mathrm{av}}=\frac{n_{\ell}}{\sigma_{\ell}}\quad\text{and}\quad[\mathsf{T}_{\ell}]_{0}=\frac{n_{\ell}}{\sigma_{\ell}^{(0)}}\,. \tag{41}\] With these ingredients in hand, Eq. (28) can be written in the form \[S_{2}=\mathsf{G}^{\mathsf{T}}\mathsf{V}^{-1}\mathsf{T}\quad\text{so that}\quad\sigma_{\ell}=\sigma_{\ell}^{(0)}+[\mathsf{G}^{\mathsf{T}}\mathsf{V}^{-1}]_{\ell}\,. \tag{42}\] Substituting Eq. (41) into Eq. (42) and rearranging, we obtain \[\mathsf{G}_{\ell}=\sum_{\ell^{\prime}}\left\{\frac{n_{\ell^{\prime}}}{[(\mathsf{VM})_{\ell^{\prime}}]_{\mathrm{av}}}-\frac{n_{\ell^{\prime}}}{[(\mathsf{VM})_{\ell^{\prime}}]_{0}}\right\}\mathsf{V}_{\ell^{\prime}\ell}\,. \tag{43}\] We employ Eq. (43) to determine the Lagrange multiplier \(G_{2}(X,\theta)\) in terms of \([\mathsf{M}]_{\mathrm{av}}\) obtained from ED and the Haar average \[[\mathsf{M}_{\ell}]_{0}=q^{-L}(q^{\ell}-q^{-\ell})\,. \tag{44}\] This approach is used to obtain the data shown in Fig. 5. Note [from the definition of \(\mathsf{T}_{\ell}\) below Eq. (40) and the discussion following Eq. (41)] that the probability distribution of \(\mathsf{T}_{\ell}\) is consistent with a Gaussian distribution for the matrix elements \(\langle a|X_{\alpha}|b\rangle\), as expected from ETH, with a variance controlled by the Lagrange multipliers \(G_{2}(X,\theta)\). #### iii.2.3 Iterative method for determining Lagrange multipliers We next describe a straightforward method for determining the Lagrange multipliers without making use of information about the probability distributions of \(M_{X}(abcd)\) and \(M_{X}(abba)\). We treat the case of \(G_{4}(X,Y,\theta)\); the necessary modifications for \(G_{2}(X,\theta)\) are obvious. Our starting point is a perturbative expansion of Eq. 
(29) to first order in \(G(X,Y,\theta)\), which yields \[[M_{X}M_{Y}^{*}]_{\mathrm{av}}= [M_{X}M_{Y}^{*}]_{0} \tag{45}\] \[-\sum_{X^{\prime}Y^{\prime}}[\mathsf{K}_{4}]_{XY,X^{\prime}Y^{ \prime}}G(X^{\prime},Y^{\prime},\theta)\,,\] where \([\mathsf{K}_{4}]_{XY,X^{\prime}Y^{\prime}}\) is the connected correlator \[[\mathsf{K}_{4}]_{XY,X^{\prime}Y^{\prime}} =[M_{X}M_{Y}^{*}M_{X^{\prime}}M_{Y^{\prime}}^{*}]_{0} \tag{46}\] \[-[M_{X}M_{Y}^{*}]_{0}[M_{X^{\prime}}M_{Y^{\prime}}^{*}]_{0}\,.\] Introducing the abbreviations \[[\mathsf{V}_{4}]_{XY} =[M_{X}M_{Y}^{*}]_{0}-[M_{X}M_{Y}^{*}]_{\mathrm{av}} \tag{47}\] \[\text{and}\quad[\mathsf{G}_{4}]_{XY} =G_{4}(X,Y,\theta),\] this gives at first order in perturbation theory \[\mathsf{G}_{4}=\mathsf{K}_{4}^{-1}\mathsf{V}_{4}. \tag{48}\] To go beyond first order perturbation theory, we define an iterative procedure based on Eq. (48). Let \([M_{X}M_{Y}^{*}]_{\mathrm{MC}}^{(n)}\) denote the MC result obtained with the \(n\)th approximant \(G_{4}^{(n)}(X,Y,\theta)\) as Lagrange multiplier, and let \([M_{X}M_{Y}^{*}]_{\mathrm{ED}}\) be the value from ED. Then iterate for \(n=1,2\ldots\) \[[\mathsf{V}_{4}^{(n+1)}]_{XY}=[M_{X}M_{Y}^{*}]_{\mathrm{MC}}^{(n)}-[M_{X}M_{Y }^{*}]_{\mathrm{ED}} \tag{49}\] with \[[\mathsf{V}_{4}^{(1)}]_{XY}=[M_{X}M_{Y}^{*}]_{0}-[M_{X}M_{Y}^{*}]_{\mathrm{ ED}}\,. \tag{50}\] and \[\mathsf{G}_{4}^{(n)}=\mathsf{G}_{4}^{(n-1)}+\mathsf{K}_{4}^{-1}\mathsf{V}_{4} ^{(n)}\,. \tag{51}\] with \[\mathsf{G}_{4}^{(0)}=0\,. \tag{52}\] A possible refinement is to replace Eq. (51) with \[\mathsf{G}_{4}^{(n)}=\mathsf{G}_{4}^{(n-1)}+\alpha\mathsf{K}_{4}^{-1}\mathsf{V }_{4}^{(n)}\,. \tag{53}\] where \(0<\alpha\leq 1\) is a real parameter. Small \(\alpha\) reduces the risk of overshooting the solution at the expense of a slower convergence rate. This method is used to produce the data shown in Fig. 14. ### Numerical methods In this section, we give details of the ED, MC and ensemble averaging procedures used to obtain the data presented in this paper. #### iii.3.1 Exact diagonalization In order to study eigenstate correlations, we use ED of the Floquet operator \(W\) to compute exact eigenvectors and hence averages of \(M_{X}(abcd)M_{Y}^{*}(abcd)\) for all choices of our selected subsystems \(X\) and \(Y\). We obtain phase-resolved averages by dividing the phase interval \([-\pi,\pi]\) into 64 bins. Each realisation of \(W\) with \(L\) spins generates approximately \(q^{4L}/4!\) different quadruples of eigenstates and from these we randomly choose up to \(10^{6}\) quadruples. We average over between 150 (for \(L=12\)) and 1000 (for \(L=10\) and \(L=8\)) realisations of \(W\) to obtain the ED data in Fig. 3, and over 25000 realisations for the results shown in Fig. 4. The symmetry relation \(F_{4}(X,Y,\theta)=F_{4}(X,Y,-\theta)\) allows us to restrict calculations to \(\theta\geq 0\). Similarly, in the case of \(M_{X}(abba)\) we take \(10^{6}\) tuples out of the approximately \(q^{2L}/2\) possibilities for each realisation of \(W\) and \(L=12\). We average over 1000 realisations for \(L=12\) to obtain the data in Fig. 5. From these ED results we compute the Lagrange multipliers \(G_{4}(X,Y,\theta)\) and \(G_{2}(X,\theta)\) using the procedures described in Sec. III.2. We find that a further symmetrisation of the data, using the spatial symmetry under the interchange of \(X\), \(Y\) with \(\overline{X}\), \(\overline{Y}\), improves stability. #### iii.3.2 Monte-Carlo sampling In order to determine the eigenstate correlator \(F_{4}(X,Y,\theta)\) from the JDF [Eq. 
(29)] for four eigenstates \(|a\rangle\), \(|b\rangle\), \(|c\rangle\) and \(|d\rangle\) we use Monte Carlo sampling with \(e^{-S_{4}(a,b,c,d)}\) [Eq. (30)] as the weighting term. Similarly, to determine the correlator \(F_{2}(X,\theta)\) from the JDF [Eq. (27)] for two eigenstates \(|a\rangle\) and \(|b\rangle\) we use Monte Carlo sampling with \(e^{-S_{2}(a,b)}\) [Eq. (28)] as the weighting term. We follow the Metropolis-Hastings algorithm to obtain a Markov chain of vector quadruples \((a,b,c,d)\) distributed according to \(P_{4}\). To generate the next quadruple \((a^{\prime},b^{\prime},c^{\prime},d^{\prime})\), we use a random unitary rotation of the vectors in the previous sample of the form \(V=\exp(i\epsilon A)\), where \(A\) is a random Hermitian matrix drawn from the Gaussian unitary ensemble with unit variance. We set \(\epsilon=0.1\) for \(L=12\) and \(q=2\), and \(\epsilon=0.8\) for \(L=8\) and \(q=2\). Since \(V\) is unitary, orthonormal vectors retain this property after the transformation. This choice of update rule with tuning parameter \(\epsilon\) allows us to perform effective importance sampling, since we only propose relatively small changes to the sample. The new sample is then accepted in the Markov chain with probability \[P_{\rm accept}=\min\left(1,{\rm e}^{S_{4}(a,b,c,d)-S_{4}(a^{\prime},b^{\prime},c^{\prime},d^{\prime})}\right). \tag{54}\] For each \(\theta\) we perform 2000 Monte-Carlo runs in parallel with up to \(10^{6}\) samples per run. The results are obtained by averaging over all runs. In the case of small \(\theta\), we find in rare cases (one out of 1000 runs) instabilities towards local maxima of the weighting function during the sampling process. This is visible by tracking \(S_{4}(a,b,c,d)\) or the acceptance rate. This problem can be circumvented by decreasing the size of the update steps, but only at the expense of longer autocorrelation times. As a compromise, we discard runs where \(S_{4}\) falls below a threshold value of \(-50\). To provide an overall test of our form [Eq. (29)] for the JDF, we show in Fig. 8 a comparison of the distributions of \(S_{4}(a,b,c,d)\) obtained respectively from ED and from MC sampling of the JDF. The excellent agreement between the two distributions over a range of values for \(\theta\) is evidence of the internal consistency of our approach. ## IV Perturbative treatment of Lagrange multipliers An obvious approach to calculations based on the joint eigenstate distributions of Eqns. (27) and (29) is a perturbative expansion in powers of the Lagrange multipliers \(G_{2}(X,\theta)\) and \(G_{4}(X,Y,\theta)\). In this section we set out a general framework for such an expansion and apply it in several ways. While the expansion does not generate fundamentally new results, it provides a useful perspective that is complementary to the one set out in Sec. II and Sec. III. We use the expansion to provide an alternative justification of the fitting procedure for \(G_{4}(X,Y,\theta)\) to the one described in Sec. III.2.1. We also use it to consider Eq. (29) without the simplification employed in Sec. II.6 of setting \(G_{2}(X,\theta)=0\). We show that, for large \(q^{L}\) and \(q^{L(X)}\), if \(F_{2}(X,\theta)\) is calculated from the JDF for four eigenstates rather than two, the influence of \(G_{4}(X,Y,\theta)\) on the result is small. 
This means that the predictions of ETH for matrix elements between pairs of eigenstates are only weakly affected by the correlations between sets of four eigenstates that are introduced with our Ansatz for the JDF. Finally, we show that the effect of the Lagrange multipliers \(G_{2}(X,\theta)\) and \(G_{4}(X,Y,\theta)\) on the normalisation of eigenstates drawn from the joint distributions is small in large systems. An exact perturbative expansion requires the evaluation of averages over a Haar distribution of vectors, denoted by \(P_{1}^{(0)}(a)\), \(P_{2}^{(0)}(a,b)\) and \(P_{4}^{(0)}(a,b,c,d)\) in Sec. II.5. While the formalism required for this is well developed (see e.g. [45]), it is quite cumbersome. Moreover, we require results only at leading order for \(q^{L}\) large. These can be obtained by substituting, in place of the Haar distribution, one in which each vector component is an independent Gaussian random variable. Specifically, consider a computational basis \(\{|k_{c}\rangle\}\) and denote the overlap of the eigenstate \(|a\rangle\) with the basis state \(|k_{c}\rangle\) by \(a(k)=\langle k_{c}|a\rangle\). Define \(S_{1}(a)\) by \[S_{1}(a)=q^{L}\sum_{k}|a(k)|^{2}\,. \tag{55}\] We replace \(P_{4}^{(0)}(a_{1},a_{2},a_{3},a_{4})\) by \[P_{4}^{(\rm G)}(a_{1},a_{2},a_{3},a_{4})=\left(\frac{q^{L}}{\pi}\right)^{4q^{L}}e^{-\sum_{i}S_{1}(a_{i})}\,. \tag{56}\] Although different vectors drawn from this distribution are not in general exactly orthonormal, orthonormality is recovered in the limit \(q^{L}\to\infty\). We denote averages with respect to this Gaussian distribution by \([\ldots]_{\rm G}\). ### Diagrammatic notation It is useful to employ diagrammatic notation for these Gaussian averages. As in Fig. 1, eigenstates are represented by circles and index contractions following from the definition of \(M_{X}(abcd)\) are indicated by solid lines carrying arrows that run from states \(|a\rangle\) towards conjugate states \(\langle b|\); these lines carry labels to indicate the subsystem within which the contraction is done. The combination of circles and full lines, with labels for states and subsystems, is fixed by the choice of quantity we average, and diagrams are generated by making all possible Wick pairings of circles. These pairings are represented by dashed lines. The contribution of a diagram is a product of two factors. One factor stems from Eq. (56) and consists of \(q^{-L}\) for every dashed line. The other factor arises from sums over the Hilbert space at each site. To evaluate this factor we form closed loops in the diagram consisting alternately of dashed lines and full lines traversed in the direction of the arrows. These full lines Figure 8: Probability distribution of \(S_{4}(a,b,c,d)\) [Eq. (30)] for quasienergy differences \(\theta\) as indicated: comparison between ED results (black dashed lines) and MC results (solid coloured lines). Parameters as in Fig. 3. Our first objective is to obtain a relationship between \(F_{4}(X,Y,\theta)\) and \(G_{4}(X,Y,\theta)\) by expanding \(e^{-S_{4}(a,b,c,d)}\) in a power series, averaging with respect to the Gaussian distribution, and then resumming the terms at each order in the expansion that are leading for \(q^{L}\) and \(q^{L(X)}\) large. By this means we will recover Eq. (35). Using the notation of Eq. 
(32) we require the connected contributions to \(\big[[\mathsf{M}]_{X}[\mathsf{M}]_{Y}[\mathsf{M}^{\mathsf{T}}\mathsf{G}_{4}\mathsf{M}^{*}]^{n}\big]_{\mathrm{G}}\) at each order \(n\) in perturbation theory that are leading for large system and subsystem sizes. These come from the Wick contractions that generate the largest number of loops in a decomposition of the type illustrated in Fig. 9. These contractions are illustrated for \(n=0,1\) and \(2\) in Fig. 10, establishing an obvious pattern for general \(n\). Retaining only these terms we have \[\left[[\mathsf{M}]_{X}[\mathsf{M}^{*}]_{Y}\right]_{\mathrm{av}} \approx \sum_{n=0}^{\infty}(-1)^{n}[(\mathsf{G}_{4}^{(0)})^{-1}(\mathsf{G}_{4}(\mathsf{G}_{4}^{(0)})^{-1})^{n}]_{X,Y} \tag{58}\] \[= \left[(\mathsf{G}_{4}^{(0)})^{-1}[\openone+\mathsf{G}_{4}(\mathsf{G}_{4}^{(0)})^{-1}]^{-1}\right]_{X,Y}\] \[= \left[(\mathsf{G}_{4}+\mathsf{G}_{4}^{(0)})^{-1}\right]_{X,Y}\] where \(\mathsf{G}_{4}^{(0)}\) is as defined following Eq. (33). [Note that for these terms the factor of \((n!)^{-1}\) arising from the power series expansion of \(e^{-S_{4}(a,b,c,d)}\) is cancelled by a combinatorial factor arising in the pairing of terms in Fig. 10.] Figure 10: Leading connected contributions to \(\big[[\mathsf{M}]_{X}[\mathsf{M}]_{Y}[\mathsf{M}^{\mathsf{T}}\mathsf{G}_{4}\mathsf{M}^{*}]^{n}\big]_{\mathrm{G}}\) for: (i) \(n=0\), (ii) \(n=1\) and (iii) \(n=2\). The pair of squares in (i) and the left-most pairs of squares in (ii) and (iii) represent \([\mathsf{M}]_{X}[\mathsf{M}]_{Y}\); other pairs of squares represent \(\mathsf{M}^{\mathsf{T}}\mathsf{G}_{4}\mathsf{M}^{*}\). The dashed lines in (ii) and (iii) that leave the diagrams on the left are joined to the dashed lines that leave on the right. Figure 9: Diagrammatic representation and evaluation of \([M_{X}(abba)]_{\mathrm{G}}\): (a) full diagram; (b) decomposition into loops for a site in subsystem \(X\); (c) decomposition into loops for a site in subsystem \(\overline{X}\). Comparing Eqns. (35) and (58) we see that this diagrammatic resummation provides an alternative derivation of the main results of Sec. III.2.1. ### Cross-correlations In our MC studies of the eigenstate JDF [Eq. (29)] we have made two simplifications: one is to omit \(G_{4}(X,Y,\theta)\) when studying \(F_{2}(X,\theta)\), and the other is to omit \(G_{2}(X,\theta)\) when studying \(F_{4}(X,Y,\theta)\). The magnitudes of the resulting errors can be assessed using perturbation theory, as we now discuss. We begin by considering the effect of \(G_{4}(X,Y,\theta)\) on the value of \(F_{2}(X,\theta)\), or equivalently on the value of \([M_{X}(abba)]_{\rm av}\), as follows. To first order in perturbation theory in \(G_{4}(X,Y,\theta)\) we have \[[M_{X}(abba)]_{\rm av}= [M_{X}(abba)]_{\rm G}\] \[-\sum_{X^{\prime}Y^{\prime}}G_{4}(X^{\prime},Y^{\prime},\theta)[M_{X}(abba)M_{X^{\prime}}(abcd)M_{Y^{\prime}}^{*}(abcd)]_{G,c}\,, \tag{59}\] \[+\mathcal{O}([G_{4}(X^{\prime},Y^{\prime},\theta)]^{2})\,,\] where \([\ldots]_{\rm G,c}\) denotes the connected average, defined by \[[M_{X}(abba)M_{X^{\prime}}(abcd)M_{Y^{\prime}}^{*}(abcd)]_{\rm G,c}=\] \[[M_{X}(abba)M_{X^{\prime}}(abcd)M_{Y^{\prime}}^{*}(abcd)]_{\rm G}\] \[-[M_{X}(abba)]_{\rm G}[M_{X^{\prime}}(abcd)M_{Y^{\prime}}^{*}(abcd)]_{\rm G}\,. \tag{60}\] The diagrams that contribute to this connected average are shown in Fig. 11. 
Evaluating these diagrams for the representative case \(X=X^{\prime}=Y^{\prime}\), we obtain \[[M_{X}(abba) M_{X}(abcd)M_{X}^{*}(abcd)]_{\rm G,c} \tag{61}\] \[=q^{-3L} (q^{-L(X)}+2q^{-L(\overline{X})})\,.\] We compare the zeroth-order term, which is \[[M_{X}(abba)]_{\rm G}=q^{L(X)-L} \tag{62}\] from Eq. (57), with the first-order term, using Eq. (61) and \(G_{4}(X,X,\theta)\sim q^{2L}\) from Eq. (33), which gives \[\sum_{X^{\prime}Y^{\prime}}G_{4}(X^{\prime},Y^{\prime},\theta)[M_{X}(abba)M_{X^{\prime}}(abcd)M_{Y^{\prime}}^{*}(abcd)]_{G,c} \tag{63}\] \[\sim q^{-L}(q^{-L(X)}+2q^{-L(\overline{X})})\,.\] Hence we see that the effect of \(G_{4}(X^{\prime},Y^{\prime},\theta)\) on \(F_{2}(X,\theta)\) is small provided that \(q^{L(X)}\) and \(q^{L(\overline{X})}\) are large. We have not systematically investigated higher order terms in Eq. (59), but we expect them individually to be small: note that although \(G_{4}(X^{\prime},Y^{\prime},\theta)\sim q^{2L}\), each such contribution is accompanied by a factor of \(q^{-4L}\) from extra dashed lines, as well as diagram-dependent factors from sums over the Hilbert space at each site. A similar study of the effect of \(G_{2}(X,\theta)\) at first order in perturbation theory on \(F_{4}(X,Y,\theta)\), or equivalently on \([M_{X^{\prime}}(abcd)M_{Y^{\prime}}^{*}(abcd)]_{\rm av}\), reaches a different conclusion: the influence is not small in \(q^{L}\) or \(q^{L(X)}\), but only in powers of \(G_{2}(X,\theta)\). This is not unexpected: the functional form for a correlator involving four eigenstates and depending on three quasienergy differences has been discussed previously [46] and in Sec. II.3; it involves factors of both \(F_{2}(X,\theta)\) and \(F_{4}(X,Y,\theta)\) (see Eq. (24)). We do not pursue this further since we have chosen here to study models in which \(F_{2}(X,\theta)\) lies close to its value for Haar-distributed pairs of eigenstates, and \(G_{2}(X,\theta)\) is therefore small. ### Renormalisation of propagators and vertices An alternative perspective is provided by considering the perturbative renormalisation of the propagators and vertices appearing in the JDF. We begin with the former: the bare propagator in the theory, represented using dashed lines in the figures, is generated by Eqns. (55) and (56), and carries a factor of \(q^{-L}\). More generally, it acquires a self-energy \(\Sigma\) and the factor becomes \((q^{L}+\Sigma)^{-1}\). Our aim is to evaluate \(\Sigma\) at leading order in \(G_{2}(X,\theta)\) and \(G_{4}(X,Y,\theta)\) and compare it to the bare inverse propagator \(q^{L}\): if it is small under this comparison, then the effects of the vertices on the normalisation of eigenstates can be neglected. Figure 11: Contributions to the connected average defined in Eq. (60). The diagrams contributing to \(\Sigma\) at this order are shown in Fig. 12. They are diagonal matrices in the site basis and the magnitudes of their largest entries are respectively \[\Sigma^{\rm(i)}_{kk}=q^{-L}\sum_{X}G_{2}(X,\theta)q^{L(X)}\sim q^{2L(X)} \tag{64}\] [where we have used Eqns. (43) and (44) to estimate \(G_{2}(X,\theta)\sim q^{L+L(X)}\)] and \[\Sigma^{\rm(ii)}_{kk}=q^{-4L}\sum_{X,Y}q^{L(X,Y)}G_{4}(X,Y,\theta)\sim\mathcal{O}(1)\,, \tag{65}\] [where we have used an estimate of \(G_{4}(X,Y,\theta)\) given above Eq. (63)]. From this we see that the renormalisation of the propagator is indeed small if \(q^{L}\) is large and if \(q^{2L(X)}\ll q^{L}\). 
In the opposite regime (\(q^{2L(X)}>q^{L}\)) we believe that \(G_{2}(X,\theta)\) is small since \(F_{2}(X,\theta)\) approaches the value it takes for Haar-distributed eigenstates, and that \(\Sigma^{\rm(i)}_{kk}\ll q^{L}\) notwithstanding the estimate of Eq. 64. A similar discussion can be developed of renormalised vertices generated by combining contributions from \(S_{2}(a,b)\) and \(S_{4}(a,b,c,d)\) in (29) and forming sufficient internal contractions to generate a new effective contribution to either \(S_{2}(a,b)\) or \(S_{4}(a,b,c,d)\). For example, one (out of three possible terms) contributing in this way to \(S_{2}(a,b)\) at first order in both \(G_{2}(X,\theta)\) and \(G_{4}(X,Y,\theta)\) is shown in Fig. 13. We do not pursue this further because this is exactly the same phenomenon as has been discussed from a different perspective in Sec. IV.3. Note that it is not necessary to consider renormalisation of \(S_{2}(a,b)\) in powers of \(G_{2}(X,\theta)\) alone, or of \(S_{4}(a,b,c,d)\) in powers of \(G_{4}(X,Y,\theta)\) alone, since these effects (which are not generally small) are covered in full by the approaches described in Sec. IV.2 and in Sec. III.2. ## V Results for further models In this section we provide numerical results for brickwork models additional to the one treated in Sec. II. In Sec. V.1 we omit the cutoff in the operator purity of gates introduced in Sec. III.1 and present results for gates drawn from a Haar-distribution. In Sec. V.2 we give results with local Hilbert space dimension \(q=3\) and Haar-distributed gates. ### Results for \(q=2\) with Haar gates As discussed in Sec. III, the determination of the Lagrange multipliers \(G_{4}(X,Y,\theta)\) in the brickwork model with Haar-distributed gates is complicated by the presence of weak links. Specifically, we find that the simplifying assumption used in Sec. III.2.1, namely that the probability distribution of \(M_{X}(abcd)\) is approximately Gaussian, does not hold for Haar-distributed gates. Instead, this distribution exhibits long tails, as we demonstrate in Appendix A.2. To determine \(G_{4}(X,Y,\theta)\) under these circumstances, we use the iterative fitting procedure introduced in Sec. III.2.3, which is not predicated on a particular form for the probability distribution of \(M_{X}(abcd)\). We take all subsystems \(X\) and \(Y\) that can be obtained using a single cut from a system with open boundary conditions. In contrast to Sec. II, where the requirement of an approximately Gaussian distribution for \(M_{X}(abcd)\) led to the restriction \(2<L(X)<L-2\), here we include all subsystem sizes \(1\leq L(X)\leq L-1\). A disadvantage of the iterative fitting procedure is that it requires multiple MC evaluations of \(F_{4}^{\rm MC}(X,Y,\theta)\), which is slow if the total number of degrees of freedom \(q^{L}\) involved is large; this restricts us to \(L=8\) with \(q=2\). In this case estimates \(F_{4}^{\rm MC}(X,Y,\theta)\) are obtained using between 60 and 300 iterations and a step size \(\alpha=0.2\). The MC results shown in Fig. 14 display very good agreement with ED data. Figure 12: Contributions to the self-energy \(\Sigma\) at first order: (i) from \(G_{2}(X,\theta)\), and (ii) from \(G_{4}(X,Y,\theta)\). Figure 13: One of three contributions to the renormalisation at first order in both \(G_{2}(X,\theta)\) and in \(G_{4}(X,Y,\theta)\) of the vertex that appears in \(S_{2}(a,b)\). 
### Results for \(q=3\) brickwork model In our study of the brickwork model with \(q=3\) and Haar-distributed gates we use system size \(L=8\) so that ED calculations are straightforward. The effect of weak links becomes less pronounced with increasing bond dimension \(q\) and as a consequence the single-shot approach to obtain \(G_{4}(X,Y,\theta)\) of Sec. III.2.1 becomes more accurate. Conversely, for a given system size the iterative method of Sec. III.2.3 is more difficult to apply with increasing \(q\), because a large number of samples is required to obtain accurate estimates for \(F_{4}^{\rm MC}(X,Y,\theta)\) when the Hilbert space dimension \(q^{L}\) of the system is large (see the discussion of Sec. II.4 and Ref. [21]). We therefore determine Lagrange multipliers for this model using the single-shot approach, taking subsystems \(X\) obtained from a system with open boundary conditions by making a single cut, and with \(L(X)\geq 2\), \(L(\overline{X})\geq 2\). The MC results shown in Fig. 15 display excellent agreement with ED data. Finally, we test how well the JDF fitted to the geometries of Fig. 14(b) can reproduce the behaviour of the OTOC in the geometries of Fig. 16(b). The results shown in Fig. 16(a) display excellent agreement between ED and MC data. ## VI Summary and Outlook In this work we have analysed the interplay between the statistical properties of the time evolution operator for chaotic many-body quantum systems, and the quantum information dynamics that these systems display. The eigenstate thermalisation hypothesis provides an accurate description of some key aspects, in terms of the probability distribution of matrix elements of observables between eigenstates of the time-evolution operator. In its original form, however, it does not capture the consequences of a finite speed for quantum information spreading. Figure 16: Test of JDF fitted to behaviour in the geometries of Fig. 15 (b) but applied to geometries of Fig. 16 (b), for the brickwork model with \(q=3\) and \(L=8\). (a) Comparison of data from MC (open circles) and ED (solid lines). (b) Partition used for (a), in which the 8-site system is divided by two spatial cuts into a two-site subsystem \(X\) and its complement \(\overline{X}\), or a two-site subsystem \(\overline{Y}\) and its complement \(Y\). Figure 14: \(F_{4}^{\rm MC}(X,Y,\theta)\) (open circles from MC) and \(F_{4}(X,Y,\theta)\) (lines from ED) vs \(\theta\) for various \(s\), for a brickwork model with Haar-distributed gates, \(q=2\) and \(L=8\). Figure 15: \(F_{4}^{\rm MC}(X,Y,\theta)\) (open circles from MC) and \(F_{4}(X,Y,\theta)\) (lines from ED) vs \(\theta\) for various \(s\), for a brickwork model with Haar-distributed gates, \(q=3\) and \(L=8\). 
We believe our approach is complementary to recent work [36; 37] that generalises ETH using the language of Free Probability theory. An advantage of viewing this problem in terms of correlations between eigenstates rather than matrix elements is that even the simplest assumption, of a Haar distribution for eigenstates, yields correlations with the correct order of magnitude. In this instance, however, the correlations are independent of eigenvalues. By contrast, a microscopic model for local quantum dynamics yields a characteristic dependence of correlators on differences in quasienergies, as we have summarised in Fig. 3. We have shown that our Ansatz for the joint distribution of eigenstates, together with a suitable choice for the Lagrange multipliers, captures this dependence on eigenvalues differences. The quantity that characterises correlations between sets of four eigenstates, denoted \(F_{4}(X,Y,\theta)\) in this paper, is a function not only of the eigenphase difference \(\theta\), but also of subsystem choices \(X\) and \(Y\). This constitutes a large set of possibilities if there are no restrictions on the way the subsystems are selected. We have focussed on the simple set of subsystems that can be obtained from an overall system with open (rather than periodic) boundary conditions by making a single spatial cut, choosing Lagrange multipliers in our Ansatz for the JDF so that \(F_{4}(X,Y,\theta)\) is reproduced accurately for these subsystems. In addition we have shown that the eigenstate correlations imposed in this way are sufficient to reproduce to a good approximation the correlators for some other choices of \(X\) and \(Y\). In particular, we have demonstrated that the behaviour of the OTOC for operators supported on a few sites of the system (requiring choices of \(X\) and \(Y\) each involving two cuts) is well approximated by our JDF. Several obvious directions remain open for future work. Perhaps most importantly, while our discussion in this paper has been restricted to Floquet systems, it would be desirable to extend the approach to systems with a time-independent Hamiltonian. In our context, the significance of such an extension is that diagonal matrix elements of local observables in the basis of Hamiltonian eigenstates are generically functions of energy, a feature absent from Floquet systems. Indeed, ETH is formulated in part to describe this energy dependence. A natural modification of our JDF to impose such an energy dependence is to supplement Eqns. (27) and (29) with an extra factor \(e^{-\sum_{k}S_{1}(a_{k})}\) chosen to bias selected eigenstates towards a pre-specified energy shell. An obvious choice is to take \(S_{1}(a)=\beta\langle a|H|a\rangle\), where \(H\) is the Hamiltonian and \(\beta\) is a Lagrange multiplier. It is worth emphasising that the correlations induced respectively by \(S_{1}(a)\), \(S_{2}(a,b)\) and \(S_{4}(a,b,c,d)\) are associated with widely separated energy scales. Taking the characteristic strength of local interactions as the unit of energy, \(S_{1}(a)\) selects vectors from a broad energy window, which has a width that increases with system size. In turn, \(S_{2}(a,b)\) generates correlations between pairs of vectors within an energy window with width of order unity. 
Finally \(S_{4}(a,b,c,d)\) generates correlations between groups of four vectors with energy or quasienergy differences lying in a narrow energy window, whose width decreases indefinitely with increasing separation between the subsystems \(X\) and \(Y\) in \(F_{4}(X,Y,\theta)\). A further direction for future work is to examine correlations between sets of \(n\) eigenstates with \(n>4\). This opens up many possibilities, since generalisations of the correlators \(F_{2}(X,\theta)\) and \(F_{4}(X,Y,\theta)\) may involve multiple subsystems in their definition. Restricting to a pair of subsystems \(X\) and \(Y\), these higher-order correlators are required, for example, to describe the higher-order Renyi entropies of the operator entanglement for the time-evolution operator. Beyond this, one can ask whether there are physical phenomena exposed only by higher-order correlators. ###### Acknowledgements. This work was supported by the Deutsche Forschungsgemeinschaft through the cluster of excellence ML4Q (EXC2004, project-id 390534769) and by the UK Engineering and Physical Sciences Research Council through Grants EP/N01930X/1 and EP/S020527/1. We also acknowledge support from the QuantERA II Programme, which has received funding from the European Union's Horizon 2020 research innovation programme (GA 101017733), and from the DFG through the project DQUANT (project-id 499347025), and from the National Science Foundation under Grant No. NSF PHY-1748958. We thank S. Parameswaran for useful comments. ## Appendix A Distribution of \(M_{x}(abcd)\) In this appendix we examine the probability distribution of \(M_{X}(abcd)\), providing further details beyond the information given in Fig. 7. This information is important because one of the two methods that we use for determining the Lagrange multipliers \(G_{4}(X,Y,\theta)\) relies on this distribution having a Gaussian form. In Ap pendix A.1 we show how the form of the distribution varies with the value of \(L(X)\). In Appendix A.2 we investigate the effect on the distribution of the cutoff in the operator entanglement purity of two-qubit gates introduced in Sec. III.1. ### Effect of the subsystem size \(L(x)\) The distribution of \(M_{X}(abcd)\) for different partitions \(X\) is shown in Fig. 17 (a) and (b). With increasing partition size \(L(X)\), the distribution approaches a Gaussian. This supports our observation that the method we use to determine \(G_{4}(X,Y,\theta\) becomes more accurate with increasing \(L(X)\). ### Effect of the cutoff in the operator entanglement purity In Sec. III we presented results for a model (see Sec. III.1) that is well adapted to our procedure for finding \(G_{4}(X,Y,\theta)\). This model is designed to have a probability distribution for \(M_{X}(abcd)\) that is close to Gaussian, and has gates drawn from a truncated version of the Haar distribution, with a cutoff on the operator entanglement purity of \(c\times q^{4}\). In Fig. 18 we examine the effect of the value of this cutoff on the probability distribution of \(M_{X}(abcd)\), considering the range from \(c=0.3\) (the value used in Sec. III) to \(c=1\) (an unrestricted Haar distribution). The distribution shows non-Gaussian tails for small relative phase \(\theta\) and a Haar-random Floquet model. The tails are suppressed with decreasing cutoff in the operator entanglement purity. We attribute this effect to weak links \(i\) on which the gate \(w_{i}\) is close to the identity (especially in small systems or with small subsystems). 
The effect of such weakly entangling gates on the dynamics of quantum information was studied in more detail in [47]. Figure 17: Probability distribution of \(|M_{X}(abcd)|\) for different subsystems \(X\) with \(L(X)=k\) and \(L=12\), \(q=2\), and fitted complex Gaussian distributions (dashed black) as guide for the eye: (a) \(\theta=0.1\) and (b) \(\theta=1.7\). Brickwork model with gates sampled from the distribution defined in III.1 and a cutoff \(0.3\times q^{4}\) for the operator purity. With decreasing partition size \(L(X)\), the distribution of \(M_{X}(abcd)\) exhibits non-Gaussian tails. Figure 18: Probability distribution of \(|M_{X}(abcd)|\) at partition \(k=2\) and \(L=12\), \(q=2\), and a complex Gaussian distribution (dashed black) as guide for the eye: (a) \(\theta=0.1\) and (b) \(\theta=1.7\). Brickwork model with unitary gates sampled from the distribution defined in III.1 and values of the cutoff \(c\) as indicated. With increasing cutoff, the distributions show non-Gaussian tails for small relative phases \(\theta\). ## Appendix B Accuracy of our approach with increasing system size Here we provide complete information on deviations between \(F_{4}^{\rm MC}(X,Y,\theta)\) and \(F_{4}(X,Y,\theta)\) with Lagrange multipliers \(G_{4}(X,Y,\theta)\) determined using the single-shot procedure described in Sec. III.2.1. We compare the results for system sizes \(L=8\), \(10\), \(12\) and all possible partitions defined by one cut that have \(L(X),L(\overline{X})>2\). The relative deviations are shown in Fig. 19. In all cases the relative deviation is less than \(10\%\), showing the accuracy of our approach. Furthermore, the accuracy improves with increasing system size. We see the largest deviations for small partition sizes \(L(X)\) and \(L(Y)\). This is expected, as we discuss in Sec. IV: Correction terms in the perturbation theory and contributions from \(G_{2}(X,\theta)\) are most significant when \(L(X)\) and \(L(\overline{X})\) are small.
2309.12996
Point Cloud Network: An Order of Magnitude Improvement in Linear Layer Parameter Count
This paper introduces the Point Cloud Network (PCN) architecture, a novel implementation of linear layers in deep learning networks, and provides empirical evidence to advocate for its preference over the Multilayer Perceptron (MLP) in linear layers. We train several models, including the original AlexNet, using both MLP and PCN architectures for direct comparison of linear layers (Krizhevsky et al., 2012). The key results collected are model parameter count and top-1 test accuracy over the CIFAR-10 and CIFAR-100 datasets (Krizhevsky, 2009). AlexNet-PCN16, our PCN equivalent to AlexNet, achieves comparable efficacy (test accuracy) to the original architecture with a 99.5% reduction of parameters in its linear layers. All training is done on cloud RTX 4090 GPUs, leveraging pytorch for model construction and training. Code is provided for anyone to reproduce the trials from this paper.
Charles Hetterich
2023-09-22T16:56:40Z
http://arxiv.org/abs/2309.12996v1
# Point Cloud Network: An Order of Magnitude Improvement in Linear Layer Parameter Count ###### Abstract This paper introduces the Point Cloud Network (**PCN**) architecture, a novel implementation of linear layers in deep learning networks, and provides empirical evidence to advocate for its preference over the Multilayer Perceptron (**MLP**) in linear layers. We train several models, including the original **AlexNet**, using both MLP and PCN architectures for direct comparison of linear layers (Krizhevsky et al., 2012). The key results collected are model parameter count and top-1 test accuracy over the **CIFAR-10** and **CIFAR-100** datasets (Krizhevsky, 2009). AlexNet-PCN\({}_{16}\), our PCN equivalent to AlexNet, achieves comparable efficacy (_test accuracy_) to the original architecture with a **99.5%** reduction of parameters in its linear layers. All training is done on cloud _RTX 4090_ GPUs, leveraging pytorch for model construction and training. Code is provided for anyone to reproduce the trials from this paper. \({}^{\text{a}}\)Master of Data Science Student, University of Texas at Austin, Austin, TX, USA _Keywords--_ Point cloud network, Low-rank factorization, Linear layer ## 1 Introduction The Multilayer Perceptron is the simplest type of Artificial Neural Network (**ANN**). Since its inception in the mid-20th century, it has held firmly as one of the most popular structures in deep learning. MLPs were the first networks used with backpropagation and are relied on heavily in the attention mechanisms of the popular transformer architectures (11, 12). Typically, networks that employ MLPs suffer from an extremely large parameter count. This is because the amount of trainable parameters present in an MLP scale by \(O(n^{2})\) relative to the number of input features. A parameter count so large that models have to be run across several GPUs because the parameters alone cannot fit into just one. GPT-3 and GPT-4 are two well-known models today, both of which rely on MLPs in their transformer architectures, with GPT-3 holding 175 billion trainable parameters [1, 10]. AlexNet, widely regarded as the catalyst of the modern deep learning boom over a decade ago, popularized the convolutional operation [8]-- the key feature of the convolution being its reduction in parameter count in processing image data [9]. Despite the value demonstrated by the parameter reduction present in convolutional networks, MLPs are still prevalent simply because there is currently no accessible alternative implementation of linear layers. The PCNs presented in this paper cut the parameter count present in linear layers by **an order of magnitude**, \(O(n^{2})\to O(n)\), while still maintaining a comparable efficacy to their equivalent MLP counterpart. ### Related Work Low-rank compression of ANNs is an emerging area of research which is closely related to PCNs [3]. Most work in this area relies on _Singular Value Decomposition_ and can be divided into one of two categories (1) finding a low-rank factorization of a pre-trained network [3, 5], or (2) training a low-rank network directly [6, 13]. The latter is more closely related to a PCN. ### Contribution This work offers a rephrasing of the same problem that low-rank factorization networks aim to solve. In low-rank factorization, we start with a weight matrix, \(W\), and look to find an optimal _compression_ that maintains efficacy [3]. 
The PCN starts with an already small set of parameters, and looks to find an optimal _expansion_ of those parameters that will perform with comparable efficacy to \(W\). We outline a light-weight implementation of the PCN architecture that is practical and generalizable to most existing deep learning networks, with source code that makes it trivial to implement. We also provide a set of key results that demonstrate that a PCN can substantially reduce the number of parameters in linear layers while still maintaining a comparable efficacy to an MLP. ## 2 Background- Multilayer Perceptron Architecture Two terms commonly used in describing ANNs are **neurons** and **weights**. In MLPs, neurons are the space where outputs from one layer and inputs to the next layer may be found. The weights are the _things in between neurons_. They are what processes information from one layer to the next. In most current deep learning architectures this is where nearly all of the trainable parameters can be found. Let's say we have two layers of neurons in a deep neural network, \(l_{i}\), and \(l_{i+1}\), holding \(n\) and \(m\) neurons, respectively. \(l_{i}\) takes input array \(x_{i}\) and processes that through \(l_{i+1}\) into \(x_{i+1}\). Between these two layers there will be trainable parameters \(W_{i}\), a matrix of size \(n\times m\). There is a also bias term, \(b_{i+1}\), an array of size \(m\). We define the MLP forward function as, \[x_{i+1}=x_{i}\cdot W_{i}+b_{i+1}\] noting that this operation contains \(O(mn)\) trainable parameters. ## 3 Point Cloud Network Architecture In contrast to an MLP, the trainable parameters of a PCN are all _neuron-centric_. What is learned are _features of the neurons themselves_, rather than _something in between_. In an MLP, we would say that the bias term, \(b\) is _neuron-centric_, but not \(W\) which contains a large majority of MLP parameters. We will treat the features of neurons as positional information (i.e. each neuron is a point in space, hence the name). The rest of this section explains step-by-step how to use these neuron features to process input data in the same way, and with the same expressiveness, as an MLP. ### Distance Matrix Going back to the prior example network-- this time we'll say \(l_{i}\), and \(l_{i+1}\) are actually trainable parameters, where \(l_{i}\) is of shape \(n\times d\) and \(l_{i+1}\) is of shape \(m\times d\). \(d\) is a hyperparameter representing the number of features each of our neurons have, or we can say this is the _dimensionality_ of the space our neurons exist in. \(d\) is an especially interesting hyperparameter because it allows us to scale up or down the number of Figure 1: visual representation of MLP forward function parameters in our network without affecting the number of features in a given layer. We'll also use bias term \(b_{i+1}\) of size \(m\) again. The \(W\) from an MLP is of shape \(n\times m\). In this step, we can generate an equally shaped distance matrix \(D(l_{i},l_{i+1})\), where \(D_{j,k}\) is the distance between neurons \(l_{i,j}\) and \(l_{i+1,k}\). \[D_{j,k}(l_{i},l_{i+1})=\sqrt{\sum_{c=1}^{d}(l_{i,j,c}-l_{i+1,k,c})^{2}}\] The intention is to replace \(W\) with \(D\) as follows, \[x_{i+1}=x_{i}\cdot D(l_{i},l_{i+1})+b_{i+1}\] However, \(D\) only contains _nonnegative_ numbers, whereas \(W\in\mathbb{R}^{n\times m}\). Using \(W\), a network can choose to _flip_ and _scale_ the signal passed forward from one neuron to another, whereas using \(D\), a network can only _scale_ signals. 
This would make our network using \(D\) fundamentally less expressive than one using \(W\). \(D\) is also prone to exploding/vanishing gradients. In this work \(D\) is given as the euclidean distance between \(d\)-dimensional points, but \(D_{j,k}\) has many possible implementations. The important feature of \(D\) is that it outputs an appropriately shaped matrix that facilitates interaction between every neuron in \(l_{i}\), with every neuron in \(l_{i+1}\). An example of an alternate implementation would be to omit the square root in the definition above. Another example would be the product \(l_{i}l_{i+1}^{\top}\) giving \(D\) a similar property to the multiplication of _keys and queries_ in transformers[12] or \(UV^{\top}\) in low-rank factorization[5, 6, 13, 3]. There likely exists a more optimal definition of \(D\) than the one defined here. ### Distance-Weight-Function The _distance-weight-function_, denoted here as \(F\), is an element-wise function to pass \(D\) through. The goal of \(F\) is to project \(D\) into a space that makes it as expressive as \(W\) and to provide regularization properties. In this paper the **triangle wave** is selected for \(F\). Let \(F_{\lambda,\epsilon}\) be an element-wise triangle wave function centered around \(0\) with amplitude \(\lambda\) and period \(\epsilon\), with a regularization term included. \[F_{\lambda,\epsilon}(z)=\frac{\mathbf{1}}{\sqrt{\boldsymbol{n}}}\cdot\frac{ \lambda}{\epsilon}\cdot(\epsilon-|z\bmod 2\epsilon-\epsilon|-\frac{\epsilon}{2})\] There is room for simplification, but the above equation is what is used in this work. \(\frac{\mathbf{1}}{\sqrt{\boldsymbol{n}}}\) is selected as the regularization term in order to maintain a stable signal moving forward through the network, agnostic of layer size and network depth. This term was found through a trial-and-error approach observing the variance of signal passed through untrained networks, which can be found in the provided source code. A better regularization term likely exists. **Selection of The Triangle Wave.** The triangle wave is selected for two desirable properties. Firstly, it takes any number \(\in\mathbb{R}\) and clamps it to the range \([-\lambda,\lambda]\). This provides important control over the stability of our signal moving forward through the network, ensuring that no weights are excessively large in magnitude, regardless of how much neurons may explode away from, or implode into one another during the learning process. This in turn allows for a steady flow of gradients during backpropagation. The second property that is specific to the triangle wave is its constant gradient and continuity-- informed by the prevalence of the ReLU shape for nonlinearities [8]. Cos/sin have saddle points where gradients may get stuck. Square waves' gradients are flat and saw waves are discontinuous which may lead to the network _pushing_ or _pulling_ a weight _up_ or _down_ the saw wave's drop off. ### Forward Function The PCN forward function is given as follows, \[x_{i+1}=x_{i}\cdot F_{\lambda,\epsilon}(D(l_{i},l_{i+1}))+b_{i+1}\] which has \(O(n+m)\) trainable parameters, in contrast to the \(O(nm)\) trainable parameters in an MLP. ## 4 Training This section details the model architectures implemented as well as the full training process. 
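Before turning to the specific models, the layer construction of Section 3 can be summarised in a short PyTorch sketch. The class below is illustrative only: the name `PCNLinear`, the defaults, and the treatment of the regularization factor (with \(n\) taken as the fan-in) follow the descriptions in this paper, but it is not the provided reference implementation.
```
import math
import torch
import torch.nn as nn

class PCNLinear(nn.Module):
    """Sketch of one PCN linear layer (Sec. 3): neuron positions + bias only."""
    def __init__(self, n_in, n_out, d=16, lam=1.0, eps=0.1):
        super().__init__()
        # neuron-centric parameters: one d-dimensional point per neuron
        self.l_in = nn.Parameter(torch.empty(n_in, d).uniform_(-1.0, 1.0))
        self.l_out = nn.Parameter(torch.empty(n_out, d).uniform_(-1.0, 1.0))
        self.bias = nn.Parameter(torch.empty(n_out).uniform_(-0.1, 0.1))
        self.lam, self.eps, self.n_in = lam, eps, n_in

    def forward(self, x):                        # x: (batch, n_in)
        D = torch.cdist(self.l_in, self.l_out)   # pairwise distances, shape (n_in, n_out)
        z = torch.remainder(D, 2 * self.eps)     # element-wise triangle wave F_{lam,eps}
        F = (self.lam / self.eps) * (self.eps - (z - self.eps).abs() - self.eps / 2)
        F = F / math.sqrt(self.n_in)             # 1/sqrt(n) regularization, n taken as fan-in
        return x @ F + self.bias                 # O(n_in + n_out) trainable parameters
```
A LinearNet-PCN-style classifier is then obtained by stacking such layers with ReLUs where the baseline would use `torch.nn.Linear`.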
Techniques such as random image augmentation, batch normalization, or residual connections are refrained from being used in favor of the direct comparison of linear layer Figure 2: visual representation of PCN forward function performance of MLPs and PCNs over achieving state of the art (**SOTA**) performance. Additionally, a limited compute budget informs several design choices seen in this section. ### Model Definitions A modest variety of model categories are trained to evaluate the PCNs performance in different circumstances. For each model category there is a single baseline model that uses MLPs and one or more equivalent PCN models. All PCN models use hyper-parameters \(\lambda=1,\epsilon=0.1\). Details about the shape and depth of each network can be found in figure 3. #### 4.1.1 LinearNet **Baseline Network.** LinearNet-MLP consists of linear layers followed by ReLUs. A linear layer with no ReLU is applied to produce the final output. **PCN Network.** Four LinearNet-PCN\({}_{d}\) models are trained, each differing only in their dimensionality (\(d\in[4,8,16,32]\)). Each PCN takes the baseline definition and replaces all MLP layers with equally shaped PCN layers with no further modification. Figure 3: Illustration of LinearNet, ConvNet, and AlexNet architectures. On the left side of each network are the sizes of the signal passed forward through the network. #### 4.1.2 ConvNet **Baseline Network.** ConvNet-MLP has a _feature extractor_ network consisting of convolutional layers followed by ReLUs. The feature extractor network is then fed into the _classifier_ network, consisting of linear layers followed by ReLUs with a linear layer at the end. **PCN Network.** ConvNet-PCN\({}_{16}\) uses the same feature extractor as ConvNet-MLP. For the classifier a PCN is used instead of an MLP for linear layers with no further modifications. #### 4.1.3 AlexNet **Baseline Network.** AlexNet-MLP is an untrained replica of the original model with a single modification made to the last linear layer in order to output the appropriate number of class predictions for each of CIFAR-10 and CIFAR-100. Like the previous ConvNet, AlexNet also consists of a convolutional _feature extractor_ network followed by a linear _classifier_ network. The classifier network employs _dropout\({}_{\tt=0.5}\)_ before each linear layer, besides an isolated linear layer at the end [8]. **PCN Network.** AlexNet-PCN\({}_{16}\) uses the same feature extractor as AlexNet-MLP. In the classifier, MLPs are replaced with PCNs for linear layers. AlexNet-PCN\({}_{16}\) also features _dropout\({}_{\tt=0.5}\)_ layers as used in the original. ### Datasets CIFAR-10 and CIFAR-100 are two popular image classification datasets. Both are labeled subsets of the tiny images dataset. CIFAR-10 consists of 60000 32x32 images divided into 10 classes, with 6000 images per class. The dataset is split into 50000 training images and 10000 withheld test images with exactly 1000 images of each class in the test set. CIFAR-100 is the same as CIFAR-10 but with 600 images per class, and follows the same principal for train/test split. The images and classes used in CIFAR-10 are mutually exclusive from those in CIFAR-100 [7]. The CIFAR datasets are chosen for benchmarks in order to strike balance between task difficulty and compute required. The MNIST dataset is too easy to solve-- very small networks can achieve close to 100% test accuracy-- so it is difficult to extract conclusive results about a PCN's efficacy in comparison to an MLP on this dataset. 
ImageNet, the dataset AlexNet was originally trained on, would require too much compute. The CIFAR datasets are sufficiently difficult tasks, while also being small enough to train the largest models in a reasonable amount of time given compute constraints. Although MNIST is not used as a benchmark in this work, it was a valuable resource in performing rapid preliminary testing of the PCN architecture. The MNIST dataset was used in making all of the architecture and regularization choices seen throughout this paper [2]. ### Preprocessing For training/validation of LinearNet models, images are scaled down from 32x32 to 16x16, reducing the first linear layer's input size from 3072 \(\rightarrow\) 768. Conversely, All im ages are scaled up to 227x227 for AlexNet models to match the original paper [8]. ### Initialization MLP and convolutional parameters use default initializations given by _torch.nn.Linear_ and _torch.nn.Conv2d_, respectively. PCN neuron positional values are initialized uniformly over the range \([-1,1]\) and bias terms uniformly over the range \([-0.1,0.1]\). ### Loss, Gradient, and Optimizers Loss for all models are calculated using _torch.nn.CrossEntropyLoss_, and parameter gradients are calculate using pytorch's autograd feature. MLP and convolutional parameters are updated with _stochastic gradient descent_ (**SGD**), via, _torch.optim.SGD_. PCN parameters are updated with a slightly modified version of SGD that is informed by layer size. Given layer size \(n\), PCN parameters \(l_{i}\), \(b_{i}\), gradients \(\Delta l_{i}\), \(\Delta b_{i}\), and learning-rate \(\gamma\) we perform a PCN's SGD update as follows: \[l_{i} \coloneqq l_{i}-\gamma\Delta l_{i}\frac{\boldsymbol{n}}{\boldsymbol {\log_{2}n}}\] \[b_{i} \coloneqq l_{i}-\gamma\Delta b_{i}\boldsymbol{10^{5}}\] Both of the terms \(\frac{n}{\log_{2}n}\) and \(10^{5}\) are used in order to make parameters throughout the network learn at close to the same rate, agnostic of layer size. These values were selected in early tests by observing the variance in gradients during the training process over a variety of network shapes. This optimization strategy does not account for irregularities in gradients resulting from network depth and artifacts of this fact may become pronounced in the loss/accuracy curves when attempting to train deep PCN networks, although residual connections may alleviate this problem [4]. For the purpose of the trials done in this paper, the above optimization strategy is sufficient. ### Training Loop Details All models are trained with a **batch size of 1024** and **learning-rate of 0.0001**, for **3.5k epochs**. For each training iteration, we aggregate the _loss_, and for each epoch we aggregate both _training accuracy_ and _test accuracy_, seen in figure 5. ## 5 Results This section presents the results of training all LinearNet, ConvNet, and AlexNet architectures over the CIFAR-10 and CIFAR-100 datasets. Key results are collected in table 1. Reported train/test accuracies are generated with the the final models after training. During the training of ANNs, it is normal for accuracies to fluctuate from epoch-to-epoch which introduces minor variance into these results. Loss, training accuracy, and test accuracy curves are collected in figure 5 of the appendix, which display more stable trends. Discussion focuses on linear parameter count and test accuracy. ### LinearNet Four LinearNet-PCN\({}_{d}\) models and LinearNet-MLP are trained. 
For \(d=4,8\) there is a degradation in performance relative to the MLP. At \(d=16,32\), the PCN outperforms the MLP on both datasets. As \(d\) increases, the PCNs experience more overfitting. The MLP experiences substantially more overfitting than all PCNs. LinearNet-PCN\({}_{32}\), the largest PCN in this class of models, has 161k parameters, which is a **95.9%** reduction from 3.95 million parameters in the MLP. Additionally, figure 5 displays a consistent increase in PCN performance with an increase in \(d\). ### ConvNet Both ConvNet-MLP and ConvNet-PCN\({}_{16}\) have 5.35 million convolutional parameters. ConvNet-PCN\({}_{16}\) outperforms ConvNet-MLP by **1.9%** on CIFAR-10 and underperforms by **1.2%** on CIFAR-100. Similarly to LinearNet, ConvNet-MLP experiences more overfitting than ConvNet-PCN\({}_{16}\). ConvNet-PCN\({}_{16}\) has 35k linear parameters, which is a **96.7%** reduction from 1.06 million linear parameters in the MLP. ### AlexNet Both AlexNet-MLP and AlexNet-PCN\({}_{16}\) have 2.47 million convolutional parameters. AlexNet-PCN\({}_{16}\) outperforms AlexNet-MLP by **0.3%** in CIFAR-10, and underperforms by **3.8%** in CIFAR-100. Both models experience similar amounts of overfitting. AlexNet-PCN\({}_{16}\) has 296k linear parameters, which is a **99.5%** reduction from 54.6 million linear parameters in the MLP. \begin{table} \begin{tabular}{l||l|l|l|l|l} \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{CIFAR-10} & \multicolumn{2}{c}{CIFAR-100} \\ \hline model & \multicolumn{1}{c|}{**\# linear params**} & \multicolumn{1}{c|}{top-1 acc.} & \multicolumn{1}{c|}{**top-1 acc.**} & \multicolumn{1}{c|}{top-1 acc.**} & \multicolumn{1}{c}{**top-1 acc.**} \\ & **(millions)** & \multicolumn{1}{c|}{(train)} & \multicolumn{1}{c|}{**(test)**} & \multicolumn{1}{c|}{(train)} & \multicolumn{1}{c}{**(test)**} \\ \hline LinearNet-PCN\({}_{4}\) & **0.024** & 48.4 & **46.0** & 23.5 & **20.9** \\ LinearNet-PCN\({}_{8}\) & **0.044** & 53.4 & **48.9** & 26.2 & **22.7** \\ LinearNet-PCN\({}_{16}\) & **0.083** & 61.4 & **53.0** & 31.9 & **26.1** \\ LinearNet-PCN\({}_{32}\) & **0.161** & 66.9 & **52.8** & 39.1 & **28.1** \\ LinearNet-MLP & **3.957** & 96.8 & **52.1** & 78.7 & **25.2** \\ \hline \hline ConvNet-PCN\({}_{16}\) & **0.035** & 88.4 & **60.0** & 72.9 & **26.1** \\ ConvNet-MLP & **1.06** & 98.9 & **58.1** & 99.8 & **27.3** \\ \hline \hline AlexNet-PCN\({}_{16}\) & **0.296** & 85.7 & **78.9** & 51.6 & **43.7** \\ AlexNet-MLP & **54.575** & 84.1 & **78.6** & 52.5 & **47.5** \\ \hline \end{tabular} \end{table} Table 1: Train and test accuracies (%) over both CIFAR-10 and CIFAR-100 datasets for each model, along with the parameter counts of their linear layers. Limitations and Future Work ### Memory Requirements As has been demonstrated by this work, the PCN architecture can substantially reduce the number of parameters needed to train linear layers. However, the implementation seen here does not actually reduce the memory requirements. This is due to my reliance on pytorch's native autograd feature and _torch.cdist_ to find \(D\). During the forward pass, \(D\) in its entirety is calculated and stored in memory, which is the same size as \(W\). A fused kernel function for calculating \(x_{i+1,k}\) that never stores \(D\) but instead calculates \(D_{j,k}\) as needed could be used. 
\[\sigma_{k}(x_{i})=b_{i+1,k}+\sum_{j=1}^{n}x_{i,j}\cdot F_{\lambda,\epsilon}(D_ {j,k}(l_{i,j},l_{i+1,k}))\] Successfully implementing this along with its corresponding gradient functions on accelerated hardware would reduce memory consumption \(O(n^{2})\to O(n)\) during training and inference. ### Compute Requirements Two limiting factors of deep learning are **memory** and **compute**. The PCN architecture can alleviate memory consumption, but requires \(O(d)\) times more compute than an MLP. ### Network Stability As has been stated previously, all regularization terms used in the PCNs presented in this work were found through trial-and-error rather than rigorous math. Because of this, these PCNs are not resilient to their hyperparameters and a more robust PCN definition should be investigated. ### Applying PCNs Elsewhere In this work the PCN architecture is applied to linear layers. The same concept can be applied to the convolutional layers along the _channel_ axis, and to graph layers along the _node-feature_ axis. ### Conjecture-- Why PCNs Work The concept of a PCN can be boiled down to an MLP where we generate a plausible \(W\), similarly to low-rank factorization [5, 6, 13, 3]. Let \(\mathbf{W}=\mathbb{R}^{n\times m}\) be the set of all possible values for \(W\), \(\mathbf{W}^{\star}\subseteq\mathbf{W}\) be the set of all possible values for \(F(D)\), and \(\overline{L}_{\mathbf{W}}\) be the mean loss w.r.t. \(\mathbf{W}\). If \(\overline{L}_{\mathbf{W}^{\star}}=\overline{L}_{\mathbf{W}}\), then \(F(D)\) should have a comparable efficacy to \(W\). Consequently, if \(\overline{L}_{\mathbf{W}^{\star}}<\overline{L}_{\mathbf{W}}\) or \(\overline{L}_{\mathbf{W}^{\star}}>\overline{L}_{\mathbf{W}}\), then \(F(D)\) would be expected to perform better or worse than \(W\), respectively. It may be interesting to investigate \(F(D)\) that _maximizes_\(\overline{L}_{\mathbf{W}}-\overline{L}_{\mathbf{W}^{\star}}\). ## 7 Ethical Concerns With the exception of recent high profile publications, it seems a relatively uncommon practice to include an ethics section in a deep learning paper like this one. I use this section as a platform to attempt to mindfully outline some of my concerns. I include this section to advocate for a culture within academia that normalizes, legitimizes, and prioritizes this conversation-- hoping that a more organized practice forms. **Downstream Consequences.** Deep Learning is a unique technology in that it is largely task-agnostic. Because of this, the set of downstream applications is uncharacteristically large compared to other technology. Although the PCNs presented in this paper are applied to test datasets, the intention is to integrate this into existing deep learning architectures for which there are existing harmful applications. This makes it important to be cognizant of and acknowledge these harmful applications. **Mindful Conversations.** Having productive conversations about A.I. safety is a bit paradoxical. It is surely helpful to be aware of potential negative applications of deep learning, yet it may actually be harmful to indulge in any unnecessary details that don't move the conversation forward. For example, I would consider media outlets echoing unproductive details about harmful applications to be an unethical practice. ## 8 Acknowledgements I would like to thank Ryan Schaake for offering fruitful comments, review, and insight.
2308.00063
Isospectral Reductions of Non-negative Matrices
Isospectral reduction is an important tool for network/matrix analysis as it reduces the dimension of a matrix/network while preserving all its eigenvalues and eigenvectors. The main contribution of this manuscript is a proposed algorithmic scheme to approximate the stationary measure of a stochastic matrix based on isospectral reduction. This scheme can be advantageous when there is more than one eigenvalue near 1, precisely the case where iterative methods perform poorly. In addition we give a partial explanation why this scheme should work well, showing that in some situations isospectral reduction improves the spectral gap.
Alexandre Baraviera, Pedro Duarte, Longmei Shu, Maria Joana Torres
2023-07-31T18:35:41Z
http://arxiv.org/abs/2308.00063v2
# Isospectral Reductions of Non-negative Matrices ###### Abstract Isospectral reduction is an important tool for network/matrix analysis as it reduces the dimension of a matrix/network while preserving all its eigenvalues and eigenvectors. The main contribution of this manuscript is a proposed algorithmic scheme to approximate the stationary measure of a stochastic matrix based on isospectral reduction. This scheme can be advantageous when there is more than one eigenvalue near 1, precisely the case where iterative methods perform poorly. In addition we give a partial explanation why this scheme should work well, showing that in some situations isospectral reduction improves the spectral gap. keywords: isospectral reductions, stochastic matrices, stationary measure Msc: 15A18, 05C50 + Footnote †: journal: Linear Algebra and its Applications ## 1 Introduction Markov chains are a powerful tool for modeling and predicting the behavior of complex systems. They are used in a wide variety of fields, including finance, biology, and computer science. The stationary distribution of a Markov chain describes its long-term behavior and can be used to understand and control the behavior of Markov chains. And how do we compute the stationary vectors for a Markov chain? Typically with iterative methods. While they work well for sparse transition matrices of a Markov chain with a simple eigenvalue 1, the computation can take a long time as the transition matrices become very large or if there are multiple eigenvalues near 1. Are there alternative ways to speed up the computation? The recently developed theory of Isospectral Transformations (IT) of matrices and networks allowed for advances in various areas and led to several surprising results [6; 3; 5; 4; 9; 8; 11; 12; 14; 13; 15]. The theory of isospectral transformations was initially aimed at reduction (i.e. simplification) of networks, while keeping all the information about the spectrum of their weighted adjacency, Laplace, or other matrices. However, it turns out that all the information about the eigenvectors of these matrices also gets preserved under ITs [9; 5]. The eigenvectors of the reduced matrix are bijective projections of the eigenvectors of the original matrix and we have formulas to reconstruct the original eigenvectors from the reduced eigenvectors. Therefore it is natural to ask if we can efficiently use the reductions to compute the stationary measure of a large stochastic matrix in a Markov chain. The main goal of the present paper is to investigate this question. We define isospectral reductions first (section 2), then introduce various measurements involving the eigenvalues of a stochastic matrix (section 3). A new computational scheme is proposed for the computation of stationary measures for stochastic matrices using isospectral reduction (section 4). We run numerical experiments to compare this new scheme with traditional methods. The new scheme is faster and more accurate for stochastic matrices with multiple eigenvalues close to 1. To understand this process better, we construct various examples of stochastic matrices that become positive (all entries are positive) after reduction (section 5). We then show that the reduction of a stochastic matrix is still a stochastic matrix; the semi-norm, which measures how much the columns of a stochastic matrix vary in value, decreases after reduction (section 6). While the semi-norm is an upper bound for the second largest eigenvalue of a stochastic matrix, they may not be equal. 
We also show that for bi-stochastic matrices the Gershgorin region, which traps the eigenvalues of a matrix by disks around its diagonal entries, shrinks in disk sizes after isospectral reductions. ## 2 Isospectral reductions In this section we recall definitions of the isospectral reductions of graphs and networks. Let \(\mathbb{W}\) be the set of rational functions of the form \(w(\lambda)=p(\lambda)/q(\lambda)\), where \(p(\lambda),q(\lambda)\in\mathbb{C}[\lambda]\) are polynomials having no common linear factors, i.e., no common roots, and where \(q(\lambda)\) is not identically zero. \(\mathbb{W}\) is a field under addition and multiplication [6]. ### Isospectral Graph Reductions Let \(\mathbb{G}\) be the class of all weighted directed graphs with edge weights in \(\mathbb{W}\). More precisely, a graph \(G\in\mathbb{G}\) is an ordered triple \(G=(V,E,w)\) where \(V=\{1,2,\ldots,n\}\) is the _vertex set_, \(E\subset V\times V\) is the set of _directed edges_, and \(w:E\rightarrow\mathbb{W}\) is the _weight function_. Denote by \(M_{G}=(w(i,j))_{i,j\in V}\) the _weighted adjacency matrix_ of \(G\), with the convention that \(w(i,j)=0\) whenever \((i,j)\not\in E\). We will alternatively refer to graphs as networks because weighted adjacency matrices can be used to define static (i.e. non evolving) real world networks. Observe that the entries of \(M_{G}\) are rational functions. Let's write \(M_{G}(\lambda)\) instead of \(M_{G}\) here to emphasize the role of \(\lambda\) as a variable. For \(M_{G}(\lambda)\in\mathbb{W}^{n\times n}\), we define the spectrum, or multiset of eigenvalues to be \[\sigma(M_{G}(\lambda))=\{\lambda\in\mathbb{C}:\det(M_{G}(\lambda)-\lambda I)=0\}.\] Notice that \(\sigma(M_{G}(\lambda))\) can have more than \(n\) elements, some of which can be the same. Throughout the rest of the paper, the spectrum is understood to be a set that includes multiplicities. An eigenvector for eigenvalue \(\lambda_{0}\in\sigma(M_{G}(\lambda))\) is defined to be \(u\in\mathbb{C}^{n},u\neq 0\) such that \[M_{G}(\lambda_{0})u=\lambda_{0}u.\] One can see that the eigenvectors of \(M_{G}(\lambda)\in\mathbb{W}^{n\times n}\) for \(\lambda_{0}\) are the same as the eigenvectors of \(M_{G}(\lambda_{0})\in\mathbb{C}^{n\times n}\) for \(\lambda_{0}\). Similarly the generalized eigenvectors of \(M_{G}(\lambda)\) for \(\lambda_{0}\) are the generalized eigenvectors of \(M_{G}(\lambda_{0})\) for \(\lambda_{0}\). A path \(\gamma=(i_{0},\ldots,i_{p})\) in the graph \(G=(V,E,w)\) is an ordered sequence of distinct vertices \(i_{0},\ldots,i_{p}\in V\) such that \((i_{l},i_{l+1})\in E\) for \(0\leq l\leq p-1\). The vertices \(i_{1},\ldots,i_{p-1}\in V\) of \(\gamma\) are called _interior vertices_. If \(i_{0}=i_{p}\) then \(\gamma\) is a _cycle_. A cycle is called a _loop_ if \(p=1\) and \(i_{0}=i_{1}\). The length of a path \(\gamma=(i_{0},\ldots,i_{p})\) is the integer \(p\). Note that there are no paths of length \(0\) and that every edge \((i,j)\in E\) is a path of length \(1\). If \(S\subset V\) is a subset of all the vertices, we will write \(\overline{S}=V\setminus S\) and denote by \(|S|\) the cardinality of the set \(S\). **Definition 2.1**.: (Structural set). _Let \(G=(V,E,w)\in\mathbb{G}\). 
A nonempty vertex set \(S\subset V\) is a structural set of \(G\) if_ * _each cycle of_ \(G\)_, that is not a loop, contains a vertex in_ \(S\)_;_ * \(w(i,i)\neq\lambda\) _for each_ \(i\in\overline{S}\)_._ **Definition 2.2**.: _Given a structural set \(S\), a branch of \((G,S)\) is a path \(\beta=(i_{0},i_{1},\ldots,i_{p-1},i_{p})\) such that \(i_{0},i_{p}\in V\) and all \(i_{1},\ldots,i_{p-1}\in\overline{S}\)._ We denote by \(\mathcal{B}=\mathcal{B}_{G,S}\) the set of all branches of \((G,S)\). Given vertices \(i,j\in V\), we denote by \(\mathcal{B}_{i,j}\) the set of all branches in \(\mathcal{B}\) that start in \(i\) and end in \(j\). For each branch \(\beta=(i_{0},i_{1},\ldots,i_{p-1},i_{p})\) we define the _weight_ of \(\beta\) as follows: \[w(\beta,\lambda):=w(i_{0},i_{1})\prod_{l=1}^{p-1}\frac{w(i_{l},i_{l+1})}{ \lambda-w(i_{l},i_{l})}. \tag{1}\] Given \(i,j\in V\) set \[R_{i,j}(G,S,\lambda):=\sum_{\beta\in\mathcal{B}_{i,j}}w(\beta,\lambda). \tag{2}\] **Definition 2.3**.: (Isospectral reduction)_. Given \(G\in\mathbb{G}\) and a structural set \(S\), the reduced adjacency matrix \(R_{S}(G,\lambda)\) is the \(|S|\times|S|-\)matrix with the entries \(R_{i,j}(G,S,\lambda),i,j\in S\). This adjacency matrix \(R_{S}(G,\lambda)\) on \(S\) defines the reduced graph which is the isospectral reduction of the original graph \(G\)._ ### Isospectral Matrix Reductions For any matrix \(M\in\mathbb{W}^{n\times n}\), let \(N=\{1,2,\ldots,n\}\). If the sets \(R,C\subset N\) are nonempty, we denote by \(M_{RC}\) the \(|R|\times|C|\) submatrix of \(M\) with rows indexed by \(R\) and columns by \(C\). Suppose that \(S\subset N\) and its complement \(\overline{S}=N\setminus S\) are nonempty. The isospectral reduction of \(M\) over the set \(S\) is defined as \[R_{S}=M_{SS}-M_{S\overline{S}}(M_{\overline{S}\overline{S}}-\lambda I)^{-1}M_ {\overline{S}S}.\] The only requirement for \(S\) here is that the inverse matrix \((M_{\overline{S}\overline{S}}-\lambda I)^{-1}\) exists. This is a more general condition than that of the isospectral graph reduction. Indeed for the isospectral graph reduction, there must be no non-loop cycles in \(\overline{S}\), which means that after conjugating by a permutation \(M_{\overline{S}\overline{S}}\) is a triangular matrix. Also, the weights of loops in \(\overline{S}\) are not equal to \(\lambda\). This ensures \(M_{\overline{S}\overline{S}}-\lambda I\) is invertible, but it's a stronger condition. When both of these conditions hold the isospectral matrix reduction gives the same reduced matrix as the isospectral graph reduction (theorem 2.1 [6]). Isospectral reductions preserve the eigenvalues (corollary 2.1 [6]) and eigenvectors of a matrix (theorem 1 [9], theorem 3 [5]). One can apply isospectral reductions sequentially and the final result only depends on the indices/vertices left (corollary 2.3 [6]). ## 3 Spectral measurements A matrix \(A\in\mathbb{R}^{n\times n}\) with entries \(A=(a_{ij})\) is (column) _stochastic_ if \(a_{ij}\geq 0\) and \(\sum_{i=1}^{n}a_{ij}=1,\forall j=1,\ldots,n\). If 1 is a simple eigenvalue of \(A\) and there is no other eigenvalue of \(A\) on the unit circle, we call \(A\)_non-critical_, otherwise \(A\) is said to be _critical_. Let \(\Delta^{n-1}\) be the simplex of all probability vectors \[\Delta^{n-1}:=\left\{x=(x_{1},\ldots,x_{n})\in\mathbb{R}^{n}\colon x_{i}\geq 0,\;\sum_{i=1}^{n}x_{i}=1\right\}.\] For an integer \(m\in\mathbb{N}\) we use the notation \(a_{ij}^{m}\) for the entries of the power matrix \(A^{m}\). 
\(A\) is _primitive_ if there exists \(m\geq 1\) such that \(a_{ij}^{m}>0,\forall i,j=1,\ldots,n\). Every primitive matrix is non-critical but the converse is not true. Given two vertices \(i,j\in\{1,\ldots,n\}\), we say that \(i\)_leads to_\(j\) and write \(i\rightsquigarrow j\) if there exists \(m\geq 1\) such that \(a_{ji}^{m}>0\). We say that \(i\) and \(j\)_communicate_ if \(i\rightsquigarrow j\) and \(j\rightsquigarrow i\), in which case we write \(i\rightsquigarrow j\). A set of vertices \(C\subseteq\{1,\ldots,n\}\) is called a class of \(A\) if 1. \(i\rightsquigarrow j\) for all \(i,j\in C\) 2. \(C\) is saturated for \(\rightsquigarrow\), i.e., \(i\in C\) and \(i\rightsquigarrow j\;\;\Rightarrow\;\;j\in C\). A class \(C\) is _essential_ if for all \(i\in C\) and \(j\in\{1,\ldots,n\}\), \(i\rightsquigarrow j\) implies \(j\in C\). **Definition 3.1**.: (Inner spectral radius) _Let \(\sigma(A)=\{\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\}\) be the eigenvalues of \(A\) sorted by its absolute value in a way that \(\lambda_{1}=1\geq|\lambda_{2}|\geq\cdots\geq|\lambda_{n}|\). Then_ \[\rho_{i}(A):=\left|\lambda_{2}\right|\] _is called the inner spectral radius of \(A\)._ **Proposition 3.1**.: _For a stochastic matrix \(A\in\mathbb{R}^{n\times n}\), the following are equivalent:_ 1. \(A\) _is non-critical, i.e.,_ \(\rho_{i}(A)<1\)_,_ 2. \(A\) _admits a unique essential class, which is aperiodic,_ 3. _There is a unique_ \(v_{*}\in\Delta^{n-1}\) _such that_ \(A\,v_{*}=v_{*}\) _and, moreover,_ \(\lim_{m\to\infty}A^{m}\,v_{0}=v_{*},\forall v_{0}\in\Delta^{n-1}\)_._ Proof.: This theorem follows from the classical Theory of Markov Chains, see [7, Chapter V]. We give a rough description of the results involved. Given a stochastic matrix \(A\in\mathbb{R}^{n\times n}\), a vertex \(i\in\{1,\ldots,n\}\) is called transient if \(i\rightsquigarrow i\) does not hold. The relation \(\rightsquigarrow\) is an equivalence relation on the set of non-transient vertexes, whose equivalence classes are precisely the classes defined above. A fixed point \(q\in\Delta^{n-1}\), \(A\,q=q\) is called a stationary measure. The set of all stationary measures is a compact polytope. The extremal points of this compact convex set are the ergodic stationary measures, where a stationary measure \(q\in\Delta^{n-1}\) is said to be ergodic, respectively mixing, if the Markov shift determined by the pair \((A,q)\) is ergodic, respectively mixing, see [16]. The support of a stationary measure is always a union of essential classes, i.e., stationary measures do not see transient vertexes nor non-essential classes. The map that assigns its support to a stationary measure is a one-to-one correspondence between stationary measures and essential classes. The partition of \(\{1,\ldots,n\}\) in classes and transient vertexes gives an upper triangular block representation of \(A\), which becomes block diagonal if we group together non-essential classes and transient states. Hence, the spectrum of \(A\) is the union of the spectra of its essential classes and the spectrum of the submatrices corresponding to non-essential classes and to transient vertexes. A vector \(v\in\Delta^{n-1}\) such that \(A\,v=v\) is called \(A\)-_stationary_. For critical stochastic matrices, \(\rho_{i}(A)=1\). 
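As a quick numerical illustration of Proposition 3.1, assembling a stochastic matrix from two disjoint essential classes makes it critical: the eigenvalue 1 is no longer simple, so there is no unique stationary vector. The NumPy sketch below is for illustration only.
```
import numpy as np

# One column-stochastic block, repeated as two disjoint essential classes.
B = np.array([[0.5, 0.3],
              [0.5, 0.7]])
A = np.zeros((4, 4))
A[:2, :2] = B
A[2:, 2:] = B
moduli = np.sort(np.abs(np.linalg.eigvals(A)))[::-1]
print(moduli[:2])   # two eigenvalues of modulus 1, hence rho_i(A) = 1 (critical)
```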
**Definition 3.2**.: (Diameter) _For a stochastic matrix \(A\in\mathbb{R}^{n\times n}\),_ \[\tau(A):=\max_{i,j}\frac{1}{2}\,\sum_{k=1}^{n}\bigl{|}a_{ki}-a_{kj}\bigr{|}\] _is called the diameter of \(A\)._ Consider the stochastic norm \(\|x\|_{1}:=\sum_{j=1}^{n}\bigl{|}x_{j}\bigr{|}\). **Remark 3.1**.: _Notice that \(\tau(A)\) is half of the diameter of the image of the simplex \(\Delta^{n-1}\) by \(A\) w.r.t. the norm \(\|\cdot\|_{1}\). Moreover, the function \(A\mapsto\tau(A)\) is a semi-norm on the space of matrices._ Since \[a_{ki}\wedge a_{kj}+\frac{1}{2}\left|a_{ki}-a_{kj}\right|=\frac{a_{ki}+a_{kj}}{ 2},\] we get \[\tau(A)=\max_{i,j}\sum_{k=1}^{n}[\frac{a_{ki}+a_{kj}}{2}-a_{ki}\wedge a_{kj}]=1 -\min_{i,j}\sum_{k=1}^{n}a_{ki}\wedge a_{kj}.\] **Proposition 3.2**.: _For any stochastic matrix \(A\in\mathbb{R}^{n\times n}\) and probability vectors \(x,y\in\Delta^{n-1}\),_ \[\|A\,x-A\,y\|_{1}\leq\tau(A)\,\|x-y\|_{1}.\] Proof.: Given two probability vectors \(x,y\in\Delta^{n-1}\), averaging we get \[\|A\,x-A\,y\|_{1} =\sum_{k=1}^{n}\bigl{|}\sum_{i=1}^{n}a_{ki}\,x_{i}-\sum_{j=1}^{n }a_{kj}\,y_{j}\bigr{|}\] \[=\sum_{k=1}^{n}\bigl{|}\sum_{i=1}^{n}a_{ki}x_{i}\sum_{j=1}^{n}y_{ j}-\sum_{j=1}^{n}a_{kj}y_{j}\sum_{i=1}^{n}x_{i}\bigr{|}\] \[\leq\sum_{k=1}^{n}\sum_{i=1}^{n}\sum_{j=1}^{n}\bigl{|}a_{ki}-a_{kj }\bigr{|}\,x_{i}\,y_{j}\] \[=\sum_{i=1}^{n}\sum_{j=1}^{n}\left(\sum_{k=1}^{n}\bigl{|}a_{ki}-a _{kj}\bigr{|}\right)\,x_{i}\,y_{j}\leq 2\,\tau(A).\] Next write \(p=x-y\) with \(p=p^{+}-p^{-}\) where \(p^{+}:=(p_{1}^{+},\ldots,p_{n}^{+})\) and \(p^{-}:=(p_{1}^{-},\ldots,p_{n}^{-})\) with \[p_{j}^{+}:=\max\{x_{j}-y_{j},0\}\ \ \text{and}\ \ p_{j}^{-}:=\max\{y_{j}-x_{j},0\}.\] Since \[\sum_{j=1}^{n}(x_{j}-y_{j})=0=\sum_{j=0}^{n}(p_{j}^{+}-p_{j}^{-})=\sum_{j=0}^ {n}p_{j}^{+}-\sum_{j=1}^{n}p_{j}^{-},\] we have \(\alpha=\|p^{+}\|_{1}=\|p^{-}\|_{1}>0\). Applying the previous inequality to the probability vectors \(\alpha^{-1}\,p^{+}\) and \(\alpha^{-1}\,p^{-}\) we get \[\|A\,x-A\,y\|_{1} =\|A\,(x-y)\|_{1}=\|A\,p^{+}-A\,p^{-}\|_{1}\] \[=\alpha\|A\alpha^{-1}p^{+}-A\alpha^{-1}p^{-}\|_{1}\] \[\leq 2\,\alpha\,\tau(A)=\tau(A)\,(\|p^{+}\|_{1}+\|p^{-}\|_{1})\] \[=\tau(A)\,\|x-y\|_{1}.\] **Remark 3.2**.: \(\tau(A)\) _is the operator norm of \(A\) on \((H,\|\cdot\|_{1})\), where \(H:=\left\{x\in\mathbb{R}^{n}\colon\,\sum_{j=1}^{n}x_{j}=0\right\}\). In particular, \(\rho_{i}(A)\leq\|A|_{H}\|=\tau(A)\). If \(\tau(A)<1\) then \(A\) is a contraction on \(\Delta^{n-1}\)._ **Remark 3.3**.: _If \(a_{ij}\geq c>0\) for all \(i,j\in\{1,\ldots,n\}\) then_ \[\tau(A)\leq 1-n\,c<1.\] _If there is an \(i_{0}\in\{1,\ldots,n\}\) such that \(a_{i_{0}j}\geq c>0\) for all \(j\in\{1,\ldots,n\}\) then_ \[\tau(A)\leq 1-c<1.\] Proof.: For instance, the second statement follows because \[\tau(A)=1-\min_{i,j}\sum_{k=1}^{n}a_{ki}\wedge a_{kj}\leq 1-\min_{i,j}a_{i_{0}i} \wedge a_{i_{0}j}\leq 1-c.\] **Definition 3.3**.: _For any stochastic matrix \(A\in\mathbb{R}^{n\times n}\), let_ \[m(A)=\min_{i,j}a_{ij}\] _be the smallest entry of \(A\)._ Since \(A\) is stochastic, if \(m(A)=1/n\), then all entries of \(A\) are the same, exactly \(1/n\). For a more general stochastic matrix whose entries are not all the same, we always have \(m(A)<1/n\). If \(A\) and \(B\) are both matrices with non-negative entries, then we always have \(m(A+B)\geq m(A)+m(B)\). 
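These quantities, and the inequalities \(\rho_{i}(A)\leq\tau(A)\leq 1-n\,m(A)\) that follow from Remarks 3.2 and 3.3, are straightforward to check numerically; the NumPy sketch below (with illustrative function names) does so for a random column-stochastic matrix.
```
import numpy as np

def diameter(A):
    """tau(A) = 1 - min_{i,j} sum_k min(a_ki, a_kj) for a column-stochastic A."""
    n = A.shape[0]
    return max(1.0 - np.minimum(A[:, i], A[:, j]).sum()
               for i in range(n) for j in range(n))

def inner_spectral_radius(A):
    """Modulus of the second-largest eigenvalue of A."""
    return np.sort(np.abs(np.linalg.eigvals(A)))[::-1][1]

rng = np.random.default_rng(0)
A = rng.random((6, 6))
A /= A.sum(axis=0)                    # normalise columns: A is column-stochastic
m = A.min()
print(inner_spectral_radius(A) <= diameter(A) <= 1 - A.shape[0] * m)   # True
```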
**Definition 3.4**.: (Spectral gap) _Let \(g(A)=1-\rho_{i}(A)\) be the spectral gap for a stochastic matrix \(A\)._ Then \(g(A)\geq 1-\tau(A)\) and since \(a_{ij}\geq m(A)\), we know \(\tau(A)\leq 1-nm(A)\), therefore \(g(A)\geq nm(A)\). ## 4 Computation methods and a new approach In this section we will give a brief explanation of Gaussian elimination, Perron-Frobenius iterations and propose a new approach to compute the stationary measure of a large Markov chain, based on isospectral reductions. ### Iterative methods Let \(A\) be some \(n\times n\) stochastic matrix. In general, if \(A\) is non-critical we can approximate its stationary measure by the iterates \(A^{m}\,v_{0}\) starting from any vector \(v_{0}\in\Delta^{n-1}\) (Proposition 3.1). The Perron-Frobenius method returns an approximation of the stationary measure of \(A\) with an error up to \(10^{-p}\), for some positive integer \(p\). ``` local\(v_{0},v_{1},\Delta\) \(v_{1}:=\) random point in \(\Delta^{n-1}\) repeat \(v_{0}:=v_{1}\) \(v_{1}:=A\,v_{0}\) \(\Delta:=v_{1}-v_{0}\) until\(\|\Delta\|^{2}<10^{-2p}\) return\((v_{1})\) ``` **Algorithm 4**PerronFrobenius(\(A,p\)) The iterates \(v_{m}:=A^{m}\,v_{0}\) converge to the fixed point \(v_{*}=A\,v_{*}\) at a geometric rate dependent on the inner spectral radius of \(A\), \[\left|v_{m}-v_{*}\right|\lesssim\rho_{i}^{m}(A).\] So the number of iterations needed to reach precision \(10^{-p}\) is \(-p\ln 10/\ln\rho_{i}\) while each iteration takes \(n^{2}\) operations, in total the computational cost is \(-n^{2}p\ln 10/\ln\rho_{i}\). Notice that in general, when \(\rho_{i}(A)\ll 1-\frac{1}{n}\), the computational cost can be of order \(O(n^{2})\), much better than Gaussian elimination, which is of order \(O(n^{3})\). On the other hand if \(\rho_{i}(A)\gtrsim 1-\frac{1}{n}\) the cost rises up to at least \(O(n^{3})\), the same order as in Gaussian elimination. In fact, in this case Gauss elimination will likely generate ill conditioned matrices leading to inaccurate answers. The most used methods to approximate an eigenvector (of a general non normal matrix) seem to be iterative methods and among them variations of the Arnoldi method [2]. All these methods for approximating the eigenvector associated with a given eigenvalue work poorly in the presence of a small spectral gap measured from the given eigenvalue [1]. ### Isospectral algorithmic scheme When a stochastic matrix \(A\) has more than one eigenvalue very close to \(1\), no iterative method will work well to approximate the stationary measure of \(A\). This is precisely the case when Isospectral Theory can be useful. The isospectral reduction \[R=A_{SS}-A_{S\overline{S}}(A_{\overline{S}\overline{S}}-I)^{-1}A_{\overline{S}S} \tag{3}\] is a stochastic matrix if \(A\) is stochastic (Theorem 6.1). In fact, the stationary vector \(v_{R}\) of \(R\) is the projection of the stationary measure of \(A\) to the coordinates in the set \(S\). We can also reconstruct \(v_{A}\) from \(v_{R}\) as follows ([9; 5]). \[v_{A}=\begin{bmatrix}v_{R}\\ -(A_{\overline{S}\overline{S}}-I)^{-1}A_{\overline{S}S}v_{R}\end{bmatrix} \tag{4}\] This suggests an alternative scheme to approximate the stationary measure of \(A\). 
``` local\(S,v_{R},v_{A}\) (i) Choose a subset \(S\) of \(\{1,\ldots,n\}\) (ii) Reduce \(A\) over \(S\) to get a smaller stochastic matrix \(R\) (iii) Compute the stationary measure \(v_{R}\) of \(R,\text{ with an error up to }10^{-p}\) (iv) Reconstruct the stationary measure \(v_{A}\) of \(A\) from \(v_{R}\) return\((v_{A})\) ``` **Algorithm 4.2**Isospectral\((A,p)\) There are several options for steps (i), (ii) and (iii). 1. Regarding the first step, besides choosing the size \(s=|S|\), we can: * choose \(S=\{1,\ldots,s\}\), * choose \(S\subset\{1,\ldots,n\}\) randomly of size \(s\), * choose \(S\subset\{1,\ldots,n\}\) in some deterministic way that attempts to minimize the cost of step (iii). 2. Regarding the second step there are two possibilities: * Preform the isospectral reduction in one-step using equation (3). The down side of this approach is the possibility of an ill conditioned inversion \((I-A_{\overline{SS}})^{-1}\). Based on empirical evidence, randomizing the choice of \(S\) seems a good way to address this issue. The reduction (3) takes \[(n-s)^{3}+(n-s)^{2}s+s^{2}(n-s)+s^{2}\] operations to compute, where the first term corresponds to inverting of the matrix \(I-A_{\overline{SS}}\) and the others to the multiplications of the matrices involved. For reconstruction, the output of \((n-s)s\) operations (included in the reduction cost) need to be stored in memory. * Perform the isospectral reduction in \(n-s\) steps where at each step the isospectral reduction is performed by choosing and removing a single node. For instance in the first step, choosing the node \(k\), so that \(S=\{1,\ldots,n\}\setminus\{k\}\), the reduced matrix has entries \[r_{ij}=a_{ij}+\frac{a_{ik}\,a_{kj}}{1-a_{kk}}\qquad i,j\in S.\] Avoiding choices of nodes where \(a_{kk}\approx 1\) the problem of computing the isospectral reduction is always well conditioned. By [6, Theorem 2.5], the order in \(\{1,\ldots,n\}\setminus S=\{k_{1},\ldots,k_{n-s}\}\) by which the one-step reductions are performed is indifferent in the sense that they all lead to the same matrix \(R_{S}\) in (3). The time cost of the step by step isospectral reduction is \[\sum_{j=s+1}^{n}(j-1)^{2}+j-1 =\frac{(n+1)n(n-1)-(s+1)s(s-1)}{3}\] \[\sim\frac{n^{3}-s^{3}}{3}\] \[=\frac{(n-s)^{3}}{3}+(n-s)^{2}s+(n-s)s^{2}.\] Notice that this time cost is of the same order as the single step reduction time cost, and even better unless the computational cost of \((I-A_{\overline{SS}})^{-1}\) gets to be less than \((n-s)^{3}/3\). 3. Finally, regarding the third step there are also several options: * Use one of the available iterative methods to approximate the dominant eigenvector of the reduced matrix \(R\). * After reduction, estimate the spectral gap \(\rho_{i}(R)\) and if it is still very close to 1 repeat steps (i) and (ii). Otherwise apply one of existing iterative methods to approximate the dominant eigenvector of the reduced matrix \(R\). In the next subsection we describe a numerical experiment that illustrates the advantage of combining traditonal iterative methods with isospectral reduction through the proposed algorithmic scheme. After this, in the rest of the paper we provide partial evidence to an empirical fact: In most situations isospectral reduction improves the spectral gap. 
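For concreteness, a NumPy rendering of steps (i)-(iv) with the simplest choices (a random set \(S\), the single-step reduction (3), and the Perron-Frobenius iteration of Algorithm 4 applied to the reduced matrix) might look as follows. This is an illustrative sketch, not the Mathematica code used for the experiments reported below.
```
import numpy as np

def isospectral_stationary(A, s, p=10, seed=None):
    """Approximate the stationary measure of a column-stochastic A by reducing
    over a random set S of size s (Eq. (3)), iterating on R, and reconstructing
    the full vector via Eq. (4)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    S = np.sort(rng.choice(n, size=s, replace=False))
    Sc = np.setdiff1d(np.arange(n), S)
    inv = np.linalg.inv(A[np.ix_(Sc, Sc)] - np.eye(n - s))   # (A_ScSc - I)^{-1}
    R = A[np.ix_(S, S)] - A[np.ix_(S, Sc)] @ inv @ A[np.ix_(Sc, S)]
    v = np.full(s, 1.0 / s)              # Perron-Frobenius iteration on R,
    while True:                          # assuming R is non-critical
        w = R @ v
        done = np.sum((w - v) ** 2) < 10.0 ** (-2 * p)
        v = w
        if done:
            break
    vA = np.empty(n)                     # reconstruction, Eq. (4)
    vA[S] = v
    vA[Sc] = -inv @ A[np.ix_(Sc, S)] @ v
    return vA / vA.sum()                 # renormalise to a probability vector
```
On matrices like those of the next subsection one would, for example, reduce a \(1000\times 1000\) matrix over a randomly chosen set of size \(s=90\).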
### Comparison of the methods Let \(P:\mathbb{R}_{+}^{n}\to\Delta^{n-1}\) be the projection \[P(x_{1},\ldots,x_{n}):=\left(\frac{x_{1}}{\sum_{k=1}^{n}x_{k}},\ldots,\frac{x _{n}}{\sum_{k=1}^{n}x_{k}}\right),\] which extends to the projection \(P:\mathbb{R}_{+}^{n\times n}\to\mathcal{M}_{n}\) onto the space of all \(n\) by \(n\) stochastic matrices, where \(P(A)\) has entries \[P_{ij}(A):=\frac{a_{ij}}{\sum_{k=1}^{n}a_{kj}}.\] Given a probability measure \(\lambda\in\mathrm{Prob}(\mathbb{R}_{+})\), let \(\lambda^{d}\in\mathrm{Prob}(\mathbb{R}_{+}^{d})\) be the Cartesian product measure and \(\mathbb{P}_{\lambda}:=P_{*}\lambda^{d}\in\mathrm{Prob}(\mathcal{M}_{d})\) be the push-forward probability on the space of stochastic matrices. The probability \(\lambda\) is called _symmetric_ if it is invariant under the involution \(x\mapsto\frac{1}{x}\), or if \[\lambda(0,x)=\lambda(\tfrac{1}{x},\infty),\quad\forall x\in(0,\infty).\] We say that \(\lambda\) has a _heavy tail_ if there exists \(\alpha\in(0,1)\) such that \[\lim_{x\to\infty}x^{\alpha}\,\lambda(x,\infty)=c>0.\] **Example 4.1** (Burr distribution).: _The Burr distribution is a family of symmetric laws \(\lambda_{\alpha}\in\mathrm{Prob}(\mathbb{R}_{+})\) with heavy tail, cumulative distribution function_ \[F_{\alpha}(x):=1-\frac{1}{1+x^{\alpha}}\] _and probability density function_ \[f_{\alpha}(x):=\frac{\alpha}{x^{1-\alpha}\left(1+x^{\alpha}\right)^{2}}.\] We generate random sparse \(1000\times 1000\) matrices \(A\) with 4 nonzero entries per column, whose values are randomly generated by a heavy-tailed Burr distribution with \(\alpha=0.2\). When \(\alpha=0.2\), we expect the inner spectral radius \(\rho_{i}(A)\) to be around 0.9 (Remark 1.3 in [17]). Then we use two ways to compute the stationary measure of these matrices. The first is the default method in Mathematica; we will call the vector computed this way \(v_{1}\), and the computation time \(t_{1}\). The other way uses isospectral reduction to reduce \(A\) to a \(90\times 90\) stochastic matrix \(R\) over a randomly selected structural set, then uses the Mathematica default method to compute the stationary measure of \(R\), and reconstructs the stationary measure of \(A\) from the stationary measure of \(R\). We will call the vector computed with isospectral reduction \(v_{2}\) and the computation time \(t_{2}\). Let \(e_{1}=\|Av_{1}-v_{1}\|,e_{2}=\|Av_{2}-v_{2}\|\) be the errors of each method and \(d=\|v_{1}-v_{2}\|\) be the distance between the two solutions. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \(\rho_{i}(A)\) & \(t_{1}\) & \(t_{2}\) & \(e_{1}\) & \(e_{2}\) & \(d\) \\ \hline \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) \\ \hline \end{tabular} \end{table} Table 1: Mathematica default method vs. the isospectral reduction scheme We repeated the comparison between the Mathematica default method and the isospectral reduction scheme 36 times; the results look similar across repetitions and we list one of them in Table 1. Here \(\rho_{i}(A)\) is the inner spectral radius, i.e., the absolute value of the second largest eigenvalue of the original \(1000\times 1000\) stochastic matrices. As one can see, the matrices we generate have fairly large inner spectral radii, very close to 1. With such large inner spectral radii, most current default methods for the computation of stationary measures do not work well, and the default method in Mathematica frequently issues warning messages about slow convergence during the computation.
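For readers who want to reproduce the flavour of this experiment, the sketch below generates sparse column-stochastic test matrices of the kind just described, with the nonzero entries drawn from the Burr law by inverse-transform sampling (\(F_{\alpha}^{-1}(u)=(u/(1-u))^{1/\alpha}\)). It is an illustrative sketch with our own naming, not the code used to produce Table 1.
```
import numpy as np

def sample_burr(alpha, size, rng):
    # Inverse-transform sampling of F_alpha(x) = 1 - 1/(1 + x^alpha).
    u = rng.random(size)
    return (u / (1.0 - u)) ** (1.0 / alpha)

def random_sparse_stochastic(n=1000, nnz_per_col=4, alpha=0.2, seed=0):
    # Column-stochastic matrix with nnz_per_col Burr-distributed entries per column.
    rng = np.random.default_rng(seed)
    A = np.zeros((n, n))
    for j in range(n):
        rows = rng.choice(n, size=nnz_per_col, replace=False)
        vals = sample_burr(alpha, nnz_per_col, rng)
        A[rows, j] = vals / vals.sum()      # project the column onto the simplex
    return A

A = random_sparse_stochastic()
moduli = np.sort(np.abs(np.linalg.eigvals(A)))[::-1]
print("inner spectral radius (second largest modulus):", moduli[1])
```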
In table 1, the computation time \(t_{2}\) is shorter than \(t_{1}\), usually around a half or a third of \(t_{1}\). The improvement is not greater because although computing a lower dimensional stationary measure is faster, the reduction and reconstruction are not computation free. On the other hand, the two different methods give very similar answers, both with very small errors and the distance between the two solutions are very small as well for the most part. However, once every so often, the distance between the two solutions is more noticeable. One can check that for all these times, the error of the isospectral reduction scheme is smaller than the error of the Mathematica default method. This shows the isospectral reductions scheme can be an alternative method for computing the stationary measure of stochastic matrices whose inner spectral radius is close to 1. This alternative is not only faster, but also more accurate for such matrices. We have checked computationally that very often, the reduced matrix has a smaller inner spectral radius than the original matrix. But this is not true for every stochastic matrix (see Example 6.1). ## 5 Matrices that become positive after reduction Intuitively, we know that if all the entries of a stochastic matrix are close to each other, its semi-norm is small. If every entry is the same, we have a semi-norm of 0 and can find the stationary measure after one step of iteration. While for a stochastic matrix where two columns have interlacing zero entries we have a semi-norm of 1 and probably need many iterations to get close to the stationary measure. Generally we would like to see fewer zero entries in the matrix after reduction, ideally no zero entries and we have a strictly positive matrix whose semi-norm is small. We give some specific types of examples where the matrix becomes strictly positive after reduction in the following subsections. ### Irreducible non-negative matrices **Proposition 5.1**.: _Let \(m\in\{1,2,\ldots,n\},n\geq 3\), \(S=\{1,2,\ldots,n\}\setminus\{m\}\). \(A\) is an \(n\times n\) non-negative irreducible matrix and \(\lambda>0\) is the dominant eigenvalue of \(A\). If_ 1. _each row and each column of_ \(A\) _has at most one element that is zero,_ 2. _column_ \(m\) _has the lowest column sum,_ 3. _and row_ \(m\) _is positive, that is, it only has positive entries,_ _then \(R_{S}(A,\lambda)\) is a positive matrix._ Proof.: Matrix \(A\) has an eigenvector with non-negative components corresponding to the dominant eigenvalue \(\lambda\), which is larger than the absolute value of any other eigenvalue. Eigenvalue \(\lambda\) satisfies \[\min_{i}\left(\sum_{j=1}^{n}a_{ij}\right)\leq\lambda\leq\max_{i}\left(\sum_{j= 1}^{n}a_{ij}\right)\!.\] Since \(m\in\{1,2,\ldots,n\}\) is where the minimum above is reached and \(n\geq 3\), \[a_{mm}<\sum_{j=1}^{n}a_{mj}\leq\lambda\ \ \Rightarrow\lambda-a_{mm}>0.\] The entries of \(R_{S}(A)\) are \[(R_{S}(A,\lambda))_{ij}=a_{ij}+\frac{a_{im}a_{mj}}{\lambda-a_{mm}},\ \ \ \forall i,j\in S.\] Since \(a_{mj}>0\) for all \(j\), the term \(\frac{a_{mj}}{\lambda-a_{mm}}\) is always positive. If \(a_{im}\) is zero then \(a_{ij}\) (also on row \(i\)) has to be positive. If \(a_{ij}\) is zero then \(a_{im}\) is positive. Either way we have \((R_{S}(A,\lambda))_{ij}>0,\forall i,j\in S\). ### At most \(m\) zeros **Proposition 5.2**.: _Let \(S\subset\{1,\ldots,n\},\overline{S}=\{1,\ldots,n\}\setminus S\). \(A\) is an \(n\times n\) stochastic matrix. If_ 1. \(a_{ii}<1,\forall i=1,\ldots,n\) 2. 
_each row and each column has at most_ \(m\) _elements that are zero,_ 3. _and the cardinality of the set of removed vertices_ \(|\overline{S}|>2(m-1)\)_,_ _then \(R_{S}(A,1)\) is positive._ Proof.: The entries of \(R_{S}(A,1)\) are \[(R_{S}(A,1))_{ij}=a_{ij}+\sum_{k\in\overline{S}}a_{ik}(1-a_{kk})^{-1}a_{kj}+ \ldots,\forall i,j\in S.\] Here the omitted terms are higher-order terms like \(a_{ik}(1-a_{kk})^{-1}a_{kl}(1-a_{ll})^{-1}a_{lj}\) and they are non-negative. If \(a_{ij}\) is positive we are done. If \(a_{ij}\) is zero, then at most \(m-1\) of the \(a_{ik}\) (\(k\in\overline{S}\)) terms can be zero. Similarly, at most \(m-1\) of the \(a_{kj}\) (\(k\in\overline{S}\)) terms can be zero. Since \(|\overline{S}|>2(m-1)\), we have at least one non-zero element in the sum above, showing that \((R_{S}(A,1))_{ij}>0\). **Remark 5.1**.: _If \(A\in\mathbb{R}^{n\times n}\) is a non-negative matrix with at most \(m\) zero entries on each row and each column, and \(n>2m\), then \(A^{2}\) is positive._ Indeed, \[(A^{2})_{ij}=\sum_{k=1}^{n}a_{ik}a_{kj}\] and we have at most \(2m<n\) vanishing terms in the sum above. Hence every entry of \(A^{2}\) is positive. ### Another approach **Proposition 5.3**.: _Consider the non-negative matrix_ \[A=\begin{bmatrix}A_{\overline{S}\overline{S}}&A_{\overline{S}S}\\ A_{S\overline{S}}&A_{SS}\end{bmatrix}.\] _Here \(A_{\overline{S}\overline{S}}\) is_ 1. _primitive, i.e., there exists some integer_ \(l\) _such that_ \((A_{\overline{S}\overline{S}}^{l})_{ij}>0\) _for all_ \(i,j\in\overline{S}\)_;_ 2. _such that the sum of each column is strictly less than_ \(1\)_._ _In addition to that, \(A\) satisfies_ 1. _for_ \(i,j\in S\) _either_ \((A)_{ij}>0\) _or the column_ \(j\) _of_ \(A_{\overline{S}S}\) _has at least one positive element;_ 2. _for_ \(i,j\in S\) _either_ \((A)_{ij}>0\) _or the row_ \(i\) _of_ \(A_{S\overline{S}}\) _has at least one positive element;_ 3. _A is column stochastic._ _Then \(R_{S}(A,1)\) is positive._ Proof.: Observe that \[R_{S}(A,1) =A_{SS}+A_{S\overline{S}}(I-A_{\overline{S}\overline{S}})^{-1}A_{ \overline{S}S}\] \[=A_{SS}+A_{S\overline{S}}A_{\overline{S}S}+A_{S\overline{S}}A_{ \overline{S}\overline{S}}A_{\overline{S}S}+A_{S\overline{S}}A_{\overline{S} \overline{S}}^{2}A_{\overline{S}S}+\ldots\] All the terms are non-negative, and the series converges because the column sums of \(A_{\overline{S}\overline{S}}\) are strictly less than \(1\). Fix \(i,j\in S\). If \(a_{ij}>0\) we are done. Otherwise, the row \(i\) of \(A_{S\overline{S}}\) has a positive entry, say at \(k\), and the column \(j\) of \(A_{\overline{S}S}\) has a positive entry, say at \(k^{\prime}\); by primitivity \((A_{\overline{S}\overline{S}}^{l})_{kk^{\prime}}>0\) for some \(l\), so the term \(A_{S\overline{S}}A_{\overline{S}\overline{S}}^{l}A_{\overline{S}S}\) has a positive \((i,j)\) entry, and \((R_{S}(A,1))_{ij}>0\). **Remark 5.2**.: _The reduction of \(A\) over \(\overline{S}\)_ \[R_{\overline{S}}(A)=A_{\overline{S}\overline{S}}+A_{\overline{S}S}(I-A_{SS})^{ -1}A_{S\overline{S}}\] _is primitive when it exists._ **Example 5.1**.: _Let us fix some \(a\in(0,1/2)\) and consider the set \(\overline{S}\) of size two. Let_ \[A_{\overline{S}\overline{S}}=\begin{bmatrix}a&a\\ a&a\end{bmatrix}.\] _Let \(|S|=n\) and_ \[A=\begin{bmatrix}a&a&1&1&\cdots&1\\ a&a&0&0&\cdots&0\\ \dfrac{1-2a}{n}&1-2a&0&0&\cdots&0\\ \dfrac{1-2a}{n}&0&0&0&\cdots&0\\ \vdots&\vdots&\vdots&\vdots&\ddots&\vdots\\ \dfrac{1-2a}{n}&0&0&0&\cdots&0\end{bmatrix}.\] _In this case \(A_{SS}\) is the zero matrix and_ \[\tau(A)=\max\left\{(1-2a)(1-\frac{1}{n}),1-a\right\}=1-a.\] _The reduced operator over \(S\) is_ \[R(A) =A_{SS}+A_{S\overline{S}}(I-A_{\overline{S}\overline{S}})^{-1}A_{ \overline{S}S}=A_{S\overline{S}}(I-A_{\overline{S}\overline{S}})^{-1}A_{ \overline{S}S}\] \[=\left[\begin{array}{cccc}\frac{1-a}{n}+a&\frac{1-a}{n}+a& \cdots&\frac{1-a}{n}+a\\ \frac{1-a}{n}&\frac{1-a}{n}&\cdots&\frac{1-a}{n}\\ \vdots&\vdots&\ddots&\vdots\\ \frac{1-a}{n}&\frac{1-a}{n}&\cdots&\frac{1-a}{n}\end{array}\right].\] _The columns are all identical so \(\tau(R(A))=0\).
The spectrum of \(R(A)\) is the set \(\sigma(R(A))=\{0,\ldots,0,1\}\), here \(0\) has multiplicity \(n-1\)._ _Let's look at the spectrum of \(A\) for two specific cases._ **case \(n=1\):**__ \[A=\begin{bmatrix}a&a&1\\ a&a&0\\ 1-2a&1-2a&0\end{bmatrix}.\] _In this case \(\sigma(A)=\{2a-1,0,1\}\)._ **case \(n=2\):** _Now we get_ \[A=\begin{bmatrix}a&a&1&1\\ a&a&0&0\\ \frac{1-2a}{2}&1-2a&0&0\\ \frac{1-2a}{2}&0&0&0\end{bmatrix}.\] _and \(\sigma(A)=\{2a-1,0,0,1\}\), here \(0\) has multiplicity \(2\)._ We can make this slightly more general. **Example 5.2**.: _Fix \(a\in(0,1/2)\), \(p\in(0,1)\), \(q=1-p\) and a column stochastic \(m\times m\) matrix \(B\). Consider_ \[A=\begin{bmatrix}a&a&p&\cdots&p\\ a&a&0&\cdots&0\\ \frac{1-2a}{m}&1-2a&&&\\ \frac{1-2a}{m}&0&&qB&\\ \vdots&\vdots&&&\\ \frac{1-2a}{m}&0&&\end{bmatrix}.\] _It's easy to show that \(\tau(A)\geq\max\left\{q\tau(B),\left(1-\frac{1}{m}\right)(1-2a)\right\}\)._ _If we reduce \(A\) over the vertices corresponding to \(qB\) with value 1 for \(\lambda\), we get_ \[R(A)=qB+p\begin{bmatrix}\frac{1-a}{m}+a&\frac{1-a}{m}+a&\cdots&\frac{1-a}{m}+a \\ \frac{1-a}{m}&\frac{1-a}{m}&\cdots&\frac{1-a}{m}\\ \vdots&\vdots&\ddots&\vdots\\ \frac{1-a}{m}&\frac{1-a}{m}&\cdots&\frac{1-a}{m}\end{bmatrix}.\] _Since the second matrix has identical columns we have_ \[\tau(R(A))=q\tau(B)\leq\tau(A).\] _This reduction is a positive stochastic matrix, and it is a perturbation of \(qB\) when \(p\) is small._ **Example 5.3**.: _Still let \(a\in(0,1/2)\), \(p,q>0,p+q=1\), and \(B\) is an \(m\times m\) column stochastic matrix. Denote by \(L_{i}\) the average of the \(i-th\) row of \(B\); then we have \(\sum_{i=1}^{m}L_{i}=1\). Assume that \(L_{k}-a/m>0\) for any \(k\in\{1,2,\ldots,m\}\). Consider the matrix_ \[A=\begin{bmatrix}a&a&p&\cdots&p\\ a&a&0&\cdots&0\\ \frac{1-2a}{1-a}(L_{1}-a/m)&\frac{1-2a}{m}&&\\ \frac{1-2a}{1-a}(L_{2}-a/m)&\frac{1-2a}{m}&qB&\\ \vdots&\vdots&&\\ \frac{1-2a}{1-a}(L_{m}-a/m)&\frac{1-2a}{m}&&\end{bmatrix}.\] _We can show that_ \[\tau(A)\geq\max\left\{q\tau(B),\frac{1-2a}{2(1-a)}(|L_{1}-\frac{1}{m}|+\cdots+ |L_{N}-\frac{1}{m}|)\right\}.\] _Again we reduce \(A\) over the vertices corresponding to \(qB\) with value 1 for \(\lambda\),_ \[R(A)=qB+p\begin{bmatrix}L_{1}&L_{1}&\cdots&L_{1}\\ L_{2}&L_{2}&\cdots&L_{2}\\ \vdots&\vdots&\ddots&\vdots\\ L_{m}&L_{m}&\cdots&L_{m}\end{bmatrix}.\] _We have \(\tau(R(A))=q\tau(B)\leq\tau(A)\) and the reduction is a positive stochastic matrix._ **Remark 5.3**.: _Notice that the matrix_ \[L=\begin{bmatrix}L_{1}&L_{1}&\cdots&L_{1}\\ L_{2}&L_{2}&\cdots&L_{2}\\ \vdots&\vdots&\ddots&\vdots\\ L_{m}&L_{m}&\cdots&L_{m}\end{bmatrix}\] _is column stochastic. It has eigenvalue \(0\) with corresponding eigenspace \([1,1,\ldots,1]^{\perp}\) and eigenvalue \(1\), whose eigenvector is \([L_{1},L_{2},\ldots,L_{m}]\)._ **Remark 5.4**.: _Although \(L_{i}\)'s correspond to the averages of the corresponding rows of \(B\), they can be seen as free parameters; in particular, we can consider the choice \(L_{i}=1/m\) for all \(i\in\{1,2,\ldots,m\}\). In this case the reduced matrix is_ \[R(A)=qB+(1-q)\begin{bmatrix}1/m&1/m&\cdots&1/m\\ 1/m&1/m&\cdots&1/m\\ \vdots&\vdots&\ddots&\vdots\\ 1/m&1/m&\cdots&1/m\end{bmatrix},\] _the famous Google matrix. 
Here \(q\) is the damping factor._ **Example 5.4**.: _Adopting all the same notations as the previous example, we now consider the case where the set \(\overline{S}\) has only one point._ \[A=\begin{bmatrix}a&p&\cdots&p\\ (L_{1}-a/m)&&\\ (L_{2}-a/m)&&qB\\ \vdots&&&\\ (L_{m}-a/m)&&\end{bmatrix},\] _then_ \[\tau(A)=\max\left\{q\tau(B),\max_{j}\left\{\frac{1}{2}\left[|a-p|+|L_{1}-a/m-qb_{1j}|+\ldots+|L_{m}-a/m-qb_{mj}|\right]\right\}\right\}.\] _When \(q\) is close to zero \(\tau(A)\) approaches \(1-a\)._ _For the reduced operator we have_ \[R(A)=qB+\frac{p}{1-a}\begin{bmatrix}L_{1}-a/m&L_{1}-a/m&\cdots&L_{1}-a/m\\ L_{2}-a/m&L_{2}-a/m&\cdots&L_{2}-a/m\\ \vdots&\vdots&\ddots&\vdots\\ L_{m}-a/m&L_{m}-a/m&\cdots&L_{m}-a/m\end{bmatrix}.\] _Again the reduction is a positive stochastic matrix and \(\tau(R(A))=q\tau(B)\leq\tau(A)\)._ **Example 5.5**.: _For the last example let's consider an \(n\times n\) column stochastic matrix \(A\) such that, for some \(m<n\), the entry \(a_{ik}\) is positive if and only if \(k\in\{i-(m-1),\ldots,i+(m-1)\}\). Take \(S=\{1,2,\ldots,m\}\). In this case, the submatrix \(A_{SS}\) is positive, and so_ \[R(A)=A_{SS}+A_{S\overline{S}}(I-A_{\overline{S}\overline{S}})^{-1}A_{\overline{S}S}\] _is also positive._ ### Close to averaging matrix In the final discussion of this section we consider matrices whose entries are close to \(1/n\); these matrices are highly contractive. We try to estimate how far the reduced matrix is from the averaging matrix whose entries are all identical. Assume the entries of the stochastic matrix \(A\in\mathbb{R}^{n\times n}\) satisfy \[a_{ij}\leq d(n)=\frac{1}{n}+\epsilon(n).\] The reduction of \(A\) with eigenvalue \(1\) is \[R_{S}(A)=A_{SS}+A_{S\overline{S}}A_{\overline{S}S}+A_{S\overline{S}}A_{ \overline{S}\overline{S}}A_{\overline{S}S}+A_{S\overline{S}}A_{\overline{S} \overline{S}}^{2}A_{\overline{S}S}+\cdots\] The entries of the terms in the sum satisfy \[(A_{S\overline{S}}A_{\overline{S}S})_{ij}\leq|\overline{S}|d(n)^{2},\] \[(A_{S\overline{S}}A_{\overline{S}\overline{S}}A_{\overline{S}S})_{ij}\leq| \overline{S}|^{2}d(n)^{3},\] \[\ldots\] \[(A_{S\overline{S}}A_{\overline{S}\overline{S}}^{m}A_{\overline{S}S})_{ij} \leq|\overline{S}|^{1+m}d(n)^{2+m}.\] Hence the entries of the reduction satisfy \[(R_{S}(A))_{ij}\leq d(n)+|\overline{S}|d(n)^{2}+|\overline{S}|^{2}d(n)^{3}+ \ldots=\frac{d(n)}{1-|\overline{S}|d(n)}.\] **Proposition 5.4**.: _Given \(c>0\), let \(\epsilon(n)=e^{-cn}\) and \(d_{c}(n)=\frac{1}{n}+\epsilon(n)=\frac{1}{n}+e^{-cn}\). Then for large \(n\) and \(|\overline{S}|=\frac{3}{4}n\) we have_ \[a_{ij}\leq d_{c}(n)\Rightarrow(R_{S}(A))_{ij}\leq d_{2c}(|S|).\] Proof.: Indeed, from the estimate above we get \[(R_{S}(A))_{ij}\leq\frac{d_{c}(n)}{1-|\overline{S}|d_{c}(n)}=\frac{d_{c}(n)}{1 -\frac{3}{4}nd_{c}(n)}.\] We just need to show that \[\frac{1/n+e^{-cn}}{1-\frac{3}{4}n(1/n+e^{-cn})}=\frac{d_{c}(n)}{1-\frac{3}{4} nd_{c}(n)}\leq d_{2c}(|S|)=\frac{1}{n/4}+e^{-2cn/4}.\] The left-hand side can be rewritten as \[\frac{1/n+e^{-cn}}{1-\frac{3}{4}n(1/n+e^{-cn})}=\frac{1/n+e^{-cn}}{1/4-\frac{3}{4}ne^{-cn}}=\frac{4/n+4e^{-cn}}{1-3ne^{-cn}}.\] Then what we want to show becomes \[\frac{4/n+4e^{-cn}}{1-3ne^{-cn}}\leq\frac{4}{n}+e^{-cn/2}.\] For large \(n\), this is equivalent to showing \[\frac{4}{n}+4e^{-cn}\leq(1-3ne^{-cn})\left(\frac{4}{n}+e^{-cn/2}\right)=\frac{4}{n}+e^{-cn/2}-12e^{-cn}-3ne^{-3cn/2},\] or \[16e^{-cn}+3ne^{-3cn/2}\leq e^{-cn/2}.\] Multiplying both sides by \(e^{cn/2}\) we get \[16e^{-cn/2}+3ne^{-cn}\leq 1.\] This is true for large \(n\), as claimed.
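The reductions in the examples above are easy to check numerically. The following sketch (our own illustration, not part of the original text) builds the matrix of Example 5.1 for concrete values of \(a\) and \(n\), reduces it over \(S\) at \(\lambda=1\), and confirms that the reduced matrix is positive, column stochastic and has semi-norm \(\tau\) equal to \(0\), as computed above.
```
import numpy as np

def seminorm(A):
    # tau(A) = 1/2 * max over column pairs of the l1 distance between columns.
    n = A.shape[1]
    return 0.5 * max(np.abs(A[:, i] - A[:, j]).sum()
                     for i in range(n) for j in range(i + 1, n))

def example_5_1(a=0.3, n=5):
    # Matrix of Example 5.1: the two removed nodes come first, |S| = n.
    A = np.zeros((n + 2, n + 2))
    A[:2, :2] = a                        # the 2x2 block over S-bar
    A[0, 2:] = 1.0                       # first row over the S-columns
    A[2, 1] = 1.0 - 2.0 * a              # third row, second column
    A[2:, 0] = (1.0 - 2.0 * a) / n       # first column below the 2x2 block
    return A

A = example_5_1()
S, Sbar = list(range(2, A.shape[0])), [0, 1]
R = A[np.ix_(S, S)] + A[np.ix_(S, Sbar)] @ np.linalg.solve(
        np.eye(2) - A[np.ix_(Sbar, Sbar)], A[np.ix_(Sbar, S)])
print(np.allclose(R.sum(axis=0), 1.0), (R > 0).all(), seminorm(R))   # True True ~0
```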
## 6 Isospectral reductions of stochastic matrices As shown in the previous section, many nonnegative matrices can become positive after isospectral reductions. What can we say in general about isospectral reductions of stochastic matrices? Let's look at an example first. **Example 6.1**.: _For the stochastic matrix_ \[A=\begin{bmatrix}0&0.9&0\\ 0.5&0&1\\ 0.5&0.1&0\end{bmatrix},\tau(A)=1,\rho_{i}(A)=0.6708.\] _Let's reduce \(A\) to the first two nodes and plug in value \(1\) for \(\lambda\), we get_ \[R=\begin{bmatrix}0&0.9\\ 1&0.1\end{bmatrix},\tau(R)=0.9,\rho_{i}(R)=0.9.\] _Here after reduction \(R\) is still a stochastic matrix, its semi-norm is lower than \(A\) and its inner spectrum is higher than \(A\)._ ### Decrease of the semi-norm **Theorem 6.1**.: _Let \(A\in\mathbb{R}^{n\times n}\) be a column stochastic matrix and \(S\subset\{1,\ldots,n\}\). Denote by \(R_{S}(A)\in\mathbb{R}^{S\times S}\) the isospectral reduction of \(A\) onto \(S\) with value \(1\) plugged in for \(\lambda\). Then \(R_{S}(A)\) is a stochastic matrix and_ \[\tau(R_{S}(A))\leq\tau(A).\] Proof.: We can prove this for the special case of \(S=\{1,\ldots,n\}-\{r\}\) where we only remove one node in the reduction. Because sequential isospectral reductions are path-independent, we can always get to \(R_{S}(A)\) for \(|S|<n-1\) by removing one node at a time (Section 1.3 [6]). For \(S=\{1,\ldots,n\}-\{r\}\), let's write \(R_{S}(A)\) as \(R_{r}(A)\). This is the reduction where we remove node \(r\) and plug in \(1\) for \(\lambda\).Then \(R_{r}(A)\) is a stochastic matrix. Indeed, the entries of \(R_{r}(A)\) are \[(R_{r}(A))_{ij}=a_{ij}+\frac{a_{ir}a_{rj}}{1-a_{rr}}\geq 0,\quad i,j\neq r.\] The column sum is \[\sum_{j\neq r}\left(a_{ij}+\frac{a_{ir}a_{rj}}{1-a_{rr}}\right)=\sum_{j\neq r}a_{ ij}+a_{ir}=1.\] Let \(c_{i}(A)\) be column \(i\) of \(A\). Then the semi-norm of \(A\) is \[\tau(A)=\frac{1}{2}\sup_{i\neq j}\|c_{i}(A)-c_{j}(A)\|_{1}.\] Now let's estimate \(\tau(R_{r}(A)\): \[\tau(R_{r}(A)) =\frac{1}{2}\sup_{i\neq j}\|c_{i}(R_{r}(A))-c_{j}(R_{r}(A))\|_{1}\] \[=\frac{1}{2}\sup_{i\neq j}\sum_{l\neq r}\left|a_{li}+a_{lr}(1-a_{ rr})^{-1}a_{ri}-[a_{lj}+a_{lr}(1-a_{rr})^{-1}a_{rj}]\right|\] \[=\frac{1}{2}\sup_{i\neq j}\sum_{l\neq r}\left|(a_{li}-a_{lj})+[a_{ lr}(1-a_{rr})^{-1}a_{ri}-a_{lr}(1-a_{rr})^{-1}a_{rj}]\right|\] \[=\frac{1}{2}\sup_{i\neq j}\sum_{l\neq r}\left|(a_{li}-a_{lj})+a_{ lr}(1-a_{rr})^{-1}(a_{ri}-a_{rj})\right|\] \[\leq\frac{1}{2}\sup_{i\neq j}\sum_{l\neq r}|(a_{li}-a_{lj})|+\sum _{l\neq r}|a_{lr}(1-a_{rr})^{-1}(a_{ri}-a_{rj})|\] \[=\frac{1}{2}\sup_{i\neq j}\left\{\sum_{l\neq r}|(a_{li}-a_{lj})|+ \left(\sum_{l\neq r}a_{lr}\right)(1-a_{rr})^{-1}|a_{ri}-a_{rj}|\right\}\!.\] Since \(\sum_{l\neq r}a_{lr}=1-a_{rr}\) the expression above becomes \[\frac{1}{2}\sup_{i\neq j}\left\{\sum_{l\neq r}|(a_{li}-a_{lj})|+|a_{ri}-a_{rj} |\right\}=\frac{1}{2}\sup_{i\neq j}\left\{\sum_{l=1}^{n}|(a_{li}-a_{lj})| \right\}=\tau(A).\] **Lemma 6.2**.: _Let \(R_{S}(A)\) be the reduction of a stochastic matrix \(A\) over set \(S\) with value 1 for \(\lambda\); then_ \[m(R_{S}(A))\geq\frac{m(A)}{1-|\overline{S}|m(A)}.\] Proof.: We know that \[R_{S}(A) =A_{SS}+A_{S\overline{S}}(I-A_{\overline{S}\overline{S}})^{-1}A_{ \overline{S}S}\] \[=A_{SS}+A_{S\overline{S}}A_{\overline{S}S}+A_{S\overline{S}}A_{ \overline{S}\overline{S}}A_{\overline{S}S}+A_{S\overline{S}}A_{\overline{S} \overline{S}}^{2}A_{\overline{S}S}+\ldots\] All the matrices in this expression are nonnegative. 
Notice that for a given entry of \(A_{S\overline{S}}A_{\overline{S}S}\) we have \[(A_{S\overline{S}}A_{\overline{S}S})_{ij}\geq|\overline{S}|m(A)^{2}.\] For \(A_{S\overline{S}}A_{\overline{S}S}A_{\overline{S}S}\) we have \[(A_{S\overline{S}}A_{\overline{S}\overline{S}}A_{\overline{S}S})_{ij}\geq| \overline{S}|^{2}m(A)^{3}.\] and, more generally, \((A_{S\overline{S}}A_{\overline{S}\overline{S}}^{k}A_{\overline{S}S})_{ij}\geq| \overline{S}|^{k+1}m(A)^{k+2}\). From those bounds we get \[(R_{S}(A))_{ij}\geq m(A)+|\overline{S}|m(A)^{2}+|\overline{S}|^{2}m(A)^{3}+ \cdots=\frac{m(A)}{1-|\overline{S}|m(A)}.\] Here we are assuming that \(|\overline{S}|m(A)<1\); but this is true since \(|\overline{S}|<n\) and \(m(A)\leq 1/n\). When \(m(A)\) is positive we get that \(m(R_{S}(A))>m(A)\). **Remark 6.1**.: _When \(m(A)\) is large then the image of the cone of non-negative vectors under the stochastic matrix \(A\) is smaller, or \(A\) is more contractive. Since \(m(R_{S}(A))\) is larger than \(m(A)\) we get that \(R_{S}(A)\) is indeed more contractive than \(A\)._ ### Gershgorin estimate For a matrix \(A\in\mathbb{C}^{n\times n}\) we define for each \(i\in\{1,\ldots,n\}\) the \(i-\)th absolute row sum \[r_{i}(A)=\sum_{j\neq i}|a_{ij}|.\] **Theorem 6.3** (Gershgorin [10]).: _Let \(A\in\mathbb{C}^{n\times n}\). Then all eigenvalues of \(A\) are contained in the set_ \[\Gamma(A)=\bigcup_{i=1}^{n}\{\lambda\in\mathbb{C}:|\lambda-a_{ii}|\leq r_{i}( A)\}.\] Now let us assume that \(A\) has positive elements only and is both column and row stochastic, i.e. all its row and column sums are 1. We define the matrix \(R(A)\) using isospectral reduction where vertex \(n\) is eliminated. Then each entry of the reduction is given by \[R(A)_{ij}=a_{ij}+\frac{a_{in}a_{nj}}{1-a_{nn}}\] for \(i,j\in\{1,\ldots,n-1\}\). Each entry of \(R(A)\) is still positive and one can check that the row sums as well as column sums are still 1. Let us compute the first absolute row sum \(r_{1}(R(A))\). \[r_{1}(R(A))= R(A)_{12}+\ldots+R(A)_{1,n-1}=\] \[a_{12}+\ldots+a_{1,n-1}+\frac{a_{1n}}{1-a_{nn}}\left(a_{n2}+ \ldots+a_{n,n-1}\right).\] Notice that \(a_{n2}+\ldots+a_{n,n-1}=1-a_{n1}-a_{nn}\); hence the expression above becomes \[r_{1}(R(A))= a_{12}+\ldots+a_{1,n-1}+\frac{a_{1n}}{1-a_{nn}}\left(1-a_{n1}-a _{nn}\right)\] \[< a_{12}+\ldots+a_{1,n-1}+a_{1n}=r_{1}(A).\] Hence \(r_{i}(R(A))<r_{i}(A)\) when \(i=1\) and we can show that this is true for \(i=2,3,\ldots,n-1\) as well. ## Acknowledgments PD was supported by Fundacao para a Ciencia e a Tecnologia, through the project UID/MAT/04561/2013. MJT was partially financed by Portuguese Funds through FCT (Fundacao para a Ciencia e a Tecnologia) within the Projects UIDB/00013/2020 and UIDP/00013/2020. AB, PD, LS and MJT were partially supported by the Project "New trends in Lyapunov exponents"(PTDC/MAT-PUR/29126/2017).
2309.10159
Electrically coupled optomechanical cavities as a tool for quantum nondemolition measurement
We present a new model of two electrically coupled optomechanical cavities. This model is based on the one recently presented in [Physical Review A \textbf{103} (2021) 043509]. We found that coupling two optomechanical cavities via the Coulomb force leads to cross-Kerr interactions between those cavities. We show that such systems may be ideal for a protocol of quantum non-demolition measurement because it is easy to eliminate the self-phase modulation effect. Moreover, nonlinearities in our model are based on easily adjustable parameters, and therefore, given recent experimental studies, we believe that experimental realization of a cross-Kerr interaction via Coulomb force coupling is feasible.
Jan Wójcik, Grzegorz Chimczak
2023-09-18T21:15:01Z
http://arxiv.org/abs/2309.10159v1
# Electrically coupled optomechanical cavities as a tool for quantum nondemolition measurement ###### Abstract We present a new model of two electrically coupled optomechanical cavities. This model is based on the recently presented [Physical Review A **103** (2021) 043509]. We found that coupling two optomechanical cavities via Coulomb force leads to cross-Kerr interactions between those cavities. We show that such systems may be ideal for a protocol of quantum non-demolition measurement because it is easy to eliminate the self-phase modulation effect. Moreover, nonlinearities in our model are based on easily adjustable parameters, and therefore, given recent experimental studies, we believe that experimental realization of a cross-Kerr interaction via Coulomb force coupling is feasible. ## I Introduction The field of cavity optomechanics has been greatly explored in recent decades. Optomechanics covers the interaction between electromagnetic field and mechanical motion. Review of that field was greatly done by [1] and as they pointed out optomechanical couplings were found to be useful in various experiments. Recently, these studies have been extended to optomechanical cavities connected electrically to a charged body [2; 3; 4; 5]. As the authors of Ref. [5] shown, such systems can provide nonlinearities described by Hamiltonians with a term proportional to the square of \(n\) (photon number operator). Such a term leads to a nonlinear spectrum, thus allowing for observation of the photon blockade effect. A number of experiments and proposals were made exploiting optomechanics and nonlinearity [2; 6; 7; 8; 9; 10; 11; 12; 13]. There are many other phenomena that can be observed when non-linearity is present in optical systems [14; 15; 16]. An example of that is a quantum nondemolition measurement (QND), which has been studied for a few decades now and a variety of protocols have been presented on that topic. This phenomenon makes it possible to count photons without absorption, and thus, QND is still a rapidly explored field [17; 18; 19; 20; 21; 22]. However, as was pointed out by Balybin _et al._[23], schemes which has been proposed to realize QND up to now, are very complicated and experimentally challenging. Over the last decade mostly atom-based QND schemes were developed. Most of the schemes discovered up to now are not ideal for QND in the sense that, apart from the product of photon number operators of different cavities \(n_{1}n_{2}\) in a Hamiltonian, which is the soul of QND, there is also a term proportional to the square of photon number operator. This leads to the self-phase modulation effect [23; 24], which is an obstacle and has to be taken into account when performing QND measurements. Here, we propose to engineer a nonlinear cross-Kerr interaction between two optomechanical cavities by giving electrical charge to their movable mirrors. We also prove that it is possible to eliminate the unwanted self-phase modulation effect by adding two charged bodies on both sides of this device. Therefore, we argue that electrically coupled optomechanical cavities can be perfectly suitable for QND. ## II Model Our scheme consists of two optomechanical cavities (probe cavity \(C_{P}\) and signal cavity \(C_{S}\)) coupled by the Coulomb force to the charged bodies similarly to the scheme proposed in Ref. [5], but with extra coupling between the two cavities. This setup is shown in Fig. 1. Coupling between cavities is also Coulombian and crucial for obtaining cross-Kerr interactions. 
Hamiltonian of our system is given by \[H\ =\ H_{0}+H_{\rm om}+H_{co}\,, \tag{1}\] where \[H_{0} = \hbar\omega_{c}a_{1}^{\dagger}a_{1}+\hbar\omega_{c}a_{2}^{ \dagger}a_{2} \tag{2}\] \[+\frac{m}{2}\omega_{m}^{2}x_{01}^{2}+\frac{p_{0}^{2}}{2m}+\frac{ m}{2}\omega_{m}^{2}x_{1}^{2}+\frac{p_{1}^{2}}{2m}\] \[+\frac{m}{2}\omega_{m}^{2}x_{2}^{2}+\frac{p_{2}^{2}}{2m}+\frac{m} {2}\omega_{m}^{2}x_{02}^{2}+\frac{p_{02}^{2}}{2m}\] describes the energy of cavities and mechanical oscillators without any interactions between them, \[H_{om}\ =\ -\hbar g_{0}(a_{1}^{\dagger}a_{1}x_{01}+a_{1}^{\dagger}a_{1}x_{1}+a_{2} ^{\dagger}a_{2}x_{2}+a_{2}^{\dagger}a_{2}x_{02})\] describes the coupling between mechanical modes and cavities and \[H_{co}\ =\ H_{\rm col}+H_{\rm co0} \tag{3}\] describes Coulombian interactions with \[H_{co1} = \frac{kq_{1}q_{2}}{r_{0}+x_{2}-x_{1}}\,,\] \[H_{co0} = \frac{kq_{01}q_{00}}{r_{00}+x_{01}}+\frac{kq_{02}q_{22}}{r_{02}- x_{02}}\,, \tag{4}\] where \(a_{1,2}\) (\(a_{1,2}^{\dagger}\)) are annihilation (creation) operators for the first and the second optical mode, respectively, and \(x\), \(p\) are position, momentum operators for mechanical oscillator modes with frequency \(\omega_{m}\) and mass \(m\), \(g_{0}=\omega_{c}/L\) is the coupling strength between cavity of length \(L\) and mechanical oscillator. For simplicity reasons we introduce new notation and we assume that parameters of the system are symmetric * \(k(q_{01}\cdot q_{00})=k(q_{22}\cdot q_{02})=\rho_{0}\) * \(k(q_{1}\cdot q_{2})=\rho\) * \(r_{02}=r_{00}=R_{0}\) To justify omitting interactions between further charges, we assume that closest charges are of different sign \(\rho_{0}<0\), \(\rho<0\) and that both \(r_{0}\) and \(R_{0}\) are much smaller then \(L\). ## III Effective Hamiltonian To deal with Hamiltonian \(H\) first we expand its Coulombian part \(H_{co}\) to second order of \(x/r_{0}\) \[H_{co1} = \frac{\rho}{r_{0}+x_{2}-x_{1}} \tag{5}\] \[\approx V_{0}+\frac{\rho}{r_{0}^{2}}(x_{1}-x_{2})+\frac{\rho}{r_{0}^{3}} (x_{1}-x_{2})^{2}\] and then we shift the equilibrium point by introducing new position operators \(\tilde{x}_{1}=x_{1}-d_{1}\), and \(\tilde{x}_{2}=x_{2}-d_{2}\), where \(d_{1}=-d_{2}=-\alpha r_{0}/2(m\omega_{m}^{2}r_{0}+2\alpha)\), \(\alpha=kq_{1}q_{2}/r_{0}^{2}\). Note that one can find \(d_{1}\) and \(d_{2}\) by calculating the minimum of potential \[V = \frac{\rho}{r_{0}+x_{2}-x_{1}}+\frac{m}{2}\omega_{m}^{2}(x_{1}^{2}+x_{2}^{ 2})\,. \tag{6}\] Finally, we obtain \[H_{co1} = -Q(\tilde{x}_{1}-\tilde{x}_{2})^{2}\,, \tag{7}\] where \(Q=-\rho/r_{0}^{3}\) and same we do for \(H_{co0}\), and we get \[H_{co0} = -Q_{0}(\tilde{x}_{01}^{2}+\tilde{x}_{02}^{2}) \tag{8}\] with \(Q_{0}=-\rho_{0}/R_{0}^{3}\) and \(\tilde{x}_{01}=x_{01}-d_{01}\), and \(\tilde{x}_{02}=x_{02}-d_{02}\), where \[d_{01} = kq_{00}q_{01}/2(m\omega_{m}^{2}r_{00}^{2}+kq_{00}q_{01}/r_{00})\,,\] \[d_{02} = -kq_{22}q_{02}/2(m\omega_{m}^{2}r_{00}^{2}+kq_{22}q_{02}/r_{00})\,. 
\tag{9}\] Figure 1: Sketch of the optomechanical setup to perform QND without the self-phase modulation effect. The contribution to this unwanted effect from the electric charges \(q_{1}\) and \(q_{2}\) is compensated for by the contribution of the outside electric charges \(q_{00}\) and \(q_{22}\). Now, using position and momentum operators in the form \[x = \sqrt{\frac{\hbar}{2m\omega_{m}}}(b^{\dagger}+b)\,,\] \[p = i\sqrt{\frac{\hbar m\omega_{m}}{2}}(b^{\dagger}-b)\,, \tag{10}\] with \(b_{i}\) (\(b_{i}^{\dagger}\)) being annihilation (creation) operators for mechanical modes, we can rewrite \(H\) as \[H=\hbar\Delta_{1}a_{1}^{\dagger}a_{1}+\hbar\Delta_{2}a_{2}^{\dagger}a_{2}+H_{I }+H_{Q}, \tag{11}\] with \(H_{I}\) describing the interaction between cavities via charges \(q_{1}\) and \(q_{2}\), and \(H_{Q}\) describing interactions between cavities and charged bodies \(q_{00}\) and \(q_{22}\): \[H_{I} = \hbar(\omega_{m}-2G)(b_{1}^{\dagger}b_{1}+b_{2}^{\dagger}b_{2})\] \[-\hbar ga_{1}^{\dagger}a_{1}(b_{1}^{\dagger}+b_{1})-\hbar ga_{2}^{ \dagger}a_{2}(b_{2}^{\dagger}+b_{2})\] \[-\hbar G\left(\,b_{1}^{\dagger 2}+b_{1}^{2}+b_{2}^{\dagger 2}+b_{2}^{ 2}\right.\] \[\left.-2(b_{1}^{\dagger}+b_{1})(b_{2}^{\dagger}+b_{2})\ \right)\,,\] \[H_{Q} = \hbar(\omega_{m}-2G_{0})(b_{01}^{\dagger}b_{01}+b_{02}^{\dagger} b_{02})\] \[-\hbar ga_{1}^{\dagger}a_{1}(b_{01}^{\dagger}+b_{01})\] \[-\hbar ga_{2}^{\dagger}a_{2}(b_{02}^{\dagger}+b_{02})-\hbar G_ {0}\left(\,b_{01}^{\dagger 2}+b_{01}^{2}\right.\] \[\left.+b_{02}^{\dagger 2}+b_{02}^{2}\right)\,, \tag{12}\] where \(\Delta_{1}=\omega_{c}-g_{0}(d_{1}+d_{01})\), \(\Delta_{2}=\omega_{c}-g_{0}(d_{2}+d_{02})\), \(G=Q/(2m\omega_{m})\), \(G_{0}=Q_{0}/(2m\omega_{m})\) and \(g=\omega_{c}\sqrt{\hbar/2m\omega_{m}}/L\). It is possible to simplify the Hamiltonian (11) by applying adiabatic eliminations of all mechanical modes. To this end, we first transform \(H_{Q}\) by introducing squeezed mechanical oscillator modes (see [5] and Appendix A). Then we can eliminate adiabatically these squeezed mechanical oscillator modes provided that \(\omega_{s}\gg g_{s}\) (see Appendix A). After these transformations we get \[H_{Q\rm eff}=\hbar g^{2}\frac{\omega_{m}}{\omega_{m}(\omega_{m}-4G_{0})}(n_{1} ^{2}+n_{2}^{2})\,, \tag{13}\] where \(n_{i}=a_{i}^{\dagger}a_{i}\) is the photon number operator for the \(i\)-th mode. Now we perform similar transformations on \(H_{I}\). To eliminate adiabatically both mechanical modes properly, we first diagonalize the bare mechanical part of the Hamiltonian: \[H_{m} = \hbar(\omega_{m}-2G)(b_{1}^{\dagger}b_{1}+b_{2}^{\dagger}b_{2})\] \[-\hbar G\left(b_{1}^{\dagger 2}+b_{1}^{2}+b_{2}^{\dagger 2}+b_{ 2}^{2}-2(b_{1}^{\dagger}+b_{1})(b_{2}^{\dagger}+b_{2})\right)\,.\] To this end, we assume \(\omega_{m}>8\,G\) and use a Hopfield-Bogoliubov transformation: \[B_{1} = \frac{\sqrt{\nu+1}}{2}b_{1}-\frac{\sqrt{\nu-1}}{2}\,b_{1}^{ \dagger}-\frac{\sqrt{\nu+1}}{2}b_{2}+\frac{\sqrt{\nu-1}}{2}b_{2}^{\dagger}\,,\] \[B_{2} = \frac{1}{\sqrt{2}}b_{1}+\frac{1}{\sqrt{2}}b_{2}\,, \tag{14}\] where \(\lambda_{1}=\sqrt{\omega_{m}(\omega_{m}-8\,G)}\), \(\lambda_{2}=\omega_{m}\) and \(\nu=(\lambda_{2}-4\,G)/\lambda_{1}\). It can be checked that these operators satisfy the canonical commutation relations \([B_{i},B_{j}^{\dagger}]=\delta_{i,j}\).
In terms of these operators, we can express \(H_{m}\) in diagonal form: \[H_{m} = \hbar\lambda_{1}B_{1}^{\dagger}B_{1}+\hbar\lambda_{2}B_{2}^{ \dagger}B_{2}-\hbar\chi\,, \tag{15}\] where \(\chi=(\omega_{m}-4\,G-\sqrt{\omega_{m}(\omega_{m}-8\,G)})/2\). It is also necessary to express the operators \(b_{1}\) and \(b_{2}\) in terms of \(B_{1}\) and \(B_{2}\): \[b_{1} = \frac{1}{\sqrt{2}}B_{2}+\frac{\sqrt{\nu+1}}{2}B_{1}+\frac{\sqrt{ \nu-1}}{2}B_{1}^{\dagger}\,,\] \[b_{2} = \frac{1}{\sqrt{2}}B_{2}-\frac{\sqrt{\nu+1}}{2}B_{1}-\frac{\sqrt{ \nu-1}}{2}B_{1}^{\dagger}\,, \tag{16}\] Now we can re-express \(H_{I}\) to the form \[H_{I} = \hbar\lambda_{1}B_{1}^{\dagger}B_{1}+\hbar\lambda_{2}B_{2}^{ \dagger}B_{2}-\hbar\chi \tag{17}\] \[-\hbar\frac{g}{2}(\sqrt{\nu-1}+\sqrt{\nu+1})(a_{1}^{\dagger}a_{1} -a_{2}^{\dagger}a_{2})(B_{1}^{\dagger}+B_{1})\] \[-\hbar\frac{g}{\sqrt{2}}(a_{1}^{\dagger}a_{1}+a_{2}^{\dagger}a_{2} )(B_{2}^{\dagger}+B_{2})\,.\] After adiabatic elimination shown in Appendix B one obtains \[H_{I\rm eff} = \hbar g^{2}\,\frac{8G}{\omega_{m}(\omega_{m}-8G)}\left(n_{1}\,n_{ 2}\right) \tag{18}\] \[-\hbar g^{2}\,\frac{\omega_{m}-4G}{\omega_{m}(\omega_{m}-8G)} \left(n_{1}^{2}+n_{2}^{2}\right).\] By combining the above results we get \[H_{\rm eff} = \hbar\Delta_{1}a_{1}^{\dagger}a_{1}+\hbar\Delta_{2}a_{2}^{\dagger }a_{2}+\hbar g^{2}\,\frac{8G}{\omega_{m}(\omega_{m}-8G)}\left(n_{1}\,n_{2}\right) \tag{19}\] \[-\hbar g^{2}\,\frac{\omega_{m}-4G}{\omega_{m}(\omega_{m}-8G)} \left(n_{1}^{2}+n_{2}^{2}\right)\] \[+\hbar g^{2}\frac{\omega_{m}}{\omega_{m}(\omega_{m}-4G_{0})}(n_{1} ^{2}+n_{2}^{2})\] One can see that the last two terms in Eq. (19) describe the self-phase modulation effect. The first of these terms depends on charges \(q_{01}\), \(q_{00}\), \(q_{22}\) and \(q_{02}\), while the second depends on charges \(q_{1}\) and \(q_{2}\). Since these two terms have different signs, it is possible to eliminate this unwanted in QND effect just by setting proper values of the charges \(q_{01}\) and \(q_{00}\). The proper values of \(q_{01}\), \(q_{00}\) (and thus also \(q_{22}\) and \(q_{02}\)) can be determined using the condition \[G_{0} = \frac{\omega_{m}G}{\omega_{m}-4G}\,. \tag{20}\] After setting proper values of these charges the effective Hamiltonian, which is ideal for QND measurements [24], takes the form \[H_{\rm eff} = \hbar\Delta_{1}n_{1}+\hbar\Delta_{2}n_{2}+\hbar\gamma(n_{1}\,n_{2})\,, \tag{21}\] where \[\gamma = g^{2}\,\frac{8G}{\omega_{m}(\omega_{m}-8G)}\,. \tag{22}\] To sum up let us collect all the conditions that must be satisfied for the effective Hamiltonian (21) to correctly describe the system shown in Fig. 1. These conditions are given by: \(\omega_{m}>8G\), \(\omega_{s}\gg g_{s}\), \(r_{0}\ll L\), \(r_{00}\ll L\) and \(G_{0}=\omega_{m}G/(\omega_{m}-4G)\). Given similar experimental setups, we believe that these conditions can be met [2; 6; 7; 8; 9; 10; 11]. ## IV Simplified model The setup proposed in the previous section allows for QND measurements without the self-phase modulation effect. However, the experimental realization of this system might be challenging. Therefore, we also propose a simpler version of the system which also makes it possible to perform QND measurements via a cross-Kerr interaction. This simplified system is shown in Fig. 2. 
The Hamiltonian of this system is given by \[H = \hbar\Delta^{\prime}_{1}a^{\dagger}_{1}a_{1}+\hbar\Delta^{\prime}_ {2}a^{\dagger}_{2}a_{2}+\hbar(\omega_{m}-2G)(b^{\dagger}_{1}b_{1}+b^{\dagger}_ {2}b_{2})\] \[-\hbar ga^{\dagger}_{1}a_{1}(b^{\dagger}_{1}+b_{1})-\hbar ga^{ \dagger}_{2}a_{2}(b^{\dagger}_{2}+b_{2})\] \[-\hbar G\left(\ b^{\dagger 2}_{1}+b^{2}_{1}+b^{\dagger 2}_{2}+b^{2}_ {2}-2(b^{\dagger}_{1}+b_{1})(b^{\dagger}_{2}+b_{2})\ \right)\,,\] where \(\Delta^{\prime}_{1}=\omega_{c}+g_{0}d_{1}\), \(\Delta^{\prime}_{2}=\omega_{c}+g_{0}d_{2}\). Following our previous steps, we get the effective Hamiltonian in the form \[H_{\rm eff} = \hbar\Delta^{\prime}_{1}n_{1}+\hbar\Delta^{\prime}_{2}n_{2}+ \hbar g^{2}\,\frac{8G}{\omega_{m}(\omega_{m}-8G)}\left(n_{1}\,n_{2}\right) \tag{23}\] \[-\hbar g^{2}\,\frac{\omega_{m}-4G}{\omega_{m}(\omega_{m}-8G)} \left(n^{2}_{1}+n^{2}_{2}\right).\] The above effective Hamiltonian has a cross-Kerr interaction term, and thus, it can also be used for QND, but with the so-called self-phase modulation effect described in Ref. [24]. Figure 2: The basic version of the setup to perform QND measurement via a cross-Kerr interaction. In this system the unwanted self-phase modulation effect is present. ## V QND protocol Let us now illustrate how to use the setup presented in Fig. 1 to perform a QND measurement. The protocol for this measurement is depicted schematically in Fig. 3. Figure 3: Schematic representation of the protocol for a QND measurement using the setup presented in Fig. 1. This protocol is simple only if the self-phase modulation effect is absent. This protocol exploits a typical phase-shift measurement using a Mach-Zehnder interferometer (MZI) with coherent states of light. The effect of the MZI on coherent states is given, for example, in Ref. [25]. We want to measure \(n\), i.e., the number of photons in a signal mode, without destroying it. Initially, the signal mode is prepared in the \(|n\rangle\) Fock state, and the two other modes are prepared in the vacuum state and the coherent \(|\alpha\rangle\) state, respectively. Therefore, the initial state of the system is given by \[|\Psi_{0}\rangle=|n\rangle_{1}|0\rangle_{2}|\alpha\rangle_{3}. \tag{24}\] First, the coherent light falls on the first beam splitter, resulting in \[|n\rangle_{1}|0\rangle_{2}|\alpha\rangle_{3}\longrightarrow|n\rangle_{1}|i \alpha/\sqrt{2}\rangle_{2}|\alpha/\sqrt{2}\rangle_{3}. \tag{25}\] Then, for a time \(T\) the state in the lower path (the mode 2) interacts with the signal state (the mode 1) due to the interaction described by the Hamiltonian (21) \[|n\rangle_{1}|i\alpha/\sqrt{2}\rangle_{2}|\alpha/\sqrt{2}\rangle_ {3}\longrightarrow\] \[\longrightarrow e^{-in\Delta_{1}T}|n\rangle_{1}|ie^{i\theta}\alpha/ \sqrt{2}\rangle_{2}|\alpha/\sqrt{2}\rangle_{3}, \tag{26}\] where \[\theta=-T(\Delta_{2}+\gamma n). \tag{27}\] Next, the beams interfere at the second beam splitter, resulting in \[e^{-inT\Delta_{1}}|n\rangle_{1}|ie^{i\theta}\alpha/\sqrt{2}\rangle_ {2}|\alpha/\sqrt{2}\rangle_{3}\longrightarrow\] \[\longrightarrow e^{-inT\Delta_{1}}|n\rangle_{1}|i(e^{i\theta}+1)\alpha/2 \rangle_{2}|(e^{i\theta}-1)\alpha/2\rangle_{3}\,. \tag{28}\] Let us denote by \(d_{1}\) (\(d_{1}^{\dagger}\)) and \(d_{2}\) (\(d_{2}^{\dagger}\)) annihilation (creation) operators describing modes collected by detectors \(D_{1}\) and \(D_{2}\), respectively. In the last step, we measure the expectation value of the number difference operator defined by \(D=d_{1}^{\dagger}d_{1}-d_{2}^{\dagger}d_{2}\).
It is easy to check that this expectation value is given by \[\langle D\rangle=|\alpha|^{2}\cos\theta\,. \tag{29}\] Therefore, if we know the measurement outcome \(\langle D\rangle\) then from Eqs. (27) and (29), we can determine the number of photons in the signal mode \[n=\frac{\arccos\left(\frac{\langle D\rangle}{|\alpha|^{2}}\right)/T-\Delta_{ 2}}{\gamma}. \tag{30}\] Thus, we indeed measured \(n\) without destroying the signal mode. It is worth to note that this protocol is simple, at least in theory, thanks to the absence of terms proportional to square of photon number operators in the Hamiltonian (21). In this case, the time evolution operator \(\exp(-iH_{\rm eff}T)\) just transforms one coherent state into another coherent state. However, in cases where terms proportional to operators \(n_{1}^{2}\) and \(n_{2}^{2}\) are present in a Hamiltonian, like in the Hamiltonian (23), the situation is much more complex. Then, the time evolution operator includes the nonlinear Kerr-type operator \(\exp(-i\chi\dot{n}^{2})\), which significantly changes a coherent state. Even in special cases, the action of the Kerr-type operator on a coherent state results in the transformation of it into a superposition of many coherent states [26]. ## VI Conclusions We have proposed a new setup, in which electrical coupling of two optomechanical cavities leads to cross-Kerr interactions between them, and therefore, this setup can serve as a quantum non-demolition (QND) measurement device. Moreover, we have also proposed a second version of this setup, in which both cavities interact not only with each other but also with charged bodies. We have shown that the contribution from this additional interaction to self-phase modulation terms can compensate for the contribution to this terms from interactions between both cavities, without changing cross-Kerr interactions terms. Therefore, the effective Hamiltonian of the modified setup includes a cross-Kerr interaction term, which plays a key role in QND measurements, but does not include self-phase modulation terms, which is an obstacle in QND. Finally, we have presented a simple protocol using the modified setup to show how helpful is the eliminating self-phase modulation effect in QND measurements. ## Acknowledgements This work was supported by the Polish National Science Centre (NCN) under the Maestro Grant No. DEC-2019/34/A/ST2/00081. ## Appendix A We transform the Hamiltonian \(H_{Q}\) describing interaction between cavities and charged bodies by introducing operators \(b_{s}\) and \(b_{s}^{\dagger}\)[5] \[b=\cosh(r)b_{s}+\sinh(r)b_{s}^{\dagger}, \tag{31}\] which satisfy the canonical commutation relation \([b_{s},b_{s}^{\dagger}]=1\). We also set such a value of the squeezing parameter \(r\) to fulfill the following condition \[r=\frac{1}{4}\log\left[\frac{\omega_{m}}{\omega_{m}-4G_{0}}\right]\,. \tag{32}\] Then, the Hamiltonian \(H_{Q}\) takes the form \[H_{Q} = \hbar\omega_{s}(b_{s01}^{\dagger}b_{s01}+b_{s02}^{\dagger}b_{s02} )-\hbar g_{s}a_{1}^{\dagger}a_{1}(b_{s01}^{\dagger}+b_{s01}) \tag{33}\] \[-\hbar g_{s}a_{2}^{\dagger}a_{2}(b_{s02}^{\dagger}+b_{s02})\,,\] where \[\omega_{s} = (\omega_{m}-4G_{0})\,\exp(2r)\,,\] \[g_{s} = g\,\exp(r)\,. \tag{34}\] Next, we eliminate adiabatically the mechanical mode \(b_{s01}\) assuming that \(\omega_{s}\gg g_{s}\) and \[\dot{b}_{s01} = i\left[H_{Q},b_{s01}\right],\] \[\dot{b}_{s01} = 0\,. 
\tag{35}\] In the same way, we eliminate adiabatically the mechanical mode \(b_{s02}\), obtaining \[H_{Q\mathrm{eff}} = \hbar\frac{g_{s}^{2}}{\omega_{s}}(n_{1}^{2}+n_{2}^{2})\,. \tag{36}\] ## Appendix B Now, we can apply the adiabatic elimination procedure to the Hamiltonian \(H_{I}\) defined in Eq. (17). To this end, we derive \(B_{k}\) (\(k=1,2\)) from the set of equations \[\dot{B_{k}} = i\left[H_{I},B_{k}\right],\] \[\dot{B_{k}} = 0\,. \tag{37}\] Thus \[B_{1}=B_{1}^{\dagger} = \frac{g}{2\lambda_{1}}(\sqrt{\nu-1}+\sqrt{\nu+1})(a_{1}^{\dagger }a_{1}-a_{2}^{\dagger}a_{2}),\] \[B_{2}=B_{2}^{\dagger} = \frac{g}{\sqrt{2}\lambda_{2}}(a_{1}^{\dagger}a_{1}+a_{2}^{ \dagger}a_{2})\,. \tag{38}\] Substituting the above expressions for \(B_{1}\) and \(B_{2}\) into Eq. (17) we get \[H_{I\mathrm{eff}} = -\hbar\frac{g^{2}}{4\lambda_{1}}(\sqrt{\nu-1}+\sqrt{\nu+1})^{2}( a_{1}^{\dagger}a_{1}-a_{2}^{\dagger}a_{2})^{2} \tag{39}\] \[-\hbar\frac{g^{2}}{2\lambda_{2}}(a_{1}^{\dagger}a_{1}+a_{2}^{ \dagger}a_{2})^{2}.\] Rearranging the above we obtain \[H_{I\mathrm{eff}}=\hbar g^{2}\,\frac{8G}{\omega_{m}(\omega_{m}-8 G)}\left(n_{1}\,n_{2}\right)\] \[-\hbar g^{2}\,\frac{\omega_{m}-4G}{\omega_{m}(\omega_{m}-8G)}\,(n _{1}^{2}+n_{2}^{2}). \tag{40}\]
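As a quick numerical sanity check of the effective couplings derived above, the short sketch below (our own illustration, with arbitrary placeholder parameters rather than experimental values) evaluates the self-phase coefficients coming from \(H_{Q}\) (Eq. (13)) and \(H_{I}\) (Eq. (18)) together with the cross-Kerr coefficient \(\gamma\) of Eq. (22), and verifies that choosing \(G_{0}\) according to condition (20) makes the net self-phase coefficient vanish while \(\gamma\) stays finite.
```
# Placeholder parameters in arbitrary units (illustrative only, not experimental values).
omega_m = 1.0          # mechanical frequency
g = 1e-3               # optomechanical coupling
G = 0.05 * omega_m     # Coulomb-induced coupling; must satisfy omega_m > 8 G

# Condition (20): value of G0 for which the self-phase terms should cancel.
G0 = omega_m * G / (omega_m - 4 * G)

# Self-phase coefficients appearing in Eq. (19).
self_phase_I = -g**2 * (omega_m - 4 * G) / (omega_m * (omega_m - 8 * G))   # from H_I, Eq. (18)
self_phase_Q = g**2 * omega_m / (omega_m * (omega_m - 4 * G0))             # from H_Q, Eq. (13)

# Cross-Kerr coefficient of Eq. (22).
gamma = g**2 * 8 * G / (omega_m * (omega_m - 8 * G))

print("net self-phase coefficient:", self_phase_I + self_phase_Q)   # ~0 under condition (20)
print("cross-Kerr coefficient gamma:", gamma)
```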
2309.11247
Hierarchical Multi-Agent Reinforcement Learning for Air Combat Maneuvering
The application of artificial intelligence to simulate air-to-air combat scenarios is attracting increasing attention. To date the high-dimensional state and action spaces, the high complexity of situation information (such as imperfect and filtered information, stochasticity, incomplete knowledge about mission targets) and the nonlinear flight dynamics pose significant challenges for accurate air combat decision-making. These challenges are exacerbated when multiple heterogeneous agents are involved. We propose a hierarchical multi-agent reinforcement learning framework for air-to-air combat with multiple heterogeneous agents. In our framework, the decision-making process is divided into two stages of abstraction, where heterogeneous low-level policies control the action of individual units, and a high-level commander policy issues macro commands given the overall mission targets. Low-level policies are trained for accurate unit combat control. Their training is organized in a learning curriculum with increasingly complex training scenarios and league-based self-play. The commander policy is trained on mission targets given pre-trained low-level policies. The empirical validation advocates the advantages of our design choices.
Ardian Selmonaj, Oleg Szehr, Giacomo Del Rio, Alessandro Antonucci, Adrian Schneider, Michael Rüegsegger
2023-09-20T12:16:00Z
http://arxiv.org/abs/2309.11247v1
# Hierarchical Multi-Agent Reinforcement Learning ###### Abstract The application of artificial intelligence to simulate air-to-air combat scenarios is attracting increasing attention. To date the high-dimensional state and action spaces, the high complexity of situation information (such as imperfect and filtered information, stochasticity, incomplete knowledge about mission targets) and the nonlinear flight dynamics pose significant challenges for accurate air combat decision-making. These challenges are exacerbated when multiple heterogeneous agents are involved. We propose a hierarchical multi-agent reinforcement learning framework for air-to-air combat with multiple heterogeneous agents. In our framework, the decision-making process is divided into two stages of abstraction, where heterogeneous low-level policies control the action of individual units, and a high-level commander policy issues macro commands given the overall mission targets. Low-level policies are trained for accurate unit combat control. Their training is organized in a learning curriculum with increasingly complex training scenarios and league-based self-play. The commander policy is trained on mission targets given pre-trained low-level policies. The empirical validation advocates the advantages of our design choices. Hierarchical Multi-Agent Reinforcement Learning, Heterogeneous Agents, Curriculum Learning, Air Combat. ## I Introduction In defense area, complex air-to-air combat scenarios simulation requires simultaneous real-time control of individual units (troop level) and global mission planning (commander level). _Deep Reinforcement Learning_ (DRL) has achieved superhuman level in various environments, ranging from discrete perfect information scenarios (such as games like Chess and Go) to real-time continuous control and strategic decision-making scenarios with imperfect information (typical of modern war games). However, conventional DRL ignores the structural requirements typical of real-world combat scenarios, where decision-making authority is organized hierarchically. It is crucial for real-world operations that low-level combat decisions (such as fire/duck) are made by individual units and executed at low latency, while abstract mission planning decisions (such as conquer and hold coordinates) at higher hierarchy levels take account of information from all available units. An example is the guidance of drones in modern warfare, where individual units act autonomously even without connection to a centralized intelligent instance. This information abstraction motivates the investigation of _Multi-Agent Deep Reinforcement Learning_ (MARL) techniques for creating artificial agents in a realistic simulation environment. In MARL systems, the hierarchical splitting of mixed planning and control tasks can be achieved by incorporating dedicated algorithms at varying levels of abstraction. This allows each agent to control itself in a decentralized manner while providing sufficient flexibility for the emergence of targeted group behavior. ### _Contributions_ 1. Considering low latency as crucial for DRL problems, we develop a lightweight simulation platform suitable for fast simulation of agent dynamics and interactions. 2. We employ a hierarchical framework for simultaneous planning and control to solve the overall decision-making problem for air-to-air combat scenarios. 3. 
We realize a fictitious self-play mechanism through curriculum learning with increasing levels of complexity to improve combat performance as learning proceeds. 4. We develop a sophisticated neural network architecture composed of recurrent and attention units. Coordination is achieved without an explicit communication channel. ### _Outline_ Sect. II summarizes previous contributions and describes how our work extends the existing literature. Sect. III details air-to-air engagement scenarios and describes our framework. Our experimental findings are presented in Sect. IV, while our conclusions and possible future works are discussed in Sect. V. ## II Related Work Aerial combat tactics have been discussed extensively in the literature, with a significant portion of research dedicated to the study of engagements with small numbers of units (one to two). Research on small engagements typically focuses on _control_, i.e., it examines how the maneuvering of individual units impacts the overall engagement outcome. A frequent focus lies on achieving an advantage against the opponent: in this position, it is possible to fire at the opponent with little risk of return fire [1]. Popular methods include expert systems [2, 3, 4, 5], control laws for pursuit and/or evasion [6, 7, 8, 9], game-theory [10, 11, 12], but also machine learning [13, 14, 15, 16] and hybrid approaches [17, 18, 19, 20, 21, 22, 23]. Classical research about larger-scale engagements focuses on weapon-target assignment [24] and [1], human-pilot-like decision-making [25], and high-level engagement tactical decisions [26], i.e., on _planning_. _Reinforcement learning_ (RL) techniques gained increasing interest in this context. [27] train a Recurrent _Deep \(Q\)-Network_ (DQN) algorithm [28] and employ a situation evaluation function to revise the decision-making system. Other approaches use _deep deterministic policy gradient_ (DDPG) [29, 30] or A3C [31]. _Cascade learning_ approaches that gradually increase combat complexity are discussed in [32] and [33]. In [34], the combat strategy is learned through a league system to prevent the policy from circling around poor local optima. MARL is currently thriving [35]. In the study of emergent behavior and complexity from coordinated agents, the introduction of centralized training decentralized execution actor-critic methods, such as the _Multi-Agent Deep Deterministic Policy Gradient_ algorithm (MADDPG) [36], has been a milestone [36, 37, 38]. Such methods train actor policies with critics having access to information of all other agents. However, they are not structured to account for the hierarchies present in real-world operations and emerging phenomena such as agent attrition (exit learning process). [39] uses MADDPG combined with potential-based reward shaping [40]. A maneuver strategy for _Unmanned Aerial Vehicles_ (UAV) swarms is developed in [41] using MADDPG, but the discussion is limited to one-to-one or multi-to-one combat. [42] and [43] use attention based neural networks. The former uses a two-stage attention mechanism for coordination. In the latter, the attention layers calculate the relative importance of surrounding aircraft, where opponents are purely controlled by scripts. On the other side stands the concept of _Hierarchical Reinforcement Learning_ (HRL), which divides the overall task into sub-tasks [44]. In HRL, training is organized in nested loops. 
Inner loop training controls the aircraft, while the outer trains a super-policy for guidance and coordination of individual agents. HRL has been applied in the context of air-to-air combat in [45] and [46]. There appears to be little research on air-to-air combat combining HRL and MARL. A Hierarchical MARL approach to handle variable-size formations is proposed in [47]. The authors employ an attention mechanism and self-play with a DQN high-level policy trained with QMIX. An approach similar to the one presented in this paper, focusing on heterogeneous agents of two types, was explored in [48]. The high-level target allocation agents are trained using DQN, and the low-level cooperative attacking agents are based on _independent asynchronous proximal policy optimization_ (IAPPO). However, they follow the goal of _suppression of enemy air defense_ (SEAD). SEAD aims to gain air superiority by targeting and disrupting the enemy's ability to detect and engage friendly aircraft. Unlike the concept of SEAD, which focuses on neutralizing enemy air defense systems, dogfighting is centered around engaging and defeating enemy aircraft in direct air-to-air combat. This article investigates air-to-air combat scenarios for coordinated dogfighting with heterogeneous agents and hierarchical MARL in a cascaded league-play training scheme. To our knowledge, this setup has not yet appeared in publications for this kind of application. ## III Method ### _Aircraft Dynamics_ We base our modeling on the dynamics of the _Dassault Rafale_ fighter aircraft.1 We focus on hierarchical coordination of multiple heterogeneous agents in 2D (assuming a constant altitude of our aircraft). There are _beyond_ and _within visual range_ air combat scenarios [49]; we focus on the latter in this article. Since real-world combat scenarios frequently involve different types of aircraft, we add a modified version of the Rafale aircraft with different dynamics: the original aircraft (AC1) is more agile and equipped with rockets, while the modified type (AC2) has no rockets but a longer cannon range. The dynamics of AC1 and AC2 are characterized as: Footnote 1: dassault-aviation.com/en/defense/rafale. * angular velocity [\(deg/s\)]: \(\omega_{AC1}\in[0,5]\), \(\omega_{AC2}\in[0,3.5]\); * speed [knots]: \(v_{AC1}\in[100,900]\), \(v_{AC2}\in[100,600]\); * conical _weapon engagement zone_ (WEZ): angle [\(deg\)]: \(\omega_{WEZ,AC1}\in[0,10]\), \(\omega_{WEZ,AC2}\in[0,7]\), range [\(km\)]: \(d_{a,AC1}\in[0,2]\), \(d_{a,AC2}\in[0,4.5]\); * hit probability \(p_{hit,AC1}=0.70\) and \(p_{hit,AC2}=0.85\). ### _Multi-Agent Reinforcement Learning_ RL is used to solve sequential decision-making problems. Agents interact with an environment to learn a behavior that is evaluated based on a reward function \(r_{t}(s_{t},a_{t},s_{t+1})\). The goal of the agent is to maximize the cumulative reward \(\sum_{t}r_{t}(s_{t},a_{t},s_{t+1})\). In MARL, there are multiple agents interacting in a cooperative or competing fashion, or both. The decision function, called policy \(\pi(a_{t}|s_{t})\), maps states to a distribution over actions.
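To make the AC1/AC2 parametrization from the _Aircraft Dynamics_ subsection concrete before formalizing the multi-agent interaction below, here is a small illustrative sketch (our own naming, not the authors' simulator code). It encodes the parameter ranges listed above and checks whether a target lies inside an aircraft's conical weapon engagement zone; we read the WEZ angle as the maximum off-boresight angle, which is our assumption rather than a statement from the paper.
```
import math
from dataclasses import dataclass

@dataclass
class AircraftType:
    name: str
    max_turn_deg_s: float        # angular velocity limit [deg/s]
    speed_range_kt: tuple        # admissible speed range [knots]
    wez_angle_deg: float         # WEZ opening angle [deg] (assumed max off-boresight angle)
    wez_range_km: float          # WEZ range [km]
    hit_prob: float

AC1 = AircraftType("AC1", 5.0, (100, 900), 10.0, 2.0, 0.70)   # agile, cannon + rockets
AC2 = AircraftType("AC2", 3.5, (100, 600),  7.0, 4.5, 0.85)   # no rockets, longer cannon range

def in_wez(ac, own_xy, own_heading_deg, target_xy):
    """True if the target lies inside the aircraft's conical weapon engagement zone."""
    dx, dy = target_xy[0] - own_xy[0], target_xy[1] - own_xy[1]
    dist_km = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dy, dx))
    off = (bearing - own_heading_deg + 180.0) % 360.0 - 180.0   # signed angle to the target
    return dist_km <= ac.wez_range_km and abs(off) <= ac.wez_angle_deg

print(in_wez(AC2, (0.0, 0.0), 0.0, (3.0, 0.2)))   # True: within 4.5 km, ~3.8 deg off boresight
```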
### _Multi-Agent Reinforcement Learning_

RL is used to solve sequential decision-making problems. Agents interact with an environment to learn a behavior that is evaluated based on a reward function \(r_{t}(s_{t},a_{t},s_{t+1})\). The goal of the agent is to maximize the cumulative reward \(\sum_{t}r_{t}(s_{t},a_{t},s_{t+1})\). In MARL, multiple agents interact in a cooperative or competing fashion, or both. The decision function, called policy \(\pi(a_{t}|s_{t})\), maps states to a distribution over actions. We model the interactions of individual agents as a _partially-observable Markov game_ (POMG) defined by a tuple \((\mathcal{S},\mathcal{O},\mathcal{A}_{1},\ldots,\mathcal{A}_{N},P,R_{1},\ldots,R_{N},\gamma)\), where: \(\mathcal{S}\) is the state-space representing possible configurations of the environment, \(\mathcal{O}\subset\mathcal{S}\) is the set of observations, \(\mathcal{A}_{i}\) is the set of actions for player \(i\), \(P(s^{\prime}|s,a_{1},\ldots,a_{N})\) represents the dynamics of the environment and specifies the probability of transitioning to state \(s^{\prime}\) when players take actions \(a_{1},\ldots,a_{N}\) in state \(s\), and \(R_{i}(s,a_{1},\ldots,a_{N},s^{\prime})\) defines the immediate reward for player \(i\) when the system transitions from state \(s\) to state \(s^{\prime}\) with players taking actions \(a_{1},\ldots,a_{N}\). We adopt a _Centralized Training and Decentralized Execution_ (CTDE) [36] scheme for training agents. Our modeled POMG with the CTDE scheme is used to train low-level control policies, of which we define two: a fight policy \(\pi_{f}\) and an escape policy \(\pi_{e}\). Further on, there is a distinct policy for each aircraft type. Overall we have four low-level policies: \([\pi_{f,AC1},\pi_{f,AC2},\pi_{e,AC1},\pi_{e,AC2}]\). Agents of the same type use the same shared policies. Thus all AC1 use \(\pi_{f,AC1}\) and \(\pi_{e,AC1}\), irrespective of the number of agents, and similarly for AC2. In this way, policies are trained with the experiences of all agents of the same type, which ensures coherent behavior.

Fig. 1: Aircraft attacking mechanisms.

### _Hierarchical Reinforcement Learning_

HRL employs temporal abstraction by decomposing the overall task into a nested hierarchy of sub-tasks, enhancing efficiency in learning and decision-making [44]. Abstract commands are issued from higher hierarchy levels to apply a control policy (a so-called _option_) for a limited amount of time. Symmetries within a particular (lower) hierarchy level can be exploited by using the same option for different sub-tasks, e.g. controlling similar airplanes. This results in better scalability (reducing the effective dimensions of state and action spaces) and enhances generalization (generating new skills by combining sub-tasks) [50]. HRL also fits naturally with the hierarchical structure of defense organizations. Formally, our hierarchical system corresponds to a _partially observable semi-Markov Decision Process_ (POSMDP) with options as a tuple \((\mathcal{S},\mathcal{O}_{s},\mathcal{A},R,P,\gamma)\). Similar to the notions of the POMG, \(\mathcal{S}\) is the state space, \(\mathcal{O}_{s}\) is the set of sub-strategies (options), \(\mathcal{A}\) is the action space, \(R\) is the reward function and the transition function \(P(s^{\prime},\tau|s,o)\) defines the probability of landing in state \(s^{\prime}\) from state \(s\) after \(\tau\) time steps when executing \(o\). We again use CTDE to train a single high-level commander policy \(\pi_{h}\) to be used for all agents and aircraft types. Fig. 2 illustrates the relations between high and low-level policies.

### _Metrics for Air-to-Air Combat_

We now describe observations, actions and rewards in our hierarchical MARL approach. All observation values are normalized to the range \([0,1]\) and are based on the metrics shown in Fig. 3. Further observations include map position (\(x,y\)), current speed (\(s\)), remaining cannon ammunition (\(c_{1}\)) and remaining rockets (\(c_{2}\)).
Indicator (\(w\)) defines if the next rocket is ready to be fired and (\(s_{r}\)) indicates if the aircraft is currently shooting. Subscript \(a\) indicates agent, \(o\) opponent and \(fr\) friendly aircraft (i.e. from the same team). A subscript in a value, e.g., \(\alpha_{off,o}\), defines the angle-off w.r.t. to the opponent. Actions of all policies are discrete. #### Iii-C1 Fight Policy: \(\pi_{f}\) can observe its closest opponent and closest friendly aircraft. \[o_{t,a} := [x,y,s,\alpha_{h},\alpha_{off,o},\alpha_{AA,o},\alpha_{ATA,o},d_{o },c_{1},\overbrace{c_{2},w}^{\text{AC1}},s_{r}]\] \[o_{t,o} := [x,y,s,\alpha_{h},\alpha_{off,a},\alpha_{AA,a},\alpha_{ATA,a},d_{ a},s_{r}]\] \[o_{t,fr} := [x,y,s,\alpha_{off,a},\alpha_{ATA,a},\alpha_{ATA,fr},d_{a},s_{r}]\] \[o_{t,full} := o_{t,a}|o_{t,o}||o_{t,fr}\] The control maneuvers (actions) are: * relative heading maneuvers: turn in range [-90\({}^{\circ}\), 90\({}^{\circ}\)] (\(h\in\{-6,\dots,6\}\rightarrow\alpha_{h}=-15\cdot h+\alpha_{h}\)); * velocity: mapping of \(v\) to velocity ranges of AC1 or AC2 (\(v\in\{0,\dots,8\}\)); * shooting with cannon: (\(c\in\{0,1\}\)); * shooting with rocket (AC1): (\(r\in\{0,1\}\)). In air-to-air combat, facing the opponent's tail is a favorable situation for shooting. We therefore define the reward function based on \(\alpha_{ATA,a}\) of the opponent to the agent. We further encourage the combat efficiency by incorporating the remaining ammunition (\(c_{rem}=c_{1}+c_{2}\)): \[r_{t,k}=\alpha_{ATA,a}+\frac{c_{max}-c_{rem}}{c_{max}}\in[1,2]\,. \tag{1}\] Punishing rewards are given when flying out of environment boundaries \(r_{t,b}=-5\) and when destroying a friendly aircraft \(r_{t,f}=-2\). There is no per-time-step reward given. The total reward is then: \(r_{t}=r_{t,k}+r_{t,b}+r_{t,f}\). Iii-C2 Escape Policy: \(\pi_{e}\) senses two closest opponents and its closest friendly aircraft. The actions remain same as for \(\pi_{f}\). \[o_{t,a} := [x,y,s,\alpha_{h},c_{1},\overbrace{c_{2}}^{\text{AC1}}]\] \[o_{t,o} := [x,y,s,\alpha_{h},\alpha_{off,a},\alpha_{ATA,a},\alpha_{ATA,o},d_{ a}]\] \[o_{t,fr} := [x,y,s,\alpha_{h},\alpha_{ATA,a},\alpha_{ATA,fr},d_{a}]\] \[o_{t,full} := o_{t,a}||o_{t,o_{1}}||o_{t,o_{2}}||o_{t,fr}\] The per-time-step reward depends on distances to opponents: \[r_{t,e}=\begin{cases}-0.01&d<6km\\ +0.01&d>13km\\ 0&\text{otherwise}\end{cases}\,. \tag{2}\] The total reward is finally \(r_{t}:=r_{t,e}+r_{t,b}+r_{t,f}\). Fig. 3: Aircraft metrics: heading (a), heading off (b), aspect angle (c), antenna train angle (d), distance (e). Fig. 2: Hierarchy of policies. #### Iii-D3 Commander Policy \(\pi_{h}\) is called for every agent separately. The observations are based on three closest opponents and two closest friendly aircraft. \[o_{t,a} := [x,y,s,\alpha_{h}]\] \[o_{t,o} := [x,y,s,\alpha_{h},\alpha_{AA,a},\alpha_{AA,o},\alpha_{ATA,a},\alpha_ {ATA,o},d_{a}]\] \[o_{t,fr} := [x,y,s,d_{a}]\] \[o_{t,full} := o_{t,a}||o_{t,o_{1}}||o_{t,o_{2}}||o_{t,o_{3}}||o_{t,fr_{1}}||o_{ t,fr_{2}}\] The commander decides the low-level policy to use for each agent. The action set is \(a_{c}\in\{0,1,2,3\}\), where \(0\) activates \(\pi_{e}\) and \(\pi_{f}\) otherwise. If \(\pi_{f}\) is activated, the commander action (\(1\), \(2\) or \(3\)) determines which of the three observable opponents the agent should attack. The agent then gets the corresponding observation for its low-level policy. In our setup, the commander adapts to only these pre-trained low-level policies. The reward is composed of two parts. 
First, a killing reward \(r_{t,f}=1\) is given if an agent with its low-level policy killed an opponent, and \(r_{t,f}=-1\) if the agent got killed. The second part, \(r_{t,c}\), should encourage the commander to exploit favorable situations and is defined as follows:
\[r_{t,c}=\begin{cases}+0.1&d_{o}{<}5km\wedge\alpha_{ATA,o}{<}30^{\circ}\wedge \alpha_{AA,o}{<}50^{\circ}\wedge a_{c}{>}0\\ 0&\mathrm{otherwise}\end{cases} \tag{3}\]
We also include the out-of-boundary reward \(r_{t,b}=-5\). The total reward given to \(\pi_{h}\) is thus \(r_{t}:=r_{t,f}+r_{t,c}+r_{t,b}\).

### _Training Structure_

The overall training loop for our hierarchical MARL algorithm is split into two main stages (Fig. 4). We first train the low-level policies with observations \(O_{l}\) and rewards \(R_{l}\). In the second stage, low-level policies are fixed (i.e., no learning is done anymore) and serve as options for the commander. \(\pi_{h}\) is then trained with observations \(O_{h}\) and rewards \(R_{h}+R_{l}\). The training of low-level policies is done in five levels following a _curriculum learning_ scheme. The complexity is increased at each level by making the opponents more competitive. Namely, the opponent behavior of each level is L1 (static), L2 (random), L3 (scripts), L4 (L3 policy), and L5 (policies L1-4). Scripted opponents are programmed to engage the closest agent with \(\alpha_{ATA}\approx 0\) and to randomly escape. When training is completed at a level, we transfer the policy to the next level and continue training. Our neural network is based on Actor-Critic [51] (Fig. 5). Low-level AC1 and AC2 agents have distinct neural network instances (with different input and output dimensions) but share one layer (green box). This layer is further shared between the actor and critic inside the network. Sharing parameters improves agents' coordination [52]. The architecture modifications are marked for the three policy types: \(\pi_{f}\) uses a _self-attention_ (SA) module [53], \(\pi_{h}\) a _Gated-Recurrent-Unit_ (GRU) module [54], and \(\pi_{e}\) uses neither. The embedding layer is linear with \(100\) neurons and \(tanh\) activation. The high-level commander policy has only one instance for both aircraft types. Since we train the policies with the CTDE scheme, the critic gets the observations of all interacting agents and their actions (global information) as input. Besides parameter sharing, a fully observable critic improves coordination between heterogeneous agents. We update our network parameters using the Actor-Critic approach of Proximal Policy Optimization (PPO) [55] (see Alg. 1).
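For reference, the reward terms of Eqs. (1)-(3) can be collected into a small sketch before moving on to the experiments (plain Python; the function and argument names are ours, and treating Eq. (1) as a bonus granted on a kill event is our reading of the "no per-time-step reward" remark, not a statement taken from the original implementation):

```python
def fight_reward(alpha_ata_a, c_rem, c_max, killed_opponent,
                 out_of_bounds, killed_friendly):
    """Reward of the fight policy: Eq. (1) plus the shared penalty terms.

    alpha_ata_a : normalized antenna-train-angle of the opponent w.r.t. the
                  agent, in [0, 1]; c_rem / c_max : remaining / initial ammunition.
    """
    r = 0.0
    if killed_opponent:                         # Eq. (1): shooting-efficiency bonus
        r += alpha_ata_a + (c_max - c_rem) / c_max
    if out_of_bounds:
        r += -5.0                               # boundary penalty r_{t,b}
    if killed_friendly:
        r += -2.0                               # friendly-kill penalty r_{t,f}
    return r


def escape_reward(dist_to_opponent_km, out_of_bounds, killed_friendly):
    """Per-time-step escape reward (Eq. (2)) plus the shared penalty terms."""
    if dist_to_opponent_km < 6.0:
        r = -0.01
    elif dist_to_opponent_km > 13.0:
        r = +0.01
    else:
        r = 0.0
    if out_of_bounds:
        r += -5.0
    if killed_friendly:
        r += -2.0
    return r


def commander_reward(agent_killed_opponent, agent_got_killed,
                     favorable, out_of_bounds):
    """Commander reward: kill term, shaping term of Eq. (3), boundary penalty."""
    r = 0.0
    if agent_killed_opponent:
        r += 1.0
    if agent_got_killed:
        r -= 1.0
    if favorable:   # d_o < 5 km, alpha_ATA,o < 30 deg, alpha_AA,o < 50 deg, a_c > 0
        r += 0.1
    if out_of_bounds:
        r += -5.0
    return r
```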
## IV Experiments

### _Simulation Settings_

We validate our method by simulations.2 For this purpose we developed a dedicated 2D (Python) simulation platform to have full control and low inertia. Our platform is lightweight, fast and simulates the dynamics of our aircraft (Sect. III-A). Trajectories of each aircraft can be visualized and a landmark is set at the position where an aircraft got destroyed (Fig. 9). Map size and number of interacting aircraft can be specified, highlighting the diversity and scalability properties of our model. We refer to time step \(t\) as one simulation round. A simulation episode ends when either the time horizon is reached or there are no alive aircraft of one team. An aircraft is destroyed when getting hit by cannon or rocket or when hitting the map boundary. For each episode, a side of the map (left or right half) is chosen at random for each team, followed by generating random initial positions and headings for each aircraft. Agent training uses the popular libraries _Ray RLlib_ and _PyTorch_.

Fig. 4: Hierarchical MARL training loop.

### _Training and Results_

Since we use a shared policy for each aircraft type and the commander, we do not need to restrict our simulations to a fixed number of agents. For every episode, aircraft types are randomly selected, having at least one of each type per group. Map sizes per axis are \(30\) km for low-level and \(50\) km for high-level policy training. Learning curves showing mean rewards include the performance of all agents. Evaluations are done for \(1,000\) episodes. A _win_ is when all opponents are destroyed, a _loss_ if all agents got destroyed, and a _draw_ if at least one agent per team remains alive after the episode ends. The PPO parameters are kept constant for all training procedures: learning rate (actor and critic) \(lr=0.0001\), discount factor \(\gamma=0.95\), clip parameter \(\epsilon=0.2\), Adam as optimizer, and a batch size of \(2,000\) for low-level policies and \(1,000\) for the high-level policy. We train all policies according to Alg. 1 and set the five levels as discussed in Sect. III-E. We have compared the performance of the proposed architecture to a standard RL system, which showed very poor performance and is therefore omitted.

#### Iv-B1 Low-level policies

Since each agent can sense only one opponent and one friendly agent at a time, we train our low-level policies in a 2vs2 setting. Since our framework allows an arbitrary number of agents and opponents, we evaluate the performance of each policy type in different combat scenarios. For every new episode, the aircraft ammunition is: \(200\) cannon shots and \(5\) rockets (AC1). We make the opponents stronger by giving them ammunition of \(400\) cannon shots and \(8\) rockets. As the levels increase, we also increase the time horizon of an episode by \(\Delta T=50\), starting from \(T=200\) on L1. Let us first examine the fight policy \(\pi_{f}\). Training starts in L1 and ends in L5. In the latter, the opponents get assigned one of the previously learned fight policies at random for every episode. To highlight the strength of our network architecture, we consider L3, where training is done against script-based opponents. The combat behavior of the opponents at this level is the most deterministic, therefore allowing a comparison of the performance of different architectures (see Fig. 6). We evaluate the performance of our agents when training has completed (Fig. 7). We deploy every agent with \(\pi_{f}\) of L5 and every opponent with \(\pi_{f}\) of L4. The combat skills of AC1 clearly surpass those of AC2, which is most likely due to the additional rockets and the more agile dynamics.

Fig. 5: Neural network architecture.

We infer that our agents could further improve their combat performance during L5 training and are able to combat in scenarios up to 5vs5, even though training was conducted in a 2vs2 scheme. However, as the number of aircraft increases, the portion of draws also rises. This observation may suggest that the low-level policy has reached its peak learning capacity. An example of a fight scenario is shown in Fig. 9(b), showing the circular trajectories to reach the tail of the opponents. The escape policy \(\pi_{e}\) has the purpose of fleeing from opponents. We consider only L3 for training and show the training results in Fig. 8(a).
Agents can still fire and destroy opponents, but the training results indicate the correct behavior by exploiting the escaping reward more than the killing reward. As the number of aircraft increases in evaluation, the agents are less capable of fleeing successfully from opponents (Fig. 8(b)). Fleeing trajectories are visualized in Fig. 9(c).

Fig. 6: Training performance of \(\pi_{f}\) at L3: SA-Net is our self-attention network (Fig. 5), (_no Curr_) is the same network but trained without curriculum, FC is a fully connected network with two layers of \(500\) neurons and \(tanh\) activation.

Fig. 7: Evaluation of \(\pi_{f}\) after finishing L5 training. Destroying an opponent is abbreviated with \(k\), getting destroyed with \(d\) and _fk_ indicates "friendly" kills. The attached numbers indicate the aircraft types, e.g. \(k-1\) kill by AC1, \(d-2\) AC2 destroyed.

Fig. 8: Training performance of \(\pi_{e}\) at L3 (a) and different combat scenarios (b). _Escaped_ means no agent got killed (including going out of boundary), _killed_ is when at least one agent got killed, _kills_ when at least one opponent got killed.

#### Iv-B2 High-level policy

The purpose of the commander policy \(\pi_{h}\) is to provide strategic commands (attack or escape). Since \(\pi_{h}\) can observe three agents and three opponents at a time, we perform 3vs3 combat training. Agents and opponents are equipped either with \(\pi_{f}\) of L5 or \(\pi_{e}\). Opponents are mainly set to fight and randomly to escape. Ammunition is set to \(300\) cannon shots and \(8\) rockets. We include the low-level policies as part of the environment (low-level dynamics in Fig. 2). We run experiments according to the procedure in Alg. 2. The commander gets invoked dynamically on events or when a low-level horizon is reached. Events are characterized as:
* any aircraft got destroyed (by shooting or hitting the map boundary);
* an agent approaches the map boundary (\(d<6km\));
* an agent _or_ an opponent is in a favorable situation as described in Eq. (3);
* two opponents are close and face an agent (\(d<5km\), \(\alpha_{ATA,a}<30^{\circ}\)).
Training results are in Fig. 10. The learning curves of all models quickly saturate (Fig. 10(a)), but the GRU module improves the result by storing the last state. We choose this model as our commander \(\pi_{h}\) and evaluate its performance. We infer that the commander \(\pi_{h}\) improves combat performance for small team sizes. However, as the number of aircraft increases in the evaluation scenarios (Fig. 10(b)), the result tends to a draw and an equal win-to-loss ratio, which we would also expect if no commander were involved, since both teams are equally equipped (except for the number of AC1 and AC2 per team). The reason for this is the partial observability of only three opponents around an agent. Another reason might be the stochasticity involved, where an opponent might suddenly switch from fight to escape, affecting the coordination of the commander. A further aspect for not achieving superiority in larger team sizes is the number of AC1 and AC2 per team, since AC1 has stronger combat performance, as shown in Fig. 7(a). An example of a 2vs4 combat scenario with the commander involved is shown in Fig. 9(a), where two opponents could successfully be destroyed within the time horizon.
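The event-triggered invocation of the commander described in this subsection can be summarized as follows (a minimal sketch in Python; the argument names are ours and only mirror the listed trigger conditions):

```python
def should_invoke_commander(any_aircraft_destroyed, dist_to_boundary_km,
                            favorable_agent, favorable_opponent,
                            two_close_opponents_facing_agent,
                            low_level_horizon_reached):
    """Decide whether the high-level commander policy is queried this step.

    `two_close_opponents_facing_agent` stands for "two opponents within 5 km
    with alpha_ATA,a < 30 deg"; the favorable-situation flags follow Eq. (3).
    """
    events = (
        any_aircraft_destroyed,                 # by shooting or hitting the map boundary
        dist_to_boundary_km < 6.0,              # agent approaches the map boundary
        favorable_agent or favorable_opponent,  # favorable situation as in Eq. (3)
        two_close_opponents_facing_agent,
    )
    return low_level_horizon_reached or any(events)


# Example: nothing special happened, but the low-level horizon is reached.
print(should_invoke_commander(False, 12.0, False, False, False, True))  # -> True
```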
## V Conclusion

We presented a hierarchical, heterogeneous, multi-agent reinforcement learning procedure for air-to-air combat maneuvering. The key ideas are the use of curriculum learning, fictitious self-play and a sophisticated neural network architecture. The empirical validation shows the promising potential of our design. Our agents can effectively engage in air-to-air combat with solid resilience, while the commander has difficulties in successfully coordinating large team configurations. In future work, we intend to further improve the hierarchical structure for better tactical decisions, regardless of the team size. We also plan to incorporate a dedicated communication mechanism, as well as to switch to 3D aircraft models for a more realistic environment and more accurate aircraft dynamics.
2309.11521
Incentivized Third Party Collateralization for Stablecoins
Stablecoins, which are primarily intended to function as a global reserve of value are insubstantial in their design and present many failure points. The primary mechanism to enable these coins to hold on to a fixed value is by backing them with collateral. Fiat collateralized stablecoins require users to trust a centralized entity, which breaks the total concept of decentralization. Crypto collateralized stablecoins have issues involving high collateral requirements and introduces risks of auto-liquidation. In this paper we aim to propose an alternative architecture for the creation of a functional and secure stablecoin.
Souradeep Das, Revathi Venkataraman
2023-09-19T19:09:52Z
http://arxiv.org/abs/2309.11521v1
# Incentivized Third Party Collateralization for Stablecoins

###### Abstract

Stablecoins, which are primarily intended to function as a global reserve of value, are insubstantial in their design and present many failure points. The primary mechanism to enable these coins to hold on to a fixed value is by backing them with collateral. Fiat-collateralized stablecoins require users to trust a centralized entity, which breaks the total concept of decentralization. Crypto-collateralized stablecoins have issues involving high collateral requirements and introduce risks of auto-liquidation. In this paper we aim to propose an alternative architecture for the creation of a functional and secure stablecoin.

keywords: blockchain, automation, cryptocurrency, smart contracts, cryptography, game theory

## 1 Introduction

Stablecoins are cryptocurrencies designed to eliminate volatility by backing them with an asset or a currency that remains stable. To be more specific, a stablecoin maintains its value in accordance with a fiat, government-backed currency. These special-purpose tokens have garnered a lot of interest and appreciation due to their non-volatility, since volatility has been a predominant problem in the cryptocurrencies of today. However, the stablecoin designs which exist currently are insubstantial and present several failure points, limiting their general acceptability. Fiat-collateralized stablecoins require users to trust a central entity, which breaks the entire concept of decentralization and self-reliance. Non-collateralized stablecoins implementing seigniorage-share approaches are still new, and it is questionable whether users will put their trust in them. Slightly better off are the crypto-collateralized stablecoins, which, although providing a decentralized ecosystem, are surprisingly not very efficient in maintaining stability and, moreover, have issues in the form of high collateral requirements and liquidation risks during market downturns. For instance, crypto-backed stablecoins like Dai from Maker have achieved the feat of decentralization [1], but still contain limitations in their design principles in the form of:
* a requirement of 1.5x collateral;
* collateral that can be auto-liquidated.
All these aspects give rise to a dire need and opportunity to improve this piece of technology for global reach and acceptance.

## 2 State of Existing Problems

### Artificial Markets

The market is dependent on supply-demand relationships. As several underlying cryptocurrencies supporting stablecoins are native coins in their own chain, proof of work is the way to mine and produce new coins for the market. While the generation of such new coins is adjusted to happen at regular intervals, and in turn the mining difficulty of such blockchains is adjusted, this still produces an artificial market on which to base the value of a stablecoin. This calls for a separation of concerns between a stablecoin and the tokenomics of its underlying cryptocurrency [2].

### Market Manipulation

Market manipulation techniques allow any entity to try and manipulate or alter the price of the stablecoins. In stark contrast to the effects of manipulating general cryptocurrency markets, controlling or influencing stablecoins could destroy the functionality of an entire ecosystem. This could result in an improper game playing out and could take down the underlying value and market instantly.
Furthermore, the existential risk of market collapse also destroys the whole concept of a decentralized cryptocurrency. ## 3 An Update to the Architecture These existing problems all direct to a new form of support the architecture of stablecoins require. One way of preserving all the benefits while solving the problems is by creating an additional collateral backing from third-party investments. Broadly speaking, a crypto-collateralized stablecoin where the collateral backing is from third party investments would allow a 1:1 collateralization ratio, while also enabling the use of the surplus investor funds as a security if/when the price of Ethereum goes down. The stability, operation and incentives of the system is to be taken care of by a competitive investment process. ### The process Users deposit (ETH) and receive an equivalent amount of stablecoins in return of the collateral provided. However, unlike DAI and other crypto backed stablecoins, no extra amount of ether over the stablecoin amount has to be provided by the user. After the exchange, the stablecoin amount will need to be secured against volatility in the underlying crypto (Ethereum) by an additional group of people. This external crowdfunded pool ensures a transfer of the liability of the volatile collateral asset to the pool funders instead of the stablecoin buying user. The members of this pool, in return of securing the stablecoin are entitled for the profit (or loss, if any) in denominations of the underlying crypto and this helps keep the stablecoins stable. ### Incentives for Investment The algorithm proposed later, provides better returns to the investors for providing a portion of the collateral on the external pool, than just holding on to their Ethereum (ETH)[3]. The investment made by the investor is returned (along with rewards) when the original stablecoins are redeemed. The amount returned to each of the investors is not linear to the fraction of collateral they provide, but exponential depending on their standing. The algorithm further ensures incentive generation on the external pool is self-sufficient in its distribution. While the external pool makes up for any missing user collateralization, the funds pooled in on the external pool are the same funds redistributed to the investors in ratios determined by the algorithm. ### External Collateral provision from the Pool Providing the collateral for the stablecoin through a method of external funding backed by incentives offers for a structure which has two independent parts that function together to make the system functional. The external collateral pool being a separate entity for controlling the stability ensures that the instability of the market is not a concern for the proper functioning of the application. Separating the collateral pool and disconnecting all relations with the other aspects of the stablecoin structure also allow the system to adapt to an alternate method of gathering collateral, if required. This increases the security aspect and makes sure that the system has a 100% uptime. ## 4 Explaining competition ### Anonymity to enforce competition The portions filled by other investors are anonymous. This provides a competitive glance to filling up the maximal portion of the collateral amount. After all the investors are done filling up the collateral, an internal threshold is formed which partitions the point of profit/loss on either side. ### Risk/Reward Gains The anonymity brings in a concept of a strategic game built into the system. 
The collateral providers are incentivized to maximize their lending capacity while weighing the associated risk. Since the rewards are directly related to the amounts invested, and no investor knows the amounts pooled in by the others, each investor will try to provide the maximal amount into the pool. The only risk factor is the criterion for slashing the funds if a volatility issue arises.

### Keeping the system alive

The competition among the lenders ensures a faster disbursal rate and provides an almost instantaneous way of raising the capital/collateral. While crypto-collateralized stablecoins currently in existence provide a method to collateralize instantly, this mechanism comes close to that as the scale gets bigger. This forms a reinforcing cycle in which the speed of disbursal increases with competition, which is further increased by trust in the system.

## 5 Explaining the Money-Money Algorithm

### Designing the algorithm

The algorithm takes care of the return distributions to the investors of the collateral pool. A strong and intelligent mechanism would motivate the investors and borrowers (the token buyers) to play a strategic game together and keep the system functional. The algorithm tends to provide returns in an exponential form with respect to the amount invested, or the rate of involvement in the system.

### Description of the design

The algorithm distributes the MARGIN (current ETH value minus the stablecoin amount purchased) in an exponential way proportional to the amount invested. Let
* incentive[i] = incentive of the individual investor,
* Lsum = total amount of collected incentives,
* filled[i] = portion filled by the i-th person,
* T = total limit,
* A = amount cumulated.

Figure 1: Collateral Pool structure

The intermediary incentive of each individual is
\[\mathrm{incentive}[i]=\frac{e^{(\mathrm{filled}[i]-T)}}{\mathrm{filled}[i]} \tag{1}\]
The total amount of incentives is
\[L_{\mathrm{sum}}=\sum_{i=1}^{n}\mathrm{incentive}[i] \tag{2}\]
The individual gain fraction of each investor is
\[\mathrm{finalIncentiveFractional}[i]=\frac{\mathrm{incentive}[i]}{L_{\mathrm{sum}}} \tag{3}\]
The return from the collateral building is
\[\mathrm{finalIncentive}[i]=\frac{\mathrm{incentive}[i]}{L_{\mathrm{sum}}}\cdot A \tag{4}\]

### Optimal Strategy

The core of the algorithm is distributing pooled token amounts in relation to the parts filled by each investor. As each investor tries to fill maximal portions of the pool, the returns generated are always higher for all investors above the inflection point. Hence, the optimal strategy is to fill the highest fraction of the collateral pool amount to get the most significant profits (if any). The competition among the investors for landing on the better side of the curve (filling the largest parts of the collateral) will also mean faster fulfillment for potential collateral balancing. To further elaborate the exact advantages for the investors, we list the two cases (as illustrated in Fig. 2):
* _When ETH prices go up, people above the optimal point receive a higher profit than what they could have got by simply holding the ETH._
* _When ETH prices go down, the loss is significantly less compared to what they would have incurred by simply holding the ETH, given they are above the optimal point._
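To make the distribution rule concrete, the following is a small numerical sketch of Eqs. (1)-(4) (plain Python; the function name and the example figures are ours and purely illustrative):

```python
import math

def distribute_returns(filled, total_limit, amount_cumulated):
    """Split the cumulated amount A among investors according to Eqs. (1)-(4).

    filled           : list of portions filled by each investor (filled[i] > 0)
    total_limit      : T, the total limit of the collateral pool
    amount_cumulated : A, the margin to be distributed
    """
    # Eq. (1): intermediary incentive of each investor
    incentive = [math.exp(f - total_limit) / f for f in filled]
    # Eq. (2): total of all intermediary incentives
    l_sum = sum(incentive)
    # Eq. (3): fractional gain of each investor
    fractions = [inc / l_sum for inc in incentive]
    # Eq. (4): actual return paid out of the cumulated amount
    return [frac * amount_cumulated for frac in fractions]

# Illustrative example: three investors fill 5, 3 and 2 units of a 10-unit pool.
payouts = distribute_returns(filled=[5.0, 3.0, 2.0], total_limit=10.0,
                             amount_cumulated=10.0)
print([round(p, 2) for p in payouts])  # largest contributor gets the largest share
```

In this toy example the investor who filled the largest portion of the pool receives a disproportionately large share of the cumulated amount, which is exactly the competitive pressure the algorithm is designed to create.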
## 6 Statistics

The proposed stablecoin structure is non-linear in the returns offered against investments. Because of the exponential structure of the profits, there exists a threshold which has to be crossed in order to attain the profits. The profits are what force the lenders to compete for the extra rewards, which in turn motivates everyone to increase their contributions to the pool and increases the stablecoin disbursal capacity. Sample crowdfunded lending data was taken, and results were compared to find incentives to use our proposed model over a flat interest-distribution model. The red graph (Fig. 2) plots investments against returns for simply holding on to a cryptocurrency like Ether (ETH), while the blue graph plots the performance of the proposed platform. The point of intersection (at the center) marks the threshold that has to be crossed to land the extra rewards. The results indicate that, whether it is a bull or a bear crypto market, investing in the collateral pool will mean greater profits or smaller losses for investors, and simultaneously strengthens the stablecoin architecture.

Figure 2: Comparing Incentives vs Investments

## 7 Additional Integrations

### Introduction of a Secondary token

A secondary token could be introduced to the ecosystem which could bind both tokens together when maintaining stability. The secondary token could also be traded in a similar manner, with the differences only being in the usage, to
* adjust the supply and demand manually by investors to maintain the stability of the primary currency;
* be used as a token to prevent or distribute losses in a better way in situations of high-risk volatility.
The potential of two tokens working together in maintaining stability was first furnished through the inflation-adjusting 'maker' token in the DAI ecosystem.

### Adding a lending service to integrate with the ecosystem

The stablecoin can be considered to be a loan itself, since stablecoins are provided in exchange for collateral. The ecosystem already uses a method to provide collateral by third parties or entities who are not acquiring the stablecoins. The function of the collateral can be expanded to incorporate a lending structure [4] to make up for the losses caused by market volatility. The existing tokens in the collateral pool can serve as assets that can be lent to make up for losses (during market downturns) in the system. The utility of the collateral pool can also be shared with a microfinance entity, or a decentralized lending protocol. This, however, could bring up some issues, namely:
## 8 Implementation Discussion The primary components of the ecosystem include:- * Ethereum Smart Contracts * ERC-20 token * Wallet * Interest distribution Algorithm The stablecoin has been designed to be functional at the Ethereum main network as an ERC-20 token standard. The token was created using solidity smart contracts and deployed to the Ethereum network. A secondary token can be added to maintain or further improve the stability of the existing token. The secondary token is the backing token and could also be an investment medium for several third parties. Smart contracts are the piece of code that lives on the world-computer-the Ethereum Blockchain and are the way to implement these tokens on the Ethereum framework. The process of using the token i.e. buying and transferring can be done by interacting with the contract. Instead of direct interactions with the contract, users could also prefer using a wallet service to avoid directly using the function calls of the contract. The application has been tested by creating a wallet and complementary scripts that provide a medium to interact and perform the necessary functions. ## 9 Conclusion This solution provides a reliable and unique strategy to encourage more people into securing a stablecoin architecture and maintain its peg to the value of a fiat currency. This is enforced by the competition in pooling assets as the collateral, and incentives in the form of meaningful returns from the system. This not only helps in creating a stable ecosystem and currency structure but also increases the economic activity and currency flow [6]. Additionally, the platform also ensures total functionality in all the discovered scenarios with proper integrity and completeness throughout all the components in this proposed system. The proposed system can hence be easily integrated to an existing stablecoin structure or be utilized afresh.
2301.13514
Fourier Sensitivity and Regularization of Computer Vision Models
Recent work has empirically shown that deep neural networks latch on to the Fourier statistics of training data and show increased sensitivity to Fourier-basis directions in the input. Understanding and modifying this Fourier-sensitivity of computer vision models may help improve their robustness. Hence, in this paper we study the frequency sensitivity characteristics of deep neural networks using a principled approach. We first propose a basis trick, proving that unitary transformations of the input-gradient of a function can be used to compute its gradient in the basis induced by the transformation. Using this result, we propose a general measure of any differentiable model's Fourier-sensitivity using the unitary Fourier-transform of its input-gradient. When applied to deep neural networks, we find that computer vision models are consistently sensitive to particular frequencies dependent on the dataset, training method and architecture. Based on this measure, we further propose a Fourier-regularization framework to modify the Fourier-sensitivities and frequency bias of models. Using our proposed regularizer-family, we demonstrate that deep neural networks obtain improved classification accuracy on robustness evaluations.
Kiran Krishnamachari, See-Kiong Ng, Chuan-Sheng Foo
2023-01-31T10:05:35Z
http://arxiv.org/abs/2301.13514v1
# Fourier Sensitivity and Regularization of Computer Vision Models ###### Abstract Recent work has empirically shown that deep neural networks latch on to the Fourier statistics of training data and show increased sensitivity to Fourier-basis directions in the input. Understanding and modifying this Fourier-sensitivity of computer vision models may help improve their robustness. Hence, in this paper we study the frequency sensitivity characteristics of deep neural networks using a principled approach. We first propose a _basis trick_, proving that unitary transformations of the input-gradient of a function can be used to compute its gradient in the basis induced by the transformation. Using this result, we propose a general measure of any differentiable model's _Fourier-sensitivity_ using the unitary Fourier-transform of its input-gradient. When applied to deep neural networks, we find that computer vision models are consistently sensitive to particular frequencies dependent on the dataset, training method and architecture. Based on this measure, we further propose a _Fourier-regularization_ framework to modify the Fourier-sensitivities and frequency bias of models. Using our proposed regularizer-family, we demonstrate that deep neural networks obtain improved classification accuracy on robustness evaluations. ## 1 Introduction While deep neural networks (DNN) achieve remarkable performance on many challenging image classification tasks, they can suffer significant drops in performance when evaluated on out-of-distribution (o.o.d.) data. Intriguingly, this lack of robustness has been partially attributed to the frequency characteristics of data shifts at test time in relation to the frequency sensitivity characteristics of the model (Yin et al., 2019; Jo and Bengio, 2017). It is known that distinct spatial frequencies in images contain features at different spatial scales: low spatial frequencies (LSF) carry global structure and shape information whereas high spatial frequencies (HSF) carry local information such as edges and borders of objects (Kauffmann et al., 2014). Moreover, spatial frequencies may also differentially processed in the brain's visual cortex to learn features at different scales (Appendix A). We find that when information in frequencies that a model relies on is corrupted or destroyed, performance can suffer. Hence, understanding the frequency sensitivity of a DNN can help us characterise and improve them. DNNs have been demonstrated to be sensitive to Fourier-basis directions in the input (Tsuzuku and Sato, 2019; Yin et al., 2019) both empirically and using theoretical analysis of linear convolutional networks (Tuszuku & Sato, 2019). In fact, the existence of so-called "universal adversarial perturbations" (Moosavi-Dezfooli et al., 2017), simple semantics-preserving distortions that can degrade models' accuracy across inputs and architectures, is attributed to this structural sensitivity. Yin et al. (2019) also showed that many natural and digital image corruptions that degrade model performance may also be targeting this vulnerability. Hence, understanding and modifying Fourier-sensitivity is a promising approach to improve model robustness. While this problem has been studied empirically, the precise definition and measurement of a computer vision model's _Fourier-sensitivity_ still lacks a rigorous approach across studies. In addition, no principled method has been proposed to study and modify the Fourier-sensitivity of a model. 
Existing works have applied heuristic filters on convolution layer parameters (Wang et al., 2020; Saikia et al., 2021) and input data augmentations (Yin et al., 2019) to modify a model's frequency sensitivity. In this work, we first propose a novel _basis trick_, proving that unitary transformations of a function's gradient can be used to compute its gradient in the basis induced by the transformation. Using this result, we propose a novel and rigorous measure of a DNN's Fourier-sensitivity using its input-gradient represented in the Fourier-basis. We demonstrate that DNNs are consistently sensitive to particular frequencies that are dependent on dataset, training method and architecture. This observation confirms that DNNs tend to rely on some frequencies more than others, which has implications for robustness when Fourier-statistics change at test time. Further, using our proposed measure, which is differentiable with respect to model parameters, we propose a framework of Fourier-regularization to directly modify the Fourier-sensitivities and frequency bias of a model. We show in extensive empirical evaluations that Fourier-regularization can indeed modify frequency characteristics of computer vision models, and can improve the generalization performance of models on o.o.d. datasets where the Fourier-statistics are shifted. In summary, our main contributions are: 1. We propose a _basis trick_, proving that unitary transformations of the input-gradient of any function can be used to compute its gradient in the basis induced by the transformation 2. We propose a novel and rigorous measure of a model's _Fourier-sensitivity_ based on the unitary Fourier-transform of its input-gradient. We empirically show that Fourier-sensitivity of a model is dependent on the dataset, training method and architecture 3. We propose a framework of _Fourier-regularization_ to directly induce specific Fourier-sensitivities in a computer-vision model, which modifies the frequency bias of models and improves generalization performance on out-of-distribution data where Fourier-statistics are shifted ## 2 Related work ### Frequency perspectives of robustness Yin et al. (2019); Tsuzuku & Sato (2019) characterised the Fourier characteristics of trained CNNs using perturbation analysis of their test error under Fourier-basis noise. They showed that a naturally trained model is most sensitive to all but low frequencies whereas adversarially trained (Madry et al., 2018) models are sensitive to low-frequency noise. They further showed that these Fourier characteristics relate to model robustness on corruptions and noise, with models biased towards low frequencies performing better under high frequency noise and vice versa. Abello et al. (2021) took a different approach by measuring the impact of removing individual frequency components from the input using filters on accuracy, whereas Ortiz-Jimenez et al. (2020) computed the margin in input space along basis directions of the discrete cosine transform (DCT). Wang et al. (2020) made observations about the Fourier characteristics of CNNs in different training regimes including standard and adversarial training by evaluating accuracy on band-pass filtered data. Contrary to these empirical approaches, we propose a rigorous measure of a model's _Fourier-sensitivity_. ### Modifying frequency sensitivity of models Yin et al. (2019) observed that adversarial training (Madry et al., 2018) and Gaussian noise augmentation can induce a low-frequency sensitivity on some datasets. 
Wang et al. (2020) proposed smoothing convolution filter parameters to induce a low-frequency sensitivity in models. We note that such techniques can, in principle, be undone by subsequent layers of a network. Shi et al. (2022) proposed similar techniques in the context of deep image priors applied to generative tasks. In addition, data augmentations such as Gaussian noise do not provide precise control over the Fourier-sensitivity of a model. In this work, we propose a _Fourier-regularization_ framework to precisely modify the Fourier-sensitivity of any differentiable model. ### Jacobian regularization Methods that regularize the input-Jacobian of a model can be broadly classified into two categories: methods that minimize the norm of the input-Jacobian, and those that regularize its direction or directional derivatives at the input. Drucker & Le Cun (1991) proposed a method that penalized the norm of the input-Jacobian to improve generalization; more recently, this has been explored to improve robustness to adversarial perturbations (Ross & Doshi-Velez, 2018; Jakubovitz & Giryes, 2018; Hoffman et al., 2019). Simard et al. (1992) proposed "Tangent Prop", which minimized directional derivatives of classifiers in the direction of local input-transformations (e.g. rotations, translations; called "tangent vectors") to reduce sensitivity to such transformations. Czarnecki et al. (2017) proposed Sobolev training of neural networks to improve model distillation by matching the input-Jacobian of the original model. Regularizing the direction of the input-Jacobian has also been used to improve adversarial robustness (Chan et al., 2020). In the present work, we regularize frequency components in the input-gradient to improve performance on out-of-distribution tasks. As such, we are interested in modifying the input-gradient along certain directions instead of its total norm. ## 3 Proposed methods **Preliminaries:** Consider an image classification task with input \(x\), labels \(y\), and standard cross-entropy loss function \(\mathcal{L}_{\text{CE}}\). Let \(f\) denote any differentiable model that outputs a scalar loss, \(\mathcal{F}(\cdot)\) the unitary discrete Fourier transform (DFT), \(\mathcal{F}^{-1}(\cdot)\) its inverse, and \(\mathcal{F}^{-1^{*}}(\cdot)\) the adjoint of the inverse-Fourier transform, and let \(x_{f}\) denote the Fourier-space representation of the input, i.e. \(x_{f}=\mathcal{F}(x)\). We denote the input-gradient in the standard basis as \(J_{f}(x)\), and \(J_{f}(x_{f})\) as the input-gradient with respect to the input in the Fourier-basis. Let \(N\) be the height of input images (although not necessary, all images used in this work are square). **DFT notation:** The zero-shifted (rearrange DC component to centre and high frequencies further from center) 2D-DFT of the input-gradient is denoted \(F\). Since the input-gradient typically has three color channels, they are averaged before computing the 2D-DFT. Fourier coefficients in \(F\) are complex numbers with real and imaginary components; \(F(u,v)=Real(u,v)+i\times Imag(u,v)\), where \((u,v)\) are indices of coefficients. The _power_ in a coefficient is its squared amplitude i.e. \(P(u,v)=|F(u,v)|^{2}=Real(u,v)^{2}+Imag(u,v)^{2}\) and the matrix of powers is denoted \(P\) (power-matrix). 
Each coefficient has a radial distance \(r(u,v)\) from the centre of the matrix, \(r(u,v)=d((u,v),(c_{u},c_{v}))\), where \((c_{u},c_{v})\) denotes the center of \(P\) and \(d(\cdot,\cdot)\) is Euclidean distance rounded to the nearest integer. Distinct radial distances of coefficients in the matrix are the set of integers \(\{1,\dots,N/\sqrt{2}\}\) and correspond to low to high spatial frequencies, the highest frequency being limited by the Nyquist-frequency. We denote \(P_{Total}\) as the total power in \(P\), excluding the zero-frequency coefficient, i.e. \(P_{Total}=\sum_{r(u,v)>1}P(u,v)\). Similarly, we define \(\tilde{P}_{Total}\) as the total power in \(P\) excluding the zero-frequency coefficient _and_ coefficients with radial distance \(r(u,v)>N/2\), i.e. coefficients outside the largest circle inscribed in \(P\); \(\tilde{P}_{Total}=\sum\limits_{1<r(u,v)<N/2}P(u,v)\) (see Figure 1 for illustration). We denote \(P_{k}\) as the power at radial distance \(k\) normalized by \(P_{Total}\), \(P_{k}=\frac{1}{P_{Total}}\sum\limits_{r(u,v)=k}P(u,v)\), and \(\tilde{P}_{k}\) as the power at radial distance \(k\) normalized by \(\tilde{P}_{Total}\), \(\tilde{P}_{k}=\frac{1}{\tilde{P}_{Total}}\sum\limits_{r(u,v)=k}P(u,v)\).

Figure 1: Power-matrix P of input-gradient.

### _Basis Trick:_ Unitary transformations of the input-gradient

In this section, we prove that unitary transformations of the input-gradient of a function provide its gradient in the new basis induced by the transformation. We term this the _basis trick_ and use it to compute the Fourier-sensitivity of a model using the Fourier-transform of its input-gradient. To illustrate the _basis trick_, consider the computation graph in Figure 2 where the input \(x\) in the standard basis is mapped to an output via a function \(f\). We introduce an implicit operation (shown in red) that maps the Fourier-space representation of the input to the standard basis via the inverse Fourier-transform, i.e. \(x_{f}\xrightarrow{\mathcal{F}^{-1}}x\). In order to compute the input-gradient with respect to the input in the Fourier-basis, \(J_{f}(x_{f})\), we must differentiate through this _implicit_ operation in the forward graph. Since the inverse-Fourier transform is a unitary operator, we have that \(\mathcal{F}(J_{f}(x))=J_{f}(x_{f})\), due to the chain rule (see Corollary 1 below). Hence, even though we do not explicitly compute the Fourier-space representation of the input, this shows that the Fourier transform of the input-gradient provides the gradient of the model with respect to the input in Fourier-space. Analogous results can be obtained for other unitary operators such as the discrete cosine transform (DCT) and discrete wavelet transform (DWT) (see Proposition 1 below). In addition, this approach can be extended to \(n\)-dimensional input, e.g. time-series or 3D signals, by using the \(n\)-dimensional Fourier-transform. We formalize the _basis trick_ below as a proposition and its corollary when the unitary operator is the Fourier-transform.

**Definition 1** (Unitary Operators).: _A bounded linear operator \(U:H\to H\) on a Hilbert space \(H\) is said to be unitary if \(U\) is bijective and its adjoint \(U^{*}=U^{-1}\). Moreover, if \(U\) is unitary, \(U^{-1}\) is also a bounded and unitary linear operator._

**Lemma 1** (Generalized Chain Rule).: _Let \(f\) be a scalar valued function of a vector \(x\), and \(A\) be a bijective linear operator such that \(x=Ax_{a}\). Then, \(A^{*}(J_{f}(x))\) is the gradient of \(f\) with respect to \(x_{a}\), i.e._
\(J_{f}(x_{a})=A^{*}(J_{f}(x))\), where \(A^{*}\) is the adjoint of \(A\)._

**Proposition 1** (Basis Trick).: _Let \(f\) be a scalar valued function of a vector \(x\), and \(A\) be a bijective linear operator such that \(x=Ax_{a}\). Then, the gradient vector of \(f\) w.r.t. \(x_{a}\), \(J_{f}(x_{a})=A^{-1}(J_{f}(x))\) iff \(A\) is unitary._

_Proof._ Since \(x=Ax_{a}\), \(J_{f}(x_{a})=A^{*}(J_{f}(x))\) due to Lemma 1. Since \(A^{*}=A^{-1}\) iff \(A\) is unitary (Definition 1), we have that \(J_{f}(x_{a})=A^{-1}(J_{f}(x))\) iff \(A\) is unitary.

**Corollary 1** (Fourier Basis Trick).: _If \(A=\mathcal{F}^{-1}\), the unitary inverse-Fourier operator such that \(x=\mathcal{F}^{-1}x_{f}\) with \(x_{f}\) being the Fourier-basis representation of \(x\), we have \(J_{f}(x_{f})=\mathcal{F}(J_{f}(x))\) where \(\mathcal{F}=(\mathcal{F}^{-1})^{-1}\)._

### Fourier-sensitivity of computer vision models

In this section, we define the _Fourier-sensitivity_ of any differentiable model using its input-gradient represented in the Fourier-basis. Fourier-sensitivity is a measure of the relative magnitudes of a model's input-gradient with respect to different frequency bands in the input spectrum. As shown in Section 3.1, the input-gradient of a function with respect to the Fourier-basis can be computed by the unitary Fourier-transform of \(J_{f}(x)\). To enable interpretation of the complete input-gradient in the Fourier-basis (see Appendix C.5 for examples), we summarize the information over frequency bands as shown in Figure 3. The Fourier-sensitivity \(f_{SFS}(x,y)\) of a model with respect to an individual input \((x,y)\) is defined as
\[f_{SFS}(x,y)=[P_{1},\dots,P_{N/\sqrt{2}}] \tag{1}\]
where \(P_{k}\) is the proportion of total power in Fourier coefficients at radial distance \(k\) in the power matrix \(P\) of \(J_{f}(x_{f})\).

Figure 2: Fourier-transform of input-gradient is the gradient with respect to input in Fourier-space, i.e., \(\mathcal{F}(J_{f}(x))=J_{f}(x_{f})\). Symbols in red represent the input in Fourier-space and need not be explicitly computed.
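As an illustration of how Eq. (1) can be evaluated in practice, the following is a minimal PyTorch sketch (the function name and the handling of the few corner coefficients beyond radius \(N/\sqrt{2}\) are our own choices; the authors' exact procedure is Algorithm 1 in Appendix B.1):

```python
import torch

def fourier_sensitivity(model, x, y, loss_fn):
    """Per-input Fourier-sensitivity f_SFS(x, y) of Eq. (1) -- a minimal sketch.

    By the basis trick (Corollary 1), the unitary 2D-DFT of the input-gradient
    J_f(x) equals the gradient w.r.t. the input in the Fourier basis, J_f(x_f).
    x is assumed to have shape (1, C, N, N) with square images.
    """
    x = x.clone().requires_grad_(True)
    loss = loss_fn(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]            # J_f(x), shape (1, C, N, N)
    grad = grad.mean(dim=1).squeeze(0)                # average colour channels -> (N, N)

    F = torch.fft.fftshift(torch.fft.fft2(grad, norm="ortho"))  # unitary, zero-shifted DFT
    P = F.abs() ** 2                                   # power matrix

    N = P.shape[-1]
    c = N // 2                                         # DC location after fftshift
    ys, xs = torch.meshgrid(torch.arange(N, dtype=torch.float32),
                            torch.arange(N, dtype=torch.float32), indexing="ij")
    r = torch.sqrt((ys - c) ** 2 + (xs - c) ** 2).round().long()  # radial distances

    k_max = int(N / 2 ** 0.5)
    band_power = torch.zeros(k_max + 1)
    # sum power into radial bands; corner coefficients slightly beyond k_max are
    # folded into the last band (a simplification of ours)
    band_power.scatter_add_(0, r.clamp(max=k_max).flatten(), P.flatten())
    total = band_power[1:].sum()                       # exclude the zero-frequency band
    return band_power[1:] / total                      # [P_1, ..., P_{N/sqrt(2)}]
```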
We now define \(\mathcal{L}_{\text{SFS}}\) for four instances of this regularizer: \(SFS\in\{LSF,MSF,HSF,ASF\}\). Low-spatial-frequency (lsf) regularization trains a model to be insensitive to medium and high spatial frequencies, medium-spatial-frequency (msf) regularization trains a model to be insensitive to low and high spatial frequencies, and high-spatial-frequency (hsf) regularization trains a model to be insensitive to low and medium spatial frequencies. These are achieved by penalizing the proportion of power, \(P_{k}\), in the frequencies we wish the model to be insensitive to. All-spatial-frequency (asf) regularization trains a model to be equally sensitive to all frequency bands. The motivation behind asf regularization model is to encourage a model to be sensitive to multiple frequency bands instead of being concentrated in a small frequency range. Hence, the asf-regularizer loss is defined as the negative entropy of the distribution of power over frequency bands. The definitions of _low_, _medium_ and _high_ frequency ranges are based on equally dividing the radius of the largest circle inscribed in the power-matrix \(P\) into three equal parts (Figure 0(b)). For ASF-regularization, very high frequency bands, i.e. \(r(u,v)>N/2\) are excluded, which is reflected in the \(\tilde{P}_{k}\) terms. \(\tilde{P}_{k}\) is the proportion of power in frequency bands within the largest circle inscribed in the power-matrix, P. Concretely, \(\mathcal{L}_{\text{SFS}}\) is defined for each of these three cases as follows: \begin{tabular}{c c c c} \hline \hline & **LSF** & **MSF** & **HSF** & **ASF** \\ \hline \(\mathcal{L}_{\text{SFS}}\) & \(\sum\limits_{k>N/6}P_{k}\) & \(\sum\limits_{k<N/6,k>N/3}P_{k}\) & \(\sum\limits_{k<N/3}P_{k}\) & \(\sum\limits_{k=1}^{N/2}\tilde{P}_{k}\log\tilde{P}_{k}\) \\ \hline \hline \end{tabular} Figure 3: Computing _Fourier-sensitivity_. The input-gradient of the model is Fourier-transformed to obtain sensitivities with respect to frequencies. _Fourier-sensitivity_ is then the vector with components being the proportion of total power in each circular frequency band. ## 4 Experiments We first study below the Fourier-sensitivity of various architectures and training methods across datasets (Section 4.2). We found that both training method and architecture can have a significant impact on Fourier-sensitivity. We then identify an interesting connection between adversarial attacks and Fourier-sensitivity. Further, we study the effects of Fourier-regularization on representation learning (Section 4.3) as well as real o.o.d. benchmarks (Section 4.4). ### Experimental setup **Fourier-sensitivity analysis:** Fourier-sensitivity was computed by averaging across 1000 randomly selected validation samples for all datasets and shaded areas in plots represent two standard-deviations. We computed the Fourier-sensitivity of pre-trained ImageNet architectures obtained from _PyTorch Image Models_(Wightman, 2019). On CIFAR10 and CIFAR100(Krizhevsky & Hinton, 2009), we trained all models for 150 epochs using stochastic gradient descent (SGD) with momentum (0.9), an initial learning rate of 0.1 decayed by a factor of 10 every 50 epochs, weight decay parameter equal to \(5\times 10^{-4}\) and batch size equal to 128. On SVHN (Netzer et al., 2011), we trained models for 40 epochs using Nesterov momentum with an initial learning rate of 0.01 and momentum parameter 0.9. 
The training batch size was 128, L2 regularization parameter was \(5\times 10^{-4}\) and learning rate was decayed at epochs 15 and 30 by a factor of 10. The following standard data augmentations - random-crop, random-horizontal-flip, random-rotation, and color-jitter - were used during training. **Fourier-regularization experiments:** We demonstrate Fourier-regularization using ResNet50, EfficientNetB0, MobileNetV2 and DenseNet architectures. To evaluate the proposed regularizer on high-resolution images, we also trained models on a subset of ImageNet derived from twenty-five randomly chosen classes with images resized to 224 \(\times\) 224 (ImageNet-subset). They were trained with SGD till convergence (lr=0.1, weight decay=\(1\times 10^{-4}\); lr multiplied by 0.1 every 50 epochs); all models converged within 200 epochs. We trained Fourier-regularized models using \(\lambda_{\text{SFS}}=0.5\), which was set as the smallest value that achieved the target Fourier-sensitivity computed on validation samples independently of performance on target distribution data. We benchmarked against methods that have been proposed to modify the frequency sensitivity of models, such as adversarial training and Gaussian noise augmentation to induce low-frequency sensitivity (Yin et al., 2019).

Figure 4: Fourier-sensitivity of (a),(b),(c) standard and adversarial training, (d),(e),(f) standard and Gaussian noise augmented training on ImageNet, CIFAR10 and SVHN with ResNet50 backbone.

For these methods, we used hyperparameter values most popular for training robust models in previous works. For adversarial training (AT), we used standard PGD \(\ell_{2}\) attacks (\(\epsilon=1\) for CIFAR10/CIFAR100 and \(\epsilon=3\) for ImageNet-subset, attack-steps = 7, attack-lr = \(\epsilon/7\)). For Gaussian noise training, we added i.i.d. Gaussian noise drawn from \(\mathcal{N}(0,\sigma^{2})\) to each pixel during training (\(\sigma=0.1\)). We used the _robustness_ (Engstrom et al., 2019) library for training.

### Fourier-sensitivity analysis

#### 4.2.1 Fourier-sensitivity is dependent on dataset, training and architecture

We visualized the _Fourier-sensitivity_ of models trained on ImageNet, SVHN and CIFAR10 (Figure 4; axes vary across datasets due to different image sizes). We observed that models are sensitive to some frequencies more than others and that this bias is consistent across samples (shaded areas represent two standard deviations across samples). Standard trained ImageNet models are in general sensitive to a wide range of the frequency spectrum with peak sensitivity to mid-range frequencies. The InceptionV3 (Szegedy et al., 2016) architecture is more sensitive to low-frequencies while Vision Transformer (ViT) (Dosovitskiy et al., 2021) displays sensitivity to mid-range as well as high frequencies (Figure 5a). In contrast, Big Transfer (BiT) (Kolesnikov et al., 2020), MixNet (Tan and Le, 2019) and ResNet18 (He et al., 2016) models are sensitive to frequencies across the spectrum, with sensitivity tapering off at the high-frequencies (Figure 5a). These results suggest that model architecture can affect Fourier-sensitivity due to their different inductive biases. We further observed consistency of Fourier-sensitivity across popular convolutional architectures trained on ImageNet (Figure 5b).
Adversarially trained models (Madry et al., 2018) are most sensitive to low spatial frequencies across datasets and architectures, which suggests that they rely on coarse global features as observed in prior work (Figures 4a, 4b, 4c, 5c). Gaussian noise augmented training slightly biases the model towards lower frequencies (Figures 4d, 4e, 4f, 10b) compared to baseline. Training on Stylized-ImageNet, proposed by Geirhos et al. (2019) to train shape-biased models, induces sensitivity to lower frequencies (Figure 11 in Appendix C.3), which reflects the increased shape-bias of these models. Standard trained CIFAR10 models are most sensitive to high frequencies (Figure 4b), similar to CIFAR100 models (Figure 9 in Appendix C.1). In contrast, standard training on SVHN leads to a low-frequency sensitivity (Figure 4c), which suggests a dataset dependence of Fourier-sensitivity. Interestingly, we observed that models trained on common corruptions of CIFAR10 borrowed from (Hendrycks and Dietterich, 2019) display different Fourier-sensitivities to a model trained on clean CIFAR10 images (Appendix C.2). For example, models trained on images with severe noise corruptions (Gaussian, shot and speckle noise) display increased sensitivity to lower frequencies (Figure 10b), as did models trained on highly Gaussian-blurred, Glass-blurred, JPEG-compressed and pixelated images (Figure 10a, 10c). These changes reflect the shift in the Fourier-statistics of these corrupted images. Finally, Fourier-regularization modifies the Fourier-sensitivity of models across datasets (Figure 6). lsf-regularized models are most sensitive to low-frequencies, msf-regularized models are most sensitive to the mid-frequency range, hsf-regularized models are most sensitive to high-frequencies, and asf-regularized models are sensitive to a wide frequency range. Further, Fourier-regularization is demonstrated to be effective across architectures (Figure 12 in Appendix C.4).

Figure 5: Fourier-sensitivities of multiple architectures after (a),(b) standard training and (c) adversarial training (PGD-\(\ell_{2}\) (\(\epsilon=3\))) on ImageNet.

#### 4.2.2 Fourier-sensitivity and adversarial attacks

Adversarial attacks are imperceptible perturbations that can drastically reduce the classification performance of computer vision models. Many methods have been proposed to defend against as well as analyse the properties of such perturbations, including frequency-based approaches. Contrary to opinions that adversarial attacks are strictly a low-frequency or high-frequency phenomenon, we observed that adversarial perturbations closely resemble the target models' Fourier-sensitivity, which varies with dataset, training method and architecture as we have shown. This connection naturally arises from the fact that common gradient-based adversarial attack procedures such as PGD (Projected Gradient Descent) (Madry et al., 2018) typically use the direction of the input-gradient to generate perturbations. Plotting models' Fourier-sensitivities along with the Fourier power-spectra of adversarial perturbations shows this connection (Figure 14 in Appendix C.6). Consistent with observations made by Sharma et al. (2019) that adversarially trained ImageNet models are still vulnerable to low-frequency constrained perturbations in some settings, the Fourier-sensitivity of adversarially trained ImageNet models is concentrated in low-frequencies (Figures 3(d), 3(e), 3(f), 3(c)). Sharma et al.
(2019) also observed that low-frequency constrained attacks cannot easily fool standard trained ImageNet models, for which the Fourier-sensitivity has its peak at medium and high frequencies (Figure 3(a), 3(b)) and are hence less vulnerable to low-frequency constrained attacks. We also observed that adversarial attacks against Fourier-regularized models have matching power-spectra (Figure 13(b) in Appendix C.6). For example, PGD adversarial perturbations against msf-regularized models have power-spectra concentrated in mid-range frequencies. This suggests that adversarial perturbations are not a low or high-frequency phenomena but depend on the Fourier-sensitivity characteristics of a model. ### Fourier-regularization modifies the frequency bias of models Here we demonstrate that Fourier-regularization modifies the input frequencies that a model relies on using evaluations in different settings. #### 4.3.1 Validating Fourier-regularization using frequency-specific noise We investigate the sensitivity of Fourier-regularized models to Fourier-basis directions in the input using data-agnostic corruptions, which have also been identified as a threat to model security (Yin et al., 2019; Tsuzuku and Sato, 2019). A Fourier-noise corruption is additive noise containing a single Fourier-mode (frequency). These corruptions are semantics-preserving but affect model performance and can be used to evaluate the sensitivity of a model to individual frequencies (see Figure 7 and Appendix E for examples). We added noise at all frequencies to the respective test sets of SVHN and CIFAR10 to evaluate the sensitivity of models. On CIFAR10, the standard trained model has the highest error at medium-to-high frequencies (Figure 16(a) in Appendix E). The standard trained SVHN model makes the most errors when low-to-medium fre Figure 6: Fourier-sensitivity of models trained on (a) ImageNet-subset, (b) CIFAR10, (c) SVHN. quency noise is added to the input (Figure 17(a) in Appendix E). This is in agreement with their respective Fourier-sensitivities (Figure 4). Similarly, the lsf-regularized model is most sensitive to low-frequency perturbations and less so to medium and high-frequency distortions, across both CIFAR10 (Figure 18(b)) and SVHN (Figure 17(b)). The msf-regularized models are most sensitive to mid-range frequencies (Figures 18(c), 18(d), 18(c) and asf-regularized models are sensitive to frequencies across the spectrum (Figures 18(d), 18(d)). Detailed heat maps of error rates for noise across the all frequencies reflect the modified Fourier-sensitivities of Fourier-regularized models (Appendix E). This validates that the Fourier-regularization framework can indeed modify the sensitivity of models to frequencies in the input spectrum. #### 4.3.2 Learning global image features As low frequency features correspond to large spatial scales while high frequency features are local in nature, Fourier-regularization allows us to bias the scale of features used by a model. Here we explore the extent to which Fourier-regularized models use global features by measuring their classification accuracy on patch-shuffled images, which have previously been used by Mummidi et al. (2021); Zhang and Zhu (2019); Wang et al. (2019). Patch-shuffling involves splitting an image into \(k\times k\) squares and randomly swapping the positions of these squares. 
This is intended to destroy global features and retain local features; larger values of \(k\) retain less global structure in the image (see Figure 7 and Appendix F for examples). As such, models that rely more on global rather than local structure suffer more from patch-shuffling. Hence, lower accuracy suggests increased reliance on global structure. We observed that lsf-regularized models, which are most sensitive to low-frequencies, as well as adversarially trained models, suffered large drops in accuracy, which suggests they rely on global structure in images (Table 1). This contrasts with standard trained, hsf-regularized and Gaussian noise augmented models, which retain higher accuracy under patch-shuffling. This reflects their bias towards learning local features instead of global structure on these datasets. \begin{table} \begin{tabular}{l c c|c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c}{**CIFAR10**} & \multicolumn{2}{c}{**CIFAR100**} \\ \cline{2-5} & \(k=2\) & \(k=3\) & \(k=2\) & \(k=3\) \\ \hline Std. Train & 66.5 & 45.8 & 39.9 & 21.4 \\ Gaussian Noise & 62.9 & 44.5 & 34.4 & 18.0 \\ hsf-regularized & 60.7 & 38.7 & 37.3 & 19.0 \\ \hline lsf-regularized & **43.2** & **30.6** & 23.4 & 13.0 \\ msf-regularized & 46.8 & 33.1 & 24.1 & 13.0 \\ asf-regularized & 46.8 & 32.6 & 29.0 & 15.0 \\ AT (PGD \(\ell_{2},\epsilon=1\)) & 45.2 & 35.0 & **19.1** & **11.1** \\ \hline \hline \end{tabular} \end{table} Table 1: Accuracy of ResNet50 on patch-shuffled CIFAR10 and CIFAR100 test sets. Figure 7: Examples (b): CIFAR10 Fourier-filtered (Section 4.3.3) (c): CIFAR10 Patch-shuffled (Section 4.3.2). (e) - (g): Fourier-noise corruptions on SVHN (Section 4.3.1). More examples in Appendix. #### 4.3.3 Robustness to Fourier-filtering Jo & Bengio (2017) showed that DNNs have a tendency to rely on superficial Fourier-statistics of their training data. In the vein of generalization evaluations they performed, we generated semantics-preserving Fourier-filtered test images using radial masking in frequency space (see Figure 7 and Appendix D.1 for examples). A mask radius \(r\) determines Fourier components that are preserved with larger radii preserving more components. We use \((c_{u},c_{e})\) to denote the centre of the mask and \(d(\cdot,\cdot)\) to denote Euclidean distance. The mask is applied on the zero-shifted output of the Fourier transform of each image, denoted \(X\), followed by the inverse transform, i.e. \(X_{filtered}=\mathcal{F}^{-1}(\mathcal{F}(X)\odot M_{r})\), where \(\odot\) is the element-wise product. Formally, the radial mask is \(M_{r}(u,v):=\begin{cases}1,&\text{if }d((u,v),(c_{u},c_{v}))\leq r\\ 0,&\text{otherwise}\end{cases}\). Fourier-filtering is performed on each color channel independently. On ImageNet-scale images, lsf-regularization is robust to significant low-pass filtering (\(r=37\)), with just a \(\sim\)1% drop in accuracy, whereas the baseline model drops by \(\sim\)7% (Table 2). A standard trained CIFAR10 model suffers up to a 75% drop in accuracy on highly low-pass filtered data as it relies on high frequency information that is no longer present. On the other hand, other Fourier-regularized models perform robustly against Fourier-filtering. On CIFAR10, the lsf-regularized model performs robustly even on severely low-pass filtered images (\(r=5\)), achieving an accuracy of 78.3% compared to the standard trained model's 18.6%. 
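The two evaluation corruptions used in this section, single-Fourier-mode noise (Section 4.3.1) and patch-shuffling (Section 4.3.2), can be sketched in a few lines. The noise normalization and the handling of image sizes not divisible by \(k\) are illustrative choices, not the exact settings used in the experiments.

```python
import torch

def patch_shuffle(image, k=2):
    """Split an image (C, H, W) into a k x k grid of patches and randomly
    permute their positions, destroying global structure but keeping local texture."""
    c, h, w = image.shape
    ph, pw = h // k, w // k
    patches = [image[:, i * ph:(i + 1) * ph, j * pw:(j + 1) * pw].clone()
               for i in range(k) for j in range(k)]
    out = image.clone()
    for idx, p in enumerate(torch.randperm(k * k).tolist()):
        i, j = divmod(idx, k)
        out[:, i * ph:(i + 1) * ph, j * pw:(j + 1) * pw] = patches[p]
    return out

def fourier_mode_noise(image, u, v, eps=0.1):
    """Additive corruption containing a single Fourier mode (u, v): a plane wave
    rescaled to l2-norm eps, added to every channel."""
    _, h, w = image.shape
    yy, xx = torch.meshgrid(torch.arange(h, dtype=torch.float32),
                            torch.arange(w, dtype=torch.float32), indexing="ij")
    wave = torch.cos(2 * torch.pi * (u * yy / h + v * xx / w))
    return image + eps * wave / wave.norm()
```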
This shows that lsf-regularized CNNs are able to exploit low frequency features more than other models in the absence of high-frequency features. The adversarially trained (AT) model is significantly more robust than the baseline model due to its low-frequency sensitivity but not as robust as the lsf-regularized model. Gaussian noise augmentation does not provide significant robustness to Fourier-filtering. Both msf-regularized and asf-regularized models also provide significant robustness to Fourier-filtering while not as much as the lsf-regularized model. The hsf-regularized model was comparable to more robust than the standard trained model. We further observed that other architectures - EfficientNetB0 (Tan & Le, 2019), MobileNetV2 (Sandler et al., 2018) and DenseNet (Huang et al., 2017)- are also significantly vulnerable to Fourier-filtering. Fourier-regularization can similarly improve robustness over baseline methods on these architectures as well (Table 3, plots in Appendix C.4). ### Fourier-regularization confers robustness to real out-of-distribution data shifts Here we explore the robustness of Fourier-regularization on real o.o.d. data. Image corruptions in deployments of computer vision models can cause unfavorable shifts in the Fourier-statistics of data (Yin et al., 2019). For example, computer vision models deployed in vehicles may encounter motion blur due to movement, which can disrupt high-frequency information in images. Similarly, digital corruptions can cause similar effects on Fourier-statistics, such as JPEG compression artifacts and pixelation in low resolution settings. On ImageNet-C (Hendrycks & Dietterich, 2019), Fourier-regularization confers robustness to multiple corruptions. lsf-reg (\(\lambda\)=1) was most robust to blur corruptions, which carry most information in the low frequencies. asf-reg (\(\lambda\)=0.5) provided significant robustness to weather and digital corruptions (Table 4), which suggests that sensitivity to a broad range of frequencies is needed to be robust to these corruptions. \begin{table} \begin{tabular}{l c c c|c c c|c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{3}{c}{**ImageNet-subset**} & \multicolumn{3}{c}{**CIFAR10**} & \multicolumn{3}{c}{**CIFAR100**} \\ \cline{2-11} & clean & \(r=37\) & \(r=20\) & clean & \(r=11\) & \(r=7\) & \(r=5\) & clean & \(r=11\) & \(r=7\) & \(r=5\) \\ \hline Std. Train & 84.6 & 77.2 & 54.2 & 94.9 & 78.1 & 24.9 & 18.6 & 76.2 & 49.7 & 14.1 & 6.6 \\ \hline lsf-regularized & 84.4 & **83.0** & **67.8** & 87.1 & 86.2 & **84.4** & **78.3** & 62.5 & 61.5 & **58.0** & **46.8** \\ msf-regularized & 86.2 & 74.0 & 59.0 & 90.6 & **86.3** & 71.5 & 46.2 & 70.7 & **62.2** & 46.4 & 18.6 \\ hsf-regularized & 87.3 & 78.2 & 52.2 & 93.5 & 76.4 & 34.5 & 25.1 & 75.8 & 50.2 & 18.2 & 9.8 \\ asf-regularized & 88.5 & 82.4 & 65.3 & 87.9 & 85.0 & 69.3 & 45.0 & 67.0 & 62.1 & 41.1 & 19.8 \\ \hline AT-PGD & 81.8 & 75.8 & 54.3 & 81.6 & 80.2 & 76.1 & 67.5 & 58.8 & 56.8 & 50.0 & 40.2 \\ Gaussian-noise & 84.8 & 74.6 & 36.7 & 94.5 & 84.4 & 32.4 & 19.5 & 73.1 & 61.9 & 27.7 & 11.6 \\ \hline \hline \end{tabular} \end{table} Table 2: Accuracy of ResNet50 on Fourier-filtered ImageNet-subset, CIFAR10 and CIFAR100 test sets. The hsf-reg (\(\lambda\)=0.5) model was more robust to weather and digital corruptions compared to blurring, which requires a low-frequency bias. Hence, modifying the Fourier-sensitivity can improve robustness under multiple o.o.d. shifts that affect model robustness. 
## 5 Discussion ### Fourier-regularizer selection Selecting the regularizer (i.e., lsf, msf, hsf, asf) that gives the best performance on a (shifted) target data distribution may be done using cross-validation if labeled data from the target distribution is available. Otherwise, since model or regularizer selection in the absence of labeled data from the target distribution is generally a hard and unsolved problem, we suggest choosing the Fourier-regularizer based on prior knowledge about the frequency bias of the learning task on the target distribution. For example, we have shown that on many common image corruptions such as various forms of blurring, lsf-regularized models can perform well due to the loss of high-frequency information under blurring (Section 4.4). This agrees with previous work that has analysed the spectra of corrupted images (Yin et al., 2019). As demonstrated in Section 4.3.2, lsf-regularized models are also more reliant on global features, which are generally robust to local changes in image texture (Geirhos et al., 2019). On high-resolution images, we showed that encouraging models to use more frequencies using asf-regularization can improve clean accuracy (Table 4). #### 5.1.1 \(\lambda_{\text{SFS}}\) hyper-parameter selection Fourier-regularization requires choosing the frequency bias as well as the hyperparameter \(\lambda_{\text{SFS}}\). We note that when \(\lambda_{\text{SFS}}=0\), Fourier-regularization is equivalent to standard training. Hence, very small values may not modify the frequency bias significantly. We found that a useful heuristic is to set the parameter as small as possible to achieve the target frequency bias, which can be measured by computing the Fourier-sensitivity of the model using training or validation samples independently of performance on the target distribution. Values larger than this can unnecessarily decrease clean accuracy further without improving accuracy on the target distribution. For lsf-regularization on CIFAR10, we found that increasing \(\lambda_{\text{SFS}}\) from 0 to 0.5 gradually nudges the model towards low-frequencies (Figure 20 in Appendix G). As we increased \(\lambda_{\text{SFS}}\) further from 0.5 to 1, clean accuracy decreased further without modifying the Fourier-sensitivity (Table 7 in Appendix G). Strictly restricting the model to have a particular frequency bias using large values of \(\lambda_{\text{SFS}}\) may overly constrain model capacity. This procedure can be performed to identify optimal \(\lambda_{\text{SFS}}\) values even in the absence of labeled target distribution data. ### Fourier-regularization and clean accuracy Fourier-regularization can affect the frequencies utilized by models in a given dataset. The effect of Fourier-regularization on clean accuracy depends on the dataset _and_ the chosen frequency range (e.g., lsf, hsf, asf). On high-resolution ImageNet-scale images (224 \(\times\) 224) we observed that encouraging the model to use a wide range of frequencies using (asf-regularization) improved generalization performance over \begin{table} \begin{tabular}{l c c c c|c c c c|c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{4}{c}{**EfficientNetB0**} & \multicolumn{4}{c}{**MobileNetV2**} & \multicolumn{4}{c}{**DenseNet**} \\ \cline{2-13} & clean & \(r=11\) & \(r=7\) & \(r=5\) & clean & \(r=11\) & \(r=7\) & \(r=5\) & clean & \(r=11\) & \(r=7\) & \(r=5\) \\ \hline Std. 
Train & 89.9 & 76.1 & 30.2 & 24.0 & 92.6 & 74.5 & 27.6 & 18.3 & 94.0 & 69.6 & 19.3 & 16.4 \\ \hline lsf-regularized & 84.1 & 83.5 & **80.2** & **68.3** & 81.7 & 81.5 & **78.3** & **67.4** & 86.2 & 85.7 & **80.7** & **69.0** \\ msf-regularized & 88.7 & **86.6** & 55.5 & 31.2 & 89.0 & **87.6** & 68.7 & 38.7 & 90.6 & **89.6** & 72.5 & 38.5 \\ hsf-regularized & 90.5 & 73.3 & 28.6 & 19.2 & 90.3 & 83.2 & 52.2 & 36.1 & 92.9 & 80.5 & 35.0 & 23.6 \\ asf-regularized & 89.5 & 77.4 & 30.6 & 24.1 & 79.0 & 73.0 & 44.9 & 28.6 & 88.2 & 85.5 & 69.7 & 46.7 \\ \hline Gaussian-noise & 89.3 & 78.4 & 35.4 & 26.1 & 91.2 & 79.3 & 41.0 & 26.5 & 93.2 & 82.3 & 30.4 & 21.1 \\ AT-PGD & 72.3 & 71.5 & 68.4 & 63.1 & 82.0 & 80.2 & 75.9 & 67.0 & 81.9 & 80.9 & 76.3 & 67.9 \\ \hline \hline \end{tabular} \end{table} Table 3: Evaluating Fourier-filtered CIFAR10 using other architectures. the baseline (88.5% vs 84.6%), as did hsf-regularization (Table 2). On SVHN, an easier task, Fourier-regularization did not have a significant effect as baseline models already achieve high clean accuracies (\(\sim\)96%) (Table 6 in Appendix E.3). CIFAR10 and CIFAR100 are more challenging small image tasks that require high spatial frequencies (hsf) to maximise clean accuracy. Hence, the hsf-regularized model had high clean accuracy while we observed a drop in the clean accuracy of lsf,msf,asf regularized models, although they performed better on o.o.d. data. In summary, Fourier-regularization is a generic framework that can be used to improve performance in both i.i.d. and o.o.d. settings. ### Fourier-regularization vs training on Fourier-filtered data Here we contrast Fourier-regularization and training on Fourier-filtered data to modify the frequency bias of models. We note that Fourier-regularization cannot be replicated by training on Fourier-filtered data. Low-pass filtered images completely discard information in higher frequencies, which may not be desirable. Moreover, in natural images, the amount of energy in frequency bands falls off rapidly at high frequencies (Hyvarinen et al., 2009), hence, medium and high-pass filtered natural images typically appear completely empty to the human eye without additional contrast maximisation and are still not easily recognizable (see Figure 16 in Appendix D.2). Hence, training on medium-pass Fourier-filtered CIFAR10 achieved a clean accuracy of only \(\sim\)33% whereas the msf-regularized model's clean accuracy is \(\sim\)90% (Table 5 in Appendix D.2). Similarly, training on high-pass filtered CIFAR10 training samples achieved only \(\sim\)15% accuracy on clean test samples, while hsf-regularization can achieve 93.5% (Table 2). Due to the energy statistics across frequency bands in natural images, training on Fourier-filtered data is not successful for all but the lowest frequency bands, where most of their energy resides. On the other hand, the Fourier-regularization framework allows controlling the sensitivity to each frequency band. In addition, we note that asf-regularization cannot be realized using Fourier-filtering alone. ## 6 Conclusion We proposed a novel _basis trick_ and proved that unitary transformations of a function's input-gradient can be used to compute its gradient in the basis induced by the transformation. Using this result, we proposed a novel and rigorous measure of the _Fourier-sensitivity_ of any differentiable computer vision model. We explored Fourier-sensitivity of various models and showed that it depends on dataset, training and architecture. 
We further proposed a framework of _Fourier-regularization_ that modifies the frequency bias of models and can improve robustness where Fourier-statistics of data have changed. We demonstrated that Fourier-regularization is effective on different image resolutions, datasets (Table 2) as well as architectures (Table 3). More broadly, Fourier-sensitivity and regularization can also be extended to other data modalities like audio and time-series, where Fourier analysis of machine learning models may also be useful. As Fourier-analysis is an important and fundamental toolkit, the analysis and control of machine learning models enabled by our work may prove to be valuable for learning tasks beyond those explored in this paper. \begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline & \multicolumn{4}{c}{Blur} & \multicolumn{4}{c}{Weather} & \multicolumn{4}{c}{Digital} \\ \cline{2-13} Method & Clean & Defocus & Glass & Motion & Zoom & Snow & Frost & Fog & Brightness & Contrast & Elastic & Pixel & JPEG \\ \hline Std. Train & 84.6 & 59.2 & 68.8 & 71.4 & 69.8 & 46.8 & 51.7 & 44.6 & 79.8 & 40.9 & 71.8 & 82.1 & 74.1 \\ \hline lsf-reg (\(\lambda\)=1) & 83.8 & **71.5** & **77.6** & **76.6** & **76.4** & 54.2 & 58.6 & 46.7 & 79.6 & 46.6 & **75.3** & 83.7 & 75.5 \\ msf-reg (\(\lambda\)=1) & 85.8 & 58.3 & 64.4 & 70.1 & 68.2 & 47.5 & 51.0 & 39.6 & 81.2 & 37.8 & 71.1 & 79.7 & 76.0 \\ lsf-reg (\(\lambda\)=1) & 86.9 & 56.3 & 65.0 & 67.6 & 63.9 & 48.1 & 53.7 & 41.8 & 80.5 & 38.1 & 69.9 & 81.1 & 78.2 \\ asf-reg (\(\lambda\)=1) & 85.5 & 62.3 & 68.6 & 72.5 & 70.2 & 50.3 & 55.8 & 40.5 & 80.7 & 42.4 & 72.1 & 81.9 & 75.5 \\ \hline lsf-reg (\(\lambda\)=0.5) & 84.4 & 63.4 & 73.0 & 72.7 & 70.6 & 52.1 & 56.2 & 43.5 & 80.5 & 42.4 & 74.2 & 82.2 & 75.9 \\ lsf-reg (\(\lambda\)=0.5) & 86.2 & 61.6 & 67.5 & 72.0 & 71.3 & 52.0 & 56.8 & 53.1 & 81.6 & 46.7 & 72.2 & 82.1 & 78.0 \\ lsf-reg (\(\lambda\)=0.5) & 87.3 & 60.4 & 68.9 & 71.0 & 70.6 & 52.6 & 58.6 & 51.1 & 83.3 & 49.8 & 72.3 & 83.6 & 77.8 \\ asf-reg (\(\lambda\)=0.5) & **88.5** & 65.8 & 74.3 & 74.7 & 73.9 & **58.2** & **63.5** & **59.7** & **84.6** & **53.0** & 74.6 & **58.8** & 78.1 \\ \hline Gaussian-noise & 84.8 & 38.8 & 55.4 & 61.8 & 57.6 & 47.4 & 51.0 & 35.0 & 81.4 & 27.1 & 67.2 & 78.1 & 71.4 \\ AT-PGD & 81.8 & 58.4 & 65.5 & 67.0 & 66.3 & 50.3 & 47.8 & 13.7 & 79.0 & 18.5 & 66.5 & 78.3 & **79.0** \\ \hline \hline \end{tabular} \end{table} Table 4: Accuracy of ResNet50 on clean and corrupted (severity 2) test set of ImageNet-subset. ###### Acknowledgements. We thank Bryan Hooi and Wenyu Zhang for helpful discussions about the project as well as feedback on an early draft of the paper. Individual fellowship support for Kiran Krishnamachari was provided by the Agency for Science, Technology, and Research (A*STAR), Singapore.
2309.07343
Field Theory of the Fermi Function
The Fermi function $F(Z,E)$ accounts for QED corrections to beta decays that are enhanced at either small electron velocity $\beta$ or large nuclear charge $Z$. For precision applications, the Fermi function must be combined with other radiative corrections and with scale- and scheme-dependent hadronic matrix elements. We formulate the Fermi function as a field theory object and present a new factorization formula for QED radiative corrections to beta decays. We provide new results for the anomalous dimension of the corresponding effective operator complete through three loops, and resum perturbative logarithms and $\pi$-enhancements with renormalization group methods. Our results are important for tests of fundamental physics with precision beta decay and related processes.
Richard J. Hill, Ryan Plestid
2023-09-13T22:47:11Z
http://arxiv.org/abs/2309.07343v2
# Field theory of the Fermi function

###### Abstract

The Fermi function \(F(Z,E)\) accounts for QED corrections to beta decays that are enhanced at either small electron velocity \(\beta\) or large nuclear charge \(Z\). For precision applications, the Fermi function must be combined with other radiative corrections and with scale- and scheme-dependent hadronic matrix elements. We formulate the Fermi function as a field theory object and present a new factorization formula for QED radiative corrections to beta decays. We provide new results for the anomalous dimension of the corresponding effective operator complete through three loops, and resum perturbative logarithms and \(\pi\)-enhancements with renormalization group methods. Our results are important for tests of fundamental physics with precision beta decay and related processes.

Footnote †: preprint: FERMILAB-PUB-23-453-T, CALT-TH/2023-029

**Introduction.** Many precision measurements and new physics searches involve charged leptons interacting with nucleons or nuclei. Examples include neutrino scattering to obtain fundamental neutrino parameters [1; 2; 3; 4; 5; 6]; muon-to-electron conversion to search for charged lepton flavor violation [7; 8; 9; 10]; and beta decay to measure fundamental constants [11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23] and search for new physics [24; 25; 26; 27; 28; 29]. It is important to control radiative corrections to these processes [30; 31; 32; 33; 34; 35]. QED corrections are enhanced relative to naive power counting in the fine structure constant \(\alpha\approx 1/137\) for large-\(Z\) nuclei and for small-\(\beta\) leptons (\(Z\) denotes the nuclear charge, and \(\beta\) the lepton velocity). In this _Letter_ we present new results for long-distance QED corrections to beta decay [36; 37; 38]. The Fermi function in beta decay is an extremely large QED effect [39], representing an \(O(1)\) correction to beta decay rates [40]. It describes the enhancement (suppression) for negatively (positively) charged leptons propagating in a nuclear Coulomb field. For a nuclear charge \(Z\) and electron energy \(E\) it is defined as [39; 40]: \[F(Z,E)=\frac{2(1+\eta)}{\Gamma(2\eta+1)^{2}}\,|\Gamma(\eta+{\rm i}\xi)|^{2}\,{\rm e}^{\pi\xi}(2pr)^{2(\eta-1)}\,, \tag{1}\] where \(\eta\equiv\sqrt{1-(Z\alpha)^{2}}\), \(\xi=Z\alpha/\beta\), \(p=\sqrt{E^{2}-m^{2}}\) and \(m\) is the electron mass. The quantity \(r\) denotes a short distance regulator identified approximately as the nuclear size [41]. Several questions arise in the application of \(F(Z,E)\) to physical processes: 1) What is the scale \(r^{-1}\) and how does it relate to conventional renormalization in quantum field theory? 2) How can other radiative corrections be included systematically? 3) What is the relation between the Fermi function with \(Z=1\) and the radiative correction to neutron beta decay? Answers to these questions impact the interpretation of modern beta decay experiments. For example, corrections at order \(\alpha(Z\alpha)^{2}\log(\Lambda_{\rm nuc.}/E)\) impact beta decay rates of moderate \(Z\) nuclei at permille level and must be included at the current precision (\(\sim 3\times 10^{-4}\)) of \(|V_{ud}|\) extractions [21]. To answer these questions, we re-formulate the Fermi function in effective field theory (EFT), and study its interplay with subleading radiative corrections.

**Factorization and all-orders matching.** Factorization arises from the separation of different energy scales involved in a physical process [42; 43; 44].
In a sequence of EFTs, the components of a factorization formula are identified with a corresponding sequence of matching coefficients, and a final low-energy matrix element. Consider the corrections to a tree-level contact interaction with a relativistic electron in the final state. Ladder diagrams from a Coulomb potential with source charge \(+Ze\) correct the tree level amplitude, \({\cal M}_{\rm tree}\), with explicit loop integrals given by \[\bar{u}(p){\cal M}=\sum_{n=0}^{\infty}(Ze^{2})^{n}\int\frac{{\rm d }^{D}L_{1}}{(2\pi)^{D}}\int\frac{{\rm d}^{D}L_{2}}{(2\pi)^{D}}\cdots\int\frac {{\rm d}^{D}L_{n}}{(2\pi)^{D}}\frac{1}{{\bf L}_{1}^{2}+\lambda^{2}}\frac{1}{( {\bf L}_{1}-{\bf p})^{2}-{\bf p}^{2}-{\rm i}0}\\ \times\frac{1}{({\bf L}_{1}-{\bf L}_{2})^{2}+\lambda^{2}}\frac{1} {({\bf L}_{2}-{\bf p})^{2}-{\bf p}^{2}-{\rm i}0}\cdots\frac{1}{({\bf L}_{n-1}-{ \bf L}_{n})^{2}+\lambda^{2}}\frac{1}{({\bf L}_{n}-{\bf p})^{2}-{\bf p}^{2}-{ \rm i}0}\\ \times\bar{u}(p)\gamma^{0}(\not{p}-\not{L}_{1}+m)\gamma^{0}(\not {p}-\not{L}_{2}+m)\cdots\gamma^{0}(\not{p}-\not{L}_{n}+m){\cal M}_{\rm tree}\,. \tag{2}\] Integrals are evaluated in dimensional regularization with \(D=3-2\epsilon\) dimensions, and we have included a photon mass, \(\lambda\), to regulate infrared divergences [45]. In contrast with the analogous non-relativistic problem [46], the relativistic expression (2) is UV divergent beginning at two-loop order, indicating sensitivity to short-distance structure. The factorization theorem reads [37] \[\mathcal{M}=\mathcal{M}_{S}(\lambda/\mu_{S})\mathcal{M}_{H}(p/\mu_{S},p/\mu_{H })\mathcal{M}_{\rm UV}(\Lambda/\mu_{H})\,, \tag{3}\] counting \(p\sim m\sim E\) and where \(\Lambda\) denotes the scale of hadronic and nuclear structure. After \(\overline{\rm MS}\) renormalization, to all orders in \(Z\alpha\), the soft function is given by \(\mathcal{M}_{S}=\exp\left(i\xi\log\frac{\mu}{\lambda}\right)\)[47; 48]. Our result for the hard function is new [37], and is given (again to all orders in \(Z\alpha\)) by [49] \[\mathcal{M}_{H}=e^{\frac{\pi}{2}\xi+{\rm i}\phi_{H}}\frac{2\Gamma(\eta-{\rm i} \xi)}{\Gamma(2\eta+1)}\sqrt{\frac{\eta-{\rm i}\xi}{1-{\rm i}\xi}\frac{m}{E}} \sqrt{\frac{E+\eta m}{E+m}}\sqrt{\frac{2\eta}{1+\eta}}\left(\frac{2p}{e^{\gamma \kappa}\mu_{H}}\right)^{\eta-1}\left[\frac{1+\gamma^{0}}{2}+\frac{E+m}{E+\eta m }\left(1-{\rm i}\xi\frac{m}{E}\right)\frac{1-\gamma^{0}}{2}\right], \tag{4}\] where \(\phi_{H}=\xi\left(\log\frac{2p}{\mu_{S}}-\gamma_{\rm E}\right)-(\eta-1)\frac{ \pi}{2}\), \(\gamma^{0}\) is a Dirac matrix, and \(\gamma_{\rm E}\approx 0.577\) is the Euler constant. The leading-in-\(Z\) radiative correction to unpolarized observables from the soft and hard functions is given by \[\left\langle|\mathcal{M}_{H}|^{2}\right\rangle=F(Z,E)\big{|}_{r_{H}}\times \frac{4\eta}{(1+\eta)^{2}}\,, \tag{5}\] where we define \(r_{H}^{-1}=\mu_{H}e^{\gamma_{\rm E}}\). The angle brackets denote contraction with lepton spinors, \(\mathcal{M}_{H}\to\bar{e}\mathcal{M}_{H}\gamma^{0}\nu_{L}\), sum over final state spins, and division by the same expression in the absence of Coulomb corrections. Note that there is a finite multiplicative correction relating the \(\overline{\rm MS}\) hard function to \(F(Z,E)\). **Effective operators and anomalous dimension.** The structure-dependent factor \(\mathcal{M}_{\rm UV}\) appearing in Eq. (3) depends on the process of interest. Important examples are beta decay transitions \([A,Z]\to[A,Z+1]e^{-}\bar{\nu}_{e}\) or \([A,Z+1]\to[A,Z]e^{+}\nu_{e}\). 
Superallowed beta decays are governed by an EFT consisting of QED for electrons, and heavy charged scalar fields [50; 51; 52; 53], \[\mathcal{L}_{\rm eff}=-\mathcal{C}(\phi_{v}^{[A,Z+1]})^{*}\phi_{v}^{[A,Z]} \bar{e}\not{v}(1-\gamma_{5})\nu_{e}+{\rm H.c.}\,, \tag{6}\] where \(\phi_{v}^{[A,Z]}\) denotes a heavy scalar with electric charge \(Z\) whose momentum fluctuations are expanded about \(p^{\mu}=M_{[A,Z]}\nu^{\mu}\), with \(v^{\mu}=(1,0,0,0)\) in the nuclear rest frame. For neutron decay, the EFT involves spin-1/2 heavy fields [50; 51; 52; 53], \[\mathcal{L}_{\rm eff}=-\bar{h}_{v}^{(p)}\left(\mathcal{C}_{V}\gamma^{\mu}+ \mathcal{C}_{A}\gamma^{\mu}\gamma_{5}\right)h_{v}^{(n)}\bar{e}\gamma^{\mu}(1- \gamma_{5})\nu_{e}+{\rm H.c.}\,, \tag{7}\] where \(h_{v}^{(p)}\) and \(h_{v}^{(n)}\) denote spin-1/2 heavy fields with electric charge 1 and 0, respectively. Matching to the EFT represented by Eqs. (6) or (7), we identify the components of (3) in terms of operator coefficients and matrix elements: \(\mathcal{M}_{\rm UV}\) is proportional to (a linear combination of) \(\mathcal{C}_{i}\), while \(\mathcal{M}_{H}\) and \(\mathcal{M}_{S}\) give the hard and soft contributions to the EFT matrix element. In \(\mathcal{M}_{H}\), at each order in \(\alpha\), the leading power of \(Z\) is given by the explicit expression (4). We may proceed to analyze the renormalization group properties of weak-current operators in the EFT. Radiative corrections enhanced by large logarithms, \(L\sim\log(\Lambda_{\rm nuc.}/E)\), are determined by the anomalous dimensions of the operators in (6) and (7), which are spin-structure independent, i.e., \(\gamma_{A}=\gamma_{V}=\gamma_{\mathcal{O}}\). Writing \[\gamma_{\mathcal{O}}=\frac{d\log\mathcal{C}}{d\log\mu} =\sum_{n=0}^{\infty}\sum_{i=0}^{n+1}\left(\frac{\alpha}{4\pi} \right)^{n+1}\gamma_{n}^{(i)}Z^{n+1-i} \tag{8}\] \[\equiv\gamma^{(0)}(Z\alpha)+\alpha\gamma^{(1)}(Z\alpha)+\ldots\,,\] we note several interesting all-orders properties: * Powers of \(Z\) greater than the power of \(\alpha\) do not appear [54]. * The leading series involving \((Z\alpha)^{n}\) sums to \[\gamma^{(0)}=\sqrt{1-(Z\alpha)^{2}}-1\,.\] (9) This result is obtained by differentiating Eq. (4) with respect to \(\mu_{H}\). * At each order in perturbation theory, the leading and first subleading powers of \(Z\) are related [55], \[\gamma_{2n-1}^{(1)}=n\gamma_{2n-1}^{(0)}\ \,\ \ \gamma_{2n}^{(2)}=n\gamma_{2n}^{(1)}\ \ \ (n\geq 1)\,.\] (10) When \(Z=0\), the problem reduces to a heavy-light current operator. Using our new result for \(\gamma_{2}^{(1)}=16\pi^{2}(6-\pi^{2}/3)\)[36] and property (10), the complete result through three-loop order at arbitrary \(Z\) is \[\gamma_{\mathcal{O}} =\frac{\alpha}{4\pi}\gamma_{0}^{(1)}+\left(\frac{\alpha}{4\pi} \right)^{2}\left[-8\pi^{2}Z(Z+1)+\gamma_{1}^{(2)}\right] \tag{11}\] \[\quad+\left(\frac{\alpha}{4\pi}\right)^{3}\left[16\pi^{2}\,Z(Z+1) \left(6-\frac{\pi^{2}}{3}\right)+\gamma_{2}^{(3)}\right]\,,\] where \(\gamma_{n-1}^{(n)}\), \(n=1,2,3\), are known from the heavy quark literature [56]. Our result for \(\gamma_{2}^{(1)}\) disagrees with Ref. [31]. Note that properties (9) and (10) also determine the anomalous dimension at order \(Z^{4}\alpha^{4}\) and \(Z^{3}\alpha^{4}\). 
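As a numerical illustration of the quantities introduced so far, the short script below evaluates the Fermi function of Eq. (1) and the resummed leading anomalous dimension \(\gamma^{(0)}\) of Eq. (9). The nuclear charge, electron energy and short-distance scale \(r\) used here are illustrative inputs only, not values taken from the paper.

```python
import numpy as np
from scipy.special import gamma as cgamma   # scipy's gamma accepts complex arguments

ALPHA = 1 / 137.035999    # fine-structure constant
M_E = 0.51099895          # electron mass [MeV]
HBARC = 197.3269804       # hbar * c [MeV fm], to make p*r dimensionless

def fermi_function(Z, E, r_fm):
    """Fermi function F(Z, E) of Eq. (1) for a beta-minus transition.
    E = total electron energy [MeV], r_fm = short-distance scale [fm]."""
    eta = np.sqrt(1 - (Z * ALPHA) ** 2)
    p = np.sqrt(E ** 2 - M_E ** 2)
    xi = Z * ALPHA * E / p                 # xi = Z*alpha/beta with beta = p/E
    pr = p * r_fm / HBARC
    return (2 * (1 + eta) / cgamma(2 * eta + 1) ** 2
            * abs(cgamma(eta + 1j * xi)) ** 2
            * np.exp(np.pi * xi)
            * (2 * pr) ** (2 * (eta - 1)))

def gamma0(Z):
    """Leading-in-Z anomalous dimension of Eq. (9): sqrt(1 - (Z*alpha)^2) - 1."""
    return np.sqrt(1 - (Z * ALPHA) ** 2) - 1

# Illustrative numbers: Z = 20, kinetic energy 1 MeV, r ~ 5 fm
E = M_E + 1.0
print(fermi_function(20, E, 5.0))   # O(1) enhancement of the beta-minus rate
print(gamma0(20))                    # approximately -(Z*alpha)^2 / 2
```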
**Renormalization group analysis.** Consider the solution to the renormalization group equation \[\mathrm{d}\log\mathcal{C}=\frac{\gamma(\alpha)}{\beta(\alpha)}\mathrm{d}\alpha\,, \tag{12}\] where \(\alpha\) is the \(\overline{\mathrm{MS}}\) QED coupling (for one dynamical electron flavor) and \(\beta=\mathrm{d}\alpha/\mathrm{d}\log\mu=-2\alpha[\beta_{0}\alpha/(4\pi)+ \beta_{1}\alpha^{2}/(4\pi)^{2}+\ldots]\)[57]. Expanding \(\gamma\) and \(\beta\) in powers of \(\alpha\) and \(Z\), then integrating, we obtain a systematic expansion for the ratio of the renormalized operator coefficient at different scales, \(C(\mu_{H})/C(\mu_{L})\). Setting \(\mu_{H}\sim\Lambda\) and \(\mu_{L}\sim m\), we thus resum large logarithms \(\log(\Lambda/m)\). Let us consider several regimes of \(Z\): * _Large \(Z\) asymptotics._ Consider a large \(Z\) nucleus, counting \(\log^{2}(\Lambda/m)\sim\alpha^{-1}\) and \(Z\sim\alpha^{-1}\). Through \(O(\alpha^{1/2})\), \[\log\left(\frac{C(\mu_{L})}{C(\mu_{H})}\right)=\] (13) \[\left[-\,\gamma^{(0)}(Z\alpha_{L})L\right]+\left[b_{0}\alpha_{L} L^{2}\frac{(Z\alpha_{L})^{2}}{2\sqrt{1-(Z\alpha_{L})^{2}}}\right]\] \[+\left[b_{0}^{2}\alpha_{L}^{2}L^{3}\frac{(Z\alpha_{L})^{2}(3-2(Z \alpha_{L})^{2})}{6(1-(Z\alpha_{L})^{2})^{\frac{3}{2}}}-\alpha_{L}L\gamma^{(1) }(Z\alpha_{L})\right],\] where \(\alpha_{H,L}\equiv\alpha(\mu_{H,L})\), \(L=\log(\mu_{H}/\mu_{L})\), and \(b_{0}=-\beta_{0}/(2\pi)\). Resummation in \(Z\alpha\) is an important effect for large \(Z\) nuclei [40]. Consider separately the terms in \(\gamma^{(1)}\) with odd and even powers of \((Z\alpha)\). Using property (10), \[\gamma^{(1)}_{\mathrm{odd}}=\frac{\partial}{\partial(Z\alpha)}\gamma^{(0)}= \frac{-Z\alpha}{2\sqrt{1-(Z\alpha)^{2}}}\,.\] (14) The corresponding decay rate corrections involve (less the known \(Z\alpha^{2}\) correction) \[\delta\frac{|C(\mu_{L})|^{2}}{|C(\mu_{H})|^{2}}-\alpha(Z\alpha) \log\frac{\Lambda}{E}\] (15) \[=\alpha\log\frac{\Lambda}{E}\bigg{[}\frac{1}{2}(Z\alpha)^{3}+ \frac{3}{8}(Z\alpha)^{5}+\ldots\bigg{]}\,.\] This exact result replaces the ansatz of Wilkinson [58; 40], with differences beginning at order \(Z^{3}\alpha^{4}\). The even series, \(\gamma^{(1)}_{\mathrm{even}}\), is determined through three loop order by Eq. (11). * _Intermediate \(Z\)._ Consider a medium \(Z\) nucleus, counting \(\log^{2}(\Lambda/m)\sim Z^{2}\sim\alpha^{-1}\). Through \(O(\alpha^{3/2})\), the scale dependence is \[\log\left(\frac{C(\mu_{L})}{C(\mu_{H})}\right) =\frac{\gamma^{(1)}_{0}}{2\beta_{0}}\Bigg{\{}\bigg{[}\log\frac{a_ {H}}{a_{L}}+\frac{Z^{2}\gamma^{(0)}_{(1)}}{\gamma^{(1)}_{0}}\left(a_{H}-a_{L} \right)\bigg{]}+\bigg{[}\frac{Z\gamma^{(1)}_{(1)}}{\gamma^{(1)}_{0}}(a_{H}-a _{L})\bigg{]}\] (16) \[+\bigg{[}\left(\frac{\gamma^{(2)}_{1}}{\gamma^{(1)}_{0}}-\frac{ \beta_{1}}{\beta_{0}}\right)(a_{H}-a_{L})+\left(\frac{Z^{2}\gamma^{(1)}_{2}}{ \gamma^{(1)}_{0}}-\frac{\beta_{1}}{\beta_{0}}\frac{Z^{2}\gamma^{(0)}_{1}}{ \gamma^{(1)}_{0}}\right)\frac{1}{2}(a_{H}^{2}-a_{L}^{2})+\frac{Z^{4}\gamma^{(0 )}_{3}}{\gamma^{(1)}_{0}}\frac{1}{3}(a_{H}^{3}-a_{L}^{3})\bigg{]}\Bigg{\}}\,,\] where \(a_{H,L}=\alpha(\mu_{H,L})/(4\pi)\) and the square brackets account for effects at order \(\alpha^{\frac{i}{2}}\), \(\alpha^{1}\), \(\alpha^{\frac{i}{2}}\), etc. Figure 1: Radiative correction to the beta decay rate as a function of nuclear charge, normalized to leading Fermi function. 
Black, red, blue and green curves show results correct through resummed order \(\alpha^{0}\), \(\alpha^{\frac{1}{2}}\), \(\alpha\) and \(\alpha^{\frac{3}{2}}\) respectively. Illustrative values \(E_{0}=2\,\mathrm{MeV}\), \(\Delta_{0}=5\,\mathrm{MeV}\), \(\Lambda_{0}=100\,\mathrm{MeV}\) are used for the electron energy, nuclear mass difference (which enters the one-loop matrix element [30]), and fixed renormalization scale \(\mu_{H}=\Lambda_{0}\). The width of the curves is given by varying \(m_{e}<\mu_{L}<\Delta_{0}\). This "intermediate" regime of \(Z\) applies to typical nuclei employed for \(V_{ud}\) determinations using super-allowed nuclear beta decay, with \(\alpha^{-\frac{1}{2}}\sim Z\sim\log\bigl{(}\Lambda^{2}/m^{2}\bigr{)}\sim 10\). Achieving permille precision thus demands proper treatment of terms through resummed order \(\alpha^{\frac{3}{2}}\) in Eq. (16). This result replaces (and disagrees with) logarithmically enhanced contributions at order \(Z^{2}\alpha^{3}\) in the "heuristic estimate" of Sirlin and Zucchini [59]. Using our new result for \(\gamma_{2}^{(1)}\) [36], we investigate the convergence of perturbation theory in Fig. 1. Here we fix \(\mu_{H}\), and plot the product of \(|C(\mu_{L})/C(\mu_{H})|^{2}\) and the squared operator matrix element at \(\mu_{L}\), varying \(\mu_{L}\) as an estimate of perturbative uncertainty [60]. We note that the results presented here are in fact sufficient for a resummation valid through \(O(\alpha^{2})\), although for practical applications this would demand currently unknown operator matrix elements. * _Neutron beta decay._ Neutron beta decay corresponds to the case \(Z=0\); we therefore define \(\gamma_{n-1}\equiv\gamma_{n-1}^{(n)}\). Again counting \(\log^{2}(\Lambda/m)\sim\alpha^{-1}\), the resummation is [61] \[\begin{split}&\log\left(\frac{C(\mu_{L})}{C(\mu_{H})}\right)=\\ &\frac{\gamma_{0}}{2\beta_{0}}\Bigg{\{}\log\frac{a_{H}}{a_{L}}+\left(\frac{\gamma_{1}}{\gamma_{0}}-\frac{\beta_{1}}{\beta_{0}}\right)(a_{H}-a_{L})\Bigg{\}}\,,\end{split}\] (17) where the first term is of order \(\alpha^{\frac{1}{2}}\), and the second term is of order \(\alpha^{\frac{3}{2}}\). The complete result, correct through order \(\alpha^{\frac{3}{2}}\), is obtained using (17) together with the one-loop low-energy matrix element. Even after resumming logarithms in the ratio of hadronic and electron mass scales, \(\log(\Lambda/m)\), large coefficients remain in the perturbative expansion of the hard matrix element. While the class of amplitudes summed in the Fermi function are enhanced at small \(\beta\) and large \(Z\), neither limit holds for neutron beta decay [62]. The large coefficients can instead be traced to an analytic continuation of the decay amplitude from spacelike to timelike values of momentum transfers. The enhancements are systematically resummed by renormalization of the hard factor \(\mathcal{M}_{H}\) in the factorization formula (3) from negative to positive values of \(\mu_{S}^{2}\) [38] (_cf._ Refs. [63; 64]). The dependence of \(\mathcal{M}_{H}\) on \(\mu_{S}\) is governed (to all orders) by the cusp anomalous dimension, \[\mathcal{M}_{H}(\mu_{S+}^{2})=\mathrm{e}^{\mathrm{i}\alpha\phi_{\beta}}\exp\left[\frac{\pi\alpha}{2\beta}\right]\mathcal{M}_{H}(\mu_{S-}^{2})\,,\] (18) with \(\mu_{S\pm}^{2}=\pm 4p^{2}-\mathrm{i}0\) and the phase \(\phi_{\beta}=\frac{1}{2}\bigl{(}-1+\frac{1}{2\beta}\log\frac{1+\beta}{1-\beta}\bigr{)}\). The hard contribution to the physical matrix element is given on the right hand side of Eq.
(18) in terms of an irrelevant phase factor, an enhancement factor \(\exp[\pi\alpha/(2\beta)]\), and \(\mathcal{M}_{H}\) evaluated at negative \(\mu_{S}^{2}\). The latter matrix element is free of \(\pi\)-enhancements. This analysis systematically resums \(\pi\)-enhanced contributions, and does not rely on a non-relativistic approximation. The result (18) differs from the nonrelativistic Fermi function ansatz [65; 32] beginning at two loop order. **Discussion.** We have studied the all-orders factorization of long-distance QED corrections in beta decay. This includes leading-\(Z\) resummation traditionally treated with the Fermi function, and subleading corrections in \(\alpha\). Our EFT analysis allows us to systematically resum large perturbative logarithms, and to incorporate corrections that are suppressed by \(1/Z\) or \(E/\Lambda\). New results include: 1. New coefficients in the expansion of the anomalous dimension for beta decay operators. We have computed the order \(Z^{2}\alpha^{3}\) coefficient for the first time [66], and found a new symmetry linking leading-\(Z\) and subleading-\(Z\) terms in the perturbative expansion. Using our new result, and the existing HQET literature, we show that the first unknown coefficient occurs at four loops, at order \(Z^{2}\alpha^{4}\)[36]. 2. New results for the large-\(Z\) asymptotics of QED radiative corrections to beta decay. We supply the infinite series of terms of order \(\alpha(Z\alpha)^{2n+1}\log(\Lambda/E)\), replacing Wilkinson's ansatz, and present a new result for the term of order \(\alpha(Z\alpha)^{2}\log(\Lambda/E)\), replacing Sirlin's heuristic estimate. We provide the EFT matrix element to all orders in \(Z\alpha\) and clarify its relation to the historically employed Fermi function [37]. 3. An all-orders resummation of "\(\pi\)-enhanced" terms in neutron beta decay, replacing the Fermi function ansatz. This substantially improves the convergence of perturbation theory, and is important for modern applications to neutron beta decay [38]. Each of these results has important phenomenological implications for ongoing and near-term precision beta decay programs [13; 16; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82]. Detailed computations are presented elsewhere [36; 37; 38]. Related work on new eikonal identities for charged current processes is presented in Ref. [83]. The same formalism applies to any situation where a charged lepton appears in a reaction with a nucleus, provided its energy is small compared to the inverse nuclear radius. Future work will address factorization at subleading power, and investigate the impact on phenomenology including hadronic [12; 35] and nuclear [15; 84] matching uncertainties. Acknowledgments.We thank Susan Gardner and Oleksandr Tomalak for useful discussions regarding radiative corrections for beta decays, and Peter Vander Griend for collaboration on Ref. [38]. RP thanks the Institute for Nuclear Theory at the University of Washington for its kind hospitality and stimulating research environment during program INT 23-1B. This research was supported in part by the INT's U.S. Department of Energy grant No. DE-FG02-00ER41132. This work was supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics, under Award DE-SC0019095. Fermilab is operated by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the United States Department of Energy. RP is supported by the U.S. 
Department of Energy, Office of Science, Office of High Energy Physics, under Award Number DE-SC0011632 and by the Walter Burke Institute for Theoretical Physics. RP acknowledges support from the U.S. Department of Energy, Office of Science, Office of High Energy Physics, under Award Number DE-SC0011632 and the Neutrino Theory Network Program Grant under Award Number DE-AC02-07CH11359 and the US DOE under Award Number DE-SC0020250. RJH gratefully acknowledges support from the Institute for Advanced Study, where a part of this work was completed.
2309.05554
Concentration of Submodular Functions and Read-k Families Under Negative Dependence
We study the question of whether submodular functions of random variables satisfying various notions of negative dependence satisfy Chernoff-like concentration inequalities. We prove such a concentration inequality for the lower tail when the random variables satisfy negative association or negative regression, partially resolving an open problem raised in (Qiu and Singla [QS22]). Previous work showed such concentration results for random variables that come from specific dependent-rounding algorithms (Chekuri, Vondrak, and Zenklusen [CVZ10] and Harvey and Olver [HO14]). We discuss some applications of our results to combinatorial optimization and beyond. We also show applications to the concentration of read-k families [Gav+15] under certain forms of negative dependence; we further show a simplified proof of the entropy-method approach of [Gav+15].
Sharmila Duppala, George Z. Li, Juan Luque, Aravind Srinivasan, Renata Valieva
2023-09-11T15:43:11Z
http://arxiv.org/abs/2309.05554v2
# Concentration of Submodular Functions Under Negative Dependence ###### Abstract We study the question of whether submodular functions of random variables satisfying various notions of negative dependence satisfy Chernoff-like concentration inequalities. We prove such a concentration inequality for the lower tail when the random variables satisfy negative association or negative regression, resolving an open problem raised in (Qiu and Singla [14]). Previous work showed such concentration results for random variables that come from specific dependent-rounding algorithms (Chekuri, Vondrak, and Zenklusen [13] and Harvey and Olver [12]). We discuss some applications of our results to combinatorial optimization and beyond. ## 1 Introduction Concentration inequalities are ubiquitous in discrete mathematics and theoretical computer science [1, 2]. The most canonical examples are the Chernoff-Hoeffding bounds, which show strong concentration for linear combinations of independent random variables [1, 2]. In some applications, the condition of independence is too restrictive, so weaker notions have been considered [1, 13, 14]. Of interest to us is the setting where the random variables are negatively correlated, which arises naturally, for example, in designing approximation algorithms by solving a linear or semidefinite program and applying some dependent randomized rounding algorithm [1]. For this setting, Panconesi and Srinivasan [21] showed that the Chernoff-Hoeffding bounds can be shown under the weak notion of _negative cylinder dependence_: this and other standard notions of negative dependence are defined in Section 2.1. For some applications in combinatorial optimization, algorithmic game theory, and machine learning, one needs to consider the more general class of _submodular functions_\(f\) of the random variables, rather than simple linear combinations. When the binary random variables \(X_{1},\ldots,X_{n}\) are independent, it was shown that \(f(X_{1},\ldots,X_{n})\) still satisfies Chernoff bounds exactly [13]. When there is dependence between the random variables, the results are much weaker. The only known results are for random variables that are output by specific dependent-rounding algorithms, known as swap rounding and pipage rounding [13, 12]. These results showed that a Chernoff-like lower-tail bound also holds for submodular functions for their specific dependent rounding procedure. As noted in the work of Garbe and Vondrak [12], it is not clear how to generalize either of these proofs to any general notion of negative dependence. We introduce a new notion of negative dependence, called \(1\)-negative association, which is weaker than negative association and negative regression but stronger than negative cylinder dependence. Our main result is that the Chernoff-like bound shown in Chekuri, Vondrak, and Zenklusen [13] and Harvey and Olver [12] also hold under \(1\)-negative association (see Section 3.2). In particular, this implies the following: **Theorem 1.1**.: _Let \(X_{1},\ldots,X_{n}\) be binary random variables with mean \(x_{1},\ldots,x_{n}\) satisfying negative association (or negative regression). Let \(f\) be a non-negative monotone submodular function with marginal values in \([0,1]\) and let \(F\) be the multilinear extension of \(f\). If we let \(\mu_{0}=F(x_{1},\ldots,x_{n})\), then we have the following:_ \[\Pr[f(X_{1},\ldots,X_{n})\leq(1-\delta)\cdot\mu_{0}]\leq\exp(-\mu_{0}\delta^{ 2}/2).\] A few remarks are in order. 
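A toy numerical check of the lower-tail bound in Theorem 1.1 is sketched below. It uses a small coverage function, rescaled so that its marginal values lie in \([0,1]\), and a uniformly random choice of \(k\) of the \(m\) sets; indicator variables of a uniformly random fixed-size subset are a standard example of negative association. The instance and parameters are illustrative only.

```python
import math
import random

# Toy instance: universe {0,...,9}; choose k = 3 of the m = 6 sets uniformly at random.
universe = range(10)
sets = [{0, 1, 2}, {2, 3, 4}, {4, 5, 6}, {6, 7, 8}, {8, 9, 0}, {1, 3, 5, 7, 9}]
m, k = len(sets), 3
scale = max(len(s) for s in sets)        # rescale f so its marginal values lie in [0, 1]
x = [k / m] * m                          # marginals of the uniform size-k distribution

def f(indicator):
    """Rescaled coverage function: monotone, submodular, marginals in [0, 1]."""
    covered = set().union(*(s for s, z in zip(sets, indicator) if z))
    return len(covered) / scale

def multilinear_extension(x):
    """F(x): expected rescaled coverage under independent rounding with marginals x."""
    total = 0.0
    for e in universe:
        miss = 1.0
        for s, xi in zip(sets, x):
            if e in s:
                miss *= 1 - xi
        total += 1 - miss
    return total / scale

mu0, delta = multilinear_extension(x), 0.5
trials, bad = 200_000, 0
for _ in range(trials):
    chosen = set(random.sample(range(m), k))
    if f([int(i in chosen) for i in range(m)]) <= (1 - delta) * mu0:
        bad += 1

print(f"empirical  Pr[f <= (1-delta) mu0] = {bad / trials:.4f}")
print(f"bound      exp(-mu0 delta^2 / 2)  = {math.exp(-mu0 * delta ** 2 / 2):.4f}")
```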
First, we highlight that the concentration in the above theorem is with respect to the value of the multilinear extension \(F(x_{1},\ldots,x_{n})\), rather than the true expected value \(\mathbb{E}[f(X_{1},\ldots,X_{n})]\). This suffices for applications relating to submodular maximization, and is the same type of concentration result shown in previous work. Second, recall that negative cylinder dependence does not suffice to show this concentration bound [20, p. 583]. As a result, our results are, in some informal sense, almost tight in terms of the condition on negative dependence. In addition to providing submodular concentration results for a wide class of rounding algorithms and distributions, our results also give a new path towards understanding why pipage rounding and swap rounding satisfy the lower tail Chernoff bound. By proving that the rounding algorithms output random variables which are \(1\)-negatively associated, we immediately obtain a new proof of the lower tail bounds. This can be viewed as evidence that the two rounding algorithms satisfy \(1\)-negative association or even negative association/regression. We leave this as an interesting open question. Techniques.We use the standard method of bounding the exponential moments for lower-tail Chernoff bounds. Our idea is to show that the exponential moments for our negatively-correlated random variables is upper bounded by the exponential moments for independent copies of the random variables. Formally, let \(X_{1},\ldots,X_{n}\) be random variables satisfying \(1\)-negative association and let \(X_{1}^{*},\ldots,X_{n}^{*}\) be independent copies of the random variables. We show for any \(\lambda<0\), we have \[\mathbb{E}[\exp(\lambda\cdot f(X_{1},\ldots,X_{n}))]\leq\mathbb{E}[\exp( \lambda\cdot f(X_{1}^{*},\ldots,X_{n}^{*}))].\] Since the exponential-moments method has been used to prove Chernoff bounds for submodular functions in the independent case [20], we can then repeat their proof and conclude with our desired result. We believe this proof idea may be of independent interest. For example, the same ideas can show that for a supermodular function \(g\) and any \(\lambda>0\), we have \[\mathbb{E}[\exp(\lambda\cdot g(X_{1},\ldots,X_{n}))]\leq\mathbb{E}[\exp( \lambda\cdot g(X_{1}^{*},\ldots,X_{n}^{*}))].\] In other words, we have morally proven the following statement: any upper-tail concentration bound which can be proven for a supermodular function \(g\) under independence based on the exponential-moments method also holds when the underlying random variables are negatively associated. As an example, we can apply1 this to a read-\(k\) family of supermodular functions \(g_{1},\ldots,g_{r}\) for negatively associated random variables [17]. This gives the first concentration results for a class of supermodular functions under negatively correlated random variables, and is detailed in Section 3.3. Footnote 1: The proof of concentration for read-\(k\) families given in Gavinsky et al. [17] doesn’t use the exponential-moments method, so our results don’t immediately apply. In a working manuscript [16], we give a simpler proof of their results, this time using the exponential moments method. We will need to make use of this proof in the paper. Applications.Our motivation for studying the problem comes from the randomized-rounding paradigm in approximation algorithms for converting a fractional solution to a linear program into an integral one. 
In many such randomized-rounding schemes, the output random variables have been shown to satisfy strong negative dependence properties, such as negative association [18, 19]. For all such rounding algorithms, our results immediately imply the submodular Chernoff lower-tail bound. A particularly interesting algorithm is given in the work of Peres, Singh, and Vishnoi [21]; they show that a fractional point in a matroid polytope can be rounded to an integral one such that the resulting distribution preserves marginals and satisfies negative association. Unfortunately, their algorithmic results only guarantee approximate negative association (with small additive error) and is insufficient to apply our results. It remains an interesting open question to efficiently sample negatively dependent distributions for a wider class of set systems. As a concrete application, we consider the maximum coverage problem under group fairness constraints. Here, we have a universe of elements \(\{1,\ldots,n\}\), a collection \(S_{1},\ldots,S_{m}\) of subsets of the universe, and a budget \(k\). We are further given subsets \(C_{1},\ldots,C_{\ell}\subseteq[n]\) (which should be thought of as demographic groups) along with thresholds \(w_{1},\ldots,w_{\ell}\). Our goal is to choose \(k\) sets from the collection to maximize the number of elements covered subject to the fairness constraint each demographic group is sufficiently covered (i.e., at least \(w_{j}\) elements from \(C_{j}\) are covered). Since this is a special case of multiobjective submodular maximization, there exists a \((1-1/e-\epsilon)\)-approximation to the problem such that each fairness constraint is approximately satisfied [12, 13]. Unfortunately, these results rely on the randomized swap rounding algorithm due to its submodular concentration properties, which requires a super-linear time complexity. In contrast, the dependent-rounding algorithm of Srinivasan [10] requires linear work and \(O(\log n)\) depth to implement, which can improve the efficiency dramatically. Observe that the pre-processing step in Udwani [13] only requires \(O(n\ell)\) time. Since we can solve the linear program for fair maximum coverage in near-linear time [1], we obtain a near-linear time algorithm for the problem after using the efficient rounding algorithm of Srinivasan [10]. These same ideas can be used to improve the time complexity of the algorithm by Tsang et al. [11] for influence maximization with group fairness constraints. Since the proofs are similar to previous work, we defer the details to a future version of the paper. More generally, negatively-associated random variables show up naturally in many settings (see e.g., the primer by Wajc [12]). Dubhashi and Ranjan [14] studied the canonical example of _balls and bins_, and showed that it satisfied both negative association and negative regression. Another example satisfying the negative-association conditions are any product measure over the set of bases of a balanced matroid, as shown by Feder and Mihail [10]. A final setting where such random variables occur are random spanning trees, which have been vital in the recent improvements to approximation algorithms for the traveling salesperson problem (see, e.g., [13]). Random spanning trees are known to be strongly Rayleigh, which immediately implies that they are negatively associated. Our results may be interesting here as well. 
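To make the statement of Theorem 1.1 tangible, the following small simulation (our own toy sanity check, not taken from any of the cited works) evaluates the lower tail of a normalized coverage function when the underlying binary variables are made negatively associated by selecting a fixed number of sets uniformly at random without replacement, a standard negatively associated distribution. The instance, the helper names (`coverage`, `multilinear_extension`), and all parameter values are illustrative choices only.

```python
import itertools
import math
import random

# Toy instance: a coverage function over 5 universe elements and 6 sets,
# scaled so that all marginal gains lie in [0, 1] as Theorem 1.1 requires.
SETS = [{0, 1}, {1, 2}, {2, 3}, {3, 4}, {4, 0}, {1, 3}]
n = len(SETS)
SCALE = max(len(s) for s in SETS)  # largest possible marginal gain

def coverage(bits):
    """Normalized coverage of the selected sets (non-negative, monotone, submodular)."""
    covered = set().union(*(SETS[i] for i in range(n) if bits[i]))
    return len(covered) / SCALE

def multilinear_extension(x):
    """F(x) = E[f(X*)] with independent X_i* ~ Bernoulli(x_i), computed exactly."""
    total = 0.0
    for bits in itertools.product((0, 1), repeat=n):
        p = 1.0
        for xi, b in zip(x, bits):
            p *= xi if b else 1.0 - xi
        total += p * coverage(bits)
    return total

# Negatively associated X: select exactly m of the n sets uniformly at random
# (sampling without replacement), so every marginal equals m / n.
m = 3
mu0 = multilinear_extension([m / n] * n)
delta = 0.25

rng = random.Random(0)
trials, hits = 200_000, 0
for _ in range(trials):
    chosen = set(rng.sample(range(n), m))
    bits = [int(i in chosen) for i in range(n)]
    hits += coverage(bits) <= (1 - delta) * mu0

print(f"mu0 = {mu0:.3f}")
print(f"empirical lower tail  = {hits / trials:.4f}")
print(f"exp(-mu0 * delta^2/2) = {math.exp(-mu0 * delta**2 / 2):.4f}")
```

On this instance the empirical lower-tail probability sits comfortably below the \(\exp(-\mu_{0}\delta^{2}/2)\) bound, as the theorem predicts; such a check is of course only a plausibility test, not evidence for the general claim.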
Related Work.The concentration of negatively-dependent random variables was first formally studied by Newman [11], which showed a central limit theorem for a certain notion of negative dependence. Later on, Panconesi and Srinivasan [12] showed that cylinder negatively dependent random variables yield the Chernoff-Hoeffding concentration inequalities, just like independent random variables. In the context of our paper, these results are somewhat specialized since they focus on linear combinations of random variables. For non-linear functions of the random variables, the majority of work has focused on the concentration of Lipschitz functions under various notions of negative dependence. Pemantle and Peres [15] showed that for strong Rayleigh measures, one has Gaussian concentration for any Lipschitz function. Later on, Garbe and Vondrak [14] corrected an earlier proof of Dubhashi and Ranjan [14], showing that McDiarmid-like concentration results hold for Lipschitz functions of random variables satisfying negative regression. These results are complementary to ours since we are trying to give dimension-free concentration results. ## 2 Preliminaries ### Notions of Negative Dependence We begin by defining the notion of negative dependence commonly found in the literature. Negative Cylinder Dependence.A collection of Boolean random variables \(X_{1},\ldots,X_{n}\) is said to be negative cylinder dependent if for every \(S\subseteq[n]\), \[\mathbb{E}\left[\prod_{i\in S}X_{i}\right]\leq\prod_{i\in S}\mathbb{E}\left[X _{i}\right]\] and \[\mathbb{E}\left[\prod_{i\in S}\left(1-X_{i}\right)\right]\leq\prod_{i\in S} \mathbb{E}\left[1-X_{i}\right].\] Negative cylinder dependence is the weaker notion considered here. It is known to imply Chernoff bounds for linear combinations of \(X_{1},\ldots,X_{n}\) but it is insufficient to show our submodular concentration results. Negative Association.A collection of random variables \(X_{1},\ldots,X_{n}\) is said to be negatively associated if for any \(I,J\subset[n],I\cap J=\emptyset\) and any pair of non-decreasing functions \(f:\mathbb{R}^{I}\to\mathbb{R},g:\mathbb{R}^{J}\to\mathbb{R}\), \[\mathbb{E}\left[f\left(X_{I}\right)g\left(X_{J}\right)\right]\leq\mathbb{E} \left[f\left(X_{I}\right)\right]\mathbb{E}\left[g\left(X_{J}\right)\right].\] Here and in the following, \(X_{S}\) refers to those random variables that are indexed by the elements in \(S\), \(X_{S}=\{X_{i}\ :\ i\in S\}\). Negative association is a significant strengthening of negative cylinder dependence, and has many additional useful closure properties. This will be one of the focuses of the paper. Negative Regression.A collection of random variables \(X_{1},\ldots,X_{n}\) is said to satisfy negative regression, if for any \(I,J\subset[n],I\cap J=\emptyset\), any non-decreasing function \(f:\mathbb{R}^{I}\to\mathbb{R}\) and \(a\leq b\in\mathbb{R}^{J}\), \[\mathbb{E}\left[f\left(X_{I}\right)\mid X_{J}=a\right]\geq\mathbb{E}\left[f \left(X_{I}\right)\mid X_{J}=b\right].\] Negative regression is a strengthening of negative cylinder dependence, but its relationship with negative association is not yet well understood. It is known that negative association doesn't imply negative regression [10], but the opposite implication is not known. This will be the other focus of the paper. 
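As a concrete (and deliberately tiny) illustration of the negative-association definition above, the following sketch exhaustively checks the inequality \(\mathbb{E}[f(X_{I})g(X_{J})]\leq\mathbb{E}[f(X_{I})]\mathbb{E}[g(X_{J})]\) for the classical one-ball/\(n\)-bins distribution, restricting attention to \(0/1\)-valued monotone test functions. The distribution, the index sets, and the helper names are our own choices, and the restriction to Boolean test functions makes this only a partial check of the definition.

```python
import itertools

# One ball thrown uniformly into n bins: exactly one indicator X_i equals 1.
# This is a textbook example of negatively associated binary random variables.
n = 4
support = [tuple(int(j == i) for j in range(n)) for i in range(n)]  # each outcome w.p. 1/n

def expect(h):
    """Expectation of h(X) under the uniform distribution on `support`."""
    return sum(h(x) for x in support) / len(support)

def monotone_bool_fns(k):
    """All non-decreasing {0,1}-valued functions on {0,1}^k, as lookup tables."""
    points = list(itertools.product((0, 1), repeat=k))
    for values in itertools.product((0, 1), repeat=len(points)):
        table = dict(zip(points, values))
        if all(table[a] <= table[b]
               for a in points for b in points
               if all(ai <= bi for ai, bi in zip(a, b))):
            yield table

I, J = (0, 1), (2, 3)  # disjoint index sets
violations = 0
for ft in monotone_bool_fns(len(I)):
    for gt in monotone_bool_fns(len(J)):
        f = lambda x, t=ft: t[tuple(x[i] for i in I)]
        g = lambda x, t=gt: t[tuple(x[j] for j in J)]
        if expect(lambda x: f(x) * g(x)) > expect(f) * expect(g) + 1e-12:
            violations += 1

print("violations of the negative-association inequality:", violations)  # expect 0
```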
Strong Rayleigh. A collection of random variables \(X_{1},\ldots,X_{n}\) is said to satisfy the strong Rayleigh property if the generating function \[F(z_{1},\ldots,z_{n})=\mathbb{E}[\prod_{j=1}^{n}z_{j}^{X_{j}}]\] is a real stable polynomial (i.e., it has no root \((z_{1},\ldots,z_{n})\in\mathbb{C}^{n}\) with all positive imaginary components). The strong Rayleigh property is the strongest notion of negative dependence, and has been shown to imply all other studied negative dependence definitions [1]. As a result, all of our results apply here as well. ### Submodular Functions We also give a quick review of the basics of submodular functions. Submodular Functions. We say that a function \(f:\{0,1\}^{n}\to\mathbb{R}\) is submodular if \[f(X_{1},\ldots,X_{i-1},1,X_{i+1},\ldots,X_{n})-f(X_{1},\ldots,X_{i-1},0,X_{i+1},\ldots,X_{n})\] is a non-increasing function of \(X_{1},\ldots,X_{i-1},X_{i+1},\ldots,X_{n}\) for each \(i\in[n]\). When viewing the binary input of \(f\) as the indicator vector for a set, this is equivalent to the more common definition that \(f\) is submodular if for any \(X,Y\subseteq[n]\) with \(X\subseteq Y\) and any \(x\not\in Y\), we have \[f(X\cup\{x\})-f(X)\geq f(Y\cup\{x\})-f(Y).\] Supermodular Functions. We say that a function \(g:\{0,1\}^{n}\to\mathbb{R}\) is supermodular if \[g(X_{1},\ldots,X_{i-1},1,X_{i+1},\ldots,X_{n})-g(X_{1},\ldots,X_{i-1},0,X_{i+1},\ldots,X_{n})\] is a non-decreasing function of \(X_{1},\ldots,X_{i-1},X_{i+1},\ldots,X_{n}\) for each \(i\in[n]\). When viewing the binary input of \(g\) as the indicator vector for a set, this is equivalent to the more common definition that \(g\) is supermodular if for any \(X,Y\subseteq[n]\) with \(X\subseteq Y\) and any \(x\not\in Y\), we have \[g(X\cup\{x\})-g(X)\leq g(Y\cup\{x\})-g(Y).\] Multilinear Extension. The multilinear extension of a function \(f\) is \[F(x)=\sum_{S\subseteq[n]}f(S)\prod_{i\in S}x_{i}\prod_{i\notin S}(1-x_{i}),\] for \(x\in[0,1]^{n}\). If we view \(x\) as a probability vector, the multilinear extension \(F\) is simply the expected value of \(f\) when each coordinate is rounded independently in \(\{0,1\}\). ## 3 Submodular Chernoff Bounds ### 1-Negative Association and Weak Negative Regression We first define the weaker notion of negative dependence which we work with, called 1-negative association, and prove some simple properties about it. We also define a related notion of weak negative regression, which is the analogue of 1-negative association for the notion of negative regression; we show the equivalence between the two for binary random variables and show that weak negative regression is strictly stronger in general. After an initial draft, we discovered that Qiu and Singla [12] had already introduced the notion of weak negative regression for binary random variables in a context complementary to ours. Using their work, we can immediately show nice properties about 1-negative association.
**Definition 3.1**.: _A collection of random variables \(X_{1},\ldots,X_{n}\) is said to satisfy 1-negative association if for any two monotone functions \(f\) and \(g\), where \(g\) depends on a single random variable \(X_{i}\) and \(f\) depends on the remaining random variables \(\{X_{j}\}_{j\in[n]\setminus\{i\}}\), we have \(\mathbb{E}[fg]\leq\mathbb{E}[f]\mathbb{E}[g]\)._ **Definition 3.2**.: _A collection of random variables \(X_{1},\ldots,X_{n}\) is said to satisfy weak negative regression if for any index \(i\) and any monotone function \(f\) depending on the remaining random variables \(\{X_{j}\}_{j\in[n]\setminus\{i\}}\), we have \(\mathbb{E}[f|X_{i}=b]\leq\mathbb{E}[f|X_{i}=a]\) for all \(a\leq b\)._ In the following lemmata, we show that weak negative regression implies 1-negative association in general. We then show that the reverse implication holds for binary random variables, but give an example showing that it does not hold in general. **Claim 3.3**.: _If a collection of random variables \(X_{1},\ldots,X_{n}\) satisfies weak negative regression, then it satisfies 1-negative association._ Proof.: Assume \(X_{1},\ldots,X_{n}\) satisfy weak negative regression; we will prove that it also satisfies 1-negative association. Let \(f\) and \(g\) be monotone functions such that \(f\) depends on \(X_{I}\) for some subset \(I\subseteq[n]\) and \(g\) depends on \(X_{i}\) for \(i\not\in I\). Without loss of generality, let us assume that \(f\) and \(g\) are non-decreasing. Let random variables \(Y_{1},\ldots,Y_{n}\) be an independent copy of \(X_{1},\ldots,X_{n}\) which has the same joint distribution and consider the expression \[\mathbb{E}\big{[}[f(X_{I})-f(Y_{I})]\cdot[g(X_{i})-g(Y_{i})]\big{]}.\] On one hand, we can use the fact that \(X\) and \(Y\) are independent and have the same joint distribution to expand the expression and see that: \[\mathbb{E}\big{[}[f(X_{I})-f(Y_{I})]\cdot[g(X_{i})-g(Y_{i})]\big{]}=2\mathbb{ E}[f(X_{I})g(X_{i})]-2\mathbb{E}[f(X_{I})]\mathbb{E}[g(X_{i})].\] On the other hand, observe that we have \[\mathbb{E}\big{[}\mathbb{E}[f(X_{I})-f(Y_{I})|X_{i},Y_{i}]\big{|}X_{i}\leq Y _{i}\big{]}\geq 0\quad\text{ and }\quad\mathbb{E}\big{[}\mathbb{E}[f(X_{I})-f(Y_{I})|X_{i},Y_{i}] \big{|}X_{i}>Y_{i}\big{]}\leq 0 \tag{1}\] by definition of weak negative regression and since \(X,Y\) have the same joint distributions. Using the above inequalities and the fact that \(g\) is a non-decreasing function, this implies that \[A_{1}=\mathbb{E}\big{[}\mathbb{E}[(f(X_{I})-f(Y_{I}))\cdot(g(X_{i})-g(Y_{i}) )|X_{i},Y_{i}]\big{|}X_{i}\leq Y_{i}\big{]}\leq 0\] and \[A_{2}=\mathbb{E}\big{[}\mathbb{E}[(f(X_{I})-f(Y_{I}))\cdot(g(X_{i})-g(Y_{i}) )|X_{i},Y_{i}]\big{|}X_{i}>Y_{i}\big{]}\leq 0.\] As a result, we can expand using the law of total expectations to obtain \[\mathbb{E}\big{[}[f(X_{I})-f(Y_{I})]\cdot[g(X_{i})-g(Y_{i})]\big{]}=\Pr[X_{i} \leq Y_{i}]\cdot A_{1}+\Pr[X_{i}>Y_{i}]\cdot A_{2}\leq 0. \tag{2}\] Combining the equality in Equation 1 with the inequality in Equation 2 and rearranging gives the desired inequality to conclude that \(X_{1},\ldots,X_{n}\) are 1-negatively associated. 
**Claim 3.4**.: _If a collection of binary random variables \(X_{1},\ldots,X_{n}\) satisfies 1-negative association, then it satisfies weak negative regression._ Proof.: Recall that we wish to prove that for any non-decreasing function \(f\) depending on some subset \(I\subseteq[n]\) and any \(i\not\in I\), we have that \[\mathbb{E}[f(X_{I})|X_{i}=0]\geq\mathbb{E}[f(X_{I})|X_{i}=1].\] By the definition of 1-negative association, we obtain that for any monotone functions \(g\) which depends on \(X_{i}\), the following inequality holds: \[\mathbb{E}[f(X_{I})g(X_{i})]\leq\mathbb{E}[f(X_{I})]\cdot\mathbb{E}[g(X_{i})]. \tag{3}\] Without loss of generality, we may assume that \(\mathbb{E}[f(X_{I})|X_{i}=1]=0\) by shifting \(f\) by a constant. By choosing \(g\) to be the identity function, we can apply the law of total probability to obtain \[\mathbb{E}[f(X_{I})\cdot g(X_{i})]=\Pr[X_{i}=0]\cdot\mathbb{E}[f(X_{I})|X_{i} =0]\cdot g(0)+\Pr[X_{i}=1]\cdot\mathbb{E}[f(X_{I})|X_{i}=1]\cdot g(1)=0.\] Plugging this into Equation 3, we obtain \[0\leq\mathbb{E}[f(X_{I})]\cdot\mathbb{E}[g(X_{i})]=\mathbb{E}[f(X_{I})|X_{i} =0]\Pr[X_{i}=0]\cdot\mathbb{E}[X_{i}],\] again by the law of total probability. Since \(\mathbb{E}[X_{i}]>0\) and \(\Pr[X_{i}=0]>0\), this implies that \[\mathbb{E}[f(X_{I})|X_{i}=0]\geq 0,\] which concludes the proof since \(\mathbb{E}[f(X_{I})|X_{i}=1]=0\). **Claim 3.5**.: _There exists (non-binary) distributions over 2 random variables which satisfy 1-negative association but not weak negative regression. In other words, 1-negative association is strictly more general than weak negative regression for non-binary random variables._ Proof.: Let's first discuss the intuition for the construction of the counterexample. One can show via algebra that \(\mathbb{E}[f(X_{I})g(X_{i})]-\mathbb{E}[f(X_{I})]\mathbb{E}[g(X_{i})]\) can be expanded as the following expression: \[\sum_{x<y}\Pr[X_{i}=x]\Pr[X_{i}=y]\cdot\big{(}\mathbb{E}[f(X_{I})|X_{i}=x]- \mathbb{E}[f(X_{I})|X_{i}=y]\big{)}\big{(}g(x)-g(y)\big{)},\] where the summation is over the values that \(X_{i}\) takes with non-zero probability. Now, suppose that for some values \(x<y\) we have that \[\mathbb{E}[f(X_{1},\ldots,X_{n})|X_{i}=x]-\mathbb{E}[f(X_{1},\ldots,X_{n})|X_ {i}=y]<0,\] violating the weak negative regression property. This would imply that some of the summands are positive (since \(g(x)<g(y)\) by monotonicity). In the case of binary random variables, the summation would only consist of a single summand so 1-negative association would be violated. For general random variables, the summation consists of multiple terms so the summation may still be negative even when a single summand is positive. Consequently, the random variables may still satisfy 1-negative association. We now give the example. Consider the random variables \((X_{1},X_{2})\) which are uniformly distributed on their support set \(\{(0,3),(1,1),(2,2),(3,0)\}\). 
By considering the identity function \(\mathds{1}_{x}:\{0,1,2,3\}\to\{0,1,2,3\}\), we can show that the distribution of \((X_{1},X_{2})\) does not satisfy weak negative regression: \[\mathbb{E}[\mathds{1}_{x}(X_{2})|X_{1}=1]=\mathds{1}_{x}(1)=1<2=\mathds{1}_{x}(2)=\mathbb{E}[\mathds{1}_{x}(X_{2})|X_{1}=2].\] However, for any pair of non-decreasing functions \(f,g:\{0,1,2,3\}\to\mathbb{R}\), we have \[\mathbb{E}[f(X_{1})]\mathbb{E}[g(X_{2})]-\mathbb{E}[f(X_{1})g(X_{2})]=\frac{f(1)+f(2)+f(3)}{4}\cdot\frac{g(1)+g(2)+g(3)}{4}-\frac{f(1)g(1)+f(2)g(2)}{4},\] where we have again assumed without loss of generality that \(f(0)=g(0)=0\). We claim that the quantity on the right hand side is always non-negative. In order to see this, observe that \(f(2)g(2)\leq f(i)g(j)\) for any \(i,j\geq 2\) by monotonicity. As a result, we have \[\frac{f(2)g(2)}{4}\leq\frac{(f(2)+f(3))(g(2)+g(3))}{16}.\] Further, we observe that \(f(1)g(1)\leq f(i)g(j)\) for any \(i,j\geq 1\) by monotonicity. As a result, we have \[\frac{f(1)g(1)}{4}\leq\frac{f(1)(g(2)+g(3))+g(1)(f(2)+f(3))}{16}.\] Combining these two inequalities and observing that \(f(1)g(1)\geq 0\) by monotonicity immediately implies our desired result. Hence, the distribution is 1-negatively associated. Since 1-negative association and weak negative regression are equivalent for binary random variables, and weak negative regression has been shown to be strictly stronger than cylinder negative dependence [14, Proposition 2.4], we also have that 1-negative association is strictly stronger than cylinder negative dependence. Additionally, since weak negative regression is strictly stronger than 1-negative association for general random variables and weak negative regression has been shown to be strictly weaker than negative association and negative regression [14, Proposition 2.4], we have that 1-negative association is strictly weaker than negative association and negative regression. We summarize these in the following corollaries. **Corollary 3.6**.: _1-negative association is a strictly weaker condition than negative association._ **Corollary 3.7**.: _1-negative association is a strictly weaker condition than negative regression._ **Corollary 3.8**.: _1-negative association is a strictly stronger condition than negative cylinder dependence._ ### Proof of Submodular Concentration We will now prove our main result. As mentioned in the introduction, our proof is based on the standard technique of bounding the exponential moments. The following lemma contains our main technical contribution, stating that the exponential moments of \(f(X_{1},\ldots,X_{n})\) under 1-negative association are dominated by those under independence. Our results will follow easily afterwards. **Lemma 3.9**.: _Let \(X_{1},\ldots,X_{n}\) be 1-negatively associated random variables and let \(X_{1}^{*},\ldots,X_{n}^{*}\) be independent random variables with the same marginal distributions. Also let \(f\) be a non-negative monotone function._ * _If_ \(f\) _is a submodular function and_ \(\lambda<0\)_, we have_ \(\mathbb{E}[\exp(\lambda f(X_{1},\ldots,X_{n}))]\leq\mathbb{E}[\exp(\lambda f(X_{1}^{*},\ldots,X_{n}^{*}))]\)_._ * _If_ \(f\) _is a supermodular function and_ \(\lambda>0\)_, we have_ \(\mathbb{E}[\exp(\lambda f(X_{1},\ldots,X_{n}))]\leq\mathbb{E}[\exp(\lambda f(X_{1}^{*},\ldots,X_{n}^{*}))]\)_._ Proof.: Fix \(\lambda<0\) if \(f\) is submodular and \(\lambda>0\) if \(f\) is supermodular.
Observe that in order to prove the lemma, it suffices to prove \[\mathbb{E}[\exp(\lambda\cdot f(X_{1},\ldots,X_{i},\ldots,X_{m}))]\leq\mathbb{ E}[\exp(\lambda\cdot f(X_{1},\ldots,X_{i}^{*},\ldots,X_{m}))], \tag{4}\] since we can iteratively apply the above inequality to each \(X_{i}\) (note that we can do this because independent variables are also negatively associated). For simplicity of notation and without loss of generality, we will prove the inequality for \(i=1\). By considering the cases of \(X_{1}=0\) and \(X_{1}=1\) separately, we have \[\exp(\lambda\cdot f(X_{1},\ldots,X_{n}))=X_{1}\cdot\exp(\lambda\cdot f(1,X_{2 },\ldots,X_{n})))+(1-X_{1})\cdot\exp(\lambda\cdot f(0,X_{2},\ldots,X_{n}))),\] where the equality holds pointwise on the underlying probability space. Via simple algebraic manipulations, we can further rewrite the above as \[X_{1}\cdot[\exp(\lambda\cdot f(1,X_{2},\ldots,X_{n})))-\exp(\lambda\cdot f(0, X_{2},\ldots,X_{n}))]+\exp(\lambda\cdot f(0,X_{2},\ldots,X_{n}))\] Taking expectations, we now have that \(\mathbb{E}[\exp(\lambda f(X_{1},\ldots,X_{n}))]\) can be written as \[\mathbb{E}\big{[}X_{1}\cdot[\exp(\lambda\cdot f(1,X_{2},\ldots,X_{n})))-\exp( \lambda\cdot f(0,X_{2},\ldots,X_{n}))]\,\big{]}+\mathbb{E}\big{[}\exp(\lambda \cdot f(0,X_{2},\ldots,X_{n}))\big{]} \tag{5}\] Observe that \(X_{1}\) is clearly an increasing function of \(X_{1}\). We claim that if either \((i)\)\(f\) is submodular and \(\lambda<0\) or \((ii)\)\(f\) is supermodular and \(\lambda>0\), we have that \(\exp(\lambda\cdot f(1,X_{2},\ldots,X_{n})))-\exp(\lambda\cdot f(0,X_{2}, \ldots,X_{n}))\) is an increasing function in \(X_{2},\ldots,X_{n}\). Indeed, we first rewrite the function as \[\exp(\lambda\cdot f(0,X_{2},\ldots,X_{n}))\cdot[\exp(\lambda\cdot(f(1,X_{2}, \ldots,X_{n})-f(0,X_{2},\ldots,X_{n})))-1]\eqqcolon A_{1}\cdot A_{2}\] for simplicity of notation. Let us first consider the case when \(\lambda<0\) and \(f\) is submodular. We have that \(A_{1}\) is \((i)\) positive because the exponential function is always positive and \((ii)\) non-increasing in \(X_{2},\ldots,X_{n}\) because \(f\) is non-decreasing and \(\lambda<0\). We also have that \(A_{2}\) is \((i)\) negative because the argument in \(\exp(\cdot)\) is negative, so the exponential is in \((0,1)\)\((ii)\) non-decreasing \(\lambda<0\) and the difference of \(f\) evaluated at \(X_{1}=1\) and \(X_{1}=0\) is non-increasing by definition of submodularity. Hence, our expression of interest is the product of a function \(A_{1}\) which decreases towards \(0\) and a function \(A_{2}\) which increases towards \(0\). The product will be negative and monotonically increasing towards \(0\). Now, let us consider the case when \(\lambda>0\) and \(f\) is supermodular. We have that \(A_{1}\) is \((i)\) positive because the exponential function is always positive and \((ii)\) non-decreasing since \(\lambda>0\), \(f\) is monotone, and \(\exp(\cdot)\) is also monotone. We also have that \(A_{2}\) is \((i)\) positive because the argument of \(\exp(\cdot)\) is positive since \(f\) is monotone so the exponential is greater than \(1\) and \((ii)\) non-decreasing since \(\lambda>0\) and the difference of \(f\) evaluated at \(X_{1}=1\) and \(X_{1}=0\) is non-decreasing by definition of supermodularity. As a result, the product will be positive and non-decreasing, as desired. 
Since we have shown that the \(A_{1}A_{2}\) is also monotone, we now have that the first term in Equation 5 can be written as the product of monotone functions of disjoint subsets, one of which is the singleton set. By \(1\)-negative association, we have that the first term is upper bounded by \[\mathbb{E}[X_{1}]\cdot\mathbb{E}[\exp(\lambda\cdot f(1,X_{2},\ldots,X_{n}))- \exp(\lambda\cdot f(0,X_{2},\ldots,X_{n}))].\] Consequently, the entire expression in (5) is upper bounded by \[\mathbb{E}[X_{1}]\cdot\mathbb{E}[\exp(\lambda\cdot f(1,X_{2},\ldots,X_{n}))- \exp(\lambda\cdot f(0,X_{2},\ldots,X_{n}))]+\mathbb{E}[\exp(\lambda\cdot f(0, X_{2},\ldots,X_{n}))].\] Since \(X_{1}\) and \(X_{1}^{*}\) have the same marginal distributions, the above is exactly equal to \[\mathbb{E}[X_{1}^{*}]\cdot\mathbb{E}[\exp(\lambda\cdot f(1,X_{2},\ldots,X_{n}) )-\exp(\lambda\cdot f(0,X_{2},\ldots,X_{n}))]+\mathbb{E}[\exp(\lambda\cdot f( 0,X_{2},\ldots,X_{n}))].\] And since \(X_{1}^{*}\) is independent with \(X_{2},\ldots,X_{m}\) by assumption, the above is equal to \[\mathbb{E}[X_{1}^{*}\cdot\exp(\lambda\cdot f(1,X_{2},\ldots,X_{n}))-\exp( \lambda\cdot f(0,X_{2},\ldots,X_{n}))]+\mathbb{E}[\exp(\lambda\cdot f(0,X_{2}, \ldots,X_{n}))].\] In particular, observe that this is in the exact same form as Equation 5, except with \(X_{1}\) replaced with \(X_{1}^{*}\). Note that when we transformed the left-hand side of Equation 4 to Equation 5, we never used any properties of the random variables \(X_{1},\ldots,X_{n}\) (or even the function \(f\), other than it being a Boolean function). As a result, we can reverse the direction of all of the equalities to show that the above expression is equal to \[\mathbb{E}[\exp(\lambda\cdot f(X_{1}^{*},X_{2},\ldots,X_{n}))],\] which completes the proof of the lemma. Now, we will complete the proof of our main result. Combining the theorem below with Claims 3.6 and 3.7 immediately gives a proof of Theorem 1.1. Here, our proof will rely heavily on the proof of the Chernoff bound for submodular functions under independence given in Chekuri, Vondrak, and Zenklusen [2]. **Theorem 3.10**.: _Let \(X_{1},\ldots,X_{n}\) be binary random variables with mean \(x_{1},\ldots,x_{n}\) satisfying 1-negative association. Let \(f\) be a non-negative monotone submodular function with marginal values in \([0,1]\) and let \(F\) be the multilinear extension of \(f\). If we let, \(\mu_{0}=F(x_{1},\ldots,x_{n})\), then we have the following:_ \[\Pr[f(X_{1},\ldots,X_{n})\leq(1-\delta)\cdot\mu_{0}]\leq\exp(-\mu_{0}\delta^{2 }/2).\] Proof.: Let \(X_{1}^{*},\ldots,X_{n}^{*}\) be independent random variables with the same respective marginals as \(X_{1},\ldots,X_{n}\) and let \(\lambda<0\) be a parameter to be set later. Let us decompose \(f(X_{1}^{*},\ldots,X_{n}^{*})=\sum_{i=1}^{n}Y_{i}^{*}\), where \[Y_{i}^{*}=f(X_{1}^{*},\ldots,X_{i}^{*},0,\ldots,0)-f(X_{1}^{*},\ldots,X_{i-1}^ {*},0,\ldots,0).\] Let us denote \(\mathbb{E}[Y_{i}^{*}]=\omega_{i}\) and \(\mu_{0}=\sum_{i=1}^{n}\omega_{i}=\mathbb{E}[f(X_{1}^{*},\ldots,X_{n}^{*})]\). 
By the convexity of the exponential and the fact that \(Y_{i}^{*}\in[0,1]\), we have that \[\mathbb{E}[\exp(\lambda\cdot Y_{i}^{*})]\leq\omega_{i}\cdot\exp(\lambda)+(1-\omega_{i})=1+[\exp(\lambda)-1]\cdot\omega_{i}\leq\exp[(\exp(\lambda)-1)\cdot\omega_{i}].\] Combining the above with Lemma C.1 from Chekuri, Vondrak, and Zenklusen [10], we have that \[\mathbb{E}[\exp(\lambda\cdot f(X_{1}^{*},\ldots,X_{n}^{*}))]=\mathbb{E}[\exp(\lambda\cdot\sum_{i=1}^{n}Y_{i}^{*})]\leq\prod_{i=1}^{n}\mathbb{E}[\exp(\lambda\cdot Y_{i}^{*})]\leq\exp[(\exp(\lambda)-1)\cdot\mu_{0}]. \tag{6}\] Now, we can follow the proof of the standard Chernoff bound: \[\Pr[f(X_{1},\ldots,X_{n})\leq(1-\delta)\cdot\mu_{0}] =\Pr[\exp(\lambda\cdot f(X_{1},\ldots,X_{n}))\geq\exp(\lambda(1-\delta)\cdot\mu_{0})]\] \[\leq\frac{\mathbb{E}[\exp(\lambda\cdot f(X_{1},\ldots,X_{n}))]}{\exp(\lambda(1-\delta)\cdot\mu_{0})}\] \[\leq\frac{\mathbb{E}[\exp(\lambda\cdot f(X_{1}^{*},\ldots,X_{n}^{*}))]}{\exp(\lambda(1-\delta)\cdot\mu_{0})}\] \[\leq\frac{\exp[(\exp(\lambda)-1)\cdot\mu_{0}]}{\exp(\lambda(1-\delta)\cdot\mu_{0})}\] The first equality follows since \(\exp(\lambda\cdot x)\) is a decreasing function of \(x\) for \(\lambda<0\), the first inequality follows by Markov's inequality, the second inequality follows by Lemma 3.9, and the final inequality follows from Equation 6. Finally, we can choose \(\lambda\) such that \(\exp(\lambda)=1-\delta\), which gives \[\Pr[f(X_{1},\ldots,X_{n})\leq(1-\delta)\mu_{0}]\leq\frac{\exp(-\delta\mu_{0})}{(1-\delta)^{(1-\delta)\mu_{0}}}\leq\exp(-\mu_{0}\cdot\delta^{2}/2),\] where we used \((1-\delta)^{1-\delta}\geq\exp(-\delta+\delta^{2}/2)\) for \(\delta\in(0,1]\) in the final inequality. ### Proof of Supermodular Concentration In this subsection, we illustrate an application of our proof technique to give concentration for a read-\(k\) family of supermodular functions. Read-\(k\) families arise naturally in problems such as subgraph counting in random graphs, and can be seen as a complementary weak dependence notion to that of low-degree polynomials [10]. Our work gives the first concentration results for these problems under negative dependence. Let us consider the notion of weak dependence defined in Gavinsky et al. [1]. Let \(Y_{1},\ldots,Y_{n}\) be random variables and assume that they can be factored as functions of random variables \(X_{1},\ldots,X_{m}\). We say that \(Y_{1},\ldots,Y_{n}\) are a read-\(k\) family of \(X_{1},\ldots,X_{m}\) if there exists a factorization such that each \(X_{i}\) influences at most \(k\) of the variables \(Y_{j}\). Formally, we have the following. **Definition 3.11**.: _Let \(X_{1},\ldots,X_{m}\) be random variables. For each \(j\in[n]\), let \(P_{j}\subseteq[m]\) and let \(f_{j}:\{0,1\}^{P_{j}}\to[0,1]\) be functions of \(X_{P_{j}}\). We say that \(Y_{j}=f_{j}(X_{P_{j}})\) are a read-\(k\) family if \(|\{j:i\in P_{j}\}|\leq k\) for each \(i\in[m]\) (i.e., each variable \(X_{i}\) influences at most \(k\) functions)._ When \(X_{1},\ldots,X_{m}\) are independent, Gavinsky et al. [15] showed that we have \[\Pr[\sum_{j=1}^{n}f_{j}(X_{P_{j}})\leq(p-\epsilon)n]\leq\exp(-D(p-\epsilon||p)\cdot n/k),\] where \(p=(1/n)\sum_{j=1}^{n}\mathbb{E}[Y_{j}]\) and \(D(\cdot||\cdot)\) is the Kullback-Leibler divergence. For supermodular functions \(f_{j}\), we will show that this inequality still holds.
**Theorem 3.12**.: _Let \(X_{1},\ldots,X_{m}\) be \(1\)-negatively associated random variables and let \(X_{1}^{*},\ldots,X_{m}^{*}\) be independent random variables with the same respective marginal distributions as \(X_{1},\ldots,X_{m}\). Suppose that \(f_{j}(X_{P_{j}})\) for \(j\in[n]\) are a read-\(k\) family, where \(f_{j}\) are supermodular functions. If we let \(p_{0}=(1/n)\sum_{j=1}^{n}\mathbb{E}[f_{j}(X_{P_{j}}^{*})]\) denote the expectation when the underlying random variables are independent, we have_ \[\Pr[\sum_{j=1}^{n}f_{j}(X_{P_{j}})\leq(p_{0}-\epsilon)n]\leq\exp(-D(p_{0}- \epsilon||p_{0})\cdot n/k),\] Proof.: Let \(f(X_{1},\ldots,X_{m})=\sum_{j=1}^{n}f_{j}(X_{P_{j}})\) be the quantity of interest, and note that \(f\) is the sum of supermodular functions so it is supermodular as well. We will follow the standard proof via exponential moments. Let \(\lambda>0\); we have \[\Pr[f(X_{1},\ldots,X_{m})\leq(p_{0}-\epsilon)n] =\Pr[\exp(\lambda\cdot f(X_{1},\ldots,X_{m}))\leq\exp(\lambda \cdot(p_{0}-\epsilon)n)] \tag{7}\] \[\leq\mathbb{E}[\exp(\lambda\cdot f(X_{1},\ldots,X_{m}))]/\exp( \lambda\cdot(p_{0}-\epsilon)n), \tag{8}\] where the inequality follows by Markov's. Since \(f\) is supermodular, we have by Lemma 3.9 that \[\mathbb{E}[\exp(\lambda\cdot f(X_{1},\ldots,X_{m}))]\leq\mathbb{E}[\exp( \lambda\cdot f(X_{1}^{*},\ldots,X_{m}^{*}))]=\mathbb{E}[\exp(\lambda\cdot\sum _{j=1}^{n}f_{j}(X_{P_{j}}^{*}))]. \tag{9}\] In a working manuscript by a subset of the authors [14], we show that \[\mathbb{E}[\exp(\lambda\cdot\sum_{j=1}^{n}f_{j}(X_{P_{j}}^{*}))]\leq\left( \prod_{j=1}^{n}\mathbb{E}[\exp(\lambda\cdot f_{j}(X_{P_{j}}^{*}))^{k}]\right) ^{1/k}. \tag{10}\] Combining equations 7-10, we have \[\Pr[f(X_{1},\ldots,X_{m})\leq(p_{0}-\epsilon)n]\leq\left(\prod_{j=1}^{n} \mathbb{E}[\exp(k\lambda\cdot f_{j}(X_{P_{j}}^{*}))/\exp(k\lambda(p_{0}- \epsilon)n]\right)^{1/k}.\] Let \(\lambda^{\prime}=k\lambda\); since \(\lambda>0\) is a parameter we set, we can view \(\lambda^{\prime}>0\) as a parameter as well. We will abuse notation and replace \(\lambda^{\prime}\) with \(\lambda\), so we have \[\Pr[f(X_{1},\ldots,X_{m})\leq(p_{0}-\epsilon)n]\leq\left(\prod_{j=1}^{n} \mathbb{E}[\exp(\lambda\cdot f_{j}(X_{P_{j}}^{*}))/\exp(\lambda(p_{0}-\epsilon )n]\right)^{1/k},\] for any \(\lambda>0\). Now, observe that the right hand side of the inequality is the exact same as in the proof of the standard Chernoff bound under independence, except with an additional exponent \(1/k\). As a result, we can follow the original proof of the Chernoff bound to show that \[\Pr[\sum_{j=1}^{n}f_{j}(X_{P_{j}})\leq(p_{0}-\epsilon)n]\leq\exp(-D(p_{0}- \epsilon||p_{0})\cdot n/k),\] which was our desired result.
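To make Definition 3.11 and Theorem 3.12 concrete, here is a small self-contained sketch (our own toy construction, not taken from the paper): the edge indicators of \(K_{5}\) are sampled without replacement so that they are negatively associated, and one supermodular function is attached to each triangle (a product of binary variables is supermodular with values in \([0,1]\)); since every edge of \(K_{5}\) lies in exactly three triangles, this is a read-\(3\) family. The Monte Carlo estimate below is only a plausibility check of the stated bound; all numerical choices are ours.

```python
import itertools
import math
import random

# Toy read-k family: vertices of K5, one supermodular function per triangle,
# namely the product of its three edge indicators.  Every edge of K5 belongs
# to exactly 3 triangles, so the family is read-3.
V = range(5)
EDGES = list(itertools.combinations(V, 2))                                   # 10 edges
TRIANGLES = [list(itertools.combinations(t, 2)) for t in itertools.combinations(V, 3)]
k = 3

m = 6                                    # edges kept, sampled without replacement
q = m / len(EDGES)                       # common marginal of each edge indicator
n_tri = len(TRIANGLES)
p0 = q ** 3                              # E[f_j] if the edges were independent
eps = 0.1

def bernoulli_kl(a, b):
    """Kullback-Leibler divergence D(a || b) between Bernoulli(a) and Bernoulli(b)."""
    return a * math.log(a / b) + (1 - a) * math.log((1 - a) / (1 - b))

bound = math.exp(-bernoulli_kl(p0 - eps, p0) * n_tri / k)

rng = random.Random(1)
trials, hits = 100_000, 0
for _ in range(trials):
    kept = set(rng.sample(EDGES, m))     # negatively associated edge indicators
    total = sum(all(e in kept for e in tri) for tri in TRIANGLES)
    hits += total <= (p0 - eps) * n_tri

print(f"empirical lower tail = {hits / trials:.3f}   read-k bound = {bound:.3f}")
```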
2309.15747
Differentiable Machine Learning-Based Modeling for Directly-Modulated Lasers
End-to-end learning has become a popular method for joint transmitter and receiver optimization in optical communication systems. Such approach may require a differentiable channel model, thus hindering the optimization of links based on directly modulated lasers (DMLs). This is due to the DML behavior in the large-signal regime, for which no analytical solution is available. In this paper, this problem is addressed by developing and comparing differentiable machine learning-based surrogate models. The models are quantitatively assessed in terms of root mean square error and training/testing time. Once the models are trained, the surrogates are then tested in a numerical equalization setup, resembling a practical end-to-end scenario. Based on the numerical investigation conducted, the convolutional attention transformer is shown to outperform the other models considered.
Sergio Hernandez, Ognjen Jovanovic, Christophe Peucheret, Francesco Da Ros, Darko Zibar
2023-09-27T16:02:32Z
http://arxiv.org/abs/2309.15747v2
# Differentiable Machine Learning-Based Modeling ###### Abstract End-to-end learning has become a popular method for joint transmitter and receiver optimization in optical communication systems. Such approach may require a differentiable channel model, thus hindering the optimization of links based on directly modulated lasers (DMLs). This is due to the DML behavior in the large-signal regime, for which no analytical solution is available. In this paper, this problem is addressed by developing and comparing differentiable machine learning-based surrogate models. The models are quantitatively assessed in terms of root mean square error and training/testing time. Once the models are trained, the surrogates are then tested in a numerical equalization setup, resembling a practical end-to-end scenario. Based on the numerical investigation conducted, the convolutional attention transformer is shown to outperform the other models considered. Optical communication, machine learning, directly modulated laser, transformer, modeling ## I Introduction Directly-modulated lasers (DMLs) play a crucial role as part of intensity-modulation and direct-detection (IM/DD) systems in short-reach communication links. Due to their inherent simplicity, DMLs have the potential to achieve efficiency gains in both power consumption and cost-effectiveness compared to alternative transmitter technologies [1]. However, their modulation bandwidth (around 30 GHz at 25 \({}^{\circ}\)C) limits the symbol rate of commercial DMLs to the 50 Gbaud range, making them a less compelling option as Ethernet throughput requirements increase [2]. Apart from the ever-present phase and intensity noise, effects such as waveform distortion or frequency chirping dominate when pushing their modulation rate, hindering their potential in terms of transmission distance and data throughput. One can benefit from an increased modulation bandwidth by driving the laser with higher current values, at the cost of a lower extinction ratio and degraded sensitivity. Finding an optimal balance between signal degradation mechanisms is therefore a complex task. This has lead to the investigation of different mitigation techniques to increase the symbol rate while maintaining a sufficient signal quality. Equalization (EQ) and pre-distortion have been broadly used in this context, as they force the received signal to resemble the original unaltered waveform. Nevertheless, previous equalization solutions have relied on the separate optimization of the transmitter and receiver, disregarding the potential gains obtained through their simultaneous optimization [3, 4]. To achieve further data throughput improvements, joint optimization of the transmitter and receiver using end-to-end (E2E) learning has gained traction as an optimization approach for optical communication systems, pushing their performance closer to their theoretical capacity [5, 6]. The conventional approach to gradient-based optimization in E2E learning is based on a differentiable channel model [5]. The DML small-signal response can be easily approximated by differentiable methods, at the cost of constraining the peak-to-peak amplitude of the modulated signal to impratically low levels in a realistic scenario. The more suitable large-signal dynamics are however governed by nonlinear differential rate equations, for which no analytical solution can be obtained [7]. This limitation poses challenges in achieving a differentiable channel. 
Although ordinary differential equation (ODE) solvers and optimization approaches (reinforcement learning, gradient-free) have been proposed as gradient estimators, they often require considerable computational overhead [8]. To enable E2E learning and facilitate the estimation of gradients within the communication system, a locally accurate DML model is required [9]. This letter builds upon our work in [10] showcasing the application of data-driven optimization techniques to derive differentiable DML models. The model performance analysis is based on four different data-driven models, namely time-delay neural networks (TDNN), Volterra filters, long-short term memory (LSTM) and convolutional attention transformers (CATs). In this work we integrate each model into a new system optimization setup and conduct a comparative analysis of the generated signals with the laser rate equation output. The objective is to evaluate the models' gradient estimation performance from a more contextualized perspective as part of a larger optimization system, instead of assessing them as mere function estimators. A quantitative comparison between the models is conducted in terms of normalized root mean square error (NRMSE) and train/test time, while providing a visual qualitative comparison through the use of eye diagrams. Comparing the different architectures, the results show that the CAT model is able to achieve improved NRMSE performance in training and testing throughout the analyzed symbol rates while maintaining a GPU processing time comparable to its alternatives. CATs are therefore expected to offer an efficient solution for optimizing DML-based communication systems where the direct use of ODE solvers would be impractical. ## II Numerical Setup ### _Data-Driven Modeling_ Any data-driven model design presents two fundamental choices: the model structure to be used and the data features fed into it. Domain knowledge is key in both decisions, as the characteristics of the task to be performed may better suit some architectures over others. Alternatively to more established techniques in laser modeling, like circuit-level models [6], we propose the utilization of CATs [11]. CATs employ convolutions to model the dependencies within temporal sequences. This approach offers several advantages: (i) it restricts the utilization of past sequence samples in prediction, (ii) it captures waveform patterns rather than individual sample relations, and (iii) it has strong awareness of the order of the samples within the sequence. Although recurrent architectures are also based on temporal context, they need to calculate previous states sequentially in order to infer future samples. CATs break this bottleneck by processing the full time sequences at once, making better use of parallelization hardware and memory resources [11]. To maximize the accuracy of the data-driven model across various scenarios, the input data must encompass a wide range of waveforms and amplitudes, providing deep understanding of the laser's dynamic behaviour. Ideally, the pulse shaping block generating the data should use few input parameters while yielding a large amount of different output waveforms to avoid overfitting. This is addressed, as shown in Fig. 1a, by alternating between two types of pulse shapes: super-Gaussian pulses and random pulses. The random pulses are sampled as vectors from a folded normal distribution \(\mathcal{N}(0.5,1)\). 
The parameters for the super-Gaussian pulses, namely the temporal full width at \(e^{-0.5}\), \(T_{0}\), and the order \(n\), are sampled from folded \(\mathcal{N}(0.25T_{\mathrm{sym}},T_{\mathrm{sym}})\) and uniform \(\mathcal{U}(1,6)\) distributions, respectively. \(T_{\mathrm{sym}}\) is the symbol period, the reciprocal of the symbol rate. The amplitude of the pulses is modulated according to equiprobable pulse-amplitude modulation (4PAM) symbols. Subsequently, the pulses undergo min-max normalization and low-pass filtering (LPF) to prevent out-of-band leakage. The pulse shaping is randomized again every 8 symbols (with 32 samples per symbol) until completing a 1024-sample sequence of mixed pulse shapes. The training data set comprises \(2^{13}\) sequences, totaling \(2^{23}\) samples, while the validation set consists of \(2^{17}\) samples. The target training data is obtained from numerical simulations of the general laser rate equations [12], using the aforementioned stochastic waveforms as input. The symbol rate of the driving signal is varied to introduce different levels of distortion, obtaining a distinct model for every symbol rate investigated. To solve the laser rate equations, a fifth-order Runge-Kutta (RK4,5) solver is utilized. As shown in Fig. 1b, the solution obtained from the solver serves as the ground truth for the surrogate models, establishing the relationship between the input modulation current (generated waveform) and the laser output after quadratic detection (optical power). The proposed CAT model adopts a decoder-only structure, consisting of three main blocks: learned positional embeddings (LPEs), convolutional attention sublayers, and 2-layer multilayer perceptrons (MLP) with ReLU hidden activation. A linear layer is employed to reduce the dimensionality of the hidden features. For comparison purposes, three additional models were investigated: a second-order Volterra filter with 16-sample memory, a TDNN, and an LSTM. The specific values for each network hyperparameter are summarized in Table I.

Fig. 1: Block diagrams of a) data acquisition and b) model setup.

### _Equalization setup_

To demonstrate the proof of concept, all the trained surrogate models were evaluated within a numerical back-to-back transmission setup with a receiver equalizer. Thus, their potential to achieve link gains in a real optimization environment can be showcased. A simple FIR-based equalizer trained on the NRMSE was selected as the testing scenario, as its simplicity allows the analysis to focus on the accuracy of the DML model. It is important to note that the optimality of the equalizer structure for this task is not a primary concern within the present scope, as the focus lies on the predictive potential of the surrogate models. The equalization task is performed on a per-sample basis, using square-pulse-shaped 4PAM symbols. It should be emphasized that none of the surrogate models was trained specifically on pure square waveforms, ensuring a fair assessment of their inference capabilities. As depicted in Fig. 1b, the loss is calculated as the NRMSE between the low-pass-filtered waveforms at the input of the DML and the signal at the output of the equalizer. Since the studied surrogates cannot perfectly replicate the laser rate equations, the learned equalizer coefficients are also tested on the output waveforms from the ODE solver to gain insight into the extent of metric distortion induced by the models.
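As an aside to the setup just described, the following NumPy sketch mimics the stochastic pulse-shaping procedure used for data acquisition (alternating random and super-Gaussian pulse shapes, equiprobable 4PAM amplitudes, 32 samples per symbol, re-randomization every 8 symbols, min-max normalization). It is a simplified reconstruction, not the authors' code: the probability of choosing between the two pulse types, the exact width convention for \(T_{0}\), and the omitted low-pass filter and rate-equation solver are assumptions on our part.

```python
import numpy as np

rng = np.random.default_rng(0)
SPS = 32        # samples per symbol
BLOCK = 8       # symbols sharing one pulse-shape realization
N_SYM = 32      # 32 symbols -> 1024-sample sequence
PAM4 = np.array([0.0, 1 / 3, 2 / 3, 1.0])

def super_gaussian_pulse(T_sym=1.0):
    """Super-Gaussian pulse; T0 ~ folded N(0.25*T_sym, T_sym), order n ~ U(1, 6)."""
    t = np.arange(-SPS // 2, SPS // 2) / SPS * T_sym
    T0 = max(abs(rng.normal(0.25 * T_sym, T_sym)), 1e-3)
    order = rng.uniform(1, 6)
    return np.exp(-0.5 * np.abs(t / T0) ** (2 * order))

def random_pulse():
    """Random pulse with samples drawn from a folded N(0.5, 1)."""
    return np.abs(rng.normal(0.5, 1.0, SPS))

def generate_sequence():
    chunks = []
    for _ in range(N_SYM // BLOCK):
        shape = super_gaussian_pulse() if rng.random() < 0.5 else random_pulse()
        amplitudes = rng.choice(PAM4, BLOCK)             # equiprobable 4PAM symbols
        chunks.append(np.concatenate([a * shape for a in amplitudes]))
    x = np.concatenate(chunks)
    return (x - x.min()) / (x.max() - x.min() + 1e-12)   # min-max normalization

drive = generate_sequence()   # this waveform would then be low-pass filtered and
                              # fed to the laser rate-equation (RK4,5) solver
print(drive.shape, float(drive.min()), float(drive.max()))
```

In the paper, each such drive waveform and the corresponding rate-equation output form one input/target pair for training the surrogate models.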
The comparison between the losses obtained from the surrogate models and the rate equation serves as the primary benchmark, as it reflects the generalization capabilities of each model in handling a previously unseen scenario. ## III Numerical results ### _Surrogate models_ The numerical simulations performed can be divided into surrogate and equalizer optimization. The DML response exhibits spectral characteristics that cause waveform distortion to increase as the symbol rate \(R_{s}\) increases, especially beyond the relaxation frequency \(f_{R}\). The high-rate region close to \(f_{R}\) is therefore of particular interest. Within the surrogate training, we evaluated models across six distinct symbol rates, spanning the range \(0.1f_{R}\) to \(1.2f_{R}\). In each instance, we used an Adam optimizer with recommended decay values (\(\beta_{1}=0.9,\beta_{2}=0.999\)), along with min-max normalized mean square error (NMSE) as loss metric, later converted into its squared root form, NRMSE, for unit matching purposes. As a proof of concept, laser phase and intensity noise were not considered for simplicity. Each surrogate model was trained for 400 epochs, selecting the one achieving the lowest test loss. The optimal hyperparameters for each model were obtained through grid search. The selected models were then utilized for the equalization task, so their potential as part of a link optimization setup could be verified. The most straightforward validation in a supervised time series inference model is to compare its predictions with the desired sequences sample by sample. This approach is shown in Fig. 2, where the NRMSE value is plotted as a function of the symbol rate. Over the symbol rates analyzed, most of the models deliver similar performance, although the CAT seems to deliver better performance than its peers and falls below the \(10^{-2}\) mark. This is especially true for higher symbol rates, where only the LSTM is able to approach its performance. The Volterra-based model is the main outlier, with an error over 10% in the high rate region. Despite the substantial differences between the models, all the loss curves hint at the correlation between \(R_{s}\) and the waveform distortion introduced, showing that as the symbol duration becomes shorter, the output sequences are more difficult to match. The TDNN showcases this tendency, showing relatively good NRMSE figures at low \(R_{s}\) that worsen gradually as the symbol rate is increased. Further context is given in Fig. 3, where the average processing time for a train and test epoch on the utilized NVIDIA A100 GPU is compared. The ODE solver score shows the elapsed time for the generation of the target sequences. The figure pictures the importance of the model architecture in the inference speed of the networks: although the CAT has significantly more training parameters than the other surrogates, it operates at comparable times, even outperforming the LSTM. It is also clear that all of the proposed models add substantial time savings compared to the ODE solver. Another useful insight can be obtained by looking at Fig. 4, where the response of each model to a train of 4PAM Gaussian pulses is represented in the shape of eye diagrams. The output of the ODE solver to the same signal was added as a reference. Although all 4 models show reasonable convergence compared to the ODE case, there are some noticeable outliers. 
While the Volterra filter and the TDNN seem to oversimplify the DML dynamics compared to the ODE solver, the CAT is more sensitive to small changes in sample position and amplitude. The former effect is probably due to the relatively low number of training parameters in the models, while the latter may be due to the one-to-many mapping in the positional encoding of the CAT. However, this drawback may be less relevant in real scenarios, where noisy input data will affect the output waveform to some extent. It must be noted that even if the eye diagrams give a good intuition of the behaviour of each model, they show the response to a very specific input, while the NRMSE scores yield a broader analysis throughout the different waveforms.

Fig. 2: NRMSE scores of the studied models.

Fig. 3: Time elapsed (per epoch) by the presented models.

### _Equalizer optimization_

Additional insights can be extracted from the equalization setup, where the MSE between the received and transmitted waveforms was obtained. The FIR equalizer is based on a trainable 31-tap filter, using random 4PAM transmitted symbols as a data source. In all cases, the symbol sequences utilized are identical for fair comparison. Fig. 5 establishes a comparison between the loss calculated based on the response of each of the models and the loss obtained when using the ODE solver as an estimator of the laser response. It becomes apparent that, even though all of the surrogates yield similar overall performance, the difference between the two plots is significant in certain cases. While the LSTM and the CAT show almost identical curves in each case, the TDNN and Volterra MSE losses are significantly poorer when tested on the models themselves than on the ODE solver. This could be due to waveform artifacts (hinted at in the eye diagrams) that distort the signal only when the testing is performed on the model, but make the equalizer more robust towards impairments during training. The Volterra filter seems to deliver a relatively solid response in the low-rate region, but it progressively degrades when approaching higher symbol rates. Even if the TDNN delivers the best ODE-based testing performance, its scores differ noticeably from the self-testing case, making it potentially unreliable as an estimator of the DML response.

## IV Conclusions

This study has proposed a series of differentiable surrogate models for directly modulated laser links. In addition to the usual loss metrics, the models were tested in an equalizer-based optimization setup to showcase their prospects in a real setting. The analysis shows the complexity of choosing a model that resembles the laser response under every scenario and the variety of factors that must be taken into account. Throughout the metrics obtained, the convolutional attention transformer has shown high resilience to different waveforms and symbol rates, while maintaining relatively low inference times thanks to its parallelization capabilities. Our results show the potential of data-driven models as faster substitutes for ODE solvers and derivative-free gradient approximators in the context of link optimization.

## V Acknowledgements

This work was financially supported by the ERC-CoG FRECOM project (no. 771878) and the Villum YIP OPTIC-AI project (no. 29334).
2303.18238
Stability of singularly perturbed hybrid systems with restricted systems evolving on boundary layer manifolds
We present a singular perturbation theory applicable to systems with hybrid boundary layer systems and hybrid reduced systems {with} jumps from the boundary layer manifold. First, we prove practical attractivity of an adequate attractor set for small enough tuning parameters and sufficiently long time between almost all jumps. Second, under mild conditions on the jump mapping, we prove semi-global practical asymptotic stability of a restricted attractor set. Finally, for certain classes of dynamics, we prove semi-global practical asymptotic stability of the restricted attractor set for small enough tuning parameters and sufficiently long period between almost all jumps of the slow states only.
Suad Krilašević, Sergio Grammatico
2023-03-31T17:55:55Z
http://arxiv.org/abs/2303.18238v1
Stability of singularly perturbed hybrid systems with restricted systems evolving on boundary layer manifolds ###### Abstract We present a singular perturbation theory applicable to systems with hybrid boundary layer systems and hybrid reduced systems with jumps from the boundary layer manifold. First, we prove practical attractivity of an adequate attractor set for small enough tuning parameters and sufficiently long time between almost all jumps. Second, under mild conditions on the jump mapping, we prove semi-global practical asymptotic stability of a restricted attractor set. Finally, for certain classes of dynamics, we prove semi-global practical asymptotic stability of the restricted attractor set for small enough tuning parameters and sufficiently long period between almost all jumps of the slow states only. S + Footnote †: footnote]This work was partially supported by the ERC under research project COSMOS (802348). E-mail addresses: {s.krilasevic-1, s.grammatico}@tudelft.nl. ingular perturbations, boundary layer, multi-agent game ## 1 Introduction A realistic modeling of many control systems requires high-order nonlinear differential equations that might be difficult to fully analyze. To alleviate this problem, we often design control systems with various parameters that with proper tuning can effectively reduce the order of the model and thus simplify the stability analysis. The main theoretical framework for such analysis is singular perturbation theory [8], [4]. The associated model reduction is accomplished by splitting the states into fast and slow states; for each constant value of the slow states, the fast states should converge to an equilibrium point defined by the slow states, and the union of these equilibrium points for all possible slow states defines the so-called boundary layer manifold. Then, the reduced system contains just the slow states and their dynamics assuming they are evolving along that manifold. Singular perturbation theory has been successfully applied to equilibrium seeking in optimization and game theory. One common method of applying zeroth-order algorithms to dynamical systems with cost measurements as output is through a time-time scale separation of the controller and the plant, as demonstrated in [7] and [14]. Time-scale separation can be useful for algorithms where consensus on specific states must be reached before initiating the equilibrium seeking process [1], [9], [23], [18]. Furthermore, in some works [14], [12], [11], via singular perturbation analysis the (pseudo)gradient estimate is filtered before being incorporated into the algorithm. Singular perturbation theory is also used to demonstrate algorithm convergence in problems with slowly varying parameters [2]. Several extensions of singular perturbation theory are known for hybrid systems. In [16], the authors examine a singularly perturbed system in which the boundary layer system is continuous, and the reduced system is hybrid, and both render the corresponding sets globally asymptotically stable. While the work in [21] proposes averaging theory results, in can also be used to prove stability in singularly perturbed systems. Similarly to [16], the authors assume that the boundary layer system is continuous and that the averaged system, which plays the role of the reduced system, is hybrid. In [22], the same authors extend the results for the case when the boundary layer system itself is hybrid. 
In the aforementioned works, the reduced system is derived by assuming that the slow states "flow" along the boundary layer manifold, while the slow states _do not_ jump from that manifold. Therefore, the reduced system jumps cannot use the properties of the boundary layer manifold to support stabilization; essentially only the continuous dynamics are used to prove stability, "despite" the jumps. In order for the discrete-time dynamics to support stabilization of singularly perturbed systems, we can design the dynamics so that we jump when we are in the proximity of the manifold. This scenario in principle is similar to that in [20], [5] where the authors prove that there exists a sampling period such that a discrete-time optimization-based controller (the reduced system) can find a neighborhood of the optimum of a steady-state output map of a continuous system with an input (boundary layer system). In [15], the authors take a step further and design an event-triggered framework to accomplish the same task by measuring the changes in the output and in turn to determine when the system has approached the boundary layer manifold. Although these methods better incorporate discrete-time reduced system dynamics, the boundary layer system is still only continuous. In this paper, we instead deal with a hybrid boundary layer system and thus extend the current state of the art. _Contribution_: In view of the above literature, our theoretical contributions are summarized next: * We propose a singular perturbation theory for hybrid systems, where the reduced system takes into account the jumps from the boundary layer manifold, differently from [16], [21] where jumps are assumed not to interfere with stability. Furthermore, we allow for the set of fast variables not to be bounded a priori, thus enabling the use of reference trajectories and counter variables in the boundary layer system. * We prove semi-global practical asymptotic stability of the restricted attractor set, under certain mild assumptions on the jump mapping. This attractor set includes only the steady-state values of the fast states that correspond to the slow attractor states, rather than the complete range of possible fast variables. * We show that, in a system resembling the one described in [22], where a distinction is made between jumps in the slow and fast states, the aforementioned results remain valid if there are sufficiently long intervals between nearly all jumps in the _slow states_. Our theory enables the analysis of multiple timescale control systems where both the controller and the plant are hybrid. Furthermore, as the jumps occur at the boundary layer, it would be also possible to incorporate state/output feedback into the controller jump mappings. _Notation_: The set of real numbers and the set of nonnegative real numbers are denoted by \(\mathbb{R}\) and \(\mathbb{R}_{+}\), respectively. Given a set \(\mathcal{Z}\), \(\mathcal{Z}^{n}\) denotes the Cartesian product of \(n\) sets \(\mathcal{Z}\). For vectors \(x,y\in\mathbb{R}^{n}\) and \(\mathcal{A}\subset\mathbb{R}^{n}\), \(\left\langle x\mid y\right\rangle\), \(\left\|x\right\|\) and \(\left\|x\right\|_{\mathcal{A}}\) denote the Euclidean inner product, norm, weighted norm and distance to set respectively. Given \(N\) vectors \(x_{1},\ldots,x_{N}\), possibly of different dimensions, \(\operatorname{col}\left(x_{1},\ldots x_{N}\right)\coloneqq\left[x_{1}^{ \top},\ldots,x_{N}^{\top}\right]^{\top}\). 
Collective vectors are denoted in bold, i.e, \(\boldsymbol{x}\coloneqq\operatorname{col}\left(x_{1},\ldots,x_{N}\right)\) as they collect vectors from multiple agents. We use \(\mathbb{S}^{1}\coloneqq\left\{z\in\mathbb{R}^{2}:z_{1}^{2}+z_{2}^{2}=1\right\}\) to denote the unit circle in \(\mathbb{R}^{2}\). Id is the identity operator; \(I_{n}\) is the identity matrix of dimension \(n\) and \(\mathbf{0}_{n}\) is vector column of \(n\) zeros; their index is omitted where the dimensions can be deduced from context. The unit ball of appropriate dimensions depending on context is denoted with \(\mathbb{B}\). A continuous function \(\gamma:\mathbb{R}_{+}\to\mathbb{R}_{+}\) is of class \(\mathcal{K}\) if it is zero at zero and strictly increasing. A continuous function \(\alpha:\mathbb{R}_{+}\to\mathbb{R}_{+}\) is of class \(\mathcal{L}\) if is non-increasing and converges to zero as its arguments grows unbounded. A continuous function \(\beta:\mathbb{R}_{+}\times\mathbb{R}_{+}\to\mathbb{R}_{+}\) is of class \(\mathcal{KL}\) if it is of class \(\mathcal{K}\) in the first argument and of class \(\mathcal{L}\) in the second argument. UGAS refers to uniform global asymptotic stability, as defined in [13, Def. 2.2, 2.3]. We define semi-global practical asymptotic stability (SGPAS) similarly as in [19]. Definition 1 (Sgpas): The set \(\mathcal{A}\) is SGPAS as \(\left(\varepsilon_{1},\ldots,\varepsilon_{k}\right)\to 0\) for the parametrized hybrid system \(\mathcal{H}_{\varepsilon}\), if for each given \(\Delta>\delta>0\), there exists a parameter \(\varepsilon_{1}^{*}\) such that for each \(\varepsilon_{1}\in\left(0,\varepsilon_{1}^{*}\right)\) there exists \(\varepsilon_{2}^{*}\left(\varepsilon_{1}\right)>0\) such that for each \(\varepsilon_{2}\in\left(0,\varepsilon_{2}^{*}\left(\varepsilon_{1}\right)\right)\)\(\ldots\) there exists \(\varepsilon_{k}^{*}\left(\varepsilon_{k-1}\right)>0\) such that for each \(\varepsilon_{k}\in\left(0,\varepsilon_{k}^{*}\left(\varepsilon_{k-1}\right)\right)\) it holds: 1. (Semi-global stability) for each \(R\geq\delta\), there exists \(r>0\), such that \(\left\|\phi(l,i)\right\|_{\mathcal{A}}\leq r\implies\left\|\phi(t,j)\right\|_ {\mathcal{A}}\leq R\) for \(l+i\leq t+j\) and each solution \(\phi\). 2. (Practical attractivity) for each \(R,r\) that satisfy \(\Delta\geq R\geq r\geq\delta\), there exists a period \(T(r,R)\geq 0\), such that \(\left\|\phi(l,i)\right\|_{\mathcal{A}}\leq R\implies\left\|\phi(t,j)\right\|_ {\mathcal{A}}\leq r\) for all \(t+j\geq T(r,R)+l+i\) and each solution \(\phi\). ## 2 Singular perturbation theory for hybrid systems We consider two different system setups, with the first case featuring a hybrid reduced system and a continuous boundary layer system. In the second, both the reduced system and the boundary layer system are hybrid. Despite the different scenarios, we require similar assumptions in all configurations. Notably, we provide the most comprehensive coverage of the first case. 
### Continuous boundary layer dynamics We consider the following hybrid dynamical system, denoted by \(\mathcal{H}_{1}\): \[\dot{x}\in\begin{bmatrix}I_{n_{1}}&0\\ 0&\frac{1}{\varepsilon}I_{n_{2}}\end{bmatrix}F(x), \text{if }x\in\mathcal{X}_{1}\times\mathcal{X}_{2}, \tag{1a}\] \[x^{+}\in G(x), \text{if }x\in D_{1}\times D_{2}, \tag{1b}\] where \(x\coloneqq\operatorname{col}\left(x_{1},x_{2}\right)\in\mathcal{X}_{1}\times \mathcal{X}_{2}\subset\mathbb{R}^{n_{1}}\times\mathbb{R}^{n_{2}}\) are the system states, \(\varepsilon>0\) is small parameter used to speed up the \(x_{2}\) dynamics, \(\mathcal{X}_{1},D_{1}\subset\mathcal{X}_{1}\), \(\mathcal{X}_{2},D_{2}\subset\mathcal{X}_{2}\) are flow and jump sets for the slow states \(x_{1}\) and the fast states \(x_{2}\), respectively. Other than \(\varepsilon\), the system is implicitly parametrized by parameters \(\beta,\gamma\) and \(\tau\) i.e. \(F=F_{\beta,\gamma,\tau}\) and \(G=G_{\beta,\gamma,\tau}\). As it is common for hybrid dynamical systems, we postulate certain regularity assumptions that provide useful properties. **Assumption 1**.: _The hybrid dynamical system in (1) satisfies the basic regularity assumptions for hybrid systems [3, Assum. 6.5] for all parameters \(\beta\in(0,\overline{\beta}],\gamma\in(0,\overline{\gamma}]\), \(\tau\in(0,\overline{\tau}]\). The mapping \(G\) satisfies item [3, Assum. 6.5, A3] also for \(\beta=0,\gamma=0,\tau=0\). Furthermore, all of systems's solutions are complete. \(\Box\)_ Furthermore, we define two auxiliary systems in view of that in (1), the boundary layer system and the reduced system. The former, \(\mathcal{H}_{1}^{\rho}\), for any given constant \(\rho>0\), is defined as \[\dot{x}\in\begin{bmatrix}0&0\\ 0&I_{n_{2}}\end{bmatrix}F(x)\quad x\in((\mathcal{A}+\rho\mathbb{B})\cap \mathcal{X}_{1})\times\mathcal{X}_{2}, \tag{2}\] where \(\mathcal{A}\subset\mathbb{R}^{n}\) is the equilibrium set of a reduced system, to be introduced later on. Furthermore, the system dynamics are parametrized by a small parameter \(\beta\) which is used for tuning the desired convergence radius. In (2), the dynamics of \(x_{1}\) are frozen, i.e. \(\dot{x}_{1}=0\), thus they approximate the behavior of those in (1) when \(\varepsilon>0\) is chosen very small. Since the first state is constant, it is natural to assume that the equilibrium set, if it exists, contains all possible \(x_{1}\), i.e. the ones contained in the set \((\mathcal{A}+\rho\mathbb{B})\cap\mathcal{X}_{1}\), and that for every \(x_{1}\), there exists a specific set of equilibrium points \(x_{2}\). We characterize this dependence with the "steady-state" mapping \(H\), and assume that it satisfies certain regularity properties [16, Assum. 2], [14, Assum. 2]. **Assumption 2**.: _The set-valued mapping \(H:\mathcal{X}_{1}\rightrightarrows\mathcal{X}_{2}\),_ \[H(\overline{x}_{1})\coloneqq\{\overline{x}_{2}\mid F(\overline{x}_{1}, \overline{x}_{2})=0\} \tag{3}\] _is outer semicontinuous and locally bounded; for each \(\overline{x}_{1}\in\mathcal{X}_{1},H(\overline{x}_{1})\) is a non-empty subset of \(\mathcal{X}_{2}\). \(\Box\)_ Now, we can define the complete equilibrium set of the system in (2), the boundary layer manifold, as \[\mathcal{M}_{\rho}\coloneqq\{(x_{1},x_{2})\mid x_{1}\in(\mathcal{A}+\rho \mathbb{B})\cap\mathcal{X}_{1},\,x_{2}\in H(x_{1})\}. \tag{4}\] It is possible that the set \(\mathcal{M}_{\rho}\) contains some unbounded states corresponding to the logic states or reference trajectories of the boundary layer system. 
We denote the bounded states with \({x_{2}}^{\prime}\in\mathcal{X}_{2}^{\prime}\) and the unbounded states with \({x_{2}}^{\prime\prime}\in\mathcal{X}_{2}^{\prime\prime}\), \(\mathcal{X}_{2}^{\prime}\times\mathcal{X}_{2}^{\prime\prime}=\mathcal{X}_{2}\). Furthermore, we assume that these unbounded states only affect each other during jumps, and that the bounded states are a priori contained in a compact set. **Assumption 3**.: _The jump mapping \(G\) in (1b), and the steady-state mapping \(H\) in (3) are decomposed as follows:_ \[G(x) =\begin{bmatrix}G_{1}(x_{1},x_{2}^{\prime})\\ G_{2}^{\prime}(x)\end{bmatrix}, \tag{5}\] \[H(x_{1}) =H_{1}(x_{1})\times\mathcal{X}_{2}^{\prime\prime}, \tag{6}\] _where \(G_{1}:\mathcal{X}_{1}\times\mathcal{X}_{2}^{\prime}\rightrightarrows\mathcal{X}_{1}\times\mathcal{X}_{2}^{\prime}\), \(G_{2}^{\prime}:\mathcal{X}\rightrightarrows\mathcal{X}_{2}^{\prime\prime}\), and \(H_{1}:\mathcal{X}_{1}\rightrightarrows\mathcal{X}_{2}^{\prime}\). \(\Box\)_ **Assumption 4**.: _The set \(\mathcal{X}_{2}^{\prime}\) in Assumption 3 is compact. \(\Box\)_ Furthermore, we assume that the set \(\mathcal{M}_{\rho}\) is SGPAS for the boundary layer dynamics in Equation (2). **Assumption 5**.: _The set \(\mathcal{M}_{\rho}\) in (4) is SGPAS as \(\beta\to 0\) for the dynamics in (2). Let \(\Delta>\delta>0\) be given by the definition of SGPAS. For every \(\Delta>0\), the corresponding Lyapunov function satisfies_ \[\underline{\alpha_{2,\rho}}\left(\left\|x\right\|_{\mathcal{M}_{\rho}}\right)\leq V_{2,\rho}(x)\leq\overline{\alpha_{2,\rho}}\left(\left\|x\right\|_{\mathcal{M}_{\rho}}\right) \tag{7a}\] \[\sup_{\operatorname{col}(f_{1},f_{2})\in F(x)}\left\langle\nabla V_{2,\rho}(x)\mid\operatorname{col}\left(\mathbf{0},f_{2}\right)\right\rangle\leq-\alpha_{2,\rho}\left(\left\|x\right\|_{\mathcal{M}_{\rho}}\right)\] \[\text{for all }x\text{ such that }\left\|x\right\|_{\mathcal{M}_{\rho}}\geq\alpha_{\beta}(\beta), \tag{7b}\] \[\sup_{\operatorname{col}(f_{1},f_{2})\in F(x)}\left\langle\nabla V_{2,\rho}(x)\mid\operatorname{col}\left(\mathbf{0},f_{2}\right)\right\rangle\leq\hat{\alpha}_{\beta}(\beta)\] \[\text{for all }x\text{ such that }\left\|x\right\|_{\mathcal{M}_{\rho}}\leq\alpha_{\beta}(\beta), \tag{7c}\] \[\nabla V_{2,\rho}(x)=0\text{ for all }x\in\mathcal{M}_{\rho}, \tag{7d}\] _where \(\underline{\alpha_{2,\rho}},\overline{\alpha_{2,\rho}},\alpha_{2,\rho},\alpha_{\beta},\hat{\alpha}_{\beta}\) are functions of class \(\mathcal{K}\), where \(\overline{\alpha_{2,\rho}},\alpha_{\beta}\) are possibly parametrized by \(\Delta\). Furthermore, for each compact set \(K\subset\mathcal{X}_{1}\), there exists \(M>0\), such that_ \[\sup_{x\in K\times\mathcal{X}_{2}}\|V_{2,\rho}(x)\|+\|\nabla_{x_{1}}V_{2,\rho}(x)\|\leq M.\qquad\Box \tag{8}\] **Remark 1**.: _In Assumption 5, we allow the set \(\mathcal{X}_{2}\) to be unbounded. Nevertheless, the Lyapunov function is assumed to take bounded values, as in (8). \(\Box\)_ On the other hand, since the \(x_{2}\) dynamics are much faster than those of \(x_{1}\) in (1), from the time scale of the latter, it seems that the \(x_{2}\) dynamics are evolving on the manifold defined by the mapping \(H\). To characterize this behaviour, we can define the reduced system \(\mathcal{H}_{1}^{\mathrm{r}}\) as: \[\dot{x}_{1}\in F_{\mathrm{r}}(x_{1})\qquad\qquad\text{ if }x_{1}\in\mathcal{X}_{1}, \tag{9a}\] \[x_{1}^{+}\in G_{\mathrm{r}}(x_{1})\qquad\qquad\text{ if }x_{1}\in D_{1}, \tag{9b}\] where \(F_{\mathrm{r}}(x_{1})\coloneqq\overline{\mathrm{co}}\{v_{1}\mid(v_{1},v_{2})\in F(x_{1},x_{2}),x_{2}\in H(x_{1})\}\), \(G_{\mathrm{r}}(x_{1})\coloneqq\{v_{1}\mid(v_{1},v_{2})\in G(x_{1},x_{2}),x_{2}\in H(x_{1})\}\). 
Furthermore, the system dynamics are parametrized by the parameter \(\gamma\), which is used for the tuning of the convergence radius to the attractor set, and the parameter \(\tau\) adjusts the minimum time interval between consecutive jumps, for _almost all_ jumps of the systems in (1) (consequently also the reduced system in (9)), as formalized next: **Definition 2** (\(\tau\)-regular jump): _A jump \(j\) in a solution trajectory \(\phi\) is a \(\tau\)-regular jump if it occurs after an interval of flowing greater or equal than \(\tau\), i.e. \(\tau_{j}\coloneqq\sup\{|t-t^{\prime}|:(t,j-1),(t^{\prime},j-1)\in{\rm dom}\, \phi\}\geq\tau\). Otherwise, the jump \(j\) is called \(\tau\)-irregular. \(\Box\)_ **Assumption 6**: _Let \(\phi\) be any solution of the system in (1) with \(\|\phi(0,0)\|_{\mathcal{A}\times\mathcal{X}_{2}}\leq\Delta\). Then, there exists a finite number of jumps \(N^{*}\) and finite time interval \(\,T^{*}\), such that \(\phi\) has at most \(N^{*}\)\(\underline{\sigma}(\tau)\)-irregular jumps, and they all occur before \(t\leq T^{*}\), where \(\underline{\sigma}\) is a function of class \(\mathcal{L}\), and \(\tau\) is the parameter of the system. \(\Box\)_ Differently from [16], where the reduced mapping is defined as \(G_{\rm r}(x_{1})\coloneqq\{v_{1}\mid(v_{1},v_{2})\in G(x_{1},x_{2}),x_{2}\in \mathcal{X}_{2}\}\), the mapping in (9b) only includes the jumps from the stead-state "pairs" \((x_{1},H(x_{1}))\) that belong to the manifold. Thus, our next assumption is weaker than [16, Assum. 4], as it requires that the jumps stabilize the set \(\mathcal{A}\) via a much more restricted set of dynamics. This is due to the fact that the reduced mapping \(G_{\rm r}\) does not contain all possible jumps from the set \(D_{1}\), but only those from the boundary layer manifold \(\mathcal{M}_{\rho}\). **Assumption 7**: _The set \(\mathcal{A}\) is SGPAS as \(\gamma\to 0\) for the reduced system in (9). Let \(\Delta>\delta>0\) be given by the definition of SGPAS. For every \(\Delta>0\), the corresponding Lyapunov function is given by_ \[\underline{\alpha_{1}}\left(\left\|x_{1}\right\|_{\mathcal{A}} \right)\leq V_{1}(x_{1})\leq\overline{\alpha_{1}}\left(\left\|x_{1}\right\|_{ \mathcal{A}}\right) \tag{10a}\] \[\sup_{f_{1}r\in F_{\rm r}(x_{1})}\left\langle\nabla V_{1}(x_{1})\mid f_{1 }r\right\rangle\leq-\hat{\sigma}_{\tau}(\tau)\hat{\alpha}_{\gamma}(\gamma) \alpha_{1}\left(\left\|x_{1}\right\|_{\mathcal{A}}\right)\] (10b) \[\sup_{g_{1}r\in G_{\rm r}(x_{1})}V_{1}(g_{r1})-V_{1}(x_{1})\leq-\hat{\alpha}_{ \gamma}(\gamma)\alpha_{1}\left(\left\|x_{1}\right\|_{\mathcal{A}}\right)\] (10c) \[\mbox{for }\left\|x_{1}\right\|_{\mathcal{A}}\geq\alpha_{\gamma}( \gamma), \tag{10d}\] _where \(\underline{\alpha_{1}},\overline{\alpha_{1}},\alpha_{1},\alpha_{\gamma},\hat {\alpha}_{\gamma}\) are functions of class \(\mathcal{K}\), where \(\alpha_{1},\alpha_{\gamma}\) are possibly parametrized by \(\Delta\), and \(\hat{\sigma}_{\tau}\) is a function of class \(\mathcal{L}\). \(\Box\)_ We claim that our original system in (1) renders the set \(\mathcal{A}\times\mathcal{X}_{2}\) practically attractive, if for almost all intervals of flow we allow the state of the system to converge to the neighborhood of the \(\mathcal{M}_{\rho}\) manifold. The intuition is that in the neighborhood of the manifold, "the jumps of the _reduced system_" also contribute to the stabilization. **Theorem 1**: _Let Assumptions 1--7 hold. 
Then the set \(\mathcal{A}\times\mathcal{X}_{2}\) is practically attractive as \((\gamma,\frac{1}{\tau},\varepsilon,\beta)\to 0\) for the hybrid system in (1). \(\Box\)_ [Proof.] See Appendix A. **Example 1**: _Consider the hybrid dynamical system_ \[\left\{\begin{array}{l}\dot{u}=\gamma\max\{0,1-\frac{\|u\|}{R}\}\\ \dot{v}=\frac{1}{\tau}\\ \dot{x}=-\frac{1}{\varepsilon}(x-u)\\ \end{array}\right. \tag{11a}\] \[\mbox{if }(u,v,x)\in[0,R]\times[0,1]\times[0,R];\] \[\left\{\begin{array}{l}u^{+}=\frac{u}{2}\\ v^{+}=0\\ x^{+}=R\end{array}\right.\] \[\mbox{if }(u,v,x)\in[0,R]\times\{1\}\times[0,R], \tag{11b}\] _where \(\gamma,\tau,\varepsilon\) are tuning parameters, and \(R>0\) is the maximal trajectory radius. We show that the set \(\{0\}\times[0,1]\times[0,R]\) is practically attractive. First, we see that the boundary layer system reads as_ \[\left\{\begin{array}{l}\dot{u}=0\\ \dot{v}=0\\ \dot{x}=-(x-u)\\ \end{array}\right.\] \[\mbox{if }(u,v,x)\in[0,R]\times[0,1]\times[0,R], \tag{12}\] _while the reduced system is given by_ \[\left\{\begin{array}{l}\dot{u}=\gamma\max\{0,1-\frac{\|u\|}{R}\}\\ \dot{v}=\frac{1}{\tau}\\ \mbox{if }(u,v)\in[0,R]\times[0,1];\end{array}\right. \tag{13a}\] \[\left\{\begin{array}{l}u^{+}=\frac{u}{2}\\ v^{+}=0\\ \end{array}\right.\] \[\mbox{if }(u,v)\in[0,R]\times\{1\}. \tag{13b}\] _Assumptions 1--6 are satisfied. Regarding Assumption 7, let the Lyapunov function of the reduced system be \(V_{1}(u,v)=(2-v)u^{2}\). It follows that_ \[\dot{V}_{1}(u,v)\leq-\frac{1}{\tau}u^{2}+4\gamma R,\] \[V_{1}(u^{+},v^{+})-V_{1}(u,v)\leq-\frac{1}{2}u^{2}. \tag{14}\] _Since the reduced system satisfies Assumption 7, in view of Theorem 1, practical attractivity is ensured. Unlike previous works [22], [16], and [21], our reduced jump mapping includes jumps only from the boundary layer, which allows us to establish stability results using jumps. In the aforementioned works, the reduced system jump mapping includes all possible jumps [16, Equ. 13], [21, Equ. 17], [22, Equ. 13], and for our example, it is given by \(u^{+}\,\in\,[-\frac{R}{2},\frac{R}{2}]\). Thus, the assumption on the stability for reduced system dynamics [16, Assum. 4], [21, Thm. 2], [22, Thm. 2] does not hold. \(\Box\)_ We note that Theorem 1 gives us no guarantee on the stability of the state \(x_{1}\), due to the fact that jumps can move the state arbitrarily far away from any set in \(\mathcal{X}_{1}\) (also seen in Example 1 for \(u(0,0)=0,v(0,0)=0.99,x(0,0)=R\)). Under an additional assumption, it is possible to bound both the states \(x_{1}\) and \(x_{2}\) to a neighborhood of the set \(\mathcal{M}_{\mathcal{A}}\coloneqq\{(x_{1},x_{2})\mid x_{1}\in\mathcal{A},x_{2}\in H(x_{1})\}\). **Assumption 8**: _The jump mapping \(G\) in (1b) is such that \(G(\mathcal{M}_{\mathcal{A}})\subset\mathcal{M}_{\mathcal{A}}\). \(\Box\)_ Assumption 8 is sufficient to guarantee that for any neighborhood of the equilibrium set \(\mathcal{M}_{\mathcal{A}}+\overline{r}\mathbb{B}\), there exists a neighborhood \(\mathcal{M}_{\mathcal{A}}+\underline{r}\mathbb{B}\), such that jumps from the latter do not exit the former, i.e. \(G(\mathcal{M}_{\mathcal{A}}+\underline{r}\mathbb{B})\subset\mathcal{M}_{\mathcal{A}}+\overline{r}\mathbb{B}\). Lastly, we do not need to assume the compactness of the set \(\mathcal{X}_{2}^{\prime}\), as the distance from the set \(\mathcal{M}_{\mathcal{A}}\) also bounds the values of the \(x_{2}^{\prime}\) state. **Theorem 2**: _Let Assumptions 1--3, 5--8 hold. 
Then the set \(\mathcal{M}_{\mathcal{A}}\) is SGPAS as \((\gamma,\frac{1}{\tau},\varepsilon,\beta)\to 0\) for the hybrid system in (1). \(\Box\)_ [Proof.] See Appendix B. **Example 2**: _We consider a hybrid dynamical system similar to the one in (11):_ \[\left\{\begin{array}{l}\dot{u}\,=\gamma\\ \dot{v}\,=\frac{1}{\tau}\\ \dot{x}\,=-\frac{1}{\varepsilon}(x-u)\\ \mbox{if }(u,v,x)\in\mathbb{R}\times[0,1]\times\mathbb{R};\end{array}\right. \tag{15a}\] \[\left\{\begin{array}{l}u^{+}\,=\frac{u}{2}\\ v^{+}\,=0\\ x^{+}\,=2x\end{array}\right.\] \[\mbox{if }(u,v,x)\in\mathbb{R}\times\{1\}\times\mathbb{R}, \tag{15b}\] _where \(\gamma,\tau,\varepsilon\) are tuning parameters. Differently from (11), the jump mapping is such that Assumption 8 is satisfied. Furthermore, as Theorem 2 does not require compactness of the set \(\mathcal{X}_{2}^{\prime}\) in Assumption 4, the flow and jump sets are both unbounded. The boundary layer system has the same dynamics as the system in (12), apart from the flow set, which now reads as \(\mathbb{R}\times[0,1]\times\mathbb{R}\). The reduced system is given by_ \[\left\{\begin{array}{l}\dot{u}\,=\gamma\\ \dot{v}\,=\frac{1}{\tau}\end{array}\right. \tag{16a}\] \[\mbox{if }(u,v)\in\mathbb{R}\times[0,1];\] \[\left\{\begin{array}{l}u^{+}\,=\frac{u}{2}\\ v^{+}\,=0\end{array}\right. \tag{16b}\] \[\mbox{if }(u,v)\in\mathbb{R}\times\{1\}.\] _Similarly to the previous example, all the Assumptions hold, thus due to Theorem 2, the set \(\{0\}\times[0,1]\times\{0\}\) is SGPAS as \((\gamma,\frac{1}{\tau},\varepsilon,\beta)\to 0\) for the dynamics in (15). Differently from [22, Thm. 2], [16, Thm. 1], and [21, Thm. 2, Cor. 2], where the fast states are only a priori bounded to a compact set, here we can prove their convergence to the equilibrium set. \(\Box\)_ ### Hybrid boundary layer dynamics Theorems 1 and 2 assume a lower limit on the time between _all consecutive jumps_ that occur in the system in (1). However, under certain conditions, it is possible to make a distinction between consecutive jumps of \(x_{1}\), and the consecutive jumps of \(x_{2}\). This is useful when the convergence of the boundary layer system is in fact driven by jumps in \(x_{2}\), and imposing a high lower limit on the period between consecutive jumps slows down convergence. Consider the following hybrid dynamical system, denoted with \(\mathcal{H}_{2}\): \[\dot{x}\in\begin{bmatrix}I_{n_{1}}&0\\ 0&\frac{1}{\varepsilon}I_{n_{2}}\end{bmatrix}F(x),\mbox{ if }x\in\mathcal{X}_{1}\times\mathcal{X}_{2} \tag{17a}\] \[x^{+}\in\left\{\begin{array}{ll}\begin{bmatrix}x_{1}\\ G_{2}(x)\end{bmatrix},&\mbox{ if }x\in\mathcal{X}_{1}\times D_{2},\\ \begin{bmatrix}G_{1}(x)\\ x_{2}\end{bmatrix},&\mbox{ if }x\in D_{1}\times\mathcal{X}_{2},\\ \begin{bmatrix}x_{1}\\ G_{2}(x)\end{bmatrix}\cup\begin{bmatrix}G_{1}(x)\\ x_{2}\end{bmatrix},&\mbox{ if }x\in D_{1}\times D_{2}.\end{array}\right. \tag{17b}\] In this formulation, the distinction between the jumps of the states \(x_{1}\) and \(x_{2}\) is highlighted, because during the jumps of \(x_{1}\), \(x_{2}\) stays constant, and vice versa. 
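To make the jump structure in (17b) concrete, the following minimal Python sketch (a schematic illustration only; the set-membership flags and the set-valued maps `G1`, `G2` are placeholders, not objects from the paper) enumerates the admissible jump outcomes: a jump of \(x_{2}\) freezes \(x_{1}\), a jump of \(x_{1}\) freezes \(x_{2}\), and when both jumps are allowed the outcomes are the union of the two.

```python
def admissible_jumps(x1, x2, x1_in_D1, x2_in_D2, G1, G2):
    """Enumerate the jump outcomes allowed by (17b).

    x1_in_D1, x2_in_D2: booleans encoding x1 in D1 and x2 in D2.
    G1, G2: set-valued jump maps, represented here as lists of candidates.
    """
    outcomes = []
    if x2_in_D2:                       # fast-state jump: x1 is held constant
        outcomes += [(x1, g2) for g2 in G2(x1, x2)]
    if x1_in_D1:                       # slow-state jump: x2 is held constant
        outcomes += [(g1, x2) for g1 in G1(x1, x2)]
    return outcomes                    # union of both jump types on D1 x D2

# toy single-valued maps (hypothetical, for illustration only)
halve = lambda x1, x2: [0.5 * x1]
double = lambda x1, x2: [2.0 * x2]
print(admissible_jumps(1.0, 3.0, True, True, halve, double))
# [(1.0, 6.0), (0.5, 3.0)]
```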
Furthermore, we define the boundary layer system, \(\mathcal{H}_{2}^{\rho}\), as \[\dot{x}\in\begin{bmatrix}0&0\\ 0&I_{n_{2}}\end{bmatrix}F(x)\hskip 28.452756pt\mbox{ if }x\in\mathcal{X}_{1}\times\mathcal{X}_{2}, \tag{18a}\] \[x^{+}\in\begin{bmatrix}x_{1}\\ G_{2}(x)\end{bmatrix}\qquad\qquad\text{if }x\in\mathcal{X}_{1}\times D_{2}, \tag{18b}\] and the reduced system, \(\mathcal{H}_{2}^{r}\), as \[\dot{x}_{1}\in F_{\mathrm{r}}(x_{1}) \qquad\text{if }x_{1}\in\mathcal{X}_{1}, \tag{19a}\] \[x_{1}^{+}\in G_{\mathrm{r}}(x_{1}) \qquad\text{if }x_{1}\in D_{1}, \tag{19b}\] where \(F_{\mathrm{r}}(x_{1})\coloneqq\overline{\mathrm{co}}\{v_{1}\mid(v_{1},v_{2}) \in F(x_{1},x_{2}),x_{2}\in H(x_{1})\}\), \(G_{\mathrm{r}}(x_{1})\coloneqq\{v_{1}\mid v_{1}\in G_{1}(x_{1},x_{2}),\,x_{2} \in H(x_{1})\}\). Differently from the boundary layer system in (2), jumps are also included in this formulation, while the formulation of the reduced system is the same. Next, we pose analogous technical assumptions as for the system in (1) and in turn provide results analogous to Theorem 2. **Assumption 9**.: _The hybrid dynamical system in (17) satisfies the same conditions as in Assumption 1. \(\Box\)_ **Assumption 10**.: _The jump mapping \(G\) in (17b), and the steady-state mapping \(H\) in 3 are decomposed as in Equations (5) and (6). \(\Box\)_ **Assumption 11**.: _The set \(\mathcal{M}_{\rho}\) is SGPAS as \(\beta\,\to\,0\) for the dynamics in (18). Let \(\Delta>\delta>0\) be given by the definition of SGPAS. The corresponding Lyapunov function is given by (7), with the additional equation_ \[\sup_{g_{1}=x_{1},g_{2}\in G_{2}(x)}V_{2,\rho}(g)-V_{2,\rho}(x)\leq 0,\] _and for each compact set \(K\in\mathcal{X}_{1}\times\mathcal{X}_{2}^{\prime}\), there exists \(M>0\), such that_ \[\sup_{x\in K\times\mathcal{X}_{2}^{\prime\prime}}\|V_{2,\rho}(x)\|+\|\nabla_{x _{1}}V_{2,\rho}(x)\|\leq M.\qquad\Box \tag{20}\] **Assumption 12**.: _The set \(A\) is SGPAS as \(\gamma\to 0\) for the reduced system in (19). Let \(\Delta>\delta>0\) be given by the definition of SGPAS. For every \(\Delta>0\), the corresponding Lyapunov function is given by Equation (10) with the redefined mappings in (19). \(\Box\)_ **Definition 3** (\(\tau\)-regular jump in \(x_{1}\)): _A jump \(j\) in a solution trajectory \(\phi\) of the system in (17) is a \(\tau\)-regular in \(x_{1}\) jump if it occurs after an interval of flowing in the \(x_{1}\) state greater or equal than \(\tau\), i.e. \(\tau_{j}^{1}\coloneqq\min\{|t-t^{\prime}|:\phi_{1}(t,j+1)\in G_{1}(\phi(t,j)) ;\phi_{2}(t,j+1)=\phi_{2}(t,j);(t,j),(t,j+1)\in\mathrm{dom}\,\phi_{1}(t^{ \prime},j^{\prime}+1)\in G_{1}(\phi(t^{\prime},j^{\prime}));\phi_{2}(t^{ \prime},j^{\prime}+1)=\phi_{2}(t^{\prime},j^{\prime});(t^{\prime},j^{\prime}),(t^{\prime},j^{\prime}+1)\in\mathrm{dom}\,\phi;j^{\prime}>j\}\geq\tau\). Otherwise, if \(\tau_{j}^{1}\) exists and \(\tau_{j}^{1}<\tau\), jump \(j\) is called \(\tau\)-irregular in \(x_{1}\). \(\Box\)_ **Assumption 13**.: _Let \(\phi\) be any solution of the system in (17) with \(\|\phi(0,0)\|_{\mathcal{A}\times\mathcal{X}_{2}}\leq\Delta\). Then, there exists a finite number of jumps \(N^{*}\) and finite time interval \(\,T^{*}\), such that \(\phi\) has at most \(N^{*}\)\(\underline{\sigma}(\tau)\)-irregular jumps in \(x_{1}\), and they all occur before \(t\leq T^{*}\), where \(\underline{\sigma}\) is a function of class \(\mathcal{L}\). \(\Box\)_ **Corollary 1**.: _Let Assumptions 2, 8--13 hold. 
Then the set \(\mathcal{M}_{\mathcal{A}}\) is SGPAS as \((\gamma,\frac{1}{\tau},\varepsilon,\beta)\to 0\) for the hybrid system in (17). \(\Box\)_ **Proof.** The proof is analogous to the proofs of Theorems 1 and 2. An equivalent for Lemma 2 can be constructed with jumps of the \(x_{2}\) state. The rest of the proof is essentially the same. \(\blacksquare\) ## 3 Illustrative example In [17], the issue of connectivity control was approached as a Nash equilibrium problem. In numerous practical situations, multi-agent systems are constructed with the goal of maintaining specific connectivity as a secondary objective in addition to their primary objective. In the subsequent discussion, we consider a comparable problem in which each agent is responsible for detecting an unknown signal source while also preserving a certain level of connectivity. Unlike [17], both the robots and the controllers have hybrid dynamics in our example. Consider a multi-agent system consisting of unicycle vehicles, indexed by \(i\in\mathcal{I}\coloneqq\{1,\dots N\}\). Each agent is tasked with locating a source of a unique unknown signal. The strength of all signals abides by the inverse-square law, i.e. proportional to \(1/r^{2}\). Therefore, the inverse of the signal strength can be used as a cost function. Additionally, the agents must not drift apart from each other too much, as they should provide quick assistance to each other in case of critical failure. This is enforced by incorporating the signal strength of the fellows agents in the cost functions. Thus, we design the cost functions as follows: \[\forall i\in\mathcal{I}:h_{i}(u)=\|u_{i}-u_{i}^{s}\|^{2}+c\sum_{j\in\mathcal{I }_{-i}}\|u_{i}-u_{j}\|^{2}. \tag{21}\] where \(\mathcal{I}_{-i}\coloneqq\mathcal{I}\setminus\{i\}\), \(c,b>0\) and \(u_{i}^{s}\) represents the position of the source assigned to agent \(i\). Goal of each agent is to minimize their cost function, and the solution to this problem is a Nash equilibrium. ### Unicycle dynamics As the unicycles are dynamical systems, a reference tracking controller is necessary in order to move them to the desired positions. 
In our example, let each agent implement a hybrid feedback controller similar to the one in [10] for trajectory tracking: \[\chi_{i}^{u}=\mathrm{col}\left(x_{i},y_{i},\theta_{i}^{e},\tau_{i},\theta_{i},\hat{v}_{i},\hat{\omega}_{i}\right),\] \[\dot{\chi}_{i}^{u}=F_{i}^{u}(\chi_{i}^{u})\coloneqq\] \[\operatorname{col}\left(\hat{v}_{i}\cos\left(\theta_{i}\right),\hat{v}_{i}\sin\left(\theta_{i}\right),\omega_{\mathrm{r}}-\hat{\omega}_{i},\frac{1}{\sigma_{i}},\hat{\omega}_{i},0,0\right)\] \[\quad\text{if }\chi_{i}^{u}\in C_{i}^{u}\coloneqq\mathbb{R}^{3}\times[0,1]\times\mathbb{R}^{3}, \tag{22a}\] \[\chi_{i}^{u+}=G_{i}^{u}(\chi_{i}^{u})\coloneqq\operatorname{col}\left(x_{i},y_{i},\theta_{i}^{e},0,\theta_{i},v_{i},\omega_{i}\right)\] \[\quad\text{if }\chi_{i}^{u}\in D_{i}^{u}\coloneqq\mathbb{R}^{3}\times\{1\}\times\mathbb{R}^{3}, \tag{22b}\] where \(v_{i}=c_{1}(x_{i}^{e}-c_{3}\omega_{i}y_{i}^{e})-c_{3}c_{2,i}(\omega_{\mathrm{r}}-\omega_{i})y_{i}^{e}+c_{3}\omega_{i}^{2}x_{i}^{e}\), \(x_{i}^{e}\coloneqq\cos(\theta_{i})(u_{i}^{1}-x_{i})+\sin(\theta_{i})(u_{i}^{2}-y_{i})\), \(y_{i}^{e}\coloneqq-\sin(\theta_{i})(u_{i}^{1}-x_{i})+\cos(\theta_{i})(u_{i}^{2}-y_{i})\), \(\theta_{i}^{e}=\theta_{\mathrm{r}}-\theta_{i}\), \(\omega_{i}\coloneqq\omega_{\mathrm{r}}+c_{2,i}\theta_{i}^{e}\), \(\dot{\theta}_{\mathrm{r}}=\omega_{\mathrm{r}}=const.\), \(c_{1},c_{2,i},c_{3}>0\) are tuning parameters, \(\sigma_{i}\) is the sampling period parameter, and \(u_{i}^{1}\) and \(u_{i}^{2}\) are the reference positions. Differently from [10], the jumps are triggered by a timer, and the reference trajectory is that of a unicycle with a fixed position \((u_{i}^{1},u_{i}^{2})\) and constant rotational velocity \(\omega_{\mathrm{r}}\). Similarly to [10, Lemma 4, Thm. 5], it is possible to prove that the dynamics in (22) render the set \(\{\operatorname{col}\left(u_{i}^{1},u_{i}^{2},0\right)\}\times\widetilde{\mathcal{T}}_{i}\times\mathbb{R}^{3}\) SGPAS as \(\sigma_{i}\to 0\). **Theorem 3**.: _For \(c_{2,i}=\sigma_{i}\), \(c_{3}=\frac{1}{3\omega_{\mathrm{r}}}\), \(c_{1}=\frac{1}{2c_{3}}\), the dynamics in (22) render the set \(\{\operatorname{col}\left(u_{i}^{1},u_{i}^{2},0\right)\}\times\widetilde{\mathcal{T}}_{i}\times\mathbb{R}^{3}\) SGPAS as \(\sigma_{i}\to 0\). \(\Box\)_ [Proof.] See Appendix C. \(\blacksquare\) From the proof of Theorem 3, it follows that the system in (22), for all \(i\in\mathcal{I}\), satisfies Assumption 11. 
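For illustration, the velocity commands that (22) latches at each jump can be sketched in Python as follows; the helper below simply transcribes the error coordinates and command formulas given above (the function name and its organization are ours, not the authors' implementation).

```python
import numpy as np

def tracking_commands(x, y, theta, theta_r, u1, u2, omega_r, c1, c2i, c3):
    """Velocity commands (v_i, omega_i) latched at a jump of controller (22).

    (x, y, theta): unicycle pose, theta_r: reference heading,
    (u1, u2): reference position, omega_r: reference rotational velocity.
    """
    # position error expressed in the body frame of the unicycle
    x_e = np.cos(theta) * (u1 - x) + np.sin(theta) * (u2 - y)
    y_e = -np.sin(theta) * (u1 - x) + np.cos(theta) * (u2 - y)
    theta_e = theta_r - theta
    omega_i = omega_r + c2i * theta_e
    v_i = (c1 * (x_e - c3 * omega_i * y_e)
           - c3 * c2i * (omega_r - omega_i) * y_e
           + c3 * omega_i ** 2 * x_e)
    return v_i, omega_i
```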
### Nash equilibrium seeking reference controller To steer the reference positions towards the Nash equilibrium, we implement the following asynchronous zeroth-order controller: \[\chi^{\mathrm{c}}=\operatorname{col}\left(\mathbf{u},\mathbf{\xi},\mathbf{\mu},\mathbf{t}\right),\] \[\dot{\chi}^{\mathrm{c}}=F^{\mathrm{c}}(\chi^{\mathrm{c}})\coloneqq\operatorname{col}\left(\mathbf{0},\mathbf{0},\mathbf{0},\mathbf{\tau}^{-1}\right)\] \[\quad\text{if }\chi^{\mathrm{c}}\in C^{\mathrm{c}}\coloneqq\mathbb{R}^{m}\times\mathcal{N}\times\mathbb{S}^{m}\times[0,1]^{N}, \tag{23a}\] \[\quad\text{if }\chi^{\mathrm{c}}\in D^{\mathrm{c}}\coloneqq\mathbb{R}^{m}\times\mathcal{N}\times\mathbb{S}^{m}\times\mathcal{T}_{\mathrm{R}}, \tag{23b}\] where \(\mathbf{u}=\operatorname{col}\left(\left(u_{i}^{1},u_{i}^{2}\right)_{i\in\mathcal{I}}\right)\) is used as the reference position for the systems in (22), \(\mathbf{\xi}\) is the collective filter state bounded in a compact set \(\mathcal{N}\subset\mathbb{R}^{N}\) chosen large enough to encompass all possible values of the state for all practical applications, \(\mathbf{\mu}\in\mathbb{S}^{2N}\) are oscillator states, \(\mathbf{t}\) are the timer states that control the sampling of each individual robot, \(\mathbf{\tau}^{-1}=\tau_{0}\operatorname{col}\left(\left(\tau_{i}^{-1}\right)_{i\in\mathcal{I}}\right)\) are the sampling periods that satisfy [6, Assum. 9], \(\mathbf{x}\) are the positions of the unicycles, \(\alpha,\beta>0\) are small time-scale separation parameters, \(\mathcal{R}\coloneqq\operatorname{Diag}\left((\mathcal{R}_{i})_{i\in\mathcal{I}}\right)\), \(\mathcal{R}_{i}\coloneqq\operatorname{Diag}\left(\left[\begin{smallmatrix}\cos(\omega_{i}^{j})&-\sin(\omega_{i}^{j})\\ \sin(\omega_{i}^{j})&\cos(\omega_{i}^{j})\end{smallmatrix}\right]_{j\leq m_{i}}\right)\), \(\omega_{i}^{j}>0\) for all \(i\) and \(j\) are rotational frequencies and they satisfy [6, Assum. 8], \(\mathbb{D}\in\mathbb{R}^{2N\times 4N}\) is a matrix that selects every odd row from a vector of size \(4N\), \(a_{i}>0\) are small perturbation amplitude parameters, \(A\coloneqq\operatorname{diag}\left((a_{i})_{i\leq m}\right)\), \(J(\mathbf{x})=\operatorname{Diag}\left((J_{i}(x_{i},\mathbf{x}_{-i})I_{m_{i}})_{i\in\mathcal{I}}\right)\), \(\mathcal{T}\subset[0,1]^{N}\) is a closed invariant set in which all of the timers evolve and which excludes the initial conditions and their neighborhood for which we have concurrent sampling, \(\mathcal{T}_{\mathrm{R}}\coloneqq\left(\cup_{i\in\mathcal{I}}[0,1]^{i-1}\times\{1\}\times[0,1]^{N-i}\right)\cap\mathcal{T}\) is the set of timer intervals where one agent has triggered its sampling, and \(S_{x}:\mathcal{T}\to\mathbb{R}^{m\times m}\) and \(S_{\tau}:\mathcal{T}\to\mathbb{R}^{N\times N}\) are continuous functions that output diagonal matrices with ones on the positions that correspond to the states and timers of agents with \(t_{i}=1\), respectively, while other elements are equal to zero, when evaluated at \(\mathbf{t}\in\mathcal{T}_{\mathrm{R}}\). 
### The full system We define the collective state \(\chi\coloneqq\operatorname{col}\left(\chi^{\mathrm{c}},(\chi_{i}^{u})_{i\in\mathcal{I}}\right)\), collective flow map \(F(\chi)\coloneqq\operatorname{col}\left(F^{\mathrm{c}}(\chi^{\mathrm{c}}),\frac{1}{\varepsilon}(F_{i}^{u}(\chi_{i}^{u}))_{i\in\mathcal{I}}\right)\), collective flow set \(C\coloneqq C^{\mathrm{c}}\times(C_{i}^{u})_{i\in\mathcal{I}}\), collective jump map \(G(\chi)\coloneqq\operatorname{col}\left(G^{\mathrm{c}}(\chi^{\mathrm{c}}),(G_{i}^{u}(\chi_{i}^{u}))_{i\in\mathcal{I}}\right)\), collective jump set \(D\coloneqq(D^{\mathrm{c}}\times(C_{i}^{u})_{i\in\mathcal{I}})\cup(C^{\mathrm{c}}\times(D_{i}^{u})_{i\in\mathcal{I}})\), and the equilibrium set \(\mathcal{A}_{\chi}\coloneqq\{\mathbf{u}^{\star}\}\times\mathcal{N}\times\mathbb{S}^{N}\times\mathcal{T}\times\{\operatorname{col}\left(\mathbf{u}^{\star},\mathbf{0}\right)\}\times[0,1]^{N}\times\mathbb{R}^{3N}\). We see that the steady-state mapping is given by \(H(\chi^{\mathrm{c}})=\operatorname{col}\left(\mathbf{u},\mathbf{0}\right)\times[0,1]^{N}\times\mathbb{R}^{3N}\). Hence, the restricted system is equivalent to the one in [6, Equ. 22]. To show that Assumption 12 is satisfied, we note that [6, Thm. 1] and [6, Equ. E.10] assure that the fully discrete-time zeroth-order variant of the algorithm in [6, Equ. 22] has a Lyapunov function of the form \[\underline{\alpha}_{\mathrm{a}}\left(\left\|\mathbf{z}-\mathbf{u}^{\star}\right\|\right)\leq V_{\mathrm{a}}(\mathbf{z})\leq\overline{\alpha}_{\mathrm{a}}\left(\left\|\mathbf{z}-\mathbf{u}^{\star}\right\|\right)\] \[V_{\mathrm{a}}(\mathbf{z}^{+})-V_{\mathrm{a}}(\mathbf{z})\leq-\hat{\alpha}_{\alpha}\left(\alpha\right)\alpha_{\mathrm{a}}\left(\left\|\mathbf{z}-\mathbf{u}^{\star}\right\|\right)\] \[\quad\text{for }\left\|\mathbf{z}-\mathbf{u}^{\star}\right\|\geq\max\{\alpha_{\beta}(\beta),\alpha_{\alpha}(\alpha)\},\] where \(\mathbf{z}\) denotes the state of that discrete-time algorithm. Hence, it holds \[\underline{\alpha}_{\mathrm{a}}\left(\|\mathbf{z}-\mathbf{u}^{*}\|\right)\leq V_{1}(\mathbf{z})\leq\left(\overline{\alpha}_{\mathrm{a}}+\alpha_{\mathrm{a}}\circ\overline{\alpha}_{\mathrm{a}}^{-1}\circ\underline{\alpha}_{\mathrm{a}}\right)\left(\|\mathbf{z}-\mathbf{u}^{*}\|\right)\] \[\dot{V}_{1}(\mathbf{z})\leq-\tfrac{1}{2\tau_{0}}\sum_{i\in\mathcal{I}}\tau_{i}^{-1}\hat{\alpha}_{\alpha}\left(\alpha\right)\alpha_{\mathrm{a}}\left(\overline{\alpha}_{\mathrm{a}}^{-1}\left(\underline{\alpha}_{\mathrm{a}}\left(\|\mathbf{z}-\mathbf{u}^{*}\|\right)\right)\right)\] \[V_{1}(\mathbf{z}^{+})-V_{1}(\mathbf{z})\leq-\tfrac{1}{2}\hat{\alpha}_{\alpha}\left(\alpha\right)\alpha_{\mathrm{a}}\left(\|\mathbf{z}-\mathbf{u}^{*}\|\right)\] \[\text{for }\|\mathbf{z}-\mathbf{u}^{*}\|\geq\max\{\alpha_{\beta}(\beta),\alpha_{\alpha}(\alpha)\},\] which satisfies Assumption 12. Furthermore, it is easy to show that Assumptions 2, 8, 9 and 10 hold as well. Since \(\tau_{0}\) can be considered a tuning parameter for jump periods in the timer states \(\mathbf{t}\) in (23), we can guarantee satisfaction of Assumption 13. Hence, we satisfy all the Assumptions of Corollary 1, and for small enough parameters, the combined dynamics render the set \(\mathcal{A}_{\chi}\) SGPAS as \((\alpha,\beta,\max\tau_{i}^{-1},\varepsilon,\max\sigma_{i})\to 0\). 
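Before turning to the simulations, note that the game in (21) is quadratic, so its Nash equilibrium solves the linear system obtained by stacking the first-order conditions \(\nabla_{u_{i}}h_{i}(u)=0\). The following NumPy sketch computes it (the coupling weight `c` below is an assumed value, since the text does not report the one used in the experiments):

```python
import numpy as np

def nash_equilibrium(sources, c):
    """Nash equilibrium of the quadratic game (21).

    sources: (N, 2) array of source positions u_i^s, c: coupling weight.
    Row i of the first-order conditions reads
    (1 + c (N - 1)) u_i - c sum_{j != i} u_j = u_i^s.
    """
    N = sources.shape[0]
    M = (1.0 + c * (N - 1)) * np.eye(N) - c * (np.ones((N, N)) - np.eye(N))
    return np.linalg.solve(M, sources)          # row i is u_i at the equilibrium

u_s = np.array([[-4.0, -8.0], [-12.0, -3.0], [1.0, 7.0], [16.0, 8.0]])
u_star = nash_equilibrium(u_s, c=0.1)           # c = 0.1 is a hypothetical value
```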
For our numerical simulations, we choose the parameters: \(u_{1}^{s}=(-4,-8)\), \(u_{2}^{s}=(-12,-3)\), \(u_{3}^{s}=(1,7)\), \(u_{4}^{s}=(16,8)\), \((\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4})=\operatorname{col}\left(2,3,4,2\right)\times 10^{-3}\), \(c_{1}=\frac{1}{3}\), \(c_{3}=1.5\), \(\alpha=0.05\), \(\beta=0.003\), \(c_{2,i}=\sigma_{i}\), \(a_{i}=0.1\) for all \(i\), and \(\mathbf{t}(0,0)=(0,0.002,0.004,0.006)\); the perturbation frequencies \(\omega_{i}^{j}\) were chosen as different natural numbers with added random numbers of maximal amplitude \(0.5\), and the sampling of the Nash equilibrium seeking controller in (23) is five times slower than the sampling of the unicycle controller in (22a), i.e. \(\mathbf{\tau}=\operatorname{col}\left(1,1.5,2,1\right)\times 10^{-2}\). The numerical results are illustrated in Figures 1 and 2. We note that the trajectories converge to a neighborhood of the Nash equilibrium. ## 4 Conclusion The application of singular perturbation theory can be extended to systems where the restricted system evolves on the boundary layer manifold through both flows and jumps. Moreover, by introducing some mild technical assumptions, one can show convergence of the fast state components towards a restricted attractor set that does not encompass the complete space of fast variables. With this theoretical extension, we can examine control systems that employ hybrid plants, along with controllers that are "jump-driven", such as sampled controllers.
2305.19727
Unbalanced Low-rank Optimal Transport Solvers
The relevance of optimal transport methods to machine learning has long been hindered by two salient limitations. First, the $O(n^3)$ computational cost of standard sample-based solvers (when used on batches of $n$ samples) is prohibitive. Second, the mass conservation constraint makes OT solvers too rigid in practice: because they must match \textit{all} points from both measures, their output can be heavily influenced by outliers. A flurry of recent works in OT has addressed these computational and modelling limitations, but has resulted in two separate strains of methods: While the computational outlook was much improved by entropic regularization, more recent $O(n)$ linear-time \textit{low-rank} solvers hold the promise to scale up OT further. On the other hand, modelling rigidities have been eased owing to unbalanced variants of OT, that rely on penalization terms to promote, rather than impose, mass conservation. The goal of this paper is to merge these two strains, to achieve the promise of \textit{both} versatile/scalable unbalanced/low-rank OT solvers. We propose custom algorithms to implement these extensions for the linear OT problem and its Fused-Gromov-Wasserstein generalization, and demonstrate their practical relevance to challenging spatial transcriptomics matching problems.
Meyer Scetbon, Michal Klein, Giovanni Palla, Marco Cuturi
2023-05-31T10:39:51Z
http://arxiv.org/abs/2305.19727v1
# Unbalanced Low-rank Optimal Transport Solvers ###### Abstract The relevance of optimal transport methods to machine learning has long been hindered by two salient limitations. First, the \(O(n^{3})\) computational cost of standard sample-based solvers (when used on batches of \(n\) samples) is prohibitive. Second, the mass conservation constraint makes OT solvers too rigid in practice: because they must match _all_ points from both measures, their output can be heavily influenced by outliers. A flurry of recent works in OT has addressed these computational and modelling limitations, but has resulted in two separate strains of methods: While the computational outlook was much improved by entropic regularization, more recent \(O(n)\) linear-time _low-rank_ solvers hold the promise to scale up OT further. On the other hand, modelling rigidities have been eased owing to unbalanced variants of OT, that rely on penalization terms to promote, rather than impose, mass conservation. The goal of this paper is to merge these two strains, to achieve the promise of _both_ versatile/scalable unbalanced/low-rank OT solvers. We propose custom algorithms to implement these extensions for the linear OT problem and its Fused-Gromov-Wasserstein generalization, and demonstrate their practical relevance to challenging spatial transcriptomics matching problems. ## 1 Introduction Recent machine learning (ML) works have witnessed a flurry of activity around optimal transport (OT) methods. The OT toolbox provides convenient, intuitive and versatile ways to quantify the difference between two probability measures, either to quantify a distance (the Wasserstein and Gromov-Wasserstein distances), or, in more elaborate scenarios, by computing a push-forward map that can transform one measure into the other (Peyre and Cuturi, 2019). Recent examples include, e.g., single-cell omics (Bunne et al., 2021, 2022; Demetci et al., 2020; Nitzan et al., 2019; Cang et al., 2023; Klein et al., 2023), attention mechanisms (Tay et al., 2020; Sander et al., 2022), self-supervised learning(Caron et al., 2020; Oquab et al., 2023), and learning on graphs (Vincent-Cuz et al., 2023). **On the challenges of using OT.** Despite their long presence in ML (Rubner et al., 2000), OT methods have long suffered from various limitations, that arise from their statistical, computational, and modelling aspects. The _statistical_ argument is commonly referred to as the curse-of-dimensionality of OT estimators: the Wasserstein distance between two probability densities, and its associated optimal Monge map, is poorly approximated using samples as the dimension \(d\) of observation grows (Dudley et al., 1966; Boissard and Le Gouic, 2014). On the _computational_ side, computing OT between a pair of \(n\) samples involves solving a (generalized) matching problem, with a price of \(O(n^{3})\) and above (Kuhn, 1955; Ahuja et al., 1993). Finally, the original _model_ for OT rests on a mass conservation constraint: all observations from either samples must be accounted for, including outliers that are prevalent in machine learning datasets. Combined, these weaknesses have long hindered the use of OT, until a more recent generation of solvers addressed these three crucial issues. **The Entropic Success Story.** The winning approach, so far, to carry out that agenda has been entropic regularization methods (Cuturi, 2013). 
The computational virtues of the Sinkhorn algorithm when solving OT (Altschuler et al., 2017; Peyre et al., 2016; Solomon et al., 2016) come with statistical efficiency (Genevay et al., 2019; Mena and Niles-Weed, 2019; Chizat et al., 2020), and can also be seamlessly combined with _unbalanced_ formulations by penalizing - rather than constraining - mass conservation, both for the linear (Frogner et al., 2015; Chizat et al., 2018; Sejourne et al., 2022; Fatras et al., 2021) and quadratic (Sejourne et al., 2021) problems. These developments have all been implemented in popular OT packages (Feydy et al., 2019; Flamary et al., 2021; Cuturi et al., 2022). **The Low-Rank Alternative.** A recent strain of solvers relies instead on _low-rank_ (LR) properties of cost and coupling matrices (Forrow et al., 2018; Scetbon and Cuturi, 2020; Scetbon et al., 2021). Much like entropic solvers, these LR solvers have a better statistical outlook (Scetbon and Cuturi, 2022) and extend to GW problems (Scetbon et al., 2022). In stark contrast to entropic solvers, however, LR solvers benefit from linear complexity \(O(nrd)\) w.r.t. sample size \(n\) (using rank \(r\) and cost dimension \(d\)) that can scale to ambitious tasks where entropic solvers fail (Klein et al., 2023). **The Need for Unbalanced Low-Rank Solvers.** LR solvers do suffer, however, from a major practical limitation: their inability to handle unbalanced problems. Yet, unbalancedness is a crucial ingredient for OT to be practically relevant. This is exemplified by the fact that unbalancedness played a crucial role in the seminal reference (Schiebinger et al., 2019), where it is used to model cell birth and death. **Our Contributions.** We propose in this work to lift this last limitation for LR solvers to: * Incorporate unbalanced regularizers to define a LR linear solver (§ 3.1); * Provide accelerated algorithms, inspired by some of the recent corrections proposed by (Sejourne et al., 2022), to isolate translation terms that appear in dual subroutines (§ 3.2); * Carry over and adapt these approaches to the GW (§ 3.3) and Fused-GW problems (§ 3.4); * Carry out an exhaustive hyperparameter selection procedure within large scale OT tasks (spatial transcriptomics, brain imaging), and demonstrate the benefits of our approach (§ 4). ## 2 Reminders on Low-Rank Transport and Unbalanced Transport We consider two metric spaces \((\mathcal{X},d_{\mathcal{X}})\) and \((\mathcal{Y},d_{\mathcal{Y}})\), as well as a cost function \(c:\mathcal{X}\times\mathcal{Y}\to[0,+\infty[\). The simplex \(\Delta_{n}^{+}\) holds all positive \(n\)-vectors summing to \(1\). For \(n,m\geq 1,a\in\Delta_{n}^{+}\), and \(b\in\Delta_{m}^{+}\), given points \(x_{1},\ldots,x_{n}\in\mathcal{X}\) and \(y_{1},\ldots,y_{m}\in\mathcal{Y}\), we define two discrete probability measures \(\mu\) and \(\nu\) as \(\mu:=\sum_{i=1}^{n}a_{i}\delta_{x_{i}}\), \(\nu:=\sum_{j=1}^{m}b_{j}\delta_{y_{j}}\), where \(\delta_{z}\) is the Dirac mass at \(z\). 
**Cost matrices.** For \(q\geq 1\), consider first two square pairwise _cost_ matrices, each encoding the geometry of points _within_ the supports of \(\mu\) and \(\nu\), and a rectangular matrix that encodes costs _across_ their supports: \[A:=[d_{\mathcal{X}}^{q}(x_{i},x_{i^{\prime}})]_{1\leq i,i^{\prime}\leq n},\ B:=[d_{\mathcal{Y}}^{q}(y_{j},y_{j^{\prime}})]_{1\leq j,j^{\prime}\leq m}\,,\ C:=[c(x_{i},y_{j})]_{1\leq i,j\leq n,m}\,.\] **The Kantorovich Formulation of OT** is defined as the following linear program, defined by \(C\): \[\text{OT}(\mu,\nu):=\min_{P\in\Pi_{a,b}}\langle C,P\rangle\,,\quad\text{where}\quad\Pi_{a,b}:=\big{\{}P\in\mathbb{R}_{+}^{n\times m},\ \text{s.t.}\ P\mathbf{1}_{m}=a,\ P^{T}\mathbf{1}_{n}=b\big{\}}\,. \tag{1}\] **The Low-Rank Formulation of OT** is best understood as a variant of (1) that rests on a low-rank _property_ for cost matrix \(C\), and low-rank _constraints_ for couplings \(P\). More precisely, Scetbon et al. (2021) propose to constrain the set of admissible couplings to those, within \(\Pi_{a,b}\), that have a non-negative rank of \(r\geq 1\). That set can be equivalently reparametrized as \[\Pi_{a,b}(r)=\{P\in\mathbb{R}_{+}^{n\times m}|P=Q\operatorname{diag}(1/g)R^{T},\ Q\in\Pi_{a,g},\ R\in\Pi_{b,g},\ \text{ and }\ g\in\Delta_{r}^{+}\}.\] The low-rank optimal transport (LOT) problem simply uses that restriction in (1) to define: \[\text{LOT}_{r}(\mu,\nu):=\min_{P\in\Pi_{a,b}(r)}\langle C,P\rangle=\min_{Q\in\Pi_{a,g},\,R\in\Pi_{b,g},\,g\in\Delta_{r}^{+}}\langle C,Q\operatorname{diag}(1/g)R^{T}\rangle\,. \tag{2}\] Scetbon et al. (2021) propose and prove the convergence of a mirror-descent scheme to solve (2), and obtain linear time and memory complexities with respect to the number of samples, where each iteration in that descent scales as \((n+m)rd\), where \(d\) is the rank of \(C\). **The Unbalanced Formulation of OT** starts from (1) as well, but proposes to do without \(\Pi_{a,b}\) and its marginal constraints (Frogner et al., 2015; Chizat et al., 2018), and to rely instead on two regularizers: \[\text{UOT}(\mu,\nu):=\min_{P\in\mathbb{R}_{+}^{n\times m}}\langle C,P\rangle+\tau_{1}\text{KL}(P\mathbf{1}_{m}|a)+\tau_{2}\text{KL}(P^{T}\mathbf{1}_{n}|b). \tag{3}\] This formulation is solved using entropic regularization, with modified Sinkhorn updates (Frogner et al., 2015). _Proposing an efficient algorithm able to merge (2) with (3) is the first goal of this paper._ **Gromov-Wasserstein (GW) Considerations.** The GW problem (Memoli, 2011) is a generalization of (1) where the energy \(\mathcal{Q}_{A,B}\) is a quadratic function of \(P\) defined through inner cost matrices \(A\), \(B\): \[\mathcal{Q}_{A,B}(P):=\sum_{i,j,i^{\prime},j^{\prime}}(A_{ii^{\prime}}-B_{jj^{\prime}})^{2}P_{ij}P_{i^{\prime}j^{\prime}}=\mathbf{1}_{m}^{T}P^{T}A^{\odot 2}P\mathbf{1}_{m}+\mathbf{1}_{n}^{T}PB^{\odot 2}P^{T}\mathbf{1}_{n}-2\langle APB,P\rangle. \tag{4}\] To minimize (4), the default approach rests on entropic regularization (Solomon et al., 2016; Peyre et al., 2016) and variants (Sato et al., 2020; Blumberg et al., 2020; Xu et al., 2019; Li et al., 2023). Scetbon et al. (2022) adapted the low-rank framework to minimize \(\mathcal{Q}_{A,B}\) over low-rank matrices \(P\), achieving a linear-time complexity when \(A\) and \(B\) are themselves low-rank. Independently, (Sejourne et al., 2021) proposed an unbalanced generalization that also applies to GW and which can be implemented practically using entropic regularization. 
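The linear-time claim behind (2) hinges on never forming \(C\) or \(P\) explicitly. The following NumPy sketch (a schematic only, assuming a factorization \(C\approx C_{1}C_{2}^{T}\) is available, as it is in closed form for the squared Euclidean cost) evaluates the objective of a low-rank coupling \(P=Q\operatorname{diag}(1/g)R^{T}\) in \(O((n+m)rd)\) operations:

```python
import numpy as np

def lot_objective(C1, C2, Q, R, g):
    """<C, Q diag(1/g) R^T> with C = C1 @ C2.T, without forming n x m arrays.

    C1: (n, d), C2: (m, d), Q: (n, r), R: (m, r), g: (r,) positive weights.
    Uses <C, P> = sum_l [Q^T C1 C2^T R]_{ll} / g_l.
    """
    M1 = Q.T @ C1                    # (r, d)
    M2 = R.T @ C2                    # (r, d)
    return float(np.sum((M1 * M2).sum(axis=1) / g))

def squared_euclidean_factors(X, Y):
    """Exact factors of the squared Euclidean cost ||x - y||^2, with d + 2 columns."""
    n, m = X.shape[0], Y.shape[0]
    C1 = np.hstack([(X ** 2).sum(1, keepdims=True), np.ones((n, 1)), -2.0 * X])
    C2 = np.hstack([np.ones((m, 1)), (Y ** 2).sum(1, keepdims=True), Y])
    return C1, C2
```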
Finally, the minimization of a composite objective involving both \(\mathcal{Q}_{A,B}\) and \(\langle C,\cdot\rangle\) is known as the _fused_ GW problem (Vayer et al., 2018). ## 3 Unbalanced Low-Rank Transport ### Unbalanced Low-rank Linear Optimal Transport We incorporate unbalancedness into low-rank solvers (Scetbon et al., 2021, 2022), moving gradually from the linear problem to the more involved GW and FGW problems. Using the framework of (Frogner et al., 2015; Chizat et al., 2018), we can first extend the definition of LOT, introduced in (2), to the unbalanced case by considering the following optimization problem: \[\text{ULOT}_{r}(\mu,\nu):=\min_{P:\ \text{rk}_{+}(P)\leq r}\langle C,P\rangle+\tau_{1}\text{KL}(P\mathbf{1}_{m}|a)+\tau_{2}\text{KL}(P^{T}\mathbf{1}_{n}|b), \tag{5}\] where \(\text{rk}_{+}(P)\) denotes the nonnegative rank of \(P\). Therefore, by denoting \(\Pi_{r}:=\{(Q,R,g)\in\mathbb{R}_{+}^{n\times r}\times\mathbb{R}_{+}^{m\times r}\times\mathbb{R}_{+}^{r}\colon Q^{T}\mathbf{1}_{n}=R^{T}\mathbf{1}_{m}=g\}\), and using the reparametrization of low-rank couplings, we obtain the following equivalent formulation of ULOT: \[\text{ULOT}_{r}(\mu,\nu)=\min_{(Q,R,g)\in\Pi_{r}}\underbrace{\langle C,Q\ \text{diag}(1/g)R^{T}\rangle}_{\mathcal{L}_{C}(Q,R,g)}+\underbrace{\tau_{1}\text{KL}(Q\mathbf{1}_{r}|a)+\tau_{2}\text{KL}(R\mathbf{1}_{r}|b)}_{\mathcal{G}_{a,b}(Q,R,g)}. \tag{6}\] We introduce slightly more compact notations for \(\mathcal{G}_{a,b}(Q,R,g)=F_{\tau_{1},a}(Q\mathbf{1}_{r})+F_{\tau_{2},b}(R\mathbf{1}_{r}),\) where \(F_{\tau,z}(s)=\tau\text{KL}(s|z)\) for \(\tau>0\) and \(z\geq 0\) coordinate-wise. To solve (6), and using this split, we move away from mirror-descent and apply instead proximal gradient-descent with respect to the KL divergence. At each iteration, we consider a linear approximation of \(\mathcal{L}_{C}\) to which a KL penalization is added, as in classical mirror descent; however, we leave \(\mathcal{G}_{a,b}\) intact at each iteration. Borrowing notations from (Scetbon et al., 2021), we must solve at each iteration the convex optimization problem: \[(Q_{k+1},R_{k+1},g_{k+1}):=\operatorname*{argmin}_{\boldsymbol{\zeta}=(Q,R,g)\in\Pi_{r}}\frac{1}{\gamma_{k}}\text{KL}(\boldsymbol{\zeta},\boldsymbol{\xi}_{k})+\tau_{1}\text{KL}(Q\mathbf{1}_{r}|a)+\tau_{2}\text{KL}(R\mathbf{1}_{r}|b) \tag{7}\] where \((Q_{0},R_{0},g_{0})\in\Pi_{r}\) is an initial point, \(\boldsymbol{\xi}_{k}:=(\xi_{k}^{(1)},\xi_{k}^{(2)},\xi_{k}^{(3)})\) holds running cost matrices defined as \[\xi_{k}^{(1)}:=Q_{k}\odot e^{-\gamma_{k}CR_{k}\operatorname{diag}(1/g_{k})},\quad\xi_{k}^{(2)}:=R_{k}\odot e^{-\gamma_{k}C^{T}Q_{k}\operatorname{diag}(1/g_{k})},\quad\xi_{k}^{(3)}:=g_{k}\odot e^{\gamma_{k}\omega_{k}/g_{k}^{2}},\] with \([\omega_{k}]_{i}:=[Q_{k}^{T}CR_{k}]_{i,i}\) for all \(i\in\{1,\ldots,r\}\), and \((\gamma_{k})_{k\geq 0}\) is a sequence of positive step sizes. **Reformulation using Duality.** To solve (7), we follow (Scetbon et al., 2021) and apply Dykstra's algorithm (Dykstra, 1983). The iterations of this algorithm take a very simple form that can be obtained as an alternating maximization on the dual formulation of (7), provided as follows. 
**Proposition 1**.: _The convex optimization problem defined in (7) admits the following dual:_ \[\begin{split}\sup_{f_{1},h_{1},f_{2},h_{2}}&\mathcal{D} _{k}(f_{1},h_{1},f_{2},h_{2}):=-F^{*}_{\tau_{1},a}(-f_{1})-\frac{1}{\gamma_{k}} \langle e^{\gamma_{k}(f_{1}\oplus h_{1})}-1,\xi_{k}^{(1)}\rangle\\ &-F^{*}_{\tau_{2},b}(-f_{2})-\frac{1}{\gamma_{k}}\langle e^{\gamma _{k}(f_{2}\oplus h_{2})}-1,\xi_{k}^{(2)}\rangle-\frac{1}{\gamma_{k}}\langle e^ {-\gamma_{k}(h_{1}+h_{2})}-1,\xi_{k}^{(3)}\rangle\end{split} \tag{8}\] _where \(h_{1},h_{2}\in\mathbb{R}^{r}\), \(f_{1}\in\mathbb{R}^{n}\), \(f_{2}\in\mathbb{R}^{m}\), \(F^{*}_{\tau,z}(\cdot):=\sup_{y}\{\langle y,\cdot\rangle-F_{\tau,z}(y)\}\) is the convex conjugate of \(F_{\tau,z}\). In addition strong duality holds and the primal problem admits a unique minimizer._ **Remark 1**.: _While we stick to KL regularizers in this work for simplicity, it is worth noting that this can be extended to more generic regularizers \(F_{\tau_{1},a}\) and \(F_{\tau_{2},b}\), as considered by Chizat et al. (2018)._ We use an alternating maximization scheme to solve (8). Starting from \(h_{1}^{(0)}=h_{2}^{(0)}=\mathbf{0}_{r}\), we apply for \(\ell\geq 0\) the following updates (dropping iteration number \(k\) in (7) for simplicity): \[\begin{split} f_{1}^{(\ell+1)}:=\arg\sup_{z}\mathcal{D}(z,h_{1}^ {(\ell)},f_{2}^{(\ell)},h_{2}^{(\ell)}),\,f_{2}^{(\ell+1)}:=\arg\sup_{z} \mathcal{D}(f_{1}^{(\ell+1)},h_{1}^{(\ell)},z,h_{2}^{(\ell)}),\\ (h_{1}^{(\ell+1)},h_{2}^{(\ell+1)}):=\arg\sup_{z_{1},z_{2}}\mathcal{ D}(f_{1}^{(\ell+1)},z_{1},f_{2}^{(\ell+1)},z_{2}).\end{split}\] These maximizations can all be obtained in closed form, to result in the closed-form updates: \[\begin{split}\exp(\gamma f_{1}^{(\ell+1)})=\left(\frac{a}{\xi^{( 1)}\exp(\gamma h_{1}^{(\ell)})}\right)^{\frac{\tau_{1}}{\tau_{1}+1/\gamma}}, \quad\exp(\gamma f_{2}^{(\ell+1)})=\left(\frac{b}{\xi^{(2)}\exp(\gamma h_{2}^ {(\ell)})}\right)^{\frac{\tau_{2}}{\tau_{2}+1/\gamma}}\\ g_{\ell+1}:=\left(\xi^{(3)}\odot(\xi^{(1)})^{T}\exp(\gamma f_{1}^{( \ell+1)})\odot(\xi^{(2)})^{T}\exp(\gamma f_{2}^{(\ell+1)})\right)^{1/3}\\ \exp(\gamma h_{1}^{(\ell+1)})=\frac{g_{\ell+1}}{(\xi^{(1)})^{T} \exp(\gamma f_{1}^{(\ell+1)})},\quad\exp(\gamma h_{2}^{(\ell+1)})=\frac{g_{ \ell+1}}{(\xi^{(2)})^{T}\exp(\gamma f_{2}^{(\ell+1)})}\end{split}\] When using "scaling" representations for these dual variables, \(\ell\geq 0\), \(u_{i}^{(\ell)}:=\exp(\gamma f_{i}^{(\ell)})\) and \(v_{i}^{(\ell)}:=\exp(\gamma h_{i}^{(\ell)})\) for \(i\in\{1,2\}\), we obtain a simple update, provided in the appendix (Alg. 6). **Initialization and Termination.** We use the stopping criterion proposed in (Scetbon et al., 2021) to terminate the algorithm, \(\Delta(\mathbf{\zeta},\mathbf{\tilde{\zeta}},\gamma):=\frac{1}{\gamma^{2}}(\mathrm{ KL}(\mathbf{\zeta},\mathbf{\tilde{\zeta}})+\mathrm{KL}(\mathbf{\tilde{\zeta}},\mathbf{\zeta}))\). Combined with practical improvements proposed in (Scetbon and Cuturi, 2022) to initialize the algorithm, and adapt the choice of \(\gamma_{k}\) at each iteration \(k\) of the outer loop, we can now summarize our proposal in Algorithm 1, which can be seen as an extension of (Scetbon et al., 2021, Alg.2). **Convergence and Complexity.** The proof of convergence of the Dykstra algorithm (Alg. 6) can be found for example in (Bauschke and Combettes, 2008)). 
In addition, (Scetbon et al., 2021) show the convergence of their scheme towards a stationary points w.r.t to the criterion \(\Delta(\cdot,\cdot,\gamma)\) for \(\gamma\) fixed along the iterations of the outer loop. In terms of complexity, given \(\mathbf{\xi}\), solving Eq. (7) requires a time and memory complexity of \(\mathcal{O}((n+m)r)\). However computing \(\mathbf{\xi}\) requires in general \(\mathcal{O}((n^{2}+m^{2})r)\) time and \(\mathcal{O}(n^{2}+m^{2})\) memory. In (Scetbon et al., 2021), the authors propose to consider low-rank approximation of the cost matrix \(C\) of the form \(C\simeq C_{1}C_{2}^{T}\) where \(C_{1}\in\mathbb{R}^{n\times d}\) and \(C_{2}\in\mathbb{R}^{m\times d}\): in that case computing \(\mathbf{\xi}\) can be done in \(\mathcal{O}((n+m)rd)\) time and \(\mathcal{O}((n+m)(r+d))\) memory. Such approximations can be obtained using the algorithm in (Indyk et al., 2019) which guarantees that for any distance matrix \(C\in\mathbb{R}^{n\times m}\) and \(\alpha>0\) it can outputs matrices \(C_{1}\in\mathbb{R}^{n\times d}\), \(C_{2}\in\mathbb{R}^{m\times d}\) in \(\mathcal{O}((m+n)\text{poly}(\frac{d}{\alpha}))\) algebraic operations such that with probability at least \(0.99\) that \(\|C-C_{1}C_{2}^{T}\|_{F}^{2}\leq\|C-C_{d}\|_{F}^{2}+\alpha\|C\|_{F}^{2}\), where \(C_{d}\) denotes the best rank-\(d\) approximation to \(C\). ### Improvements on the Unbalanced Dykstra Algorithm A well documented source of instability of unbalanced formulations of OT lies in capturing efficiently what optimal mass is targeted by such formulations. Sejourne et al. (2022) have proposed a technique to address this issue and lower significantly computational costs. They propose first a dual objective that is _translation_ invariant. We take inspiration from this strategy and adapt to our problem, to propose the following variant of (8): \[\sup_{\tilde{f}_{1},\tilde{h}_{1},\tilde{f}_{2},\tilde{h}_{2}}\left(\mathcal{D} _{\textsf{TI}}(\tilde{f}_{1},\tilde{h}_{1},\tilde{f}_{2},\tilde{h}_{2}):=\sup_ {\lambda_{1},\lambda_{2}\in\mathbb{R}}\mathcal{D}(\tilde{f}_{1}+\lambda_{1}, \tilde{h}_{1}-\lambda_{1},\tilde{f}_{2}+\lambda_{2},\tilde{h}_{2}-\lambda_{2})\right) \tag{9}\] It is clear from the reparametrization that both problems (8) and (9) have the same value and also that \((\tilde{f}_{1},\tilde{h}_{1},\tilde{f}_{2},\tilde{h}_{2})\) is solution of (9) if and only if \((\tilde{f}_{1}+\lambda_{1}^{*},\tilde{h}_{1}-\lambda_{1}^{*},\tilde{f}_{2}+ \lambda_{2}^{*},\tilde{h}_{2}-\lambda_{2}^{*})\) is solution of (8) where \((\lambda_{1}^{*},\lambda_{2}^{*})\) solves \(\mathcal{D}_{\textsf{TI}}(\tilde{f}_{1},\tilde{h}_{1},\tilde{f}_{2},\tilde{h} _{2})\). To solve (9), we show that the variational formulation of the translation invariant dual objective targeted inside (9) can be obtained in closed form. 
**Proposition 2**.: _Let \(\tilde{f}_{1}\in\mathbb{R}^{n}\), \(\tilde{f}_{2}\in\mathbb{R}^{m}\) and \(\tilde{h}_{1},\tilde{h}_{2}\in\mathbb{R}^{r}\), then the inner problem defined in (9) by \(\mathcal{D}_{\textsf{TI}}(\tilde{f}_{1},\tilde{h}_{1},\tilde{f}_{2},\tilde{h}_{2})\) admits a unique solution \((\lambda_{1}^{*},\lambda_{2}^{*})\) and we have that_ \[\lambda_{1}^{\star}:=\left(1-\frac{\tau_{1}\tau_{2}}{(1/\gamma+\tau_{1})(1/\gamma+\tau_{2})}\right)^{-1}\left(\frac{\tau_{1}/\gamma}{1/\gamma+\tau_{1}}c_{1}-\frac{\tau_{1}/\gamma}{1/\gamma+\tau_{1}}\frac{\tau_{2}}{1/\gamma+\tau_{2}}c_{2}\right) \tag{10}\] \[\lambda_{2}^{\star}:=\left(1-\frac{\tau_{1}\tau_{2}}{(1/\gamma+\tau_{1})(1/\gamma+\tau_{2})}\right)^{-1}\left(\frac{\tau_{2}/\gamma}{1/\gamma+\tau_{2}}c_{2}-\frac{\tau_{1}}{1/\gamma+\tau_{1}}\frac{\tau_{2}/\gamma}{1/\gamma+\tau_{2}}c_{1}\right) \tag{11}\] _where_ \[c_{1}:=\log\left(\frac{\langle\exp(-\tilde{f}_{1}/\tau_{1}),a\rangle}{\langle\exp(-\gamma(\tilde{h}_{1}+\tilde{h}_{2})),\xi^{(3)}\rangle}\right),\quad\text{and}\quad c_{2}:=\log\left(\frac{\langle\exp(-\tilde{f}_{2}/\tau_{2}),b\rangle}{\langle\exp(-\gamma(\tilde{h}_{1}+\tilde{h}_{2})),\xi^{(3)}\rangle}\right).\] We are now ready to perform an alternating maximization scheme on the translation invariant formulation of the dual \(\mathcal{D}_{\textsf{TI}}\). Indeed, using Danskin's theorem (under the assumption that \(\lambda_{1}^{*},\lambda_{2}^{*}\) do not diverge), one obtains a variant of Algorithm 6, summarized in Algorithm 3. ``` Inputs:\(a,b,\xi^{(3)},u_{1},v_{1},u_{2},v_{2},\gamma,\tau_{1},\tau_{2}\) \(\tilde{u}_{1}\gets u_{1}^{-1/(\gamma\tau_{1})},\;\tilde{u}_{2}\gets u_{2}^{-1/(\gamma\tau_{2})}\) \(c_{1}\leftarrow\log(\langle\tilde{u}_{1},a\rangle)-\log(\langle\xi^{(3)},v_{1}^{-1}\odot v_{2}^{-1}\rangle),\;\;c_{2}\leftarrow\log(\langle\tilde{u}_{2},b\rangle)-\log(\langle\xi^{(3)},v_{1}^{-1}\odot v_{2}^{-1}\rangle)\) Result:\(\lambda_{1}^{*},\;\;\lambda_{2}^{*}\) as in (10), (11) ``` **Algorithm 2** compute-lambdas\((a,b,\xi^{(3)},u_{1},v_{1},u_{2},v_{2},\gamma,\tau_{1},\tau_{2})\) ``` Inputs:\(a,b,\mathbf{\xi}=(\xi^{(1)},\xi^{(2)},\xi^{(3)}),\gamma,\tau_{1},\tau_{2},\delta\) \(v_{1}=v_{2}=\mathbf{1}_{r}\), \(u_{1}=\mathbf{1}_{n}\), \(u_{2}=\mathbf{1}_{m}\) repeat \(\tilde{v}_{1}=v_{1},\ \tilde{v}_{2}=v_{2},\ \tilde{u}_{1}=u_{1},\ \tilde{u}_{2}=u_{2}\) \(\lambda_{1},\lambda_{2}\leftarrow\text{compute-lambdas}(a,b,\xi^{(3)},u_{1},v_{1},u_{2},v_{2},\gamma,\tau_{1},\tau_{2})\) (Alg. 2) \(u_{1}=\left(\frac{a}{\xi^{(1)}v_{1}}\right)^{\frac{\tau_{1}}{\tau_{1}+1/\gamma}}\exp(-\lambda_{1}/\tau_{1})^{\frac{\tau_{1}}{1/\gamma+\tau_{1}}},\quad u_{2}=\left(\frac{b}{\xi^{(2)}v_{2}}\right)^{\frac{\tau_{2}}{\tau_{2}+1/\gamma}}\exp(-\lambda_{2}/\tau_{2})^{\frac{\tau_{2}}{1/\gamma+\tau_{2}}},\) \(\lambda_{1},\lambda_{2}\leftarrow\text{compute-lambdas}(a,b,\xi^{(3)},u_{1},v_{1},u_{2},v_{2},\gamma,\tau_{1},\tau_{2})\) (Alg. 
### Unbalanced Low-rank Gromov-Wasserstein

The low-rank Gromov-Wasserstein (LGW) problem (Scetbon et al., 2022) between the two discrete metric measure spaces \((\mu,d_{\mathcal{X}})\) and \((\nu,d_{\mathcal{Y}})\), written for compactness using \((a,A)\) and \((b,B)\), reads

\[\text{LGW}_{r}((a,A),(b,B))=\min_{P\in\Pi_{a,b}(r)}\mathcal{Q}_{A,B}(P), \tag{12}\]

Following SS 3.1, we introduce the unbalanced low-rank Gromov-Wasserstein problem (ULGW). There is, however, an important difference with (12): when \(P\) is constrained to be in \(\Pi_{a,b}\), the first two terms of the RHS in (12) simplify to \(a^{T}A^{\odot 2}a+b^{T}B^{\odot 2}b\). Hence, they are constant and can be discarded when optimizing. In an unbalanced setting, these terms vary and must be accounted for:

\[\text{ULGW}_{r}((a,A),(b,B))=\min_{(Q,R,g)\in\Pi_{r}}\mathcal{Q}_{A,B}(Q,R,g)+\mathcal{G}_{a,b}(Q,R,g), \tag{13}\]

where \(\mathcal{G}_{a,b}(Q,R,g):=\tau_{1}\text{KL}(Q\mathbf{1}_{r}|a)+\tau_{2}\text{KL}(R\mathbf{1}_{r}|b)\) penalizes the marginals of the factored plan. As in the linear case, we solve (13) with a mirror-descent scheme: at iteration \(k\), \(\mathcal{Q}_{A,B}\) is linearized and a KL proximal term is added, which yields the sub-problem

\[\min_{(Q,R,g)\in\Pi_{r}}\frac{1}{\gamma_{k}}\text{KL}((Q,R,g)|\boldsymbol{\xi}_{k})+\mathcal{G}_{a,b}(Q,R,g), \tag{14}\]

solved with Algorithm 3, where the kernel \(\boldsymbol{\xi}_{k}\) is built from the gradients of \(\mathcal{Q}_{A,B}\) as in the scheme below.

**Inputs:** \(A,B,a,b,r,\gamma_{0},\tau_{1},\tau_{2},\delta\)
\(Q,R,g\leftarrow\) Initialization as proposed in (Scetbon and Cuturi, 2022)
**repeat**
\(\tilde{Q}=Q,\ \tilde{R}=R,\ \tilde{g}=g\)
\(\nabla_{Q}=4AQ\operatorname{diag}(1/g)R^{T}BR\operatorname{diag}(1/g)+2Q\mathbf{1}_{r}\mathbf{1}_{r}^{T}\)
\(\nabla_{R}=4BR\operatorname{diag}(1/g)Q^{T}AQ\operatorname{diag}(1/g)+2R\mathbf{1}_{r}\mathbf{1}_{r}^{T}\)
\(\omega\leftarrow\mathcal{D}(Q^{T}AQ\operatorname{diag}(1/g)R^{T}BR),\ \ \nabla_{g}=-\omega/g^{2}\)
\(\gamma\leftarrow\gamma_{0}/\max(\|\nabla_{Q}\|_{\infty}^{2},\|\nabla_{R}\|_{\infty}^{2},\|\nabla_{g}\|_{\infty}^{2})\)
\(\xi^{(1)}\gets Q\odot\exp(-\gamma\nabla_{Q}),\ \xi^{(2)}\gets R\odot\exp(-\gamma\nabla_{R}),\ \xi^{(3)}\gets g\odot\exp(-\gamma\nabla_{g})\)
\(Q,R,g\leftarrow\) ULR-TI-Dykstra\((a,b,\boldsymbol{\xi},\gamma,\tau_{1},\tau_{2},\delta)\) (Alg. 3)
**until** \(\Delta((Q,R,g),(\tilde{Q},\tilde{R},\tilde{g}),\gamma)<\delta\)
**Result:** \(Q,R,g\)
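For concreteness, here is a minimal NumPy sketch of one gradient evaluation and kernel construction of the mirror-descent step above. The naming is ours and the matrices are kept dense for readability, so this sketch runs in quadratic time; it is not the authors' implementation.

```python
import numpy as np

def ulgw_gradients(A, B, Q, R, g):
    """Gradients used in the mirror-descent step displayed above.

    A: (n, n), B: (m, m) cost matrices; Q: (n, r), R: (m, r), g: (r,).
    """
    Qg = Q / g                                   # Q diag(1/g), shape (n, r)
    Rg = R / g                                   # R diag(1/g), shape (m, r)
    ones_rr = np.ones((Q.shape[1], Q.shape[1]))  # 1_r 1_r^T

    grad_Q = 4.0 * A @ Qg @ (R.T @ B @ Rg) + 2.0 * Q @ ones_rr
    grad_R = 4.0 * B @ Rg @ (Q.T @ A @ Qg) + 2.0 * R @ ones_rr
    # D(Q^T A Q diag(1/g) R^T B R): diagonal of an r x r matrix
    omega = np.diag(Q.T @ A @ Qg @ R.T @ B @ R)
    grad_g = -omega / g**2
    return grad_Q, grad_R, grad_g

def md_kernels(Q, R, g, grad_Q, grad_R, grad_g, gamma0):
    """Step size and kernels (xi1, xi2, xi3) fed to ULR-TI-Dykstra (Alg. 3)."""
    gamma = gamma0 / max(np.abs(grad_Q).max() ** 2,
                         np.abs(grad_R).max() ** 2,
                         np.abs(grad_g).max() ** 2)
    xi1 = Q * np.exp(-gamma * grad_Q)
    xi2 = R * np.exp(-gamma * grad_R)
    xi3 = g * np.exp(-gamma * grad_g)
    return gamma, (xi1, xi2, xi3)
```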
**Convergence and Complexity.** Similarly to linear ULOT, the unbalanced Dykstra algorithm is guaranteed to converge (Bauschke and Lewis, 2000). In addition, (Scetbon et al., 2022) prove the convergence of their scheme to a stationary point of the problem. Concerning the complexity, as the inner problems are solved with Algorithm 6, their time and memory cost is exactly the same as before. The slight variation in the kernel \(\boldsymbol{\xi}\) compared to ULOT still retains the same \(\mathcal{O}((n^{2}+m^{2})r)\) time and \(\mathcal{O}(n^{2}+m^{2})\) memory complexities. However, as in ULOT, we can take advantage of low-rank approximations of the cost matrices \(A\) and \(B\) in order to reach linear complexity. Indeed, assuming \(A\simeq A_{1}A_{2}^{T}\) and \(B\simeq B_{1}B_{2}^{T}\) where \(A_{1},A_{2}\in\mathbb{R}^{n\times d_{X}}\) and \(B_{1},B_{2}\in\mathbb{R}^{m\times d_{Y}}\), the total time and memory complexities become respectively \(\mathcal{O}(mr(r+d_{Y})+nr(r+d_{X}))\) and \(\mathcal{O}((n+m)(r+d_{X}+d_{Y}))\). Again, when \(A\) and \(B\) are distance matrices, we use the algorithms from (Indyk et al., 2019).
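To illustrate the gain, here is a minimal sketch (hypothetical helper names, not tied to any specific library) of how a factored cost \(A\simeq A_{1}A_{2}^{T}\) turns the dominant product \(A\,Q\operatorname{diag}(1/g)\) into two thin multiplications:

```python
import numpy as np

def apply_factored_cost(A1, A2, Qg):
    """Compute (A1 @ A2.T) @ Qg without ever forming the n x n matrix A.

    A1, A2: (n, d) factors of the cost; Qg: (n, r).
    Cost: O(n * d * r) time and O(n * (d + r)) memory, instead of
    O(n^2 * r) time and O(n^2) memory for the dense product.
    """
    return A1 @ (A2.T @ Qg)   # (n, d) @ (d, r)

# toy check against the dense computation
rng = np.random.default_rng(0)
n, d, r = 500, 16, 8
A1 = rng.normal(size=(n, d))
A2 = rng.normal(size=(n, d))
Qg = rng.normal(size=(n, r))
dense = (A1 @ A2.T) @ Qg
assert np.allclose(apply_factored_cost(A1, A2, Qg), dense)
```

Every \(n\times n\) or \(m\times m\) product appearing in the gradients above can be rewritten this way, which is what yields the stated linear complexities.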
### Unbalanced Low-rank Fused-Gromov-Wasserstein

We finally focus on the increasingly popular (Klein et al., 2023) Fused-Gromov-Wasserstein problem, which merges linear and quadratic objectives (Vayer et al., 2018):

\[\text{FGW}(\mu,\nu):=\min_{P\in\Pi_{a,b}}\alpha\langle C,P\rangle+\bar{\alpha}\mathcal{Q}_{A,B}(P) \tag{15}\]

where \(\alpha\in[0,1]\) and \(\bar{\alpha}:=1-\alpha\) allows interpolating between the GW and linear OT geometries. This problem remains a GW problem, where one replaces the 4-way cost \(M[i,i^{\prime},j,j^{\prime}]:=(A_{i,i^{\prime}}-B_{j,j^{\prime}})^{2}\) appearing in (4) by a composite interpolated cost between the OT and GW geometries, redefined as \(M[i,i^{\prime},j,j^{\prime}]=\alpha C_{i,j}+\bar{\alpha}(A_{i,i^{\prime}}-B_{j,j^{\prime}})^{2}\). Our proposed unbalanced and low-rank version of the FGW problem includes \(|P|:=\|P\|_{1}\), the mass of \(P\), to homogenize the linear and quadratic terms:

\[\text{ULFGW}_{r}(\mu,\nu):=\min_{P:\,\text{rk}_{+}(P)\leq r}\alpha|P|\langle C,P\rangle+\bar{\alpha}\mathcal{Q}_{A,B}(P)+\tau_{1}\text{KL}(P\mathbf{1}_{m}|a)+\tau_{2}\text{KL}(P^{T}\mathbf{1}_{n}|b)\,, \tag{16}\]

which is expanded through the explicit factorization of \(P\), noticing that \(|P|=|g|:=\|g\|_{1}\):

\[\text{ULFGW}_{r}(\mu,\nu):=\min_{(Q,R,g)\in\Pi_{r}}\alpha|g|\mathcal{L}_{C}(Q,R,g)+\bar{\alpha}\mathcal{Q}_{A,B}(Q,R,g)+\mathcal{G}_{a,b}(Q,R,g) \tag{17}\]

Then, by linearizing again \(\mathcal{H}:(Q,R,g)\rightarrow\alpha|g|\mathcal{L}_{C}(Q,R,g)+\bar{\alpha}\mathcal{Q}_{A,B}(Q,R,g)\) with an added KL penalty and leaving \(\mathcal{G}_{a,b}\) unchanged, we obtain at each iteration the same optimization problem as in (14), where the kernel \(\boldsymbol{\xi}_{k}\) is now defined as

\[\boldsymbol{\xi}_{k}:=(\xi_{k}^{(1)},\xi_{k}^{(2)},\xi_{k}^{(3)}),\]
\[\xi_{k}^{(1)}:=Q_{k}\odot\exp(-\gamma_{k}\nabla_{Q}\mathcal{H}_{k}),\ \xi_{k}^{(2)}:=R_{k}\odot\exp(-\gamma_{k}\nabla_{R}\mathcal{H}_{k}),\ \xi_{k}^{(3)}:=g_{k}\odot\exp(-\gamma_{k}\nabla_{g}\mathcal{H}_{k})\]
\[\nabla_{Q}\mathcal{H}_{k}:=\alpha|g_{k}|CR_{k}\operatorname{diag}(1/g_{k})+\bar{\alpha}\left(2Q_{k}\mathbf{1}_{r}\mathbf{1}_{r}^{T}+4AP_{k}BR_{k}\operatorname{diag}(1/g_{k})\right)\]
\[\nabla_{R}\mathcal{H}_{k}:=\alpha|g_{k}|C^{T}Q_{k}\operatorname{diag}(1/g_{k})+\bar{\alpha}\left(2R_{k}\mathbf{1}_{r}\mathbf{1}_{r}^{T}+4BP_{k}^{T}AQ_{k}\operatorname{diag}(1/g_{k})\right)\]
\[\nabla_{g}\mathcal{H}_{k}:=\alpha\left(\langle C,P_{k}\rangle\mathbf{1}_{r}-|g_{k}|\omega_{k}^{\text{lin}}/g_{k}^{2}\right)-4\bar{\alpha}\omega_{k}^{\text{quad}}/g_{k}^{2}\]
\[[\omega_{k}^{\text{lin}}]_{i}:=[Q_{k}^{T}CR_{k}]_{i,i},\ \ [\omega_{k}^{\text{quad}}]_{i}:=[Q_{k}^{T}AP_{k}BR_{k}]_{i,i}\ \ \forall i\in\{1,\ldots,r\}\,.\]

**Remark 2**.: _Note again that here, we have in general a quadratic complexity both in time and memory, and as soon as we are provided a low-rank approximation of the matrices \(C,A,B\), our proposed algorithm scales linearly with respect to the number of points \(n\) and \(m\)._

## 4 Experiments

Our goal in **Exp. 1** is to compare unbalanced low-rank (ULR) solvers to their balanced low-rank (LR) counterparts; in **Exp. 2**, to compare ULR solvers to entropic (E) counterparts; and in **Exp. 3**, to compare our ULR solvers to (Thual et al., 2022), which can learn a sparse transport map, in the unbalanced FGW setting.

**Datasets.** We run the experiments on two real-world datasets, described in B.1, that are large enough to showcase our solvers. In particular, they consist of both a shared feature space, used to compute the cost matrices for the linear term in the OT and FGW settings, as well as geometries specific to each source \(s\) and target \(t\) dataset, used to compute the cost matrices for the quadratic term in the GW and FGW settings. We leverage mouse brain STARmap spatial transcriptomics data from (Shi et al., 2022) for **Exp. 1** and **Exp. 2**. For **Exp. 3** we use data from the Individual Brain Charting dataset (Pinho et al., 2018), to recapitulate the settings of (Thual et al., 2022).

**Metrics.** Following Klein et al. (2023), we evaluate maps by focusing on the two following metrics: (i) **Pearson correlation** \(\rho\), computed between the source \(s\) feature matrix \(F^{s}\) and the barycentric projection of the target \(t\) to the source, scaled by the target marginals \(b^{t}\): \(T_{t\to s}^{T}\left(F^{t}\ \frac{1}{b^{t}}\right)\); (ii) **F1 score**, computed between the original source \(s\) labels \(l^{s}\) and the inferred source labels, obtained by taking the \(\operatorname*{argmax}_{j}B_{i,j}\) of the barycentric projection of the target \(t\) one-hot encoded labels \(L^{t}\), scaled by the target marginal \(b^{t}\), to the source: \(T_{t\to s}^{T}\left(L^{t}\ \frac{1}{b^{t}}\right)\).
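As a reference for how these metrics can be computed from a coupling, here is a minimal NumPy/SciPy sketch. The naming is ours and it is a simplification (e.g., the Pearson correlation is averaged across genes here); the paper's exact evaluation code may differ.

```python
import numpy as np
from scipy.stats import pearsonr

def barycentric_projection(T_ts, M_t, b_t):
    """Project target-side quantities M_t (features or one-hot labels) onto the
    source through the coupling T_ts (target x source), rescaled by the target
    marginal b_t, as in the metrics above."""
    return T_ts.T @ (M_t / b_t[:, None])          # (n_source, dim)

def pearson_metric(T_ts, F_t, F_s, b_t):
    """Average Pearson correlation between source features and the projection
    of target features, one gene (column) at a time."""
    F_pred = barycentric_projection(T_ts, F_t, b_t)
    return np.mean([pearsonr(F_pred[:, j], F_s[:, j])[0]
                    for j in range(F_s.shape[1])])

def predicted_labels(T_ts, L_t_onehot, b_t):
    """Inferred source labels: argmax over classes of the projected one-hot
    target labels. F1 scores (macro/micro/weighted) are then computed against
    the true source labels."""
    B_proj = barycentric_projection(T_ts, L_t_onehot, b_t)
    return B_proj.argmax(axis=1)
```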
### Experiment 1: ULOT vs. LOT on gene expression / cell type annotation mapping

Here, we evaluate the accuracy of ULOT solvers for a large-scale spatial transcriptomics task, using gene expression mapping and cell type annotation. We compare it to the balanced LR alternative using the Pearson correlation \(\rho\) as described in the metrics section. We leverage two coronal sections of the mouse brain profiled with STARmap spatial transcriptomics by (Shi et al., 2022). They consist of \(n\approx 40,000\) cells in both the source and target brain sections. Each cell is described by 1000 gene features, in addition to 2D spatial coordinates. As a result, \(A,B\) are \(\approx 40k\times 40k\), and the fused term \(C\) is a squared-Euclidean distance matrix on a 30D PCA space computed on the gene expression space. We selected 10 marker genes for the validation and test sets from the _HPF_CA_ cluster. We run an extensive grid search, as reported in B.2, pick the best hyperparameter combination using performance on the 10 validation genes as a criterion, and report that metric on the other genes in Table 1, as well as qualitative results in Figure 1 and Figure 2.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline **solver** & **mass** & **val** \(\rho\) & **test** \(\rho\) & **F1 macro** & **F1 micro** & **F1 weighted** \\ \hline LOT & 1.000 & 0.282 & 0.386 & 0.210 & 0.411 & 0.360 \\ ULOT & 0.899 & 0.301 & 0.409 & 0.200 & 0.425 & 0.363 \\ \hline LGW & 1.000 & 0.227 & 0.288 & 0.487 & 0.716 & 0.692 \\ ULGW & 1.001 & 0.222 & 0.287 & 0.463 & 0.701 & 0.665 \\ \hline LFGW & 1.000 & 0.365 & 0.443 & 0.576 & 0.720 & 0.714 \\ ULFGW & 0.443 & **0.379** & **0.463** & **0.582** & **0.733** & **0.724** \\ \hline \hline \end{tabular} \end{table} Table 1: Results for the spatial transcriptomics dataset (brain coronal section from Shi et al. (2022)).

Clearly, ULFGW is the best performing solver across all metrics. Interestingly, ULOT does not consistently outperform its balanced version, and unbalancedness seems to hurt performance for the LGW solvers. Nevertheless, both families of solvers display inconsistent performance across metrics, whereas ULFGW and LFGW are consistently superior to the rest of the solvers. These results highlight how the flexibility given by the FGW formulation to leverage common and disparate geometries, paired with the unbalancedness relaxation, can provide state-of-the-art algorithms for matching problems on large-scale, real-world biological data.
### Experiment 2: ULOT vs. UEOT

In this experiment, we compare the performance of ULOT solvers to the unbalanced entropic alternative (UEOT). We use the same datasets as in 4.1, but pick a smaller subset (Olfactory bulb) to avoid OOM errors for entropic UGW solvers, which cannot handle the \(40k\) sizes considered previously (see B.1). They consist of \(n\approx 20,000\) cells in the source and \(\approx 15,000\) cells in the target sections, and 1000 genes. As in **Exp. 1**, the fused term \(C\) is a squared-Euclidean distance matrix on a 30D PCA space computed on the gene expression space. As done in the previous experiment, we select 10 marker genes for the validation set and 10 genes for the test set, from cluster _OB_1_. We run an extensive grid search as described for **Exp. 1** in 4.1 and B.2. In Table 2, we see that ULFGW outperforms the entropic solvers w.r.t. \(\rho\), but is worse when considering the F1 scores. On the other hand, ULFGW confirms its superiority compared to the balanced alternative LFGW. Taken together, these results suggest that while unbalanced LR solvers are on par with unbalanced entropic solvers in small data regimes, they unlock applications of unbalanced OT to large-scale datasets.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline **solver** & **mass** & **val** \(\rho\) & **test** \(\rho\) & **F1-mac** & **F1-mic** & **F1-wei** \\ \hline UEOT & 1.012 & 0.368 & 0.479 & 0.511 & 0.763 & 0.751 \\ LOT & 1.000 & 0.313 & 0.440 & 0.511 & 0.760 & 0.751 \\ ULOT & 0.998 & 0.356 & 0.461 & 0.518 & 0.770 & 0.762 \\ \hline UEFGW & 1.015 & 0.343 & 0.475 & **0.564** & **0.839** & **0.831** \\ LFGW & 1.006 & 0.343 & 0.453 & 0.512 & 0.762 & 0.753 \\ ULFGW & 0.339 & **0.368** & **0.491** & 0.556 & 0.826 & 0.818 \\ \hline \hline \end{tabular} \end{table} Table 2: Results for the spatial transcriptomics dataset (Olfactory bulb section from Shi et al. (2022)).

Figure 1: Spatial visualization of the two mouse brain sections used in **Exp. 1**

Figure 2: Visualization of measured and predicted tissue regions in the mouse brain in **Exp. 1**

### Experiment 3: ULOT to align brain meshes

Thual et al. (2022) proposed a novel formulation of the unbalanced FGW problem (FUGW), showcased on the alignment of brain anatomies and their functional signals. We compare it to our ULFGW solver, using the same experimental setting; see Table 3.

\begin{table} \begin{tabular}{l c c c} \hline \hline **solver** & **mass** & **val** \(\rho\) & **test** \(\rho\) \\ \hline FUGW-sparse & 0.999 & 0.492 & 0.472 \\ LFGW & 1.000 & 0.513 & **0.663** \\ ULFGW & 0.981 & **0.533** & 0.643 \\ \hline \hline \end{tabular} \end{table} Table 3: Results on the brain anatomy with functional signal data from Pinho et al. (2018) in **Exp. 3**.

Figure 3: Visualization of measured and predicted _right auditory click_ contrast map in **Exp. 3**.

## Conclusion

Recent practical successes of OT methods in the natural sciences have demonstrated the relevance of OT to their analysis pipelines, but have also shown, repeatedly, that a certain degree of freedom to depart from the rigid assumption of mass conservation is needed in practice. On the other hand, and across the same range of applications, low-rank approaches hold the promise of scaling OT methods to sample sizes relevant for the natural sciences. This paper merges these two strands and demonstrates the practical relevance of the resulting algorithms.